Your OpenClaw agent is brilliant at doing individual things. Ask it to summarise an article? Done. Send an email? Easy. Check the weather? Child's play. But here is the thing: you are using a jet engine to power a bicycle.
The real power of an agentic AI isn't in single tasks. It's in chaining those tasks together into automated pipelines that run continuously, react to the world, and handle entire workflows from start to finish—without you ever typing a prompt.
In this guide, we are going to build a real, working automation pipeline from scratch. By the end, you will have an OpenClaw system that monitors RSS feeds for breaking news, summarises the articles, drafts a personalised briefing email, and sends it to your inbox every morning. And once you understand the pattern, you will be able to build pipelines for virtually anything.
## What Is an Automation Pipeline?
Think of a pipeline as an assembly line in a factory. Raw materials enter on one end, pass through a series of stations where each one performs a specific operation, and a finished product rolls out the other end. No station knows about the big picture. Each one just takes input, does its thing, and passes the result to the next.
In OpenClaw, each "station" is a skill or a script. The pipeline connects them with a simple trigger-action pattern:
```
Trigger → Step 1 → Step 2 → Step 3 → Output
```
Here is the pipeline we will build today:
```
[Cron: 6:00 AM] → [Fetch RSS Feeds] → [Filter & Rank] →
[Summarise Articles] → [Draft Email] → [Send Email]
```
Six stages. Zero human intervention. Let's build it.
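Before we touch any YAML, it helps to see the pattern in miniature. The sketch below is a conceptual illustration in Python, not OpenClaw's actual engine: each step is a function that takes the previous step's output and returns its own.

```python
# A minimal, hypothetical sketch of the pipeline pattern. Names are
# illustrative stand-ins, not OpenClaw's real API.

def fetch_feeds():
    # Stand-in for the rss-reader skill
    return ["Article A", "Article B", "Article B"]  # note the duplicate

def filter_and_rank(articles):
    # Stand-in for an AI reasoning step: dedupe, then rank
    return sorted(set(articles))

def summarise(articles):
    return [f"Summary of {a}" for a in articles]

def run_pipeline():
    # The pipeline is just composition: each output feeds the next step
    articles = fetch_feeds()
    ranked = filter_and_rank(articles)
    return summarise(ranked)

print(run_pipeline())
# → ['Summary of Article A', 'Summary of Article B']
```

OpenClaw's YAML pipelines express exactly this shape declaratively, with the `output` of one step becoming a named variable for the next.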
## Prerequisites
Before we start, make sure you have the following skills installed. If you have been following our Ultimate Skills Guide, you probably have most of these already.
```bash
# Install the skills we'll need
openclaw skills install rss-reader
openclaw skills install agentmail
openclaw skills install scheduler
```
You will also need:
- OpenClaw v3.2+ (pipelines require the orchestration engine introduced in 3.2).
- A configured AgentMail account (see the skills guide for setup instructions).
- At least one RSS feed URL. We will use a few tech news feeds in our examples.
## Step 1: Create the Pipeline File

OpenClaw pipelines are defined in YAML files stored in your `~/.openclaw/pipelines/` directory. Each file represents one pipeline.

Create a new file called `morning-briefing.yaml`:
```yaml
# ~/.openclaw/pipelines/morning-briefing.yaml
name: "Morning News Briefing"
description: "Fetches, summarises, and emails a daily news digest"
version: "1.0.0"

trigger:
  type: cron
  schedule: "0 6 * * *"  # Every day at 6:00 AM
  timezone: "Australia/Adelaide"

steps:
  - id: fetch_feeds
    skill: rss-reader
    config:
      feeds:
        - url: "https://feeds.arstechnica.com/arstechnica/technology-lab"
          label: "Ars Technica"
        - url: "https://hnrss.org/frontpage"
          label: "Hacker News"
        - url: "https://www.theverge.com/rss/index.xml"
          label: "The Verge"
      max_items_per_feed: 10
      since: "24h"
    output: raw_articles

  - id: filter_and_rank
    type: ai_reason
    prompt: |
      Here are today's articles from various tech news sources:

      {{ raw_articles }}

      Please do the following:
      1. Remove duplicates (same story from different sources).
      2. Remove clickbait or low-substance articles.
      3. Rank the remaining articles by importance and relevance.
      4. Return the top 8 articles as a JSON array with fields:
         title, source, url, and a one-sentence reason for inclusion.
    output: ranked_articles

  - id: summarise
    type: ai_reason
    prompt: |
      For each of the following articles, write a concise 2-3 sentence
      summary that captures the key facts and why it matters.

      Articles:
      {{ ranked_articles }}

      Format each summary as:
      ### [Title]
      **Source:** [source] | **Link:** [url]
      [Your summary here]
    output: summaries

  - id: draft_email
    type: ai_reason
    prompt: |
      Compose a professional but friendly daily briefing email using
      the following article summaries. The email should:
      - Have the subject line: "Your Morning Tech Briefing — {{ date }}"
      - Start with a brief, witty one-line greeting.
      - Include all the article summaries in a clean, scannable format.
      - End with "Have a great day! — Your OpenClaw Agent 🐾"

      Summaries:
      {{ summaries }}
    output: email_content

  - id: send_email
    skill: agentmail
    config:
      to: "you@example.com"
      subject: "Your Morning Tech Briefing — {{ date }}"
      body: "{{ email_content }}"
      format: html
    output: send_result

on_success:
  log: "Morning briefing sent successfully."

on_failure:
  notify: "you@example.com"
  message: "Your morning briefing pipeline failed at step {{ failed_step }}."
```
Let's break down what is happening here.
## Step 2: Understanding Triggers

The `trigger` block defines when the pipeline runs. We are using a `cron` trigger, which follows the standard cron syntax: `0 6 * * *` means "at minute 0 of hour 6, every day."
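To make the five cron fields concrete (minute, hour, day-of-month, month, day-of-week), here is a toy Python checker. It is a simplification for illustration only: each field is either `*` or a single number, whereas real cron also supports ranges, lists, and steps.

```python
from datetime import datetime

def cron_matches(schedule: str, when: datetime) -> bool:
    """Check a datetime against a simplified 5-field cron expression.

    Toy version: each field is '*' or a single number.
    Fields: minute, hour, day-of-month, month, day-of-week (0 = Sunday).
    """
    fields = schedule.split()
    actual = [when.minute, when.hour, when.day, when.month,
              (when.weekday() + 1) % 7]  # Python: Monday=0; cron: Sunday=0
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))

# "0 6 * * *" fires at 06:00 every day:
print(cron_matches("0 6 * * *", datetime(2026, 3, 2, 6, 0)))   # True
print(cron_matches("0 6 * * *", datetime(2026, 3, 2, 18, 0)))  # False
```

A real scheduler evaluates this kind of match once per minute against the clock; OpenClaw's `scheduler` skill presumably does the equivalent for you.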
OpenClaw supports several trigger types:
| Trigger Type | Description | Example Use Case |
|---|---|---|
| `cron` | Time-based schedule | Daily briefings, weekly reports |
| `webhook` | HTTP endpoint receives data | GitHub push events, Stripe payments |
| `file_watch` | A file or directory changes | New photos in a folder, log file updates |
| `email` | An email is received | Support ticket processing |
| `manual` | Run on demand via CLI | Ad-hoc data processing |
| `event` | Another pipeline completes | Multi-stage workflows |
You can even combine triggers. For example, you might want your briefing to run at 6 AM and whenever you send an email with the subject "briefing now":
```yaml
trigger:
  type: any_of
  triggers:
    - type: cron
      schedule: "0 6 * * *"
    - type: email
      subject_contains: "briefing now"
```
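Semantically, `any_of` is just a logical OR over its child triggers. A hypothetical Python sketch of that evaluation, with each trigger modelled as a predicate over an incoming event:

```python
def any_of(triggers, event):
    """Fire if any child trigger matches the incoming event (illustrative)."""
    return any(matches(event) for matches in triggers)

# Two stand-in trigger predicates (not OpenClaw's real event schema):
cron_fired = lambda e: e.get("type") == "cron_tick"
briefing_mail = lambda e: (e.get("type") == "email"
                           and "briefing now" in e.get("subject", ""))

print(any_of([cron_fired, briefing_mail],
             {"type": "email", "subject": "Re: briefing now please"}))  # True
```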
## Step 3: Understanding Steps
Each step in the pipeline is one of two types:
### Skill Steps

These invoke an installed OpenClaw skill directly. The `skill` field references the skill name, and `config` passes parameters to it. Skill steps are deterministic—they do exactly what the skill is programmed to do.
### AI Reasoning Steps

These are marked with `type: ai_reason` and are where the magic happens. Instead of running code, these steps send a prompt to your local AI model and use the response as the output. This is what makes OpenClaw pipelines fundamentally different from traditional automation tools like Zapier or n8n.
The AI doesn't just move data—it thinks about it. It can filter noise, prioritise by relevance, rewrite text for a specific audience, and make judgment calls that would be impossible with a fixed set of rules.
The `{{ variable }}` syntax is template interpolation. Each step's `output` field names the variable that holds its result, and subsequent steps can reference it.
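This kind of interpolation can be sketched in a few lines of Python (a stand-in for whatever template engine OpenClaw actually uses):

```python
import re

def interpolate(template: str, context: dict) -> str:
    """Replace {{ name }} placeholders with values from context."""
    def lookup(match):
        return str(context[match.group(1)])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lookup, template)

context = {"date": "2026-03-02"}
print(interpolate("Your Morning Tech Briefing — {{ date }}", context))
# → Your Morning Tech Briefing — 2026-03-02
```

The pipeline runner simply builds up `context` as steps complete: after `fetch_feeds` finishes, `raw_articles` is in the context; after `filter_and_rank`, so is `ranked_articles`; and so on.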
## Step 4: Testing Your Pipeline
Before you trust a pipeline to run while you sleep, test it manually:
```bash
# Dry run — executes the pipeline but doesn't send the email
openclaw pipeline run morning-briefing --dry-run

# Run with verbose logging to see every step's input and output
openclaw pipeline run morning-briefing --verbose

# Run a single step in isolation
openclaw pipeline run morning-briefing --step fetch_feeds
```
The `--dry-run` flag is your best friend. It replaces any "send" or "write" actions with a preview so you can inspect the output without side effects.
When testing, pay close attention to the `filter_and_rank` step. This is where the AI makes judgment calls, and you may need to tweak the prompt to match your preferences. Do you want more AI/ML news? Add that to the prompt. Fewer opinion pieces? Say so. The prompt is your control surface.
## Step 5: Error Handling and Retries
Pipelines run unattended, which means they will eventually fail. An RSS feed might be down. Your email quota might be exhausted. The AI might generate an unexpectedly formatted response.
OpenClaw has built-in resilience features:
```yaml
steps:
  - id: fetch_feeds
    skill: rss-reader
    config:
      feeds:
        - url: "https://feeds.arstechnica.com/arstechnica/technology-lab"
    retry:
      max_attempts: 3
      delay: "30s"
      backoff: exponential
    timeout: "60s"
    on_error: skip  # Options: skip, abort, fallback
```
The `on_error` field is important:

- `abort` — stops the entire pipeline immediately (default).
- `skip` — logs the error and continues to the next step.
- `fallback` — runs an alternative step instead.
For our RSS reader, `skip` is a good choice. If one feed is down, we still want the briefing to go out with the others.
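Under the hood, retry-with-exponential-backoff plus an `on_error` policy is a simple loop. Here is a generic Python sketch of those semantics, purely illustrative rather than OpenClaw internals:

```python
import time

def run_with_retry(step, max_attempts=3, delay=30.0,
                   backoff="exponential", on_error="skip"):
    """Run a step, retrying on failure; then apply the on_error policy."""
    wait = delay
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            if attempt == max_attempts:
                if on_error == "skip":
                    print(f"step failed after {attempt} attempts: {exc}")
                    return None  # pipeline continues without this output
                raise  # 'abort': propagate the error and stop the pipeline
            time.sleep(wait)
            if backoff == "exponential":
                wait *= 2  # 30s, then 60s, then 120s, ...
```

A transient failure (say, a feed that times out once) is absorbed by the retries; a persistent one falls through to the `on_error` policy, which is exactly the behaviour the YAML above asks for.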
## Beyond the Briefing: Pipeline Ideas
The morning briefing is just the beginning. Once you understand the trigger–step–output pattern, you can build pipelines for almost anything. Here are some ideas to get you started:
### Competitive Intelligence Monitor

```
[Cron: hourly] → [Scrape competitor pricing pages] →
[Compare to your prices] → [Flag significant changes] →
[Post alert to Slack]
```
### Automated Meeting Prep

```
[Calendar event in 30 min] → [Fetch attendee LinkedIn profiles] →
[Check recent emails from attendees] → [Draft a briefing doc] →
[Push to Obsidian]
```
### Code Review Digest

```
[GitHub webhook: PR opened] → [Fetch diff and PR description] →
[AI analysis: flag potential issues] → [Post summary comment on PR]
```
### Personal Finance Tracker

```
[Email trigger: bank notification] → [Parse transaction details] →
[Categorise spending] → [Update spreadsheet] →
[Weekly summary on Sunday]
```
Each of these follows the exact same pattern. Define a trigger. Chain your steps. Let the AI reason through the messy middle parts. Output the result.
## Best Practices for Pipeline Design
After building dozens of pipelines, here are the hard-won lessons:
1. **Start small, then grow.** Build a two-step pipeline first. Get it working. Then add steps one at a time. Debugging a six-step pipeline from scratch is painful.

2. **Use `--dry-run` religiously.** Never deploy a pipeline to production without testing it in dry-run mode first. You don't want to accidentally email your entire contact list at 3 AM.

3. **Be specific in AI prompts.** Vague prompts produce vague results. Instead of "summarise this," write "summarise this in exactly two sentences, focusing on the factual claims, not opinions."

4. **Pin your skill versions.** Just like in our skills guide, always pin skill versions in your pipeline config so an update doesn't silently break everything.

5. **Set timeouts on every step.** An AI reasoning step could hang forever if the model gets confused. Always set a reasonable timeout.

6. **Log everything.** Use the `on_success` and `on_failure` hooks to log pipeline completions. Future-you will thank present-you when something goes wrong at 2 AM.

7. **Version your pipeline files.** Store your `~/.openclaw/pipelines/` directory in Git. Treat pipelines as code—because they are.
## The Bigger Picture
The shift from "AI as chatbot" to "AI as automated worker" is the defining trend of 2026. Chatbots wait for you to ask a question. Pipelines work while you sleep.
OpenClaw's pipeline system sits at the perfect intersection of power and simplicity. You don't need to learn a programming language. You don't need to set up cloud infrastructure. You write a YAML file, describe what you want in plain English, and your local AI handles the rest.
That morning briefing pipeline we built? It takes about ten minutes to set up. And from that point forward, you wake up every single day to a perfectly curated, AI-summarised news digest—tailored to your interests, delivered before your first cup of coffee.
That's not just automation. That's the future.
Now go build something. 🐾




