Caelan's Domain

Part 5 — The Pipeline: Wiring It Together

ai · claude · cowork · pipeline · integration · skills · agents · cowork-skills · cowork-agents

Created: April 17, 2026 | Modified: April 22, 2026

In Part 3 and Part 4 you built the first two Skills and the first two Agents your role's pipeline plan called for. Each is competent in isolation. None is talking to the others.

Part 5 fixes that. You stop running each tool in its own conversation and start running them as a workflow where the output of one becomes the input of the next. The sequence — which stages in which order, who owns each artifact, where the human in the loop sits — is not invented here; it is declared in the pipeline section of the Instructions (saved in CLAUDE.md, loaded on every prompt). Part 5 explains the concept; the Instructions supply the specifics for your role.

Jumping in from mid-series? See Part 3's Pick Up From Here for the starter config. This part assumes you have a Cowork project with the standards, Skills, and Agents you already wired into the Instructions during Parts 2–4.

What a Pipeline Is

A pipeline is a sequence of steps where each step's output becomes the next step's input, with a human in the loop between each pair.

A team does this with people. One person drafts, another reviews, a third plans, a fourth produces. Each handoff has a defined input, a defined output, and a quality check between. A pipeline replaces those handoffs with the Skills and Agents from Parts 3–4 — each stage passes its work to the next, and you stay in the loop at every checkpoint.
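
The shape is easy to see in code. Below is a minimal sketch in plain Python, not Cowork code; the toy stage functions and the `approve` callback are illustrative assumptions standing in for Skills, Agents, and the human in the loop:

```python
# Conceptual sketch of the pipeline shape: each stage consumes the
# previous stage's artifact, and a checkpoint sits between stages.
def run_pipeline(raw_input, stages, approve):
    artifact = raw_input
    for stage in stages:
        artifact = stage(artifact)          # stage output becomes next input
        if not approve(stage.__name__, artifact):
            raise RuntimeError(f"rejected at {stage.__name__}")
    return artifact

# Toy stages standing in for the real Skills and Agents.
def brief(topic):    return {"brief": topic}
def voice_check(b):  return {**b, "checked": True}
def plan(b):         return {**b, "plan": ["draft", "publish"]}

result = run_pipeline("Q3 launch", [brief, voice_check, plan],
                      approve=lambda name, art: True)  # auto-approve for the demo
print(result)
```

Cowork runs this loop through conversation turns rather than function calls; the sketch only shows why a bad Stage 1 artifact propagates when every later stage consumes the one before it.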

The stages are function-specific. A content workspace might run Brief → Voice Check → Plan → Production → Distribution → Measurement. A revenue workspace might run Inbound Capture → Discovery → Proposal → Procurement Review → Close → Post-Close Feedback. A response workspace might run Triage → Investigation → Drafting → QA Review → Send. Same shape, different stage names.

Which stages your function runs is documented in the Instructions, along with what each stage owns, what artifact it emits, and where the human in the loop sits at each boundary.

A closer look — Pipelines (community term)
"Pipelines" is a community term, not an official Cowork feature shipped under that name. To inspect a pipeline, read the Skill and Agent files in the chain and the Scheduled Task (if any) that triggers it; to undo or change one, edit those underlying Skills, Agents, or Scheduled Tasks — there is nothing Pipelines-specific to undo.

There is no Pipelines tab in the sidebar and no dedicated Pipelines section in the product. The closest official feature is Scheduled Tasks in Claude Cowork, covered in Part 6 — Scaling. Scheduled Tasks automate when a pipeline runs; the pipeline itself remains the pattern.

Gotcha. A scheduled run is only as good as the pieces underneath it. If your Skill is shaky or your Agent brief is loose, putting it on a timer does not improve it — it makes it wrong on a schedule instead of once.

Here is the generic flow — the Instructions fill in the stage names:

INPUT (topic, lead, ticket, incident, a new vendor agreement,
       a quarterly review request — whatever your function consumes)
  |
  v
[Stage 1 owner]  -- Skill or Agent: produces the first structured artifact
  |
  v
[Stage 2 owner]  -- audits or gates the artifact against standing rules
  |
  v
[Stage 3 owner]  -- turns the audited artifact into a plan or decision
  |
  v
[Stage 4 owner]  -- produces the delivered work the plan called for
  |
  v
OUTPUT (assets, proposal, response, remediation — whatever your
        function ships)

Each arrow is a handoff the Instructions name, each stage has a documented artifact location, and each boundary is a checkpoint where you — the human in the loop — approve or send the work back. You are not building anything new at this step — you are wiring what exists into a single connected workflow with defined inputs, defined checkpoints, and predictable output every run.


Wire It Together

Run the full pipeline in a single Cowork session — one conversation, every stage the Instructions document, one input carried all the way through. There is no built-in way for one stage to pass working context to the next; the conversation itself is the shared memory, and the moment you leave it that shared memory is gone. If you run Stage 1 in one chat and try to run Stage 2 in a separate chat the next day, the second chat will not see the Stage 1 output unless you paste it in or the relevant facts made it into Memory. For a pipeline you run often, keep it inside one long-running conversation, or save the in-between outputs to files the next stage reads explicitly. The shape below is generic; substitute your own stage names and artifacts from the pipeline section of the Instructions.
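
If you do split a pipeline across sessions, the file-passing version can be sketched in plain Python. The `artifacts/` directory and the filenames below are illustrative assumptions, not a Cowork convention:

```python
import json
from pathlib import Path

ARTIFACTS = Path("artifacts")          # illustrative location you would choose
ARTIFACTS.mkdir(exist_ok=True)

def save_stage(name, artifact):
    """Write a stage's output so a later session can pick it up."""
    (ARTIFACTS / f"{name}.json").write_text(json.dumps(artifact, indent=2))

def load_stage(name):
    """Read the prior stage's artifact back in a fresh session."""
    return json.loads((ARTIFACTS / f"{name}.json").read_text())

save_stage("stage1-brief", {"topic": "Q3 launch", "audience": "enterprise"})
brief = load_stage("stage1-brief")     # a new conversation starts from this file
```

The point of the sketch: a saved artifact is the only memory that survives leaving the conversation, so the next stage has to read it explicitly.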

Stage 1: Produce the First Artifact

Open your Cowork project and start a new conversation. Invoke the Skill or Agent the Instructions name as the Stage 1 owner, and hand it the input your function consumes — a topic, a lead record, a ticket, an incident. The Stage 1 owner emits the first structured artifact: a brief, a qualified lead record, a triage summary, whichever the Instructions document.

Read it. Does it capture the actual goal? Are the facts right? Is the classification (audience, segment, severity) aligned with what you know about the input? If the Stage 1 artifact is off, correct it now. A bad first artifact produces a bad second, a bad second produces a bad third, and every error compounds downstream. It is cheaper to fix at Stage 1 than to rework the output of Stage 4.

Stage 2: Human in the Loop

Take the Stage 1 output and run it through the Stage 2 owner the Instructions name — the Skill or Agent whose job is to audit, score, or qualify the Stage 1 artifact before it advances.

Type a message like this into the conversation — the next two blocks show the same shape for the following stages.

Run the [Stage 2 Skill or Agent] on the artifact above.
Apply the rubric the Instructions point at for Stage 2 — whatever path
you chose for that rule file during setup.

Stage 2 is where the rubric lives. A voice-check stage reads a voice rule file and flags drift; a qualification stage reads an ICP (ideal customer profile) declaration and scores a lead; a QA-review stage reads a response-quality rule and grades a draft. Some flags are valid catches. Others are false positives. Accept or reject each, and produce the audited artifact that Stage 3 will consume.

Stage 3: Build the Plan or Decision

Feed the audited artifact to the Stage 3 owner. This is usually the stage where judgment enters — the stage that produces a plan, a proposal, a remediation sequence, whatever your function's "decide what we are going to do" step looks like.

Run the [Stage 3 Agent] on the audited artifact above.
Respect the constraints in the Instructions and any rule files the
Instructions point at for this stage.

Check the output against reality. Does the plan fit your capacity? Do the named resources — channels, segments, teams, tools — actually exist in your workspace? Adjust before moving on. Stage 3 outputs are the most expensive to produce and the most expensive to rework later; the human in the loop before Stage 4 is the one to take seriously.

Stage 4: Produce the Delivered Work

Hand the approved plan to the Stage 4 owner. This is the stage that produces the actual thing that ships — the assets, the proposal document, the customer-facing response, the remediation commits.

Run the [Stage 4 Agent] on the plan above. Produce each deliverable the
plan names, applying the voice, structure, and format rules your
Instructions point at.

Every deliverable should trace back to a specific line in the Stage 3 plan, inherit voice and structure from the standing rules, and carry whatever cross-reference the Instructions require. Review each, approve or revise, and the pipeline has produced its run of output.

Later stages

Some functions stop at four stages. Some document more — a Distribution stage, a Procurement Review stage, a Post-Close Feedback stage, a Measurement stage. The pattern does not change as you add stages: each one names an owner, documents an artifact, and sits behind a human in the loop. The Instructions are the source of truth for your pipeline shape and what each stage produces.


Troubleshooting and Guardrails

The Stage 2 rubric flags too much. Your rule file is too strict. Open the rule the Stage 2 owner reads (whichever path the Instructions point at for that rule) and loosen — if every adjective is banned, allow role-specific ones; if every sentence must be under 15 words, raise the limit. Stage 2 enforces whatever you wrote, and the human in the loop is the one who has to work through every flag.

Stage 3 produces generic plans. Segments vague, recommendations bland, next steps interchangeable. The agent pulls from the Instructions, and the Instructions do not have enough specifics. Add concrete details: your actual product, real customer pain points, real operational constraints.

Stage 4 loses voice or structure. One deliverable sounds right, another sounds like a template. Strengthen the relevant rule file with format-specific guidance — "In this format, write this way. In that format, write that way."

Stage 1 is fine but Stage 3 contradicts it. Add explicit constraints when you hand work to the Stage 3 owner. A constraint declared in the Instructions ("this is a retention motion, not an acquisition motion" or "this is enterprise, not SMB") should carry through every stage; if it is not, the Stage 3 prompt needs to restate it.

When to stop and fix. Not every problem needs to cascade through the full pipeline:

  • After Stage 1: If the first artifact targets the wrong audience or misses the goal, stop. Do not send an artifact pointed in the wrong direction to the next human in the loop.
  • After Stage 2: If Stage 2 finds more than three critical issues, the artifact needs a rewrite, not a patch.
  • After Stage 3: If the plan references resources or segments that do not match reality, the plan is theoretical. Fix it before producing deliverables.
  • After Stage 4: If the deliverables do not match the plan, tighten the Stage 4 owner's definition, not the individual pieces.
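
The stop-and-fix rules above amount to gates between stages. Here is a minimal sketch of the idea in plain Python, with the more-than-three-critical-issues threshold taken from the bullets and every name an illustrative assumption:

```python
# Gates mirroring the stop-and-fix rules; field names are illustrative,
# and "more than three critical issues" comes from the Stage 2 rule above.
def gate_after_stage2(critical_issues):
    """Rewrite rather than patch when the audit finds too much."""
    return "advance" if len(critical_issues) <= 3 else "rewrite"

def gate_after_stage3(plan_resources, real_resources):
    """A plan naming resources that do not exist is theoretical."""
    missing = [r for r in plan_resources if r not in real_resources]
    return "advance" if not missing else f"fix: missing {missing}"

print(gate_after_stage2(["tone", "claim"]))
print(gate_after_stage3(["blog", "webinar"], {"blog", "email"}))
```

In Cowork the gate is you, reading the artifact at the boundary; the sketch just makes the decision rule explicit.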

Extending: Adding a Skill to the Pipeline

Once the core pipeline runs clean, extensions arrive. A last-mile format you need but no current stage produces. A specialized output your role delivers once a quarter. A new rule file one stage should consult.

The mechanics are Part 3 mechanics — you already know how to build a Skill. The pipeline question is where the Skill slots in. Three placements cover most cases:

  • Between two existing stages. The new Skill consumes the prior stage's artifact and emits a transformed artifact the next stage reads. Insert it into the pipeline section of the Instructions and rebuild the conversation flow.
  • In parallel with an existing stage. The new Skill runs against the same input as an existing stage and produces a complementary artifact — a second rubric on the same draft, a second classification on the same record.
  • As a post-processor on a stage output. The new Skill runs only after Stage N succeeds and produces a derived output the next stage would not otherwise produce — a format-specific variant, a summary, a translation.

Keep specs in a separate rule file
When the Skill depends on specs that drift (platform character limits, vendor SLAs, regulatory cadences), put them in a rule file rather than the Skill prompt. The Skill reads the rule file on every run, so you update one file instead of editing the Skill. This is the same rule-file discipline from Part 2 applied inside the pipeline.
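
The discipline is easy to illustrate in plain Python. This sketch assumes the drifting specs live in a small JSON rule file the Skill re-reads on every run; the path and field names are illustrative, not a Cowork convention:

```python
import json
from pathlib import Path

RULE_FILE = Path("rules/platform-limits.json")   # illustrative path

# The rule file holds the spec that drifts; updating it never touches the Skill.
RULE_FILE.parent.mkdir(exist_ok=True)
RULE_FILE.write_text(json.dumps({"post_char_limit": 280}))

def run_skill(draft):
    limits = json.loads(RULE_FILE.read_text())   # re-read on every run
    limit = limits["post_char_limit"]
    return draft if len(draft) <= limit else draft[:limit]

print(len(run_skill("x" * 500)))
```

When the platform changes its limit, you edit one JSON value and every subsequent run picks it up; the Skill prompt itself never changes.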

The easy path: /skill-creator. Cowork's built-in Skill-building Skill is the canonical route. Open a new conversation, type /skill-creator, and describe the stage the new Skill sits in, what it consumes, and what it emits. When the Skill file is ready, the prompt will ask where you want it stored — pick a location that fits the rest of your Skills, then add an "ALWAYS read" pointer for it to the Instructions and update the pipeline section so the next run invokes it at the right point.


Extending: Adding an Agent to the Pipeline

Skills handle one-step transformations. When the new stage needs judgment across multiple sub-steps — select among options, sequence work, decide what to include and what to drop — the primitive is an Agent, not a Skill, and Part 4's mechanics apply.

Three recurring patterns for Agent-shaped stages:

  • A scheduling or orchestration stage. Takes the prior stage's deliverables and decides when, in what order, and through which channels they reach the next destination. The sales pipeline's procurement-review stage, the content pipeline's distribution stage, the support pipeline's escalation-routing stage all fit this shape.
  • A gating stage with discretion. A rubric alone is not enough; the stage needs to decide whether an edge case clears or escalates. A human-in-the-loop approval checkpoint sits on top, but the Agent does the structured read that feeds the decision.
  • A measurement or feedback stage. Reads closed work, clusters drivers, writes structured records into Memory the next pipeline run will read. A role that needs this stage adds it alongside the others — same shape, same human in the loop, same rule-file-backed criteria.

The placement logic is the same as for Skills — name the stage in the pipeline section of the Instructions, document its owner, document its artifact, and insert it at the right boundary.

The Agents section of the Instructions names the Cowork agent type (e.g., cagents:content-marketing-manager, cagents:business-development-manager) each pattern maps to. The pipeline stage cites the pattern by name; the agent type is an implementation detail. If you change the underlying agent type later, update the Agents section of the Instructions and the pipeline keeps working.
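
That indirection can be sketched as a single lookup: the stage cites the pattern by name, and one mapping resolves it to an agent type. The dictionary structure is an illustrative assumption; only the agent type strings come from the example above:

```python
# Pattern name -> Cowork agent type, as the Agents section of the
# Instructions might declare it. Swap a value here and every stage
# that cites the pattern picks up the new agent type unchanged.
AGENT_TYPES = {
    "distribution": "cagents:content-marketing-manager",
    "procurement-review": "cagents:business-development-manager",
}

def resolve_stage_owner(pattern):
    return AGENT_TYPES[pattern]

print(resolve_stage_owner("distribution"))
```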

When to Promote a Pipeline to a Scheduled Task

Once the manual pipeline feels boring — same inputs, same review points, same outputs — that is the signal to let Cowork run it on a cadence. Scheduled Tasks fire the whole sequence on a schedule you define, using the same Instructions, standards, Skills, and Agents you built here. Part 6 — Scaling covers the automation step in detail.

Only promote a pipeline to a schedule once the one-off version has proven it works. A flaky pipeline on a cadence just produces flaky output on a cadence.


Off-Ramp 2 — What You Have Built
What you have built: A complete role-scoped pipeline — from raw input to delivered output — with a rubric check between stages, a human in the loop at every boundary, and a saved artifact at each stage so the next run can read the last one. This system works as-is and will produce results today.

What is ahead: Part 6 adds automation so the pipeline runs on a schedule, then extends the same pattern into an adjacent role to prove it generalizes. Worth doing when you are ready — but what you have now is already working for you.

With the pipeline, you hand in the input and walk through the stages. Each stage has defined inputs, defined outputs, and a human in the loop before the next one starts. Your fifth run goes through the same pipeline as your first. You can stop here; the combination of a role-scoped Project, standing rules, named Skills, and sequenced Agents is already a serious operation.


What Just Changed

No new file for the pipeline itself — it is conversation flow plus the pipeline section of the Instructions, not a dedicated pipelines folder. Future turns chain the anchor tools from Parts 3–4 through prompts you now know how to write.

Any extension Skills and Agents you added for your role land wherever the Instructions point at for Skills and Agents — the storage location is whatever you picked during setup, and the Instructions hold the "ALWAYS read" pointers that make those files findable on every run. New rule files added for the extensions follow the same pattern: the prompt asks where you want them, you pick, the pointer goes into the Instructions.

The pipeline's first run also writes a Memory entry capturing what just happened (the receipt) — and that Memory entry lives wherever the Instructions declare for Memory, not at any fixed convention. Whatever paths you chose, the Instructions are what makes them work: Cowork loads the Instructions on every turn, the pointers route reads and writes to the files you named, and the pipeline keeps running. Move a file later and update the pointer; nothing else has to change.


What Is Next

Running the pipeline manually feels productive. Typing the prompt, walking the stages, reviewing each artifact — the system works. But it only works when you show up. The inputs arrive on their own schedule; the pipeline waits for you to notice.

In Part 6 — Scaling, you promote the pipeline to a Scheduled Task, audit the full system against what the Instructions declared at the start, and prove the approach generalizes by extending it into an adjacent role. The pipeline you wired here is the input. The capstone is where it becomes something the business runs on, not something you run.