Part 6 — Scale: Automation, Cross-Function Extension, and the Complete System
Created: April 17, 2026 | Modified: April 22, 2026
Standing meetings and a pattern library
Your workspace is running the function. Structured artifacts appear when a Skill is invoked. Quality checks fire on every draft. The pipeline turns an input into a staged output set, and measurement reads last week's results before this week's plan is drafted.
You are still showing up to start every one of those jobs.
This part changes two things at once. First, you put the recurring work on the calendar — Scheduled Tasks handle production, monitoring, and reporting on a schedule you set, not whenever you remember. Second, you test whether the stack you built generalizes. You stand up the same architecture for a second function to confirm the playbook is not domain-bound. If the stack extends across unrelated functions without redesign, the pattern is real. Not a demo. Then you step back, name the pattern, and audit the complete system against what the Instructions (saved in CLAUDE.md, loaded on every prompt) declared at the start.
By the end of this part, your workspace stops being a freshly configured single-function setup and becomes what it was always meant to be — a pattern library whose instances you can stand up for any function the business needs.
Section 1 — Automation: running on autopilot
For five parts, you have been opening the Project, typing a prompt or invoking a Skill, reviewing the output, and moving on. The system works. But it only works when you show up.
Scheduled Tasks change that. A Scheduled Task is a prompt that runs on a schedule you define — daily, weekly, monthly — without you being present. Cowork opens the Project, runs the task, and saves the output for you to review later. The work happens whether you are at your desk or not.
Think of it this way. Until now, you have been running the workspace through drop-in meetings — walk over, hand off the input, wait for the result. Scheduled Tasks are standing meetings on the calendar. Monday 9am, produce this week's structured brief. Friday 5pm, compile the weekly performance digest. First of the month, run a monitoring scan. The work happens on schedule because you put it on the calendar, not because you remembered to ask.
This is the payoff for the infrastructure you built across Parts 1 through 5. The Instructions carry context. Rules constrain quality. Skills produce consistent outputs. Agents handle multi-step work. Scheduled Tasks take all of that and put it on autopilot.
Creating a Scheduled Task
Open your Cowork Project. In the left sidebar, click Scheduled Tasks. You will see an empty list — no tasks have been scheduled yet.
Click New Task. Cowork shows you a form with four fields:
Name. A label for the task. Pick something you will recognize in a list.
Schedule. How often the task runs. Presets cover daily, weekly, monthly; a custom option lets you pick specific days and times.
Prompt. The instructions the session follows when the task fires. It can invoke Skills, reference Agents, or give direct instructions. Anything available in an interactive session is available in a scheduled one.
Notifications. Whether Cowork notifies you when the task completes. Turn this on.
Fill in those four fields and click Save. The task appears in your Scheduled Tasks list with its next run time displayed.
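For concreteness, here is a hypothetical filled-in form for the Monday production task described earlier. The name, wording, and time are illustrative, not required values:

```
Name: Monday production brief
Schedule: Weekly, Monday 9:00am
Prompt: Pull this week's current inputs and produce the weekly
        structured brief. Save the output as a dated artifact at the
        Memory location the Instructions point at.
Notifications: On
```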
- What file. The Scheduled Tasks list is stored inside your Project and shown in the Cowork UI.
- When written. Entries save when you click Save in the editor, and when Cowork records each run.
- What format. UI-managed records — a prompt, a schedule, and a run history stored per task.
- How to inspect. Open the Scheduled Tasks list in the Cowork sidebar, then click a task for history.
- How to undo. Open the task in the list and pause, edit, or delete it before the next run.
Gotcha. A Scheduled Task loads the full Cowork configuration the same way a manual session does. The Instructions, Rules, Skills, Agents, and Memory all load into every scheduled run. Scheduled Tasks run in the cloud, only when Claude is open on your computer, or both — depending on your setup. Check your Claude settings for the behaviour you want. A confusing output is not a scheduling problem — it is a prompt problem, and you fix it the same way you fix any prompt.
That is the entire mechanism. You write a prompt, set a schedule, and Cowork handles the rest. The complexity is not in the tool. It is in choosing what to automate and writing prompts that produce useful output without a human in the loop.
What to put on the calendar
The scheduled tasks you write are function-specific, but the shape of the calendar is not. Every workspace has a weekly production cadence, a weekly or monthly review rhythm, and a monthly audit interval. What changes is what those rhythms produce.
- Weekly production. A short-cadence job that pulls whatever inputs are current — topics, queue items, pipeline state, incoming requests — and produces the structured artifacts the function owes the business this week.
- Weekly or monthly review. A job that reads what just happened — last week's output, the queue health, the pipeline state, the quality of responses — and surfaces the top items that need attention.
- Monthly audit. A longer-cadence job that compares documented expectations (rubrics, SLAs, renewal schedules, compliance thresholds) against actual practice and flags drift.
Same calendar shape regardless of function. You register each one in Cowork's Scheduled Tasks panel and document the set in the Instructions so the configuration stays legible to future-you.
Monitoring on a fixed interval
In Part 3 — Skills, the workspace ran a one-time scan against a set of named comparables. A one-time scan is a historical document, not a standing capability. The monthly monitoring cadence is how that scan becomes durable.
The shape is the same regardless of function. A fixed set of named targets declared in the Instructions. A point-by-point comparison against the prior scan. A relevance rating per target (HIGH / MEDIUM / LOW). A summary of the shifts that opened or closed opportunities since the last run. The output gets saved to a dated file at whatever Memory location the Instructions point at, and shows up in the next planning cycle.
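The comparison logic the scan performs can be sketched in a few lines of Python. This is a toy model, not the Skill itself: the target names, the stored-state format, and the rating rule are all assumptions, and a real scan applies function-specific criteria.

```python
# Toy model of the monthly monitoring comparison, assuming each scan is
# a dict mapping a named target to its observed state. The rating rule
# here is deliberately simplistic -- your Skill defines the real criteria.

def rate_change(previous: str, current: str) -> str:
    """Hypothetical relevance rating: unchanged targets are LOW, changed
    ones MEDIUM. A real scan has function-specific rules for HIGH."""
    return "LOW" if previous == current else "MEDIUM"

def compare_scans(prior: dict[str, str], latest: dict[str, str]) -> list[str]:
    """Point-by-point comparison against the prior scan, one line per target."""
    lines = []
    for target in sorted(latest):
        before = prior.get(target, "(new target)")
        # A target with no prior entry is new activity, flagged HIGH.
        rating = "HIGH" if target not in prior else rate_change(before, latest[target])
        lines.append(f"{target}: {rating} -- was: {before}; now: {latest[target]}")
    return lines

prior = {"Comparable A": "no pricing page", "Comparable B": "monthly newsletter"}
latest = {"Comparable A": "published pricing", "Comparable B": "monthly newsletter"}
for line in compare_scans(prior, latest):
    print(line)
```

The useful part is the shape: a fixed target list, a stored prior state, and a per-target rating that makes the diff scannable in minutes.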
Most months the output is routine — minor changes, a few MEDIUM flags, no action required. You scan it in five minutes and file it. That is fine. The value of monitoring is not in any single report. It shows up in the pattern across three, six, twelve months — a gradual shift in a named target, a new activity cluster, a pricing or positioning move caught before stakeholders ask about it.
When a HIGH flag appears, you pay attention. That is the month the automated rhythm earns its keep. You did not have to remember to check. The workspace checked for you.
Automated reporting
You built a measurement framework earlier in the course. Until now, reporting has been a manual task that happens when you remember, and does not happen when you are busy. Which means it does not happen in the weeks you need it most.
Put the weekly performance report on a schedule. Register a Friday 5pm Scheduled Task in Cowork's UI. The prompt pulls this week's metrics against last week's, flags anything that moved more than 15%, breaks out channel, activity, and content performance where applicable, and closes with flags and one concrete recommendation for next week. Keep the report under 500 words. Lead with what changed, not with what stayed the same.
The value shows up in the weeks something moves. A metric spiked. A channel went quiet. An activity that was producing consistent output dropped. The scheduled report flags the change, offers a likely cause, and recommends a specific action. You still decide. But you decide with the analysis already done.
Add a second monthly summary that connects week-over-week performance to the quarterly targets in the Instructions. Month-over-month trends, best and worst performers with data, recommendations tied to the targets rather than to the numbers in isolation.
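The movement check at the heart of the weekly prompt is simple arithmetic. Here is a sketch in Python with hypothetical metric names and numbers; the scheduled prompt does this comparison in prose, not code.

```python
# Sketch of the "flag anything that moved more than 15%" check.
# Metric names and values are hypothetical.

def flag_movers(last_week: dict[str, float], this_week: dict[str, float],
                threshold: float = 0.15) -> dict[str, float]:
    """Return metrics whose week-over-week change exceeds the threshold."""
    movers = {}
    for metric, current in this_week.items():
        previous = last_week.get(metric)
        if not previous:
            continue  # no baseline (or zero baseline) -- nothing to compare
        change = (current - previous) / previous
        if abs(change) > threshold:
            movers[metric] = round(change, 3)
    return movers

last_week = {"signups": 120, "replies": 48, "open_rate": 0.41}
this_week = {"signups": 150, "replies": 47, "open_rate": 0.40}
print(flag_movers(last_week, this_week))  # prints {'signups': 0.25}
```

Leading with the movers, rather than the full metric table, is what keeps the report under 500 words and focused on what changed.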
Review cadence
Four scheduled tasks are now registered in your Cowork Project. The workspace handles the work. But automation is not abdication. It is still your work. A scheduled task nobody reviews is not automation — it is waste.
Weekly — 15 minutes, Monday or Tuesday. Open Monday's production artifact. Scan the inputs, check flagged items, approve or adjust priority. Check Friday's performance report. If something moved, decide whether it needs a response this week.
Monthly — 30 minutes, first week of the month. Read the monitoring scan and decide on responses to HIGH flags. Read the monthly performance summary. Compare progress against the quarterly targets in the Instructions. Update the Instructions if priorities have shifted. Changes flow into every scheduled task on its next run.
Quarterly — 60 minutes. Step back. Are the scheduled tasks producing useful output? Are you actually reading the reports? Has the business changed enough that the prompts need updating? Rewrite prompts, add new tasks, retire old ones, adjust schedules. Automation runs the same prompt every time. The business does not stay the same.
Section 2 — The pattern library: extending across functions
Your production pipeline now runs on its own schedule. Artifacts appear when the calendar fires them. Performance reports land on a fixed schedule. Monitoring intelligence arrives on the first of every month. That is the end of the single-function build.
It is also the beginning of something more interesting. The approach you used to build this workspace — context, constraints, tools, pipeline, measurement, automation — was never really about that one function. It was a system-design approach that happened to produce one configured workspace. Other functions in the business have the same shape: recurring tasks, quality standards, multi-step workflows, outcomes worth measuring. The question is whether the same approach extends without redesign.
If the stack extends into functions it was not designed for, the approach holds. This section is the proof. You do not rebuild anything. You stand up a new Cowork Project with the same building blocks — a new Instructions file pointing at a fresh set of standards, Skills, and Agents (you pick where these live for the new function, same pattern as the original) and Scheduled Tasks registered in Cowork's UI. Every surface from the first build transfers — no new feature is introduced.
The shape transfers, the vocabulary tracks the function
Whether the new function is revenue-facing, response-facing, or back-office, the architecture is the same. Two or three anchor Skills. A few Agents — a qualifier, a planner, a producer, a measurement agent. Four or five standards files — voice or output standards, process, handoff, authority, refusal. A calendar that pairs weekly production with weekly or monthly review and a monthly audit. Pipeline stages that move from intake through review through plan through produced work. Measurement that tracks whatever metric the function actually owes the business.
Point at any of those configurations and the architecture is the same. The contents of the files change. Standards, Skills, Agents, Schedules — each populated with function-specific content. The Cowork building blocks — a Project, the Instructions, Rules, Skills, Agents, Memory, Scheduled Tasks — do not care which function they are running. They hold the shape. You fill it in.
You did not build one workspace. You built a template for any function. Hiring, finance reporting, compliance, product operations, partnerships, internal enablement — each one applies the same five-step pattern with its own vocabulary.
The pattern extends across domains it was not designed for. The system is real. Not a demo.
Now name what you just did.
Section 3 — Capstone: the pattern and the complete system
Everything you built for one function follows a single pattern. The second-function build in Section 2 follows the same pattern, and that extension earned the right to this retrospective: without at least one second application, the "pattern" would be a claim. The extension made it a demonstration.
The pattern
Five steps:
1. Identify the need. What recurring function takes too much time, produces inconsistent results, or falls through the cracks? Scope it to one function.
2. Write standards. Codify the standards as files at locations you choose, then add "ALWAYS read X for Y" pointers in the Instructions. Standards turn subjective quality ("does this sound right?") into checkable criteria the model loads on every prompt.
3. Build a Skill or Agent. Structured input/output tasks become Skills. Multi-step decision-making tasks become Agents. A scorecard-shaped tool is a Skill; a multi-step review that produces a plan is an Agent. A named qualifier with fixed fields is a Skill; a weekly hygiene review that surfaces the top risks is an Agent.
4. Wire into the pipeline. Each tool's output is the next tool's input. The Instructions record the wiring explicitly so the next session reads the same sequence you do.
5. Automate. Schedule recurring execution so the work happens without you starting it. Register Scheduled Tasks in Cowork's UI — schedule, prompt, output artifact, and whether the task requires your approval before writing anything outside the Memory location the Instructions point at. Document the set in the Instructions.
Here is how that pattern played out across the course:
Need: a single recurring function scoped to one workspace
→ Standards: function-specific rules (Part 2)
→ Skill: the workspace's first structured tool (Part 3)
→ Agent: the workspace's first multi-step composition (Part 4)
→ Pipeline: stage artifacts wired in sequence (Part 5)
→ Automate: Scheduled Tasks (this part)
The pattern works for anything. The need changes — qualification that gets improvised every time, reply quality that is uneven, vendor evaluations that happen in one-offs, candidate screens that depend on who sat in the room. The standards, Skill, Agent, pipeline, and automation line up differently for each one. The architecture is identical.
That pattern is the real product of this course. The function you built is one application. You can stand up another configuration for any repeatable function in the business using the same framework. The skills transfer because the architecture transfers.
Where you could have stopped
- End of Part 1: Project, Instructions, review framework in place. A disciplined drafting partner with context.
- End of Part 2: add Rules. Function-specific standards enforced on any task. Enough for many small operations — a drafting partner with your standards encoded.
- End of Part 5: Skills, Agents, and Pipeline running. Type an input, walk through stages, get the structured output. This is where the stack stops feeling like "a chat tab with rules" and starts feeling like a configured operating system for the function.
- End of Part 6 (here): full stack, automated, generalized across functions. The pattern is named. Apply it to any repeatable function. The weekly performance report and monthly summary scheduled in this part are the closing loop: findings land in Memory, Memory feeds the next plan.
The complete system
You started with an empty Cowork Project, an Instructions file at the folder root, and a prompt that said "interview me about this function." Six parts later, the workspace produces structured artifacts, checks them against standards, composes Agents into pipeline stages, runs on a schedule, and maps cleanly onto any adjacent function you might want to configure next. That is not a chatbot. That is an operating configuration.
The complete system as a flow:
The Instructions (business context + declared structure)
+ Memory (learned decisions)
+ Rules (standards and constraints)
→ Skills (structured tools)
→ Agents (multi-step compositions)
→ Pipeline (wired workflow)
→ Scheduled Tasks (automation)
→ Memory (results feed back in)
Notice the loop. Memory feeds back into the system. When a Scheduled Task posts the weekly report, that finding enters Memory. The next time an Agent drafts a plan, it reads the memory entry. The system does not just execute — it keeps a record of past decisions the next session reads before drafting.
Trace the journey back through the waypoints. You started in Part 1 with an empty Project and the Instructions. You wrote standards into Rules in Part 2 — The Playbook. You built Skills in Part 3 and Agents in Part 4, then wired them into a working choreography in Part 5 — The Pipeline. You scheduled the weekly report and monthly summary in this part to keep the loop running on its own. Each was an off-ramp where a reasonable operator could have stopped and kept a working system. You did not stop. That is why the loop closes here.
Your final architecture
Below is every file in your Cowork folder and every surface Cowork manages for you. Point at any node and know which part wrote it — regardless of which function the workspace is configured for.
The Instructions are a branching-tree index of explicit pointers. The model loads CLAUDE.md on every prompt, sees the pointers, and follows them to whatever files you stored your Rules, Skills, and Agents in. The tree below is conceptual, not a fixed directory layout — file names and folder locations are whatever you chose during the Part 2–4 setup interviews:
your-cowork-folder/
└── CLAUDE.md (the Instructions — loaded on every prompt)
    │
    ├── "For tone, ALWAYS read <voice-standard-file>."
    ├── "For format, ALWAYS read <output-standard-file>."
    ├── "For handoff steps, ALWAYS read <process-standard-file>."
    │
    ├── "Use Skill: <first-skill-name>" → <skill-file-or-folder>
    ├── "Use Skill: <second-skill-name>" → <skill-file-or-folder>
    │
    ├── "Use Agent: <planner-agent-name>" → <agent-file>
    ├── "Use Agent: <composer-agent-name>" → <agent-file>
    ├── "Use Agent: <measurement-agent-name>" → <agent-file>
    │
    ├── Pipeline stages → dated artifacts at <memory-location>
    └── Scheduled Tasks → registered in Cowork's UI, listed here for reference

+ Cowork Memory (external) — decisions Cowork retains across sessions
+ Scheduled Tasks (Cowork) — recurring runs registered in the Cowork UI
The Instructions are the mechanism that makes the tree real. Cowork reads CLAUDE.md on every prompt and follows each "ALWAYS read" pointer to the file you put there. Move a standards file, update the pointer, and the tree follows. Nothing is hidden in a fixed convention — every file in your configuration has a pointer line you can read with your eyes.
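The pointer mechanism can be sketched in a few lines of Python: scan the Instructions text for "ALWAYS read <path>" phrases and check that each pointed-at file exists. The pointer wording, the regex, and the sample paths are all assumptions about one possible setup, not Cowork's actual parser.

```python
# Sketch of resolving "ALWAYS read <path>" pointers from an Instructions
# file. The pointer phrasing and sample paths are hypothetical.
import re
from pathlib import Path

def resolve_pointers(instructions: str, root: Path) -> dict[str, bool]:
    """Map each pointed-at path to whether a non-empty file exists there."""
    # Grab the token after "ALWAYS read", dropping trailing quotes/periods.
    paths = [m.rstrip('."') for m in re.findall(r"ALWAYS read (\S+)", instructions)]
    return {p: (root / p).is_file() and (root / p).stat().st_size > 0
            for p in paths}

instructions = ('For tone, ALWAYS read standards/voice.md.\n'
                'For format, ALWAYS read standards/output.md.')
print(resolve_pointers(instructions, Path(".")))
```

A pointer that resolves to False is exactly the kind of drift the capstone audit later in this part is built to catch.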
| File / Surface | Introduced in | Purpose |
|---|---|---|
| CLAUDE.md (the Instructions) | Part 1 | Persistent business context Cowork reads on every prompt. The branching-tree index of "ALWAYS read X for Y" pointers to the standards, Skills, Agents, Pipeline section, and Scheduled Tasks you have configured. |
| Standards files | Part 2 | One file per standard — voice, output format, process, approval criteria. Stored where you chose during setup, referenced from the Instructions by explicit pointer. |
| Skill files / folders | Part 3 | One file or folder per structured input/output tool. Stored where you chose, pointed at from the Instructions. |
| Agent files | Part 4 | One file per multi-step composition. Stored where you chose, pointed at from the Instructions. |
| Pipeline stage artifacts | Part 5 | One artifact per pipeline stage, written to whatever Memory location the Instructions declare. |
| Cowork Memory | Part 1 | Decisions, results, and learned context Cowork retains across sessions. |
| Scheduled Tasks | Part 6 | One entry per recurring cadence, registered in the Cowork UI and listed in the Instructions. |
Point at any file in the configuration, read the row that names it, and know exactly which part wrote it. When a behaviour surprises you, open the Instructions, follow the pointer to the file in question, and read it. The method works because every piece of the configuration is visible, named, and reachable through the tree of pointers.
The capstone audit
Before you close this course, run the capstone audit. The final step in this part's prompts sidebar produces a dated capstone-audit artifact at the Memory location the Instructions point at — a structured report that walks the configuration and verifies every declared building block exists where the Instructions say it does.
The audit is deliberately generic. It does not care which function the workspace is configured for. It checks:
- The Instructions are present. CLAUDE.md exists at the Cowork folder root and names the function, the standards, Skills, Agents, and Scheduled Tasks that make up the configuration — each as an "ALWAYS read X" pointer or a Cowork UI reference.
- Every standards pointer resolves. For each "ALWAYS read X for Y" pointer in the Instructions, confirm the file at the pointed-to path exists and is not empty.
- Every declared Skill exists. For each Skill named in the Instructions, confirm the file or folder it points at holds a Skill definition.
- Every declared Agent exists. For each Agent named in the Instructions, confirm the file it points at is present.
- Every declared schedule is registered. For each Scheduled Task listed in the Instructions, confirm it appears in the Cowork Scheduled Tasks panel with the schedule you set, the correct prompt, an output artifact path, and the right approval-gate setting.
- Every pipeline stage has at least one saved file. For each stage in the Pipeline section of the Instructions, confirm the Memory location it writes to holds at least one dated file.
The audit output lists each check with pass / fail / skipped and — for every failure — the specific pointer or registration that did not resolve. That report is the closer. It is the honest measurement of whether you built a configured workspace or just read about one.
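The audit loop itself can be sketched in Python, assuming each check reduces to a named true/false test. The check names and file paths are hypothetical; the real audit walks whatever your Instructions declare.

```python
# Minimal sketch of the capstone audit loop: run each named check,
# record PASS or FAIL per line. Paths and check names are hypothetical.
from pathlib import Path
from typing import Callable

def run_audit(checks: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run each check and produce one report line per check."""
    report = []
    for name, check in checks:
        try:
            ok = check()
        except OSError:
            ok = False  # an unreadable path counts as a failed check
        report.append(f"{'PASS' if ok else 'FAIL'}: {name}")
    return report

root = Path(".")
checks = [
    ("Instructions present", lambda: (root / "CLAUDE.md").is_file()),
    ("Voice standard resolves", lambda: (root / "standards/voice.md").is_file()),
]
for line in run_audit(checks):
    print(line)
```

Every FAIL line names a specific pointer or registration to fix, which is what makes the report actionable rather than decorative.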
What just changed
You registered three Scheduled Tasks in the Cowork sidebar — a weekly production run, a monthly monitoring scan, a weekly performance report — plus a monthly summary, and you documented the set in the Instructions. You sketched three adjacent functions against the same six Cowork building blocks, confirming each maps onto the same six-step pattern without redesign. You ran the capstone audit and verified every item declared in the Instructions exists as a file or a Cowork UI entry. The Instructions tree above shows the final architecture. The table above names every surface. The pattern named in Section 3 is the approach that produced all of it.
What is next
Six parts ago, you opened an empty Project and started writing context. You authored the Instructions. You wrote the standards. You built the Skills, composed the Agents, wired the pipeline, put the work on the calendar, and audited the complete system against what the Instructions declared. The configuration is not a new setup anymore. It is a pattern library whose instances you can stand up for any function the business needs.
Thank you for building this.
Delegate execution. Never abdicate accountability.