Caelan's Domain

Part 1 — Setting Up Your Workspace

Why configure a workspace at all

A chat tab produces one post, one proposal, one reply — disposable. A configured workspace produces outputs that behave consistently across sessions because the behaviour is pinned to files the workspace reads every turn. This series builds that consistent identity on Claude Cowork: a briefing, standing rules, named skills, and scoped agents. Set it up once and the next request lands against the same rules the last one did.


Set up the Cowork workspace

Step 1 — Create a Project

In the top-left of Cowork, select Projects, then click New Project. Point it at an existing folder if you already have content you want the workspace to use, or create a new one. Leave the instructions field alone for now; we'll come back to it in Step 2. Make sure memory is turned off.

  • Why memory off. Project memory isn't shareable (yet), so anything stored there is locked to the individual chat owner. Bake the rules we want to persist into the Instructions instead — that way the behaviour is portable to anyone using the project, and it's also easier to read and audit.
  • Why Projects matter. A Project is a scoped workspace with an instructions file loaded on every turn. Combine the Project with a shared folder and you get a consistent experience across staff, easier discovery, and cleanly separated conversations per workstream.
  • What's shared vs. personal. In Cowork, Projects themselves are not shared between staff — only what's in the shared folder is. That means anything you put in the Project's Instructions field stays yours alone. Useful when you want a little personalization layered on top of the shared environment without affecting anyone else.

Step 2 — Write the Instructions

Every workspace needs the Instructions (saved in CLAUDE.md, loaded on every prompt). This is the briefing the model re-reads at the start of every turn — your durable facts about who you are, what you do, and how you want the work done.

Keep the Instructions lean. Use a branching-tree approach: the Instructions are the root index that points the model to topic-specific files when a topic comes up, rather than holding everything itself. This matters because every line in the Instructions rides along on every prompt — if it's bloated, you burn through your context budget before the model does any real work.
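
As a rough sanity check on that budget, you can measure the briefing's footprint yourself. This is a minimal sketch, assuming the Instructions live at ./CLAUDE.md; the four-characters-per-token figure is a common rough heuristic, not an exact count.

```python
from pathlib import Path

def instructions_footprint(path="CLAUDE.md"):
    """Report the size of an instructions file and a rough token estimate."""
    text = Path(path).read_text(encoding="utf-8")
    chars = len(text)
    lines = text.count("\n") + 1
    est_tokens = chars // 4  # rough heuristic: ~4 characters per token
    return {"lines": lines, "chars": chars, "est_tokens": est_tokens}
```

If the estimate creeps into the thousands of tokens, that is a sign detail belongs in a branch file, not in the root.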

A branching-tree example
A lean Instructions file points outward to the files that hold the detail. The root stays short; the pointers do the heavy lifting. The exact filenames and folders are whatever you choose during setup — the prompt asks you and writes the pointers for you.

# <Role> — <Company>

This is the operating brief for all <role> work in this workspace.

## Business basics
Two or three sentences: what the business does, who it serves,
what makes it different.

## Pointers — always read these when relevant
- For voice and tone, ALWAYS read <your-voice-file> before drafting
  anything customer-facing.
- For approval rules, ALWAYS read <your-approval-file> before
  sending anything to a real person.
- For our running notes on what's working and what isn't,
  ALWAYS read <your-notes-file> at the start of any new strategy
  conversation.

The placeholders (<your-voice-file> etc.) are filled in by the setup prompt with whatever names and locations you pick. The point is the shape: a short index plus a list of explicit "ALWAYS read X for Y" pointers. Each pointer is a branch the model follows when the topic comes up.
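
One cheap check you can run after setup is whether every pointer in the root actually resolves to a file on disk. This is a minimal sketch, assuming the pointers name their targets as relative paths ending in .md — the real filenames are whatever you picked in the interview:

```python
import re
from pathlib import Path

def unresolved_pointers(instructions="CLAUDE.md"):
    """Return pointer targets named in the instructions that don't exist on disk."""
    root = Path(instructions).parent
    text = Path(instructions).read_text(encoding="utf-8")
    # crude pattern: any token that looks like a relative .md path
    targets = re.findall(r"[\w./-]+\.md", text)
    return [t for t in targets if not (root / t).is_file()]
```

An empty list means every branch the root promises is really there; anything returned is a pointer the model will follow into nothing.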

Use the prompt listed in the prompts sidebar to have Claude interview you and draft the Instructions with you. The prompt walks through the questions one at a time, pauses for your review of the draft, and writes the file only on your explicit approval. It also asks you where you want to keep future reference files — voice rules, approval rules, running notes — and adds the pointers to the Instructions for you as the series progresses.

The onboarding interview

The prompt in the sidebar walks you through the questions and saves the file only once you approve it. A few things worth knowing before you run it:

  • Nothing is saved until you say yes. Read the generated draft before approving — corrections are cheap now and expensive every turn after.
  • If your answer to a question is thin, the prompt will ask follow-ups until it has enough. You don't need to know what "enough detail" looks like — the interview pulls it out of you.

The interview takes five to ten minutes plus review time. When it's done, your Cowork folder holds the Instructions saved as ./CLAUDE.md at the root.


Where everything lives

The workspace is just files on disk. The Instructions sit at the root; anything else the workspace needs sits alongside them, wherever you decided to put it during setup.

  • The Instructions — the briefing you author, saved in CLAUDE.md. Cowork loads it at the start of every turn. You write it, you edit it, you own it.
  • Reference files — the things the Instructions point to. Voice rules, approval rules, running notes, anything else the model should read when the topic comes up. The setup prompt asks you where you want them, creates them at the location you pick, and adds an "ALWAYS read X for Y" pointer to the Instructions. They live wherever you put them — there's no fixed folder.

Everything is a file. Open it, read it, edit it, back it up, hand it off. Nothing is trapped in a UI.
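
Because it is all files, backing the workspace up is one copy operation. Here is a minimal sketch using only the standard library; the folder names are placeholders for wherever your workspace and backups actually live:

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot_workspace(workspace=".", backup_root="backups"):
    """Copy the whole workspace folder into a timestamped backup directory."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / stamp
    shutil.copytree(workspace, dest)  # fails loudly if dest already exists
    return dest
```

Run it before any big restructuring of the Instructions or reference files; restoring is the same copy in reverse.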


It's still your work

Before the next part introduces rules, skills, and agents, there's a mindset to establish. You are accountable for everything the workspace produces.

When the model drafts something and you publish it, you are the one making that claim. Your name is on it.

The drafter has no track record to lose and no instincts built from consequences. It produces plausible text — sometimes correct, sometimes not — and stops. Reviewing the work before it goes out is the part you do not delegate. The drafter changes; the accountability does not.

When the model gets it wrong

Generative AI is fundamentally predictive, not analytical. It pattern-matches against its training and produces the next likely token — it does not understand what it is saying. That means it will cheerfully connect dots that should not be connected, produce confident-sounding claims with no grounding, and stitch together plausible-looking references that do not exist. Its failures are not bugs; they are what you get when a pattern-matcher is asked to reason.

The failure modes below all share that root cause. Every one of them is the model filling a shape with whatever pattern fits, without the analytical check that would catch the error.

  • Hallucinated statistics. "Studies show 73% of buyers prefer..." with no study behind it. The most common and most dangerous failure. If a number appears, ask for the source. If it cannot produce one, cut the number.
  • Made-up references. Companies, case studies, articles, or research that do not exist. A "2024 Forrester report on SMB trends" that sounds real but was never published. Always verify external references.
  • Tone that doesn't fit the context. Tone that works in one setting can be wrong in another. A funeral home doesn't want cheerful energy. A children's education product doesn't want adult-coded humor. A security tool doesn't want a casual register — the reader is worried, and casualness reads as not taking the worry seriously. The model doesn't notice; you have to.
  • Legal claims you cannot support. "Best in category." "Guaranteed results." "Clinically proven." These are legal claims, not flourishes — each requires specific evidence.
  • Factual errors about your own work. If anything has changed since you wrote the Instructions, the model does not know. Keep the brief current.
  • Outdated tactical knowledge. Tactics that worked three years ago often don't anymore. The model doesn't know what's still working in your field today; that's on you to keep current.
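
You cannot automate the judgment in that list, but you can automate the triage: flag the lines that tend to carry these failures so the human check starts in the right place. This is a minimal sketch with a hand-picked pattern list — extend it with whatever your own drafts keep getting wrong:

```python
import re

# phrases that correlate with unverifiable or legally risky claims
RISKY_PATTERNS = [
    r"\b\d{1,3}\s?%",                 # bare percentages with no source nearby
    r"\bstudies show\b",
    r"\bclinically proven\b",
    r"\bguaranteed\b",
    r"\bbest in (?:class|category)\b",
]

def flag_risky_claims(draft):
    """Return (line_number, line) pairs that need a human source check."""
    flagged = []
    for n, line in enumerate(draft.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in RISKY_PATTERNS):
            flagged.append((n, line.strip()))
    return flagged
```

A flagged line is not necessarily wrong; it is a line you verify before it ships.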

None of this means the model is bad at its work. It means the model needs a reviewer. That reviewer is you.

When NOT to trust AI

Some outputs should never go through an AI drafting step at all. Human-written, human-reviewed, human-approved.

  • Legal statements. Terms of service, privacy policies, warranty language, regulatory disclosures. The model can help brainstorm coverage; the language comes from you or your attorney.
  • Financial projections. Revenue forecasts, ROI claims, pricing commitments. A predictive text system will happily pattern-match 30% year-over-year growth based on nothing. Financial claims need real numbers from your real situation.
  • Named-competitor comparisons. "Our product is faster than CompetitorX" is a factual claim. If wrong, potentially defamatory. Comparative work needs careful framing and evidence the model cannot validate.
  • Crisis communications. When something breaks — a recall, a breach, a public complaint — your response shapes how people see you for years. Judgment call on tone, timing, transparency.
  • Anything involving personal data. Testimonials, case studies with named individuals, outputs referencing specific people. Privacy rules vary by jurisdiction.
  • Final pricing decisions. The model can draft pricing page copy. It does not decide what the prices are.

The boundary is delegation versus abdication. Delegation means you reviewed it. Abdication means you did not.

If getting this wrong could produce a lawsuit, a regulatory fine, or a public apology, write it yourself. Use the workspace for the other 90% of the work.

Your review checklist

Frameworks are abstract. Checklists are actionable. Start with this; extend it whenever you catch a failure mode it doesn't cover.

  • Factual claims verified — every statistic, percentage, and data point traceable to a real source.
  • Voice consistent — the output sounds like you, not like generic model prose.
  • Legal and compliance reviewed — no unsupported claims, no guarantees, no regulated-category language.
  • Action or CTA (call to action) appropriate — the ask matches the content and the audience's stage.
  • No hallucinated sources — every referenced study, article, company, or quote actually exists.
  • Audience targeting correct — the output speaks to the declared audience, not a generic reader.
  • Product or process details accurate — features, prices, SLAs (service-level agreements), and capabilities match current reality.
  • Spelling and grammar clean — read it out loud. If something sounds off, fix it.
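
Until the checklist lives in a reference file, it is simple enough to keep as data and walk mechanically. A minimal sketch — the item wording below is condensed from the list above:

```python
REVIEW_CHECKLIST = [
    "Factual claims verified against a real source",
    "Voice consistent with the house style",
    "Legal and compliance language reviewed",
    "Action or CTA matches content and audience",
    "No hallucinated sources or references",
    "Audience targeting correct",
    "Product and process details current",
    "Spelling and grammar clean",
]

def outstanding_items(checked):
    """Given the set of item indices already confirmed, return what is left to review."""
    return [item for i, item in enumerate(REVIEW_CHECKLIST) if i not in checked]
```

Nothing ships while the function returns anything; that is the whole discipline in one line.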

In Part 2, this checklist gets encoded into a reference file the Instructions point at, so the check runs before you see a draft. Accountability is the mindset; the standing rules are how the mindset gets enforced mechanically.


What just changed

  • You created a Cowork Project pointed at the folder you want the workspace to use, with memory turned off.
  • You ran the onboarding interview and approved the save of your first Instructions file at ./CLAUDE.md.
  • You picked locations for the future reference files (voice, approval, notes — whatever the prompt offered), and the Instructions now contain the first set of "ALWAYS read X for Y" pointers waiting to be filled in.
  • You internalized the accountability frame: the model drafts, you decide. The list of things you never delegate. An eight-item review checklist you can use today.

The shape of the workspace is now set. The Instructions are the root; everything else hangs off them as branches the model follows when a topic comes up. Each part that follows adds another branch.

The posture for everything that follows is simple: Delegate execution. Never abdicate accountability. The model drafts, proposes, classifies, and schedules. You decide, approve, revise, and own the output that goes out the door.

Next: Part 2 — The Playbook turns the review checklist and your house standards into the first reference files the Instructions point to.