Claude Cowork: A 45-Minute Crash Course
15 Concepts, 80% of Real Use
A practical crash course in Claude Cowork: what it is, what to use, and what to watch out for. No filler, no upsell. By the end you'll know what the major pieces do, when to reach for each one, and where the failure modes hide.
The single insight that makes everything else click: using Cowork well is a delegation problem, not a prompting one. Almost every Cowork decision, from how you describe a task to which folders Claude can see to whether you flip on "act without asking," comes back to one question: how much oversight does this task actually need from me, and where am I currently giving too much or too little? Read each section through that lens.
This is different from chat and different from OpenClaw. Chat is "answer my question." OpenClaw is "act on my real life over WhatsApp." Cowork is "do this knowledge-work task on my desktop while I do other things." The technical primitives look familiar (sessions, plans, skills, sub-agents), but the discipline is new: knowing when to watch closely, when to walk away, and when to pull back authority you should never have given.
Current as of April 2026. Cowork ships fast: this guide names concepts and command shapes, not exact menu paths. When in doubt, check the official help center. Installation instructions live there too.
You have a Pro, Max, Team, or Enterprise plan; the Claude Desktop app installed on Mac or Windows; and basic comfort opening folders and choosing files. You don't need a terminal. You don't need to know what an agent is: that's what this is for.
Open Claude Desktop. Three tabs run across the top: Chat, Cowork, Code. Click Cowork. The window splits into three panels you'll see again and again throughout this guide:
- Conversation panel (left): where you type prompts and read replies, just like Chat.
- Execution panel (center): where Cowork shows the plan, the file operations in progress, and any warnings. This is the panel you watch.
- Artifacts panel (right): where finished files (drafts, reports, spreadsheets) appear for preview or download.
One more piece of the UI you'll use constantly: the + button next to the prompt box. That's the entry point to add Connectors, install Plugins, browse Slash commands, and grant Folder access. Whenever a section below says "add a connector" or "install a plugin," it lives behind that +.

If Claude Desktop is already installed, you can run your first Cowork task before reading another paragraph:
- Click the Cowork tab at the top of the window.
- Click Grant Access and choose a folder. Make ~/Claude-Workspace/first if you don't have a working folder yet.
- Type one read-only prompt into the Conversation panel: "List the files in this folder. Don't open or read any files yet."
- Watch the file list appear in the Execution panel.
That single round trip is your installation acceptance test. If you don't have Cowork yet, the official help center has the install link; come back when it's running.
Part 1: Foundations
1. What Cowork actually is
Most people's first reaction to Cowork is "Claude with file access." That's the what, and it misses the why.
Cowork is an agent that lives in the Cowork tab of your Claude Desktop app, alongside the Chat and Code tabs. You point it at a folder, describe an outcome (not a sequence of steps), and Claude plans the work in the Execution panel, pauses for your approval, then executes: reading files, writing files, running code in a sandboxed VM, calling connected services, and dropping a finished deliverable into the Artifacts panel. The same agentic architecture that powers Claude Code, but wrapped in a desktop app for non-coding work. In Anthropic's framing: chat is built around the prompt; Cowork is built around the outcome.
The mental shift that matters: this is not a chatbot you query. It's a worker you assign. "Summarize this PDF" is a query. "Take the eight customer-interview transcripts in this folder, identify the top three pain themes, and produce a one-page brief I can hand to product" is an assignment. The first works fine in chat; the second is what Cowork is for.
That delegation shift is what makes Cowork useful, and what changes the failure modes. In chat, the worst case is a wrong answer: annoying, but contained. In Cowork, the worst case is a confidently executed wrong action that touched dozens of your files. The crash course is mostly about learning to delegate at the right level, with the right oversight, on the right kinds of work.
2. The architecture in three pieces
You don't have to understand the architecture to use Cowork, but knowing the three big pieces saves confusion when something doesn't behave the way you expect.
The Desktop app is where Cowork lives. It runs locally on your Mac or Windows machine. The app must stay open and the computer must stay awake for tasks to make progress. There's no separate server; if your laptop sleeps, Cowork pauses, and if you close the app mid-task, the task stops where it left off. Reopen the app and the session is in your task history; you can often resume by describing what's left, but Claude doesn't pick up automatically from a hard interrupt. Plan around this: long-running tasks need a wakeful, app-open machine for their full duration.
The task loop is the core mechanic. You describe an outcome, Claude analyzes the request and creates a plan, you approve / redirect / refine the plan, Claude executes (sometimes spawning sub-agents to work in parallel), Claude pauses for approval before significant actions (or doesn't, in "act without asking" mode), and you receive a finished deliverable. Most of the discipline of using Cowork well lives inside this loop.
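The loop above can be written down as control flow. This is a conceptual sketch of the behavior just described, not Cowork's implementation; every name in it is invented for illustration.

```python
# Illustrative control flow for the Cowork task loop described above.
# Not Anthropic's implementation; all names here are invented.

def task_loop(outcome, plan_fn, review_fn, execute_step, ask_before_acting=True):
    """Plan, let the user approve or redirect, then execute with approval gates."""
    plan = plan_fn(outcome)                      # Claude drafts a plan
    decision = review_fn(plan)                   # you approve or redirect
    while decision != "approve":                 # a redirect rewrites the plan
        plan = plan_fn(outcome + " | " + decision)
        decision = review_fn(plan)
    results = []
    for step in plan:
        if ask_before_acting and step["significant"]:
            if review_fn([step]) != "approve":   # pause before significant actions
                continue                         # a declined step is skipped
        results.append(execute_step(step))
    return results                               # the finished deliverable
```

The point of the sketch: "act without asking" only removes the per-step pause, not the up-front plan review, and a declined step simply doesn't happen.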
The execution surface is where Claude actually does work. Three layers:
- Local files in folders you've explicitly granted access to.
- Code execution inside an isolated virtual machine on your computer. Unlike OpenClaw, you do not configure this sandbox yourself: Anthropic manages it. You don't need to know how it works to use Cowork well; what you do need to know is that the sandbox is around code execution, not around Claude's direct desktop and browser actions, which are separate surfaces.
- External services through connectors (Slack, Google Drive, Gmail, etc.) and through Claude in Chrome for browser-based work.
The sandbox around code execution is real and Anthropic-managed; you don't configure it the way you would in OpenClaw. What you control is what's outside the sandbox: which folders Claude can see, which connectors are turned on, which approval mode you're in.
3. Folders, connectors, approvals: the trust model
This is the most important section even though it feels like setup.
Folder access is how you tell Cowork which parts of your filesystem are in-scope. The first time you switch to the Cowork tab you'll see a "Grant Access" or "Choose Folder" button: click it, navigate to your folder, confirm. Claude cannot read or write outside what you've granted. The single most-leverage habit in this whole course: make a dedicated working folder (something like ~/Claude-Workspace/) and grant Cowork access to that, not your entire home directory or your Documents folder. When something goes wrong (a misnamed file, an over-aggressive batch operation), the blast radius is the working folder, not your life. Once granted, type a small read-only prompt into the Conversation panel just to see the approval flow in its smallest form:
List the files in this folder and tell me what kinds of things
are here. Don't open or read any files yet.
You'll see the file list appear in the Execution panel. Cowork did not need approval to read; it needs approval before it writes.
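The dedicated-working-folder habit is one directory's worth of setup. A sketch in Python; the folder names are this guide's examples, not requirements.

```python
# Create the dedicated workspace this guide uses. Folder names are the
# guide's examples; any dedicated location works.
from pathlib import Path

def make_workspace(root):
    """Create a Cowork working folder with the subfolders used in this guide."""
    root = Path(root)
    for sub in ("first", "follow-ups"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root

# e.g. make_workspace(Path.home() / "Claude-Workspace")
```

Grant Cowork access to this folder and nothing broader; that's the blast-radius containment in one move.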
Connectors extend Cowork into external services. Add them by clicking the + button next to the prompt box, then selecting Connectors. A browser window opens for OAuth, you sign in, you grant scopes, you return to Cowork. Two categories worth distinguishing: web connectors (Google Drive, Gmail, Slack, Notion, Calendar, GitHub: services Cowork talks to over Anthropic-hosted remote MCP servers), and desktop extensions / local MCP servers (run on your machine, often with deeper system access). Both appear behind the + menu and both expand what Cowork can do, but the trust profiles differ. Desktop extensions run with the same permissions as any other program on your computer, so the bar for installing them is meaningfully higher.
Each connector is a separate decision regardless: do you trust this enough to let Claude read and (sometimes) write through it on your behalf? Granting Gmail read scope lets Cowork summarize threads on your behalf; granting send scope lets Cowork also write messages that appear from your address. The first is a privacy decision; the second is a representational one. Read the scopes before you click connect, and prefer the narrower set when both work. (Manage what's connected anytime via Settings > Connectors or Manage connectors from the + menu.)
Approval modes govern how Claude behaves when it wants to take a significant action. Two modes, set per task in the Cowork sidebar:
- Ask before acting (default). Claude posts the plan into the Execution panel and pauses; an Approve / Redirect prompt sits at the bottom of the panel until you respond. Slower, safer. Use this until you have a feel for the work.
- Act without asking. Claude works through the entire plan without pausing per step. Faster, riskier. Anthropic's docs are direct: use this only when you're actively supervising the screen and working with trusted files. Even here, deletions still require explicit permission.
The approval table is asymmetric on purpose: reads happen automatically; writes, modifications, deletions, and moves all require an explicit click before Cowork proceeds.
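That asymmetry can be written down as a tiny policy table. This is a sketch of the behavior described above, not real Cowork configuration; the names are invented for illustration.

```python
# The asymmetric approval policy, sketched as data. Illustrative only:
# these names are invented, not real Cowork configuration.
APPROVAL_REQUIRED = {
    "read":   False,  # reads happen automatically
    "write":  True,
    "modify": True,
    "delete": True,
    "move":   True,
}

def needs_approval(action, act_without_asking=False):
    if action == "delete":
        return True               # deletions stay gated in both modes
    if act_without_asking:
        return False              # other actions flow through unpaused
    return APPROVAL_REQUIRED[action]
```

Note where the modes differ and where they don't: "act without asking" waves through writes, modifications, and moves, but deletions stay gated either way.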
The pattern that works: leave approvals tight for the first two weeks. Watch what Claude wants to do. Notice the patterns of approvals you keep granting for the same kind of action: that's the work that's safe to delegate more autonomously. Notice the approvals where you actually had to think: that's the work that needs to stay supervised.
Anti-pattern: switching to "act without asking" because the prompts feel slow on day three. The prompts are how you build calibration. Skipping them is how you end up with a confidently-executed mistake across 40 files.
Your first real Cowork task: a multi-source follow-up brief
The trust model is the foundation. The next thing to build is the muscle memory of running an actual Cowork task end to end. This is the canonical "first real Cowork task." It's bounded, it teaches the multi-connector pattern, and it produces a deliverable you'd actually want.
You had a sales call yesterday. The rep took notes in Notion. There was a Slack thread with the prospect's questions. You promised to send a follow-up email. The naive way is to re-read everything yourself and draft it. The Cowork way is to delegate the assembly so you can spend your time on what the email actually says.
Step 1: Verify your connectors. This task needs Slack and Notion (and Gmail if you want Cowork to draft into the email itself rather than handing you the text). If they're not already connected, install them via the + button > Connectors > Browse. Each one walks you through OAuth and asks what scopes to grant. Read the scopes. "Read messages in channels I'm a member of" is fine; "Read messages in all channels" is broader than you probably need.
Step 2: Make a working folder. ~/Claude-Workspace/follow-ups/. Click Grant Access in the Cowork tab and select it. The drafted email and any reference notes will land here.
Step 3: Describe the outcome. Outcome-first, not steps-first. End the prompt with Ask me 1-2 clarifying questions before you start. Surfacing unstated assumptions before execution is the cheapest quality lever Cowork has.
I had a sales call with Acme yesterday afternoon. I need to draft a follow-up
email. Sources:
- The Notion page "Acme Discovery Call - 2026-04-29" has my rep's notes
- There was a Slack thread in #acme-deal yesterday where they asked
questions during the call
The email should:
- Thank them and reference one specific thing from the call
- Answer the two questions they asked in the Slack thread
- Suggest next steps (proposal walkthrough, timeline)
- Match my normal email tone (direct, no throat-clearing)
Save the draft to ~/Claude-Workspace/follow-ups/ as a .md file.
Ask me 1-2 clarifying questions before you start.
Step 4: Answer the clarifying questions. Cowork will ask things like "Do you want me to commit to a specific timeline, or should I leave it as 'we can schedule whenever works'?" and "Should I CC anyone on your team, or is this just to the prospect?" Two questions, ninety seconds, dramatically better deliverable than if you'd skipped this step.
Step 5: Read the plan. Cowork will post the plan into the Execution panel: read the Notion page, read the Slack thread, draft an email matching your tone constraints, save to the folder. Look for:
- Did Cowork find the right Notion page? (It might propose a similar-looking page from a week ago.)
- Did Cowork identify the right Slack thread? (Channel name might match more than one thread.)
- Are the constraints from your prompt represented in the draft plan?
If the wrong sources were picked, redirect instead of approving: "Use the page titled 'Acme Discovery Call' dated 2026-04-29, not the one from last week."
Step 6: Approve and watch. Cowork reads from Notion, reads from Slack, drafts the email, saves it. The first time you run something like this, watch every step in the Execution panel: you're calibrating what kinds of decisions Cowork makes well versus where it drifts. After three or four of these, you'll have the pattern.
Step 7: Review the deliverable. Open the markdown file from the Artifacts panel. Does the email read like you wrote it, or like an AI wrote it? Does it actually answer the two Slack questions, or did it gloss over one? Edit. The goal isn't a perfect first draft; it's a 70% draft you finish in five minutes instead of a 0% draft you spend thirty minutes assembling.
What to notice. Five delegation decisions you just made:
- Connector trust: granted Slack and Notion access at appropriate scopes, not maximum scopes.
- Folder scope: chose a working folder, not a broader part of the filesystem.
- Outcome framing: described the deliverable, let Cowork propose how to assemble it.
- Plan review: caught the source-selection question (right Notion page) before execution.
- Approval mode: stayed in "ask before acting" because this involves untrusted content from a third-party (the prospect's Slack messages, which Cowork is reading and synthesizing).
That's the template. Every multi-source synthesis task you ever run has this shape: connectors, folder, outcome, plan review, execute, review. Once you've done it once, the second time is mostly muscle memory. The rest of this crash course is the discipline behind each of those five decisions.
Why this example over "organize my downloads." Sorting a folder teaches the rhythm of approval prompts but doesn't push you through the connector trust model or the multi-source synthesis pattern, which is where Cowork actually pays off versus chat. Anyone can sort files; what's distinctive about Cowork is assembling deliverables across tools you already use. Better to learn that pattern on day one.
Part 2: Context, sessions, and projects
If you came from the Claude Code or OpenClaw crash courses, this section will feel familiar. Same primitives, same pitfalls, slightly different surface.
4. The plan is the leverage
Every Cowork task starts with a plan that appears in the Execution panel before any file operation runs. Most users skim it, click Approve, and watch the work go sideways twenty minutes later. The plan is not a formality: it's the cheapest place in the entire workflow to course-correct.

What to actually look at when a plan appears:
- Scope: is Claude proposing to touch only the files you described, or has the scope crept? "Sort this folder" should not turn into "rename everything in three subfolders."
- Order: does the sequence make sense, or has Claude jumped to a destructive step before a verification step?
- Tools: is Claude proposing to use a connector you didn't expect? An MCP server you forgot was installed?
- Assumptions: what is Claude assuming about file formats, naming conventions, or your preferences that it shouldn't be?
If the plan is wrong, you don't have to start over. Type a one-sentence redirect into the Conversation panel instead of clicking Approve:
Skip step 3, and for step 4 use the column headers in the existing
template instead of creating new ones.
Cowork rewrites the plan in the Execution panel and re-asks for approval. Two minutes of plan review prevents two hours of cleanup.
5. Context still costs money
Every message Cowork sends to the model includes the system prompt, your global instructions, the project's instructions, the conversation so far, the contents of files Claude has read this session, and any active skill content. That all costs tokens, and the bill is yours.
Two practical implications:
- Don't dump entire folders into context unprompted. If you say "read every file in this folder," Claude does it, and you've just paid to load potentially hundreds of files into the conversation. Better: ask Claude to list first, propose what matters, then read only those.
First, list this folder and tell me which files matter for
[my question]. Read only those, then summarize.
- End long sessions cleanly. When a task is done, start a new session for the next one. Carrying yesterday's conversation into today's task pays for context you no longer need.
The same compaction discipline that applies in coding agents applies here: less context, used deliberately, beats more context dumped in hope.
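The cost difference is easy to ballpark. A rough sketch using the common ~4-characters-per-token heuristic; the file counts and sizes are made up for illustration.

```python
# Ballpark token math for "read everything" vs "list first, read only
# what matters". 4 chars/token is a rough heuristic, not a tokenizer.

def approx_tokens(num_files, avg_chars_per_file, chars_per_token=4):
    return (num_files * avg_chars_per_file) // chars_per_token

dump_all  = approx_tokens(200, 8_000)  # "read every file in this folder"
selective = approx_tokens(3, 8_000)    # list first, then read the 3 that matter

print(dump_all, selective)  # 400000 vs 6000 in this made-up example
```

And because the conversation is resent on every turn, whatever you load is paid for again each time you send a message, which is why ending long sessions cleanly compounds the savings.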
6. Projects: the persistent workspace
Cowork supports projects: persistent workspaces that bundle a set of folders, a set of files, custom instructions, scheduled tasks, and (within projects) memory that persists across sessions.
This is the right unit of organization for recurring work. Not "weekly report" as a one-off chat, but "weekly report" as a project with the source folder attached, the template file pinned, instructions on tone and format, and a scheduled task that runs every Friday afternoon.
The two failure modes:
- Putting everything in one project. Context bleeds. The project that's supposed to be "Q1 financial analysis" starts pulling in instructions you wrote for "marketing copy review" because you put them in the same project two months ago. Separate projects for separate workstreams.
- Standalone sessions for recurring work. Memory in standalone Cowork sessions is not retained. If you find yourself re-explaining the same context every Tuesday, that's a signal: the work belongs in a project.
The 80/20 rule: every recurring task is a project; every one-off task is a standalone session.
Part 3: Rules and instructions
Cowork has a layered instruction system. Knowing the layers saves you from the most common confusion: "why did Claude ignore what I said?"
7. Global, folder, and session instructions

Three layers, in order of how broadly they apply:
- Global instructions. Set once in Settings > Cowork. Apply to every Cowork session you ever run. Use for things that are true about you across all work: your role, your preferred tone, output formats you always want, background context that shouldn't be re-typed.
- Folder instructions. Attach to a specific folder. Apply when that folder is in scope. Use for things that are true about that body of work: naming conventions, the structure of files in that folder, project-specific terminology. Claude can also update folder instructions on its own during a session as it learns about the folder's structure.
- Session prompts. What you type for the current task. Use for the actual goal of this task.
The mistake nearly everyone makes is putting everything in global instructions. The result is a 3,000-token system prompt that costs you on every turn and confuses Claude with rules that don't apply to most tasks.
The right model: global is sparse, folder is specific, session is the goal. Global should fit in two short paragraphs. Folder instructions live with the work they describe. Session prompts state the outcome.
A working example of good layering:
Global:
I'm a marketing analyst at a mid-size SaaS company. I write in concise,
direct prose. Default to markdown for documents, .xlsx for any tabular
output, and skip the throat-clearing intros.
Folder (Q1-campaign-analysis/):
This folder contains weekly campaign reports from Jan-Mar 2026. Files are
named YYYY-MM-DD-campaign-report.csv. Conversions in column G, spend in
column H. The "control" segment is always row 2.
Session prompt:
Compare conversion rates across the 12 reports in this folder. Identify
the top 3 weeks and what they had in common. One-page summary.
Notice what's not in global: the naming convention, the column structure, the segment definitions. Those belong with the folder.
8. The "ask me questions before you execute" pattern
A pattern from Anthropic's own best-practices docs that's worth internalizing: instead of stating the task and hoping Claude got it, end with "ask me 1-2 clarifying questions before you start."
For non-trivial tasks, this surfaces unstated assumptions that would otherwise become bugs. "Should I include the canceled subscriptions in the count?" / "Do you want this sorted by date or by impact?" Two questions, ninety seconds, a much better deliverable.
This is the cheapest quality lever in Cowork and almost no one uses it.
Part 4: Extending Cowork
Cowork extends in four major ways. Decision tree:
- Need Claude to follow a specific procedure when a matching task comes up? Skill
- Need Claude to read or write through an external service? Connector
- Need a packaged bundle of skills and connectors for a specific role? Plugin
- Need to give Claude a richer or different surface to act on? MCP / desktop extension
Every one of these is a delegation tool, not just a feature. Skills shape how Claude works. Connectors shape where Claude can reach. Plugins shape what role Claude is playing. MCPs shape what surfaces Claude can act on. Same problem as in coding agents, different shapes.
9. Skills
Cowork skills are AgentSkills-compatible: same SKILL.md format you'd see in Claude Code, OpenClaw, or any other Anthropic-stack tool. A skill written for one of those tools is close to working in Cowork, often without changes. The portability is real.
Three ways skills enter your Cowork:
- From the directory. Click the + button next to the prompt box, choose Skills (or Customize > Skills), browse, click install. Anthropic publishes their own; the community has published thousands. Once installed, the skill appears in your + > Slash commands menu and auto-fires when a task description matches it.
- Generated by Claude. Cowork ships with a /skill-creator workflow. Type /skill-creator into the Conversation panel and describe a task you do every week: Claude asks clarifying questions in-line, generates the skill, runs an evaluation against test cases, and saves it for you. This is the cheapest path to a first custom skill, and the one most users skip because they don't know it exists.
- Authored manually and uploaded. Build a folder containing a SKILL.md (and any supporting files), ZIP the folder, and upload via Customize > Skills > Upload. The ZIP must contain the skill folder as its root, not nested inside another folder. Custom skills uploaded this way are private to your account; on Team and Enterprise plans, owners can provision skills org-wide instead.
A minimal SKILL.md:
---
name: weekly-brief
description: Generate the user's weekly status brief from a folder of meeting notes
---
1. List files modified in the last 7 days in the current project folder.
2. Read each meeting-notes file (filename matches *meeting*.md).
3. Read each project file modified this week (filename matches *project*.md).
4. Produce a one-page brief with:
- 3 bullet "what shipped"
- 3 bullet "what's at risk"
- 1 paragraph "next week's focus"
5. Save as weekly-brief-YYYY-MM-DD.md in the current folder.
The description is the most important field. It's what Claude uses to decide whether the skill applies. Vague descriptions ("helps with weeks") fire on everything; specific ones ("Use when the user asks for the weekly brief...") fire only when relevant.
Two ways skills get used: Claude auto-invokes them when a task description matches the skill's description, or you explicitly invoke them by typing / in the Conversation panel to open the slash-command menu (examples: /debug, /deck-check) and selecting one. Typing the skill name directly in your prompt also works: Claude recognizes when a skill applies.
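For the manual-upload path, the "skill folder at the ZIP root" rule is the step people get wrong. A sketch with Python's standard zipfile module, assuming a folder layout like the weekly-brief example above; the folder name is just this guide's example.

```python
# Package a skill folder so it sits at the ZIP's root (not nested inside
# a wrapper folder). "weekly-brief" is this guide's example skill name.
import zipfile
from pathlib import Path

def zip_skill(skill_dir, out_zip):
    skill_dir = Path(skill_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(skill_dir.rglob("*")):
            if f.is_file():
                # arcname keeps paths like "weekly-brief/SKILL.md" at the root
                zf.write(f, arcname=f.relative_to(skill_dir.parent))

# e.g. zip_skill("weekly-brief", "weekly-brief.zip")
```

The arcname trick is the whole point: zipping the folder's parent-relative paths gives you `weekly-brief/SKILL.md` at the archive root, whereas zipping from inside a wrapper directory produces the nested layout the uploader rejects.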
A useful efficiency note: the bulk of skill content typically loads on demand rather than up front. The YAML frontmatter and a brief description load when the skill is registered; the full body loads when a task actually matches. So installing many skills costs less context than you'd expect: the gating is built into the architecture. Don't read this as license to install dozens speculatively, but do read it as permission to install the ones you might actually use without worrying about ambient cost on every turn.
Skills are trusted code running in your Claude environment, sometimes with access to install third-party packages from PyPI or npm. Anthropic's own docs are direct: only install skills from sources you trust, and read the contents of community skills before enabling them. The OAuth tokens, API keys, and connector credentials in your Cowork session are reachable from a misbehaving skill in ways that aren't always obvious.
10. Connectors
Connectors link Cowork to external services. Two categories:
- Web connectors. Run through browser-based APIs. Google Drive, Gmail, Slack, Notion, Calendar, GitHub. The default first connectors most users add.
- Desktop extensions (MCPs). Run locally on your machine, often with deeper system access. The catalog has hundreds; the trust requirements are higher.
Where connectors get powerful is combinations. One connector is useful; three connectors that work together unlock workflows that didn't exist before. "Pull last week's Slack thread on the Acme deal, cross-reference with the Notion page on Acme's renewal, and draft a follow-up email." That's three connectors in one task, and the answer would have taken a human 20 minutes of context-switching to assemble.
Discipline: install a new connector when you have a specific workflow it unlocks. Don't install connectors speculatively. Each one expands the surface area where things can go wrong, including new prompt-injection vectors from content on the other side of those services.
11. Plugins
Plugins are bundles. Each plugin packages one or more skills, connectors, slash commands, sub-agents, and configuration into a single download. Anthropic launched plugin support for Cowork in early 2026, open-sourcing eleven of their own internal plugins at the same time, covering most common business functions plus a meta-plugin for building your own.
That open-source set, anthropics/knowledge-work-plugins, is the canonical starting point. They're built and used by the Anthropic team internally, they're MIT-licensed, and they're meant to be forked and customized for your specific tools and conventions. Most users' first plugin should be one of these, modified rather than written from scratch.
Concretely: a "Sales" plugin from that set might bundle skills for call prep and outreach drafting, connectors for Salesforce and Slack, namespaced slash commands like /sales:call-prep and /sales:pipeline-review, and sub-agents that handle subtasks like sourcing comparables. Once installed, the skills auto-fire when relevant; the slash commands appear in your sidebar; the connectors are wired up.
The namespacing matters: plugins prefix their slash commands with the plugin name (/sales:call-prep rather than just /call-prep). When you install several plugins, this prevents collisions: two different plugins can each ship a call-prep command without overwriting each other.
For Enterprise customers, admins can publish private plugin marketplaces and auto-install approved plugins for new team members. That's the feature that turns Cowork from a personal-productivity tool into a team-knowledge tool: institutional workflows and conventions encoded as plugins, deployed to everyone, evolving over time.
For individuals: install plugins through the + button > Plugins > Add plugin, or build your own using the Plugin Create workflow inside Cowork (same pattern as /skill-creator). After install, the plugin's slash commands appear in your + > Slash commands menu and its connectors are pre-wired. Plugins are file-based: every component is a markdown or JSON file, so editing one is the same skill as editing a skill.
A plugin may install third-party MCP servers and software that run with the same permissions as any other program on your machine. Anthropic-Verified plugins have undergone additional review; non-verified plugins should be reviewed before install. Each plugin you add expands Cowork's surface area in ways that aren't always obvious, including new prompt-injection vectors from whatever data sources the plugin's connectors reach.
12. Sub-agents
This is the feature that turns Cowork from a faster chat into a categorically different tool, and the one most users underuse because they don't know to invoke it.
When Claude gets a task that breaks into parallel work, it can spawn sub-agents: parallel workers that each handle a piece simultaneously. Instead of reading 20 files sequentially, Claude can dispatch four sub-agents that read five files each in parallel. Each sub-agent works in its own context, which keeps the main session's context clean: what comes back to your main thread is the sub-agent's result, not the raw files it read to produce it.
You can tell sub-agents fired by watching the Execution panel: instead of one linear stream of file reads, you'll see multiple parallel workers progressing at once, often labeled by their slice of the work (e.g., "transcript 3 of 12", "dimension: mobile experience"). When they finish, the panel collapses back to a single thread for the synthesis step, and the Conversation panel surfaces only the combined result. If you watch the panel and see a long sequential stream of reads on a task that should have parallelized, that's the cue your prompt didn't make the parallelism obvious enough; redirect with "split this into N sub-agents, one per [item]" and Cowork rewrites.
The exact token accounting (whether sub-agent tokens count against your usage cap, against your context budget for the parent session, or are independent in both senses) varies by plan and product version, so check your plan's specifics if cost matters to you. The qualitative point holds across all configurations: a 30-minute sequential job often becomes a 5-minute parallel job, and your main session doesn't bloat from the work.
You don't write special syntax for this. You frame the task to make the parallelism obvious, and Cowork dispatches sub-agents automatically. Three patterns that reliably trigger parallelization:
The fan-out pattern. "For each of these N items, do X." The N is the load-bearing word: it tells Claude there are independent units of work that can be split.
"Process each of the 12 customer-interview transcripts in this folder. For each one, produce a one-page summary covering pain points, feature requests, and buying signals. Then synthesize a top-level themes document across all 12."
The first sentence is the parallelizable part; the second sentence is the synthesis that has to happen after, in the main session. Cowork sees this structure and dispatches 12 sub-agents (or fewer, working in batches) for the per-transcript work, then collects the results.
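The shape of that dispatch, in miniature. A conceptual sketch only: `summarize` stands in for a sub-agent, and Cowork does this orchestration itself; you never write this code.

```python
# Fan-out then synthesize, in miniature. Conceptual only: Cowork does
# this dispatch for you; the worker stands in for a sub-agent.
from concurrent.futures import ThreadPoolExecutor

def fan_out(items, worker, synthesize, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_item = list(pool.map(worker, items))  # parallel, order-preserving
    return synthesize(per_item)                   # sequential synthesis step

themes = fan_out(
    ["transcript-1", "transcript-2", "transcript-3"],
    worker=lambda t: f"summary of {t}",           # per-transcript slice work
    synthesize=lambda s: "; ".join(s),            # main-session combine
)
print(themes)
```

Two properties of the sketch carry over to the real thing: the per-item work is independent (no worker sees another's slice), and the synthesis step runs once, sequentially, with only the results in view, which is exactly why the main session's context stays clean.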
The dimension pattern. "Analyze X across N dimensions." Each dimension is independent enough to dispatch.
"Audit our pricing page across these dimensions: messaging clarity, competitive positioning, conversion friction, mobile experience, accessibility. Score each, then prioritize what to fix."
Five sub-agents, one per dimension, then a main-session synthesis.
The compare pattern. "Compare A and B." Two independent reads can run in parallel.
"Read last quarter's strategy doc and this quarter's strategy doc. For each one, extract the top three priorities. Then identify what changed and what didn't."
Two sub-agents, one per document, then a main-session diff.
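Cowork dispatches and collects sub-agents for you, but if it helps to picture the fan-out pattern, it has the same shape as a thread-pool map: independent workers each process one slice, then a single synthesis step sees only the results. A toy Python sketch of that analogy; `summarize` and the file names are hypothetical stand-ins, not anything Cowork exposes:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for what one sub-agent does with its slice.
def summarize(transcript: str) -> str:
    return f"summary of {transcript}"

transcripts = [f"transcript-{i:02d}.txt" for i in range(1, 13)]

# Fan-out: each worker handles one transcript; several run at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    summaries = list(pool.map(summarize, transcripts))

# Synthesis: a single step that sees only the results, not the raw files,
# the same way the main session sees sub-agent outputs, not their context.
themes = "\n".join(summaries)
```

The structural point carries over: the per-item work is independent and order-free, and only the small results flow back to the step that combines them.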
When not to invoke sub-agents. The orchestration has real overhead. Spinning up sub-agents, tracking their progress, collecting their results, synthesizing: all of that costs time and tokens. For small tasks, the overhead exceeds the gain. Three categories where sequential is better:
- Genuinely sequential work. Each step depends on the last. "Read the file, fix the bug it describes, then run the tests" is three dependent steps, not three parallel ones. Sub-agents would just sit idle waiting on each other.
- Small batches. Three files isn't worth parallelizing; the orchestration overhead costs more than the time savings. Twelve files is worth it. The threshold is somewhere around 5-7 items in practice.
- Tasks where coherence across items matters more than throughput. If the right output for item 3 depends on what was decided about items 1 and 2, sub-agents fragment the reasoning. Sequential keeps the context together.
A debugging note. When sub-agent runs go wrong, the symptom is usually consistency drift: sub-agents made different choices about the same edge case because each one only saw its own slice. The fix is to put the consistency rules into the main task description, not into the sub-agent prompts (which Claude generates and you don't directly write). Telling Cowork up front "use the same naming convention across all summaries: lowercase-hyphenated, dated YYYY-MM-DD" gets passed down to every sub-agent. Discovering the inconsistency after the fact and trying to fix it post-hoc means re-running the whole batch.
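To make the naming rule in that example concrete: "lowercase-hyphenated, dated YYYY-MM-DD" is mechanical enough that every sub-agent can apply it identically. A minimal sketch of what the convention pins down; the helper and title are invented for illustration:

```python
from datetime import date
import re

def brief_filename(title: str, on: date) -> str:
    # Lowercase; collapse runs of non-alphanumerics to single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}-{on:%Y-%m-%d}.md"

print(brief_filename("Weekly Brief: Mobile Experience", date(2026, 4, 13)))
# weekly-brief-mobile-experience-2026-04-13.md
```

A rule this precise leaves sub-agents no edge cases to decide differently; "use sensible names" does not.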
The right mental model: sub-agents are best for embarrassingly parallel work, i.e., many independent units, each with the same shape, where the only thing that matters is doing them all and assembling the output. The more your task fits that description, the bigger the parallelism payoff.
Part 5: Safety and the autonomy ladder
This section is where Cowork's most important discipline lives, and where the "delegation problem" thesis earns its place.
Anthropic's product page is explicit: Cowork is not currently suitable for HIPAA, FedRAMP, or FSI-regulated work. If your job involves protected health information, federal information systems, or regulated financial services data, do not point Cowork at folders or connectors carrying that data unless Anthropic's enterprise guidance has explicitly changed and your compliance team has signed off. The safety practices below are necessary but not sufficient for regulated contexts.
13. The autonomy ladder
There's a spectrum from full oversight to full autonomy, and you climb it task by task as you build calibration.
- Watching closely. Default mode, novel task. You read the plan carefully, you watch every approval prompt, you stop and redirect at the first sign of drift. This is week one for any new kind of task.
- Ambient supervision. You've done this kind of task a few times. You read the plan, approve, then check in periodically while doing other work. Most regular Cowork use lives here.
- Walk away. You trust the task pattern. You start it, leave the room, come back to a finished deliverable. Reserve for tasks you've watched succeed multiple times.
- Act without asking. Claude works through the plan without pausing for per-step approval. Faster, riskier. Use only when (a) you're actively supervising the screen, (b) the files and sites are trusted, and (c) you can hit stop the moment something looks wrong. Even here, deletions still require explicit approval.
- Scheduled tasks. Claude runs the work on a cadence (daily, weekly) without you watching at all. Reserve for tasks that have run successfully under supervision multiple times.
The mistake is climbing this ladder too fast. The discipline is climbing it deliberately, one rung per task type, and being willing to step back down when a task type changes (new data source, new connector, new edge case) until you've recalibrated. The Part 6 walkthrough takes one task all the way up the ladder, from supervised first run to scheduled run, and shows what this looks like in practice.
14. Prompt injection is a real attack class
Prompt injection happens when a malicious document, webpage, or email contains instructions that try to hijack Claude into doing something you didn't ask for: exfiltrating files, sending messages, disabling safeguards. The instructions look like normal text to you; Claude reads them as commands.
This isn't theoretical. The combination of (a) Claude reading content you didn't author, (b) Claude having access to your files and connectors, and (c) "act without asking" mode means a single poisoned input can move through your system fast.
- Don't run "act without asking" on tasks that involve untrusted content: emails from strangers, web pages you didn't choose, documents from unknown senders. The whole point of "ask before acting" is to give you a chance to notice when Claude is about to do what the injected content asked for rather than what you asked for.
- Be careful with new MCPs and plugins. Each one is a new ingestion point. The plugin you installed last week might process content from a connector you trust, and that content might carry an injection.
- Watch for scope creep in the plan. If the proposed plan in the Execution panel names files, folders, or connectors you didn't mention, do not click Approve. Either type a one-sentence redirect ("only touch the inbox-review/ folder; do not write to anything else") or close the task and start over. Scope creep is the symptom of either an injection or a confused model.
- Hit Stop the moment things drift mid-task. The Stop button on the active session halts execution immediately. If the Execution panel shows Cowork opening a file, calling a connector, or sending a message you didn't authorize, click Stop first and ask questions after. A halted task is recoverable; a sent email or deleted file is not.
Anthropic's docs are direct about this risk. The mitigations are real but not perfect. The user-side defense is staying in "ask before acting" mode for any task that touches untrusted content.
15. Scheduled tasks need extra care
The fastest way to create a scheduled task is from inside one you've already run: type /schedule in the Conversation panel and Cowork opens a Create scheduled task modal pre-filled with that task's prompt. Fill in Name (e.g., daily-briefing), tweak the Description (the prompt body Cowork will run on the schedule), optionally assign Work in a project so the task inherits project instructions, pick an approval mode (Ask keeps you in the loop), and choose Frequency from the dropdown:
- Manual (runs only when you trigger it; useful for tasks you want pre-configured but not on a clock)
- Hourly
- Daily
- Weekdays (Monday-Friday only)
- Weekly
Click Save. The task now lives under Scheduled in the left sidebar of the Cowork tab, alongside Projects, Live artifacts, Dispatch, and Customize. From that page you see every scheduled task, run any of them on demand, edit them, or delete them. The page also has a Keep awake toggle that tells your OS to suppress sleep during the windows when a task is due. (You can also hit New task on that page to create one from scratch instead of via /schedule.)

Schedules only run while your computer is awake and the Desktop app is open. During those windows, the task runs without you watching, which is why the autonomy-ladder rule applies in its strictest form here: if you wouldn't already trust this task in "walk away" mode, don't schedule it. You can't course-correct a task you're not watching.
What works well as a scheduled task:
- Information-gathering jobs (compile yesterday's sales data, summarize Slack channels, check a folder for new files).
- Tasks with bounded outputs (always produces a file in a specific folder, never sends mail, never makes purchases).
- Tasks you've watched succeed at least three times under supervision.
What doesn't work as a scheduled task:
- Anything that sends messages on your behalf without final review.
- Anything that takes financial actions: purchases, payments, transfers.
- Anything that operates on sensitive files (HR, legal, financial records) without an explicit human-review step.
- Anything that processes content from people you don't know.
Build the deliberate path: supervised, then walk-away, then scheduled, with at least a week between each step.
Part 6: A complete scheduled worked example
You ran a one-off multi-source brief at the top of this guide. This second walkthrough is the inverse: a recurring task you eventually trust enough to schedule. It walks the autonomy ladder deliberately, from supervised first run to scheduled run, showing what Concept 13 looks like in practice.
A weekly research-synthesis task, scheduled
The morning brief from OpenClaw, but for desktop knowledge work and as a recurring job.
You read industry news every Monday, and synthesizing it eats your morning. You want Cowork to do the synthesis on Sunday night so Monday morning is just review.
Step 1: Make it a project, not a session. This is recurring, so create a Cowork project. Name: "Industry weekly brief." Add the relevant folders and connectors (your RSS pipeline, a Google Drive folder where you save articles, the Notion page where you keep ongoing themes).
Step 2: Project instructions.
This project produces a weekly industry brief, delivered Monday at 8am.
Sources:
- Articles saved to /weekly-brief/articles-this-week/
- Slack #industry-news channel from the past 7 days
- Notion page "Ongoing Themes" - topics already on my radar
Output:
- Top 3 stories (one paragraph each, with link)
- 1 paragraph "what changed for our space this week"
- Up to 3 new themes that didn't exist last week
- Save as weekly-brief-YYYY-MM-DD.md to the project's root folder
Tone:
- Direct. No throat-clearing. Assume reader is technical.
- If a story is hyped but actually a nothing-burger, say so.
Step 3: Run it once manually. Don't schedule yet. Trigger the task while watching, end-to-end. Check the deliverable. Check what Claude pulled from each source. Notice what got missed and what got included that shouldn't have. Refine the project instructions accordingly.
Step 4: Run it once more, manually. Check again. If it's good twice in a row with no edits, you're ready to schedule.
Step 5: Schedule it.
Run this brief every Sunday at 9pm.
Cowork sets the schedule. Confirm the task shows in your scheduled tasks list. Confirm you understand: this only runs while your laptop is awake and Desktop is open.
Step 6: First scheduled run. Monday morning, the brief is in the folder. Read it as you normally would your industry news. Did Claude get it right? File feedback into the project instructions: "In future briefs, please cluster mentions of the same company across sources rather than repeating them."
What to notice. This walks the autonomy ladder deliberately:
- Manual run with watching: supervised mode.
- Manual run again: checking calibration.
- Scheduled, but you're reviewing the output the next morning: walk-away mode with downstream review.
- Eventually, after six or eight successful runs, the brief becomes ambient. You trust it; you read it like any other newsletter.
What's in this you can reuse. The shape (project, instructions, manual runs, schedule once trusted, feedback loop) is the template for every recurring Cowork workflow. Friday cleanup, Monday brief, daily inbox triage, end-of-month bookkeeping. Same five steps, different content.
Part 7: Where to grow
Connector combinations are where the real value lives
The first month of Cowork is mostly single-connector tasks. The second month is where multi-connector workflows start. The Slack-search-plus-Notion-cross-reference-plus-email-draft pattern is the example most people remember; the actual win is whatever specific combination cuts twenty minutes out of your week.
The way to find these: notice the multi-tool tasks you keep doing manually. Any sentence that contains "and then I open the other tab to..." is a candidate. Build that as a Cowork task. If it works, it becomes a saved pattern. If it works repeatedly, it becomes a project.
Audits, like before
Once a month: review what Cowork has access to. Folders. Connectors. Skills. Plugins. Scheduled tasks. The same accumulation problem applies as in any agentic tool: last month's experimental connector is this month's permanent surface area you forgot you had.
Ten minutes. Skip it for six months and you'll find your assistant has access to four things you don't remember granting and three scheduled tasks you don't remember setting up.
How to actually get good at this
Reading this crash course doesn't make you good at Cowork. Using it does, and the path is the same shape as it was for the previous tools in the series.
You start manual. You feel friction: every plan you have to read, every approval prompt, every "wait, why does it want that connector." That friction is the curriculum. Each piece of friction maps to one of the concepts above:
- "Why does Cowork keep formatting the report wrong?" Global or folder instructions are missing the format spec.
- "Why does it want to touch files I didn't mention?" The plan has scope creep; redirect, don't approve.
- "Why is it slow on this batch of 20 files?" Frame the task to make sub-agent parallelism obvious.
- "Why am I describing this same workflow every Tuesday?" That's a project, not a session.
- "Why did it just send something it shouldn't have?" "Act without asking" mode on a task that wasn't ready for it.
Build the response when you hit the problem, not before. Your global instructions should be two paragraphs, not twenty. Your project list should have three projects before it has ten. Your "act without asking" usage should be earned, not defaulted.
The 80/20 isn't memorizing concepts. It's noticing which one a given problem belongs to, fast enough that you reach for the right tool. That noticing is the skill.
The portability dividend. Cowork's skill format is shared with Claude Code and OpenClaw. The plan-then-execute pattern is the same. Sub-agents work the same way. The thinking transfers; the surfaces change. Once you've built delegation calibration in one tool, the next tool is mostly learning where the buttons live. (For the coding side, see the Claude Code and OpenCode crash course; the discipline is the same shape.)
Start with one task. Use a working folder. Read the plan. Approve cautiously. Audit monthly. The rest builds itself.
First week path
If you want a concrete sequence rather than a bag of concepts:
- Day 1. Install Claude Desktop and sign in. At the top of the window, click the Cowork tab (next to Chat and Code). Make ~/Claude-Workspace/ in Finder or Explorer, then click Grant Access in the Cowork tab and select that folder. Type one read-only prompt ("List the files in this folder") and watch the result appear in the Execution panel. That single round trip is your installation acceptance test.
- Day 2. Run one low-stakes task. Pick a multi-source synthesis like the follow-up email pattern from Part 6A, or sort a folder if you don't have connectors set up yet. Stay in "ask before acting." Watch every prompt.
- Day 3. Write your global instructions. Two short paragraphs. Your role, your tone, your default formats. Resist writing more.
- Day 4. Pick one recurring task you do manually each week. Make it a Cowork project. Add folder access and any obvious connectors.
- Day 5. Run that recurring task manually inside the project. Capture what worked into project instructions. Don't schedule yet.
- Day 6. Run it manually a second time. Refine. Notice what Cowork got wrong twice: that's a pattern that needs to be written into instructions.
- Day 7. Audit what you've installed: which folders, which connectors, which skills. Decide what stays. Schedule the recurring task only if both manual runs were clean.
By the end of week one, you should have one supervised one-off pattern and one in-progress recurring workflow, with a permission profile that fits your actual usage rather than the defaults. Add the second recurring workflow in week two; don't try to automate everything in week one.
Quick reference
The 15 concepts in one line each
- What Cowork actually is: an agent that runs on your desktop, plans-then-executes, returns finished deliverables. Delegate, don't query.
- The architecture in three pieces: Desktop app (where it runs), task loop (plan, approve, execute), execution surface (files, sandboxed VM, connectors).
- Folders, connectors, approvals are the trust model. Dedicated working folder; per-connector decision; "ask before acting" until calibrated.
- The plan is the leverage. Read it before approving. Two minutes of plan review beats two hours of cleanup.
- Context still costs money. Don't dump folders into context unprompted. End long sessions cleanly.
- Projects: the persistent workspace. Recurring work goes in projects; one-offs stay as sessions.
- Global, folder, and session instructions stack. Global is sparse, folder is specific, session states the goal.
- Ask-clarifying-questions pattern: end task descriptions with "ask 1-2 clarifying questions before you start." Cheapest quality lever.
- Skills are AgentSkills-compatible. Auto-invoke on description match, or type / in the sidebar to browse. Use /skill-creator to generate your first custom skill. Read third-party skills before installing.
- Connectors link to external services. Each one is a separate trust decision. The wins are in combinations.
- Plugins are bundles of skills, connectors, sub-agents, and namespaced slash commands. Start with the open-source knowledge-work-plugins set; review non-verified plugins before install.
- Sub-agents parallelize embarrassingly-parallel work: fan-out, dimension, and compare patterns. 30 minutes becomes 5 minutes for batch jobs. Skip for small batches and sequential work.
- The autonomy ladder: watch closely, ambient supervision, walk away, act without asking, scheduled. Climb deliberately.
- Prompt injection is real. Don't use "act without asking" on tasks that touch untrusted content.
- Scheduled tasks need stricter trust. If you wouldn't already trust the task in walk-away mode, don't schedule it.
Action quick-ref
| Want to... | How |
|---|---|
| Switch from Chat to Cowork | Click the Cowork tab at the top of the desktop app |
| Grant folder access | Add folder via the sidebar before starting a task |
| Add a connector | Settings > Connectors > Browse |
| Install a skill | Customize > Skills > Browse for the directory, or Upload for a ZIP of a custom skill folder |
| Trigger a skill manually | Type / in the sidebar to browse, or describe the task naturally |
| Generate a custom skill | /skill-creator |
| Set global instructions | Settings > Cowork > Global instructions |
| Set folder instructions | Available when a folder is in scope |
| Make a project | New project in the Cowork sidebar |
| Schedule a task | Run it manually first; then type /schedule in the session |
| Switch to act-without-asking | Per-task toggle (use sparingly) |
| Stop a running task | Stop button in the active session |
Trust-level decision tree
New kind of task?
-> Ask before acting. Watch every prompt.
Done this kind of task a few times?
-> Ask before acting. Check in periodically.
Done this kind of task many times, all clean?
-> Walk away. Review the deliverable.
All of the above + bounded output, no messages, no purchases?
-> Eligible for scheduling.
Task involves untrusted content (stranger email, unknown web pages)?
-> Stay in ask-before-acting. Never act-without-asking.
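The tree above is mechanical enough to read as a function. A toy encoding, where the inputs are the questions the tree asks (nothing Cowork exposes); the "a few times" threshold of three clean runs is illustrative, matching the guide's watched-three-times rule for scheduling:

```python
def trust_level(runs_clean: int, untrusted_content: bool,
                bounded_output: bool) -> str:
    """Map the decision tree's questions to an autonomy rung."""
    if untrusted_content:
        # Stranger email, unknown web pages: never act-without-asking.
        return "ask before acting (always)"
    if runs_clean == 0:
        return "ask before acting; watch every prompt"
    if runs_clean < 3:
        return "ask before acting; check in periodically"
    if bounded_output:
        return "eligible for scheduling"
    return "walk away; review the deliverable"

print(trust_level(runs_clean=5, untrusted_content=False, bounded_output=False))
# walk away; review the deliverable
```

Note the ordering: the untrusted-content check comes first and overrides everything else, which is the point of the last branch in the tree.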
Audit checklist (monthly)
- Which folders does Cowork have access to? Still want all of them?
- Which connectors are enabled? Each one still in active use?
- Which skills and plugins are installed? Anything you don't recognize?
- Which scheduled tasks are running? When did each one last succeed?
- Global instructions: anything stale or contradictory?