
How to use Claude Code like its creator (proven workflow that delivers results)
Learn Boris Cherny's battle-tested Claude Code workflow: parallel sessions, plan mode, verification loops, and automated quality checks that boost productivity by 70%.
Most developers use Claude Code like a fancy autocomplete tool. Meanwhile, some teams at Anthropic have 90% of their code written using Claude Code, with productivity per engineer growing almost 70%. The difference isn't the tool—it's the workflow.
Boris Cherny is a Staff Engineer at Anthropic who helped build Claude Code, and his approach to using it has become legendary in developer circles. When he shared his workflow on X, it sparked what observers called "a viral manifesto on the future of software development".
Here's how to use Claude Code the way its creator does—and why this approach works so well.
What makes Boris's workflow different from most developers?
Claude Code is Anthropic's agentic coding tool that lives in your terminal, but most people treat it like a single assistant. Boris sees it differently: not as a tool you use, but as capacity you schedule. He distributes cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready.
The official Claude Code documentation covers basic installation and commands, but Boris's approach transforms it into a productivity multiplier through three core principles:
- Parallelization over complexity - Multiple simple sessions beat one overloaded session
- Planning before execution - Always start with plan mode to avoid costly mistakes
- Verification loops - Give Claude ways to check its own work
How do you set up multiple Claude Code sessions effectively?
Boris runs 5 Claude Code sessions in parallel in his terminal, numbered 1-5. System notifications tell him when a session needs input. He also runs 5-10 more sessions on claude.ai/code, sometimes teleporting between local and web with --teleport.
Here's how to replicate this setup:
Terminal Setup:
- Open Claude Code from your browser at claude.ai/code or install locally
- For local installation on macOS/Linux, use the recommended install script: `curl -fsSL https://claude.ai/install.sh | bash`
- Number your terminal tabs 1-5 for easy switching
- Start some sessions from your phone each morning for tasks that can run autonomously
Enable System Notifications: Configure your terminal to notify you when Claude needs input. This lets you context-switch efficiently while maintaining momentum across multiple workstreams.
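One way to wire this up is Claude Code's hooks system. The sketch below registers a Notification hook in settings.json that fires a desktop notification whenever Claude is waiting on input; the osascript call assumes macOS, so substitute notify-send or similar on Linux:

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Claude needs input\" with title \"Claude Code\"'"
          }
        ]
      }
    ]
  }
}
```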
Web Sessions: Use Claude Code on the web at claude.ai/code with no local setup required. Run tasks in parallel, work on repos you don't have locally, and review changes in a built-in diff view.
The magic happens when one agent runs a test suite, another refactors a legacy module, and a third drafts documentation simultaneously.
What's the secret to Boris's planning workflow?
If Boris's goal is to write a Pull Request, he uses Plan mode, and goes back and forth with Claude until he likes its plan. From there, he switches into auto-accept edits mode and Claude can usually 1-shot it. A good plan is really important!
Here's the step-by-step planning process:
Step 1: Start in Plan Mode. Most sessions start in Plan Mode (press Shift+Tab twice). State the goal and let Claude propose an approach before it touches any files.
Step 2: Iterate Until the Plan Is Right. Don't rush to implementation. Developers who skip planning to save time often spend more time fixing mistakes. Plan Mode isn't training wheels: it's measuring before you cut.
Step 3: Switch to Execution Mode. Once you have a solid plan, enable auto-accept mode for seamless execution. The upfront planning investment pays off in reduced back-and-forth during implementation.
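You can also launch a session directly in a given mode from the shell. A small sketch, assuming the --permission-mode flag available in recent Claude Code builds:

```bash
# Start a session in plan mode: Claude plans and discusses, but makes no edits
claude --permission-mode plan

# Once a plan is settled, a follow-up session can start in auto-accept mode
claude --permission-mode acceptEdits
```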
How should you handle the CLAUDE.md file?
Boris's team has turned CLAUDE.md, a special file that Claude automatically pulls into context when starting a conversation, into a learning system. Each team at Anthropic maintains a CLAUDE.md in git to document mistakes and best practices, so Claude can improve over time. Boris often uses the @claude tag on coworkers' PRs to add learnings to CLAUDE.md, ensuring knowledge from each PR is preserved.
What to include in CLAUDE.md:
- Coding standards and style conventions
- Common mistakes to avoid
- Project-specific architecture decisions
- Testing requirements
- Deployment procedures
Currently, Boris's team's CLAUDE.md is 2.5k tokens - substantial context but not overwhelming. This becomes shared context, collective learning, institutional memory that survives individual sessions.
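Putting those pieces together, a skeletal CLAUDE.md might look something like this (every entry is illustrative, not Anthropic's actual file):

```markdown
# CLAUDE.md

## Coding standards
- TypeScript strict mode; no `any` in new code

## Common mistakes to avoid
- Never hand-edit generated files; change the source schema instead

## Architecture decisions
- All API handlers live in src/api/; keep business logic out of handlers

## Testing
- Run the unit suite before proposing a commit; new features need tests
```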
What model and permissions setup works best?
Boris makes two controversial choices that actually improve results:
Model Choice: Always Opus. Boris prefers Opus 4.5 with thinking for all coding, valuing its higher quality and reliability over Sonnet despite its slower speed. Most people should probably just use the defaults (Opus until you hit 50% usage, then a switch to Sonnet for cost efficiency), but Boris prioritizes quality over speed.
Permissions: Structured, Not Dangerous. For security, Boris almost never uses --dangerously-skip-permissions. Instead, he uses /permissions to allowlist commonly used bash commands that are safe in his environment. This spares him unnecessary permission prompts on commands like `bun run build:`, `bun run test:`, `cc:*`, and many others.
Set up team-shared permissions: Boris recommends creating a shared settings file—called settings.json—that lives in your codebase. This lets you pre-approve common commands and block risky ones. Instead of every engineer configuring these preferences individually, everyone inherits the same sensible defaults.
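A minimal checked-in settings file in that spirit might look like the following; the allow/deny structure is Claude Code's permission-rule format, and the specific rules are illustrative:

```json
{
  "permissions": {
    "allow": [
      "Bash(bun run build:*)",
      "Bash(bun run test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(curl:*)",
      "Read(./.env)"
    ]
  }
}
```

Commit it as .claude/settings.json in the repo root so everyone inherits it automatically.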
Why is verification the most critical component?
The most important thing to get great results out of Claude Code: give Claude a way to verify its work. If Claude has that feedback loop, it will 2-3x the quality of the final result.
Types of Verification:
- Browser Testing: Claude tests every change to claude.ai/code using the Chrome extension. It opens a browser, tests the UI, iterates until the code works and the UX feels good
- Test Suites: Automated unit, integration, and end-to-end tests
- Command Execution: Let Claude run and interpret the results of bash commands
- Mobile Testing: Use simulators for mobile app verification
Verification Automation: Set up "stop hooks," automated actions that trigger when Claude finishes a task. For example, you can set up a stop hook that runs your test suite, and if any tests fail, it tells Claude to fix the problem and finish testing instead of stopping. You can just make the model keep going until the thing is done.
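As a sketch of that pattern, register a script like the one below as a Stop hook command in settings.json (same hooks schema as the notification example earlier). In Claude Code's hook convention, exit code 2 blocks the stop and feeds stderr back to Claude; the bun run test command is an assumption, so substitute your own suite:

```bash
#!/usr/bin/env bash
# stop-gate.sh: runs whenever Claude tries to finish a task.
# If the test suite fails, exit code 2 returns the failures to
# Claude as feedback instead of letting the session stop.
if ! bun run test > /tmp/stop-gate.log 2>&1; then
  echo "Tests are still failing; fix them before finishing:" >&2
  tail -n 20 /tmp/stop-gate.log >&2
  exit 2
fi
exit 0
```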
What advanced techniques separate power users from beginners?
Subagents for Specialized Tasks: Boris treats subagents like slash commands. Agents are not "one big agent." They're modular roles. Reliability comes from specialization plus constraint.
Boris uses subagents—separate instances of Claude working in parallel—to catch issues before code gets merged. His code review command spawns several subagents at once: One checks style guidelines, another combs through the project's history, another flags obvious bugs. He uses five more subagents specifically tasked with poking holes in the original findings.
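Claude Code defines subagents as markdown files with YAML frontmatter under .claude/agents/. A style-checking reviewer in the spirit Boris describes might look like this; the name and instructions are illustrative:

```markdown
---
name: style-reviewer
description: Checks a diff against the team style guide during code review.
tools: Read, Grep, Glob
---

You are a style reviewer. Compare the changed files against the
conventions documented in CLAUDE.md and report only style violations.
Do not comment on logic or architecture; other reviewers cover that.
```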
Custom Slash Commands: Boris uses slash commands for every "inner loop" workflow he does many times a day. The /commit-push-pr command runs dozens of times daily. Commands are checked into git in .claude/commands/ and use inline bash to pre-compute context.
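For instance, a .claude/commands/commit-push-pr.md along these lines would define that command. The !`...` lines use Claude Code's inline-bash syntax for pre-computing context; the file's contents are a guess at the shape, not Boris's actual command:

```markdown
---
allowed-tools: Bash(git add:*), Bash(git commit:*), Bash(git push:*), Bash(gh pr create:*)
description: Commit staged work, push, and open a pull request
---

## Context
- Current branch: !`git branch --show-current`
- Staged changes: !`git diff --cached --stat`

## Task
Write a clear commit message for the staged changes, commit,
push the branch, and open a PR with `gh pr create`.
```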
Quality Automation: Boris runs a PostToolUse hook that formats Claude's code. Claude's output is "usually" well-formatted; the hook fixes the last 10% to avoid CI failures.
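In settings.json, that kind of hook matches the file-editing tools and runs a formatter after each edit. The matcher names are Claude Code's built-in tool names; the formatter command is a placeholder for whatever your project uses:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [
          { "type": "command", "command": "bun run format" }
        ]
      }
    ]
  }
}
```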
How do you avoid the common pitfalls that derail productivity?
Context Management: Use /clear often. Every time you start something new, clear the chat. You don't need all that history eating your tokens, and you definitely don't need Claude running compaction calls to summarize old conversations.
Permission Friction: Out of the box, the most annoying thing about Claude Code is that it asks permission for everything. You type a prompt, it starts working, you go check Slack, come back five minutes later, and it's just sitting there asking "Can I edit this file?" Yes, you can edit files. That's literally the point. The /permissions and shared-settings setup described above is the fix: pre-approve the commands you trust once, and the prompts mostly disappear.
Session Abandonment: Accept that 10-20% of sessions are abandoned due to unexpected scenarios. This is normal: focus on maximizing the successful sessions rather than trying to salvage every attempt.
What's the philosophy behind this workflow approach?
Boris's approach reflects a fundamental shift in how we think about coding with AI. The craft shifts from writing perfect code to building systems that do it. Coding becomes a pipeline of phases: spec, draft, simplify, verify. Each phase benefits from a different "mind".
The verification principle is especially important: The most important tip is to give Claude a way to verify its work through a feedback loop, such as running a bash command, a test suite, or testing the app through the browser or a simulator.
This isn't just about using AI tools—it's about orchestrating AI systems to amplify human capability. While competitors pursue trillion-dollar infrastructure build-outs, Anthropic is proving that superior orchestration of existing models can yield exponential productivity gains.
When you structure your workflow around these principles—parallelization, planning, verification, and specialization—you're not just coding faster. You're building a system that compounds learning and improves over time. That's how a single developer can achieve the output of "a small engineering department" while maintaining code quality that passes human review.