Every setting, workflow, and practitioner insight your engineering organization needs.
Claude Code has crossed the threshold from “impressive demo” to legitimate infrastructure for engineering teams. But there’s a yawning gap between getting one developer to try it and deploying it across a 50-person organization in a way that actually sticks. Configuration matters. Habits matter. Governance matters.
We spent months synthesizing practitioner feedback, official documentation, and real deployment post-mortems into a single end-to-end guide. What follows are the most important insights — the kind that determine whether Claude Code becomes a compounding advantage or a shiny tool that gets quietly uninstalled by week two.
What This Article Covers
Installation and system requirements · CLAUDE.md setup for teams · Daily workflows (features, bugs, code review) · Permission controls and security · CI/CD automation · Model selection by task type · 4-phase rollout playbook · ROI metrics · Legacy codebase strategy · Critical do’s and don’ts
1. Installation: What Teams Get Wrong First
The install itself is a two-line command. The mistakes happen in the details around it.
Use the native installer, not npm. The npm package (@anthropic-ai/claude-code) has been deprecated. Teams that installed via npm months ago are silently running outdated builds. The native installer handles background auto-updates — your entire team stays current without thinking about it.
# macOS / Linux / WSL
curl -fsSL https://claude.ai/install.sh | bash

# Windows PowerShell
irm https://claude.ai/install.ps1 | iex

# If migrating from npm:
npm uninstall -g @anthropic-ai/claude-code
Account tier matters. The free Claude plan does not include Claude Code. Teams need Claude Pro, Max, Team, or Enterprise. For teams of 5–50 with no strict compliance requirements, the Team plan has included Claude Code with every seat since January 2026. At 50+ developers or with regulated data environments, move to Enterprise — you’ll need the audit logs, SCIM, and SSO.
After install, run claude doctor on a few machines to verify the setup is clean before rolling out further. This command flags missing dependencies, auth issues, and outdated binaries before they become problems at scale.
Pin to “stable” channel
In .claude/settings.json, set "autoUpdatesChannel": "stable" to stay approximately one week behind the latest release. You still get security patches quickly, but you avoid regressions from day-zero builds. Recommended for production-focused teams.
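If the project has no settings file yet, the pin is a one-key change. A minimal sketch, using the key name quoted above:

```json
{
  "autoUpdatesChannel": "stable"
}
```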
2. CLAUDE.md: The Highest-ROI Setup Step
If you do one thing before deploying Claude Code across your team, make it a well-crafted CLAUDE.md. This is a plain markdown file that Claude reads at the start of every session. Done right, it means Claude automatically follows your code standards, uses your exact test runner flags, respects your branch naming conventions, and avoids your known gotchas — for every developer, in every session, without any prompting.
Run /init inside any project to auto-generate a starter file. Claude analyzes your codebase and produces an initial version with build commands, test instructions, and patterns it discovers. Then add what it missed.
Where the files live and who they affect
| File Location | Who It Affects | What to Put There |
|---|---|---|
| ./CLAUDE.md (git commit this) | Whole team | Build commands, code style, architecture, branch/PR conventions |
| ./CLAUDE.local.md (gitignore this) | Just you, this project | Your sandbox URLs, local test data, personal preferences |
| ~/.claude/CLAUDE.md | Just you, all projects | Personal tooling shortcuts, universal preferences |
| Managed policy path (MDM/Ansible) | All machines in org | Security policies, compliance requirements, org-wide standards |
The managed policy path is the enterprise power move: deploy an org-wide CLAUDE.md via MDM or Ansible that individual developers cannot override. On macOS it lives at /Library/Application Support/ClaudeCode/CLAUDE.md, on Linux/WSL at /etc/claude-code/CLAUDE.md.
What to include vs. what to leave out
The most common CLAUDE.md mistake is overloading it. Every line loads into Claude’s context at session start. Over 200 lines, rules near the bottom get less attention. Focus on what Claude cannot figure out by reading the code itself:
- Include: Exact build/test/lint commands with flags, code style rules that differ from language defaults, branch naming and PR conventions, required env vars, known gotchas and dangerous files
- Exclude: Standard conventions Claude already knows, detailed API docs (link to them instead), information that changes frequently, file-by-file codebase descriptions
Here’s a minimal but effective example for a TypeScript team:
# Build & Test
Build: npm run build
Test: npm test -- --testPathPattern (single tests only)
Lint: npm run lint -- --fix before committing
Type check: npm run tsc -- --noEmit

# Code Style
- TypeScript strict mode. No `any` types.
- ES modules (import/export) not CommonJS
- Named exports over default exports

# Git Workflow
- Branch names: feat/, fix/, chore/
- Squash commits before merging to main
- PR description must cover: what, why, how to test

# Gotchas
- Tests use real local DB. Run db:test:reset first.
- .env.local has staging credentials. NEVER commit.
- IMPORTANT: Run tsc before committing or CI fails.
For larger teams, break rules into topic files using .claude/rules/ with YAML frontmatter to scope rules to specific file paths. A rules file with paths: ["src/api/**/*.ts"] only loads when Claude is working on API code — saving context for everything else.
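A scoped rules file might look like the sketch below. The frontmatter key is the paths field described above; the rule contents themselves are hypothetical examples:

```markdown
---
paths: ["src/api/**/*.ts"]
---

# API Rules
- Every endpoint handler validates its request body before touching the database.
- Return errors in the shared error-envelope shape used across src/api.
```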
3. The Core Workflow: Explore → Plan → Implement → Verify
The biggest gains don’t come from using Claude Code more — they come from using it in the right order. Teams that just throw tasks at Claude and accept the output are leaving most of the quality on the table.
Why planning beats prompting
Consider the math: assume Claude makes the correct decision 80% of the time at each choice point in a task. A typical feature involves 20 such decisions. The probability of getting all of them right without guidance: 0.8²⁰ ≈ 1%. When you write a reviewed spec first, you’ve already resolved each of those decisions — the probability approaches 100%. Anthropic’s internal data shows unguided attempts succeed about 33% of the time; spec-driven implementation is the single biggest jump in output quality.
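The arithmetic is easy to verify. A quick sketch of the compounding-error effect described above (the 80% per-decision rate and 20 decision points are the article's assumptions, and the 99% figure is an illustrative stand-in for "most decisions resolved by the spec"):

```python
# Probability that all n independent decisions land correctly,
# given a per-decision success rate p.
def all_correct(p: float, n: int) -> float:
    return p ** n

# Unguided: 80% per decision across 20 decisions -> roughly 1%.
print(f"{all_correct(0.8, 20):.1%}")   # ~1.2%

# A reviewed spec resolves most decisions up front, pushing the
# per-decision rate toward 1.0 -- and the outcome flips.
print(f"{all_correct(0.99, 20):.1%}")  # ~81.8%
```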
The practical workflow:
1. Press Shift+Tab to enter Plan Mode. Ask Claude to explore relevant code without making any changes.
2. Ask Claude to write an implementation plan. Press Ctrl+G to open and edit it directly in your editor.
3. Exit Plan Mode. Tell Claude to implement the plan.
4. Ask Claude to run tests, verify results, and fix any failures.
5. Ask Claude to commit and open a PR with a full description.
Feature Development Prompt Template
“I want to add [feature]. Interview me using AskUserQuestion — ask about thresholds, edge cases, error handling, and tradeoffs. Don’t ask obvious questions. Focus on what I might not have considered. When done, write a complete spec to SPEC.md.” → Review SPEC.md → New session → “Implement from SPEC.md. Tests, commit, PR.”
The Writer/Reviewer pattern
One of the subtler but high-impact insights from practitioners: don’t ask the same Claude session that wrote the code to review it. A session that wrote code is biased toward the code it just wrote — it knows why decisions were made and unconsciously protects them. A fresh reviewer session has no such bias. It approaches the code exactly the way a good human reviewer should.
The workflow: Session A implements. Session B (fresh, separate context) reviews. You paste the review output back into Session A to address. This pattern consistently surfaces more real issues than single-session review.
4. Permissions and Security: The Controls That Actually Matter
Claude Code runs commands on your machine. The permission model is what separates safe high-autonomy use from the kind of incidents that make it into war stories.
| Mode | How It Works | Best For |
|---|---|---|
| Default (manual) | Claude asks before every significant action | High-stakes tasks, full visibility needed |
| Auto mode | A classifier reviews each action; blocks scope escalation | Long unattended tasks |
| Allowlists | Pre-approve specific commands Claude can run freely | Commands you fully trust: lint, format, commit |
| Sandboxing | OS-level isolation on filesystem and network | CI, contractor machines, untrusted environments |
# settings.json — allowlist trusted commands, deny destructive ones
{
  "permissions": {
    "allow": [
      "Bash(npm run lint*)",
      "Bash(npm test*)",
      "Bash(git add*)",
      "Bash(git commit*)"
    ],
    "deny": [
      "Bash(rm -rf*)",
      "Bash(*--accept-data-loss*)",
      "Bash(*--force-reset*)"
    ]
  }
}
Add your deny list to managed settings so individual developers cannot override it at the project level. This is the most important enterprise configuration step for security teams.
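Assuming the same macOS managed-path convention as the org-wide CLAUDE.md described earlier, a managed settings file carrying just the deny list might look like this (verify the exact path and filename against current docs for your platform):

```json
{
  "permissions": {
    "deny": [
      "Bash(rm -rf*)",
      "Bash(*--accept-data-loss*)",
      "Bash(*--force-reset*)"
    ]
  }
}
```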
Real CVEs: The Malicious Repository Attack
CVE-2025-59536 and CVE-2026-21852 showed that cloning a crafted repository and opening it in Claude Code could execute arbitrary shell commands and exfiltrate API keys — before any trust prompt appeared. The attack vector was .claude/settings.json inside the repo. Both were patched in versions 1.0.111 and 2.0.65. Always keep Claude Code updated, and treat .claude/settings.json in external repos with the same scrutiny as a Makefile.
Prompt injection: the leading AI agent risk in 2026
Prompt injection means an attacker embeds instructions in content Claude will read — a code comment, a Jira ticket, an API response, a markdown file — and waits for Claude to follow those instructions as if they were legitimate. Demonstrated attacks have included white text (1pt font, white-on-white) in a .docx file directing Claude to exfiltrate files to an attacker’s account, and malicious GitHub Issue descriptions that cause Claude to inject backdoors into generated code.
Mitigations: review files from external sources before passing them to Claude, use sandboxing for sessions that process external content, and consider the open-source Lasso Security hook for high-security workflows — it scans tool outputs for 50+ injection patterns before they enter Claude’s context.
5. Model Selection: Where Teams Leave Money and Quality on the Table
As of April 2026, Claude Code runs on three models with very different cost/capability profiles. Using the wrong model compounds across a team into significant waste — or missed quality on tasks where it matters most.
| Model | Best For | Cost vs. Sonnet |
|---|---|---|
| Claude Opus 4.6 | Complex reasoning, architecture decisions, security review — anywhere getting it right matters more than speed | ~5–10× more |
| Claude Sonnet 4.6 | Default for most coding work: features, bug fixes, PR review, test writing, refactoring | Baseline |
| Claude Haiku 4.5 | High-volume repetitive tasks: linting, formatting, CI analysis, bulk transforms running hundreds of times | ~5× cheaper |
The decision rule that matters most: for security review and architectural decisions, always use Opus. The cost is irrelevant compared to the cost of a breach or a wrong architecture decision. For everything running in CI pipelines at scale, default to Haiku or Sonnet — reserve Opus for the pipeline steps that are actually critical.
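The rule is mechanical enough to encode in tooling. A hypothetical routing helper, where the task categories and the mapping are illustrative and the model ID strings are placeholders following the pattern the Bedrock section uses later, not an official API:

```python
# Hypothetical model router encoding the decision rule above:
# Opus for high-stakes reasoning, Haiku for high-volume CI work,
# Sonnet for everything else.
OPUS_TASKS = {"security_review", "architecture_decision"}
HAIKU_TASKS = {"lint", "format", "ci_log_analysis", "bulk_transform"}

def pick_model(task: str) -> str:
    if task in OPUS_TASKS:
        return "claude-opus-4-6"
    if task in HAIKU_TASKS:
        return "claude-haiku-4-5"
    return "claude-sonnet-4-6"  # sensible default for most coding work

print(pick_model("security_review"))  # claude-opus-4-6
print(pick_model("bug_fix"))          # claude-sonnet-4-6
```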
6. CI/CD Integration: Automated Review on Every PR
Once Claude Code is in your CI pipeline, code review becomes something that happens automatically on every pull request — not something that has to wait for a developer’s attention.
The quickest setup: run /install-github-app from the Claude Code terminal. The workflow below needs no trigger comment; review starts automatically as soon as a PR is opened or updated:
name: Code Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review this PR for code quality, correctness, and security.
            Post findings as inline review comments.
          claude_args: "--max-turns 5"
The --max-turns flag is critical in CI contexts. Without it, a complex PR could run an unbounded number of turns and generate unexpected costs. Set it conservatively; you can always increase it later.
AWS Bedrock note: When routing through Bedrock for data residency, model IDs need a region prefix. Use us.anthropic.claude-sonnet-4-6, not claude-sonnet-4-6.
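A tiny helper makes the prefixing rule hard to forget. A sketch: the us prefix comes from the example above, other regions would differ, and the function name is ours, not part of any SDK:

```python
def bedrock_model_id(model: str, region_prefix: str = "us") -> str:
    """Prefix a Claude model ID the way Bedrock expects."""
    qualified = f"{region_prefix}.anthropic."
    if model.startswith(qualified):
        return model  # already region-qualified
    return qualified + model

print(bedrock_model_id("claude-sonnet-4-6"))  # us.anthropic.claude-sonnet-4-6
```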
7. The 4-Phase Team Rollout Playbook
Getting 50 developers using Claude Code well is a different problem from getting one developer using it. The most common failure mode — what we call the “Day Two Problem” — plays out like this: day one, everyone installs it and is impressed. Day two, novelty wears off. Developers who had a bad first session quietly uninstall. Developers who had a good one keep using it, but without guidance, develop habits that don’t scale. By day thirty: fragmented adoption, no consistency, no way to measure the investment.
Phase 1: Pilot (Weeks 1–3)
Select 3–5 developers representing a mix of experience levels on a real production-grade project. Mandate them to evaluate and document what works, what frustrates, and what they’d change. Their feedback shapes your managed settings and CLAUDE.md template before wider rollout. Monitor for patterns of permission denials and repeated errors — these tell you where your configuration needs adjustment.
Phase 2: Early Adopter Expansion (Weeks 4–6)
Expand to 10–20 developers. Pilot team members become internal champions — pair each new developer with a pilot member for their first week. Peer support is more effective than documentation. Roll out your refined managed settings and CLAUDE.md. Run a 1-hour onboarding session covering: CLAUDE.md, Plan Mode, /clear, and the critical do’s and don’ts.
Phase 3: Department Expansion (Weeks 7–10)
Bring in 2–3 departments with department-specific CLAUDE.md files reflecting their stacks. Start tracking ROI metrics. Build a shared .claude/skills/ library of workflows the whole organization can use.
Phase 4: Organization-wide (Week 11+)
Full rollout with established governance, monitoring, and feedback loops. Quarterly: audit MCP server permissions, prune CLAUDE.md. Monthly: review usage analytics and ROI metrics with engineering leadership. New engineers onboard to Claude Code on day one.
8. Measuring ROI: What Good Actually Looks Like
Teams that invest in setup — managed CLAUDE.md, phased adoption, model selection, skills library — report very different numbers from teams that just install it and hope for the best.
- 67% more PRs merged per developer per week (Anthropic’s data)
- 40% improvement in test coverage as teams stop deferring test writing
- 50% faster time from ticket assignment to PR open for standard features
Realistic Expectation
Teams that just install Claude Code without configuration work typically see 15–25% improvement at best. The numbers above come from teams that invested in the setup described in this guide. The configuration is what unlocks the larger gains.
The Analytics API (at platform.claude.com/docs/en/admin-api/overview) gives you usage by user, model, and period — feed it into Grafana, Datadog, or any BI tool you already have. Track session count per developer per week as your leading indicator; PR cycle time and bug rate post-merge as your lagging indicators.
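However you ingest the data, the leading indicator is a simple aggregation. A sketch over a hypothetical export shape; the record fields here are assumptions, not the Analytics API's actual schema:

```python
from collections import defaultdict

# Hypothetical usage export: one record per session.
sessions = [
    {"user": "ana", "week": "2026-W14"},
    {"user": "ana", "week": "2026-W14"},
    {"user": "ben", "week": "2026-W14"},
    {"user": "ana", "week": "2026-W15"},
]

def sessions_per_dev_week(records):
    counts = defaultdict(int)
    for r in records:
        counts[(r["user"], r["week"])] += 1
    return dict(counts)

print(sessions_per_dev_week(sessions))
# {('ana', '2026-W14'): 2, ('ben', '2026-W14'): 1, ('ana', '2026-W15'): 1}
```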
9. Legacy Codebases: Understand Before Touching
Most enterprise teams aren’t building greenfield apps. They’re working with undocumented 10-year-old codebases. The core principle for legacy work: use Plan Mode exclusively before making any changes. Have Claude map the architecture, document the unknown, and identify risky areas first.
# Session 1: architecture mapping (Plan Mode only)
"Explore the entire src/ directory and produce an architectural overview. Identify: main components, data flows, external dependencies, known patterns. Write the overview to docs/ARCHITECTURE.md."

# Session 2: risk mapping (Plan Mode only)
"Read the codebase and identify: undocumented functions, circular dependencies, files with no test coverage, areas where a small change could have wide impact. Write findings to docs/RISK_AREAS.md."
Your CLAUDE.md for a legacy codebase needs to be much more defensive. Include things you’d normally assume a good engineer would know — because the codebase probably violates those assumptions. Explicitly mark dangerous files (“NEVER modify payment.js without explicit sign-off”), document known architectural debt, and explicitly list things that look wrong but are intentional.
For migrations, never attempt the whole codebase at once. Migrate one cohesive, genuinely isolated subsystem, verify it in production for two weeks, then move to the next. Even with good tests, a codebase-wide migration creates a diff too large for humans to meaningfully review.
10. The Do’s and Don’ts Teams Learn the Hard Way
These rules exist because someone already paid the price to learn them.
File system safety
- Always have a clean git commit before starting a Claude session. Your git history is your undo button.
- Only you should delete files. If Claude wants to delete something, say no, understand why, then decide and delete it yourself. Claude deletions don’t go to a trash can.
- After two failed corrections in the same direction, use /rewind and start fresh with a better prompt. Continued prompting in a polluted context rarely recovers.
- Never run --dangerously-skip-permissions on your host machine. Use sandboxing instead.
Credentials and secrets
- Store all credentials in environment variables or a secrets manager — never in any file Claude has access to.
- Add .env, .env.local, and credential files to both .gitignore and to CLAUDE.md as files never to touch.
- In CI, always use GitHub Secrets: ${{ secrets.ANTHROPIC_API_KEY }}. Never hardcode keys in workflow files.
Context hygiene
- Use /clear between unrelated tasks. Context pollution from earlier tasks degrades quality on later ones.
- Keep CLAUDE.md under 200 lines. Beyond that, rules at the bottom get less attention.
- The practitioner-consensus threshold: don’t let context exceed 60% capacity. Performance starts degrading earlier than most people expect due to how attention weights shift as context fills.
Database and production safety
- Keep Claude away from production databases. Use staging or test databases only.
- Read every line of any database migration Claude generates before running it.
- Deny destructive database flags in settings: --accept-data-loss, --force-reset, --drop.
There’s a lot more in the full guide, including dedicated sections on Skills (reusable workflow automation), Scheduled & Background Tasks, Shadow AI governance, supply-chain security, the complete Prompt Templates reference card, and Troubleshooting.
We’ve packaged the entire thing as a free guide.