I’ve been using Claude Code as my primary coding agent for six months across two production codebases — one .NET, one TypeScript. Long enough to know what works, what’s marketing, and what most teams set up wrong. Here’s the honest debrief.
This is not a tutorial. It’s the post I wish someone had handed me on day one.
The single biggest mistake
Most engineers treat Claude Code like a smarter ChatGPT — they open it, ask it to “fix the bug in user.service.ts,” and judge the tool by the quality of that one answer.
That’s not how the tool works. Claude Code is an agent loop with tools, hooks, and persistent project memory. Treating it as a chat window is like buying a Tesla and using it as a radio. You get maybe 5% of the value.
The engineers who get the most out of it have done four specific things:
- Written a `CLAUDE.md` file that documents what the project actually is
- Added pre/post-tool hooks that run lint and tests automatically
- Configured a `settings.json` allowlist so they stop getting permission prompts
- Added 1–2 MCP servers wired to their actual systems (DB, issue tracker)
If you skip these, you’re using a $20/month tool to do $2 of work.
What it’s genuinely great at
After six months, the workflows where it consistently saves real time are narrower than the marketing suggests, but they’re substantial:
Multi-file refactors with a clear shape. “Rename getUserById to findUserById everywhere, update the tests, and move the file from services/ to repositories/.” This kind of mechanical change with predictable boundaries is where the agent shines. It does in 30 seconds what takes me 15 minutes carefully.
Reading code I didn’t write. When I onboard onto a codebase or have to debug something in a service I haven’t touched in 8 months, asking Claude Code to “explain the data flow from this controller to the database” is faster and more reliable than reading the code top-down. Crucially, it tells me about the actual code, not the documentation that’s three releases out of date.
Test scaffolding. “Write integration tests for this endpoint covering the happy path, the 404, the 401, and the duplicate-email case.” It produces 80% correct tests; I edit the last 20%. Net: I write more tests than I would otherwise. That alone justifies the cost.
One-shot script writing. Migrations, log parsers, cleanup utilities. Things you’d write in 20 minutes and use once. The agent writes them in 2 minutes; I read them in 1.
What it’s quietly bad at
Equally important — and rarely discussed:
Anything requiring opinionated taste. Naming things, designing APIs, deciding what to abstract. The agent will produce something that works, but it’s almost always a notch worse than what a senior engineer would write. I’ve stopped delegating these.
Long-horizon work without checkpoints. “Build out this feature end-to-end” without breaking it into pieces produces sprawl. The agent will create files you didn’t ask for, abstractions you don’t want, and feature flags for hypothetical future cases. Plan mode is the antidote (more on that below).
Anything with a fuzzy success criterion. “Make this faster” without a baseline metric or specific bottleneck is a recipe for changes you can’t evaluate. Always pin down the actual metric first.
Production debugging without observability. It can read logs and stack traces, but it can’t see your Datadog dashboards or your distributed traces. If your “bug” requires understanding what happened in production, the agent is a junior engineer guessing — useful only after you’ve narrowed it down.
The configurations that move the needle
If I had to onboard a teammate onto Claude Code in 30 minutes, here’s what I’d actually configure for them:
1. A real CLAUDE.md
Not three lines. A real one. What the project does, the architecture decisions, the non-obvious gotchas, the testing convention, what NOT to touch. Mine for the .NET project is 90 lines and saves easily 50% of my “explaining context” time across sessions.
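For shape, here's a skeleton of the kind of CLAUDE.md I mean. Every project name, path, and convention below is invented — the point is the categories, not the specifics:

```markdown
# CLAUDE.md

## What this is
Orders API: a .NET 8 minimal-API service behind the checkout flow.

## Architecture decisions
- Vertical slices under src/Features/, not a layered architecture.
- EF Core migrations are hand-reviewed; never auto-generate and commit.

## Testing conventions
- Integration tests hit a real Postgres via Testcontainers:
  `dotnet test --filter Category=Integration`.

## Do not touch
- src/Legacy/Billing/ — scheduled for deletion; changes there break
  the nightly reconciliation job.
```

The "Do not touch" section earns its keep fastest: it's the one category the agent can't infer from reading the code.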
2. Hooks for lint and test
A PostToolUse:Edit hook that runs the linter on the changed file. A Stop hook that runs the unit tests for the changed module. The agent gets immediate feedback on whether its change broke anything, and you stop having to babysit.
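A minimal sketch of what those two hooks might look like in `.claude/settings.json`. The event names and structure follow the hooks documentation, but the commands (`eslint`, `npm test`) and the stdin field I extract with `jq` are assumptions to adapt to your stack — check the hook payload shape against your version's docs:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx eslint"
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npm test -- --onlyChanged" }
        ]
      }
    ]
  }
}
```

Hooks receive a JSON payload on stdin describing the tool call, which is why the lint command pipes through `jq` rather than taking an argument.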
3. A real allowlist in settings.json
By default the agent asks permission for every shell command. After two days, you’re approving the same 20 commands constantly. Allowlist your read-only ones (git status, npm test, dotnet build, gh pr view) and your inner-loop ones. Keep destructive ones (git push, git reset --hard, rm) gated.
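Concretely, the rules live under `permissions` in `.claude/settings.json`. The list below mirrors the commands named in this section, using the `Tool(command:*)` rule syntax from the settings docs — verify the exact matching semantics against your version:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(npm test:*)",
      "Bash(dotnet build:*)",
      "Bash(gh pr view:*)"
    ],
    "deny": [
      "Bash(git push:*)",
      "Bash(git reset:*)",
      "Bash(rm:*)"
    ]
  }
}
```

An explicit `deny` list for the destructive commands is worth the two minutes even if you never touch `allow`.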
4. Plan mode for anything non-trivial
Plan mode is review-before-execute. The agent shows you what it intends to do; you approve or edit; then it runs. Use it any time the task touches more than two files. The 30 seconds you spend reading the plan saves 30 minutes of unwinding bad decisions.
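If you'd rather have plan mode as the default than remember to toggle it per session, the settings file supports a default permission mode. The key below matches my reading of the settings docs, so confirm it against your version before relying on it:

```json
{
  "permissions": {
    "defaultMode": "plan"
  }
}
```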
5. One or two MCP servers
The single biggest unlock for me was wiring an MCP server to our internal documentation and to the issue tracker. The agent can now answer “what’s the design doc for feature X” without me copy-pasting URLs. Building these is straightforward — covered in depth in the Building MCP Servers & AI Tool Integrations course.
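Project-scoped MCP servers can be declared in a `.mcp.json` at the repo root so the whole team picks them up. The server package name and environment variable below are placeholders for whatever you actually run:

```json
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "@your-org/issue-tracker-mcp"],
      "env": {
        "TRACKER_API_TOKEN": "${TRACKER_API_TOKEN}"
      }
    }
  }
}
```

The `claude mcp add` command can write an equivalent entry for you if you'd rather not edit the file by hand.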
Sub-agents — when to actually use them
Sub-agents (Explore, Plan, general-purpose, etc.) are powerful but easy to over-use. My rules:
- Use Explore when you need to find something in a codebase you don’t know — it has a separate context window so it doesn’t pollute your main session.
- Use Plan before any non-trivial implementation — same reason, plus it gives you a written artifact to review.
- Use general-purpose for parallelizable independent work (e.g., “research X while I implement Y”).
- Don’t use sub-agents for things you can finish in 5 minutes yourself. The orchestration overhead is real.
The economics
For solo engineering work, Claude Code at $20/month pays back in the first afternoon. For team use, the math is more nuanced:
- Junior engineers get the biggest absolute productivity gain (50%+) but also produce the most code that needs review.
- Senior engineers get a smaller percentage gain (15–25%) but the gain is in higher-leverage work — they’re freed from drudgery, not from architecture.
- Code review load goes up, not down. Plan for it. Either rotate review duty more aggressively or set a stricter “agent-generated PRs must include reasoning” policy.
The teams that win with this tool are the ones that change their process, not just their tooling. The teams that lose are the ones that adopt it as a productivity hack and end up shipping more low-quality code faster.
The 80/20 if you read nothing else
If you take one thing away from this:
Spend an afternoon on configuration before you spend a week on usage. A serious `CLAUDE.md`, a real allowlist, two well-chosen hooks, and one MCP server will get you 80% of the value. Most engineers never do this and judge the tool by an unconfigured experience. That’s like judging Linux by the live USB demo.
If you want a structured route through hooks, slash commands, sub-agents, and MCP integration as a real workflow — not a feature tour — that’s exactly what the Claude Code Mastery: Agentic Coding for Engineers course is built around. Five weeks, ten lessons, focused on shipping faster on real projects rather than building toy demos.
Where I’d push back on the hype
Claude Code does not turn a junior into a senior. It makes a junior faster at producing the kind of work juniors produce — which is mostly fine, but the limiting factor for a junior’s career is judgment, not typing speed. The tool does not teach judgment. Mentorship and code review still do.
It also does not eliminate the need to read code carefully. Engineers who skip the read-and-understand step because “the agent wrote it” are accumulating debt at a rate they don’t see yet. The bills come due in a quarter or two when nobody on the team knows how the system actually works.
Use it. But use it with the same standards you’d apply to a contractor: you’re responsible for what ships, even if someone else typed it.
Want a concentrated path through Claude Code’s power features? Start with Claude Code Mastery: Agentic Coding for Engineers. Already shipping with it? The Building MCP Servers & AI Tool Integrations course is the natural next step — write your own MCP servers to expose your team’s internal tools and data to Claude.