I use AI tools every day. I teach courses on building with them. I’m not the guy telling you they’re a fad. But after two years of watching engineers — including myself — work with these tools, I can name the situations where AI consistently makes the work worse.
If you read one contrarian post about AI in development this year, make it this one. The cost of overusing these tools is real and mostly invisible until it’s expensive.
Rule 1: Don’t use AI to learn fundamentals
This is the one I’d put on a billboard.
When you’re a junior or you’re picking up a new language, the temptation is overwhelming. You ask Claude or Copilot, get a working answer in 10 seconds, paste it in, ship it. The bug is fixed. You move on.
What you didn’t do: read the docs, hold the data structure in your head, struggle with the type system, build the mental model. Six months later, you can’t debug your own code without the AI. You don’t recognize patterns. Your code reviews are surface-level because you can’t see what’s wrong without an AI to ask.
I’ve watched this happen to several junior engineers I’ve mentored. They ship more in their first three months than juniors used to. By month nine, they plateau in a way that’s hard to recover from, because the foundational reps never happened.
Rule of thumb: if you can’t write the code without the AI, you don’t yet know the topic. The AI is fine for the second time you do something, not the first.
Rule 2: Don’t use AI for naming, API design, or abstractions
This is the one that surprises engineers most when I say it out loud.
LLMs produce technically correct names and APIs. They almost never produce good ones. The difference is taste — knowing that processData() is a smell, knowing that a method that takes 7 parameters wants to be a class, knowing that this abstraction is premature and that one is overdue. Taste is built from years of maintaining code other people wrote, especially your own past code.
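To make “taste” concrete, here is a minimal Python sketch. Every name in it is invented for illustration; the first function is the defensible answer an agent tends to produce, and the second is what maintaining code for a few years teaches you to write instead.

```python
from dataclasses import dataclass

# The defensible answer: technically correct, and a smell.
def process_data(data, mode, user_id, retries, timeout, dry_run, verbose):
    ...

# The tasteful answer: seven parameters were a class trying to get out,
# and the name now says what actually happens to the data.
@dataclass
class InvoiceExporter:
    user_id: str
    retries: int = 3
    timeout_s: float = 30.0
    dry_run: bool = False

    def export_to_csv(self, invoices: list) -> str:
        """Render the given invoices as CSV and return the output path."""
        ...
```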
I’ve stopped delegating any of these to the agent:
- Function and method names
- Class boundaries and responsibilities
- API surface design (what arguments, what return shape, what errors)
- When to extract an abstraction
- When not to extract one
The agent will give you a defensible answer to all of these. It will rarely give you the right one. The cost shows up months later when nobody can read the code.
Rule 3: Don’t use AI for production debugging without observability
This trap snares senior engineers more than juniors.
A bug shows up in prod. You paste the stack trace into Claude. It speculates plausibly. You implement the fix. The error stops appearing in your test environment. You ship. Two weeks later, the error is back.
What happened: the stack trace was a symptom, not a cause. The agent guessed at the cause from the symptom alone — without your distributed traces, your metrics, your deployment history, or the four other related errors that occurred in the same hour. It produced a fix that addressed something but not the problem.
Production debugging requires you to:
- Reproduce or precisely characterize the failure
- Isolate which change introduced it
- Build a hypothesis from real signals (logs, traces, metrics)
- Verify the fix actually addressed the root cause
The AI is useful at step 3 and step 4 — but only after you’ve done steps 1 and 2. Skip those, and you’re applying plausible patches forever.
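Here is what step 2 can look like in practice, as a hedged Python sketch. The log format, field names, and file paths are all invented for illustration; the point is that you correlate when the error started with what shipped just before it, using real signals, before anyone (human or AI) proposes a fix.

```python
import json
from datetime import datetime, timedelta

# Hypothetical inputs, invented for this sketch: structured error logs
# and a deploy history as JSON lines. In a real system you would pull
# these from your log store and your CD pipeline.
errors = [json.loads(line) for line in open("errors.jsonl")]
deploys = [json.loads(line) for line in open("deploys.jsonl")]

def ts(record: dict) -> datetime:
    return datetime.fromisoformat(record["timestamp"])

# Step 1: characterize the failure. When did this error actually start?
first_seen = min(ts(e) for e in errors if e["error_type"] == "ConnectionResetError")

# Step 2: isolate the change. Which deploys landed in the hour before it?
suspects = [d for d in deploys if timedelta(0) <= first_seen - ts(d) <= timedelta(hours=1)]

for d in suspects:
    print(d["timestamp"], d["service"], d["git_sha"])
```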
Rule 4: Don’t use AI to do something you can’t evaluate
If you can’t tell the difference between a good answer and a bad one, you have no business asking the AI for an answer.
This sounds obvious. It isn’t. Here are the situations where it bites:
- Writing in a language you don’t know well. The agent writes Rust that compiles and runs but isn’t idiomatic. You ship it. A real Rust engineer reviews your repo later and the code is full of beginner anti-patterns.
- Architectural decisions outside your experience. “Should I use Kafka or RabbitMQ here?” is not a question with a single answer; it depends on operational concerns the agent can’t see.
- Security-sensitive code. The agent will produce code that looks secure. You have no way to know it actually is unless you understand the threat model yourself.
In all these cases, the AI doesn’t replace expertise. It launders ignorance into output you ship without realizing you don’t understand it.
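The Rust version of this is hard to show briefly, but the failure mode is language-agnostic. Here is a Python rendering, with invented details: both functions run and pass tests, and only one would survive review by someone who actually knows the language. If you can’t see the difference, you can’t evaluate the agent’s output either.

```python
# What gets shipped by someone who can't evaluate Python: it runs,
# the tests pass, and it reads like C translated word for word.
def get_active_names(users):
    result = []
    for i in range(0, len(users)):
        if users[i]["active"] == True:
            result.append(users[i]["name"])
    return result

# The idiomatic version of the same function.
def get_active_names_idiomatic(users):
    return [u["name"] for u in users if u["active"]]
```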
Rule 5: Don’t use AI for one-line edits
This one is small but real. Engineers reach for the AI to insert a single line or rename a variable. They wait 4 seconds for it to think. They review what it produced. They accept.
That same edit would have taken you one second by hand. The cumulative time spent waiting on the agent for trivial edits is, in my measurements, larger than the time it saves on medium-sized tasks.
Rule of thumb: if your fingers know what to type, type it. The agent is for things where deciding what to type is the work.
Rule 6: Don’t use AI for things that need to be remembered
Anything that has to live in your head — the deal you struck with another team, the constraint that’s not in the code, the politically delicate compromise that explains why this code looks weird — should not be delegated to the agent. The agent has no memory of these things across sessions. It will refactor away the weird-looking code that exists for a reason nobody documented.
Document the why: in code comments where appropriate, in CLAUDE.md if the agent needs to know it, or in design docs. The “why” is the part of engineering that doesn’t transfer to AI, and the part that’s most valuable.
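As a sketch of what that looks like in code (the vendor behavior and all names here are invented for illustration):

```python
import time

def fetch_report(client, report_id):
    # WHY this retry exists (scenario invented for illustration):
    # the vendor's API returns an empty body for ~2 seconds after a
    # report is created. The loop below looks like cruft; it is the fix.
    # Do not "simplify" it away.
    for _ in range(5):
        body = client.get(f"/reports/{report_id}")
        if body:
            return body
        time.sleep(2)
    raise TimeoutError(f"report {report_id} never became available")
```

The comment is the deliverable; the retry loop is just where it lives.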
Rule 7: Don’t use AI to skip the code review
You wrote the code. (Even if the agent typed it, you wrote it — you’re responsible.) Read it. Line by line. Out loud, if it helps.
Engineers who skip this step because “the agent already validated it” are accumulating a kind of debt that’s invisible until it’s overwhelming. The agent’s validation is statistical: this code looks like code that works. It’s not the same as the validation a thoughtful human gives: this code is correct for this system, in this context, given what I’m trying to achieve.
Two years in, this is the pattern I see most strongly correlated with engineers whose careers stall: they treat the AI as a quality gate. It is not one.
What I do instead
For each of the cases above, here’s what I actually do:
| Situation | What I do |
|---|---|
| Learning a new topic | Build the mental model first. AI for the second project, not the first. |
| Naming things | Write three names by hand. Pick the best. Maybe ask the AI to critique. |
| Production debugging | Pull logs, traces, metrics. Form hypothesis. Then AI can help write the fix. |
| Code outside my expertise | Pair with a human expert when the stakes warrant it. AI is not a substitute. |
| Trivial edits | Type them. Faster, less context-switching. |
| Important context | Document it. In code, in CLAUDE.md, in a design doc. |
| Code review | Read every line, every time. The AI is a junior collaborator, not a senior one. |
The deeper point
The framing question I’d encourage every engineer to ask before reaching for an AI tool:
What skill am I building, and is the AI making me stronger or weaker at it?
For mechanical work, AI makes you stronger by freeing time for the work that matters.
For judgment work, AI makes you weaker by letting you outsource the reps that build judgment.
The engineers who’ll thrive over the next decade are the ones who can tell the difference. Not the ones who use AI most aggressively, and not the ones who reject it. The ones who know precisely when to put it down.
If you want a structured path through working with AI as a real engineering tool — including the honest tradeoffs — start with Prompt Engineering & AI Workflow Automation. For going deeper into building production AI systems, the Building with Claude API and Claude Code Mastery courses pick up where this article leaves off.