Don’t Let AI Make You Stupid

TL;DR: AI coding tools make you productive and stupid at the same time. The fix is to match how much you delegate to how much you want to learn. Describe problems in your own words. Read docs before coding. Save the agents for the boring stuff.

How we used to learn

Before AI, we had Stack Overflow. To use it, you had to do something first: understand your problem well enough to describe it in general terms. You had to isolate the issue, strip away the specifics of your codebase, and ask a question a stranger could answer.

That process was the learning. Half the time you solved it yourself just by writing the question. The other half, you understood the answer because you’d already done the diagnostic work.

Four levels of delegation

There’s a ladder of how much thinking you hand off to AI. Broadly, each step down buys productivity and costs learning.

1. You describe the problem in your own words. This is the Stack Overflow mode. You still own the mental model. The AI is a reference book.

2. You paste the broken code and ask for a fix. You skipped the diagnosis. You might learn from the answer. You probably won’t.

3. You paste the broken code and ask what’s happening. Better. You’re using AI as a teacher. You’ll remember this one.

4. You give an agent your whole repo and a task. You’re managing a trainee now. You might understand what changed. You’re losing the how and the why.

The pressure is always to slide down the ladder. The knowledge loss is invisible until the AI gives you something subtly wrong and you can’t tell.

A system that works for me

I’ve settled on a few rules.

Boring work gets full delegation. I maintain a legacy .NET 4.8 solution. I’ve already learned everything it has to teach me. Letting AI handle it is no different from using any other automation tool. No knowledge lost because there’s no knowledge I want to gain.

Interesting work gets the Stack Overflow treatment. I describe the problem in general terms. If I can’t formulate it that way, that’s my signal: I don’t understand the problem well enough yet. At that point I paste the code and ask the AI to explain, not to fix.

New territory starts with documentation. When I’m learning something new, I read the API docs before writing any code. I load them on my Kindle. Reading documentation without the ability to immediately try things forces you to build a mental model. You come to the keyboard with intent instead of poking at things until something works. Less screen time is a bonus.

Local models for thinking, cloud models for labor. If my interaction pattern is asking conceptual questions in plain language, I don’t need a frontier model with full repo context. A local LLM through Ollama handles it fine. Privacy comes free, and the limited capability is a feature. It removes the temptation to hand over everything.
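For concreteness, the interaction can be as small as one HTTP call to Ollama’s default local endpoint. This is a minimal sketch, not a prescription: the model name is a placeholder, and the last function assumes an Ollama server is actually running on localhost.

```python
import json
import urllib.request

# Default Ollama endpoint; "llama3" is just an example model name.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(question: str, model: str = "llama3") -> dict:
    """A conceptual question in plain language: no repo context, no file dumps."""
    return {"model": model, "prompt": question, "stream": False}

def ask_local(question: str, model: str = "llama3") -> str:
    """POST the question to the local model and return its answer text."""
    data = json.dumps(build_payload(question, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The constraint is visible in the payload itself: there is nowhere to attach a repo. Whatever goes in the prompt has to be something you could formulate in your own words.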

The point

Most people maximize AI capability and then wonder why they’re getting worse at their job. The trick is the opposite. Constrain the AI to match the level of engagement you want to maintain.

AI is a tool. Tools shape their users. A man with GPS forgets how to read a map. That’s fine if the signal never drops. But it does, and when it does, you’re lost.

Rules

  • Delegate boring work fully. Automate what you’ve already mastered.
  • Describe interesting problems in your own words before asking AI.
  • If you can’t describe it in general terms, ask AI to explain, not to fix.
  • Read documentation before writing code. Do it away from the computer.
  • Use local models for thinking. Save cloud agents for labor.
  • Constrain the tool to match the engagement you want to keep.