There's a moment that happens quietly, not during a crisis or a late-night deploy, but in the middle of an ordinary 1:1. Your engineer is walking through something technical, explaining an approach, and somewhere in the second minute you realize you've been nodding without actually following. Not because you stopped caring. Because the thread between your brain and the work has grown thin in ways you haven't been tracking.
I've had that moment. More than once. Not as a conscious choice to disengage but through the accumulation of small decisions about where to put attention over a stretch of weeks. The calendar fills up with reviews, cross-team coordination, and strategic conversations. The individual time in the code gets shorter, then rarer. And the drift doesn't announce itself. It just settles in.
That moment used to belong to engineering managers who had spent too long in spreadsheets and leadership offsites. The senior engineer who drifted into pure process. The VP who still introduced himself as technical but hadn't opened a PR in two years. The drift was slow, mostly invisible, and if you asked them directly, they'd tell you they were still close to the work.
Now there's a new accelerant.
The Pattern Showing Up in Engineering Threads
A few weeks ago I spent time reading through a thread of engineers reflecting on what AI-directed development had done to their instincts. Not what it had done to their velocity or their output quality. Their instincts.
One engineer put it precisely. He had switched to mostly directing AI agents, reviewing generated output, and approving pull requests. Then a memory issue showed up in production. Real debugging work. The kind that requires you to hold a mental model of what the system is doing across several layers at once. The instinct that tells you where to look first wasn't there. Not gone completely, but slow. Like coming back from a long vacation and trying to find things in your own kitchen.
He was a senior engineer. He had built systems before. The skill hadn't disappeared in a week. It had eroded quietly because the daily reps that maintained it had stopped.
Reading through the rest of the thread, the same pattern kept surfacing with different details. Engineers who could still read code, could still approve PRs, could still talk fluently about architecture. But the deeper instincts ... the ones that come from having written enough code to recognize when something smells wrong before you can articulate why ... those were getting soft. And the people who had it the worst were the engineers who had been most productive by every surface measure.
The muscle memory for how long things actually take, why certain patterns break under load, what a subtle bug smells like before you can see it ... that erodes when you stop writing.
This used to be a senior leadership problem. You move into strategy, you stop coding, your instincts slowly drift. Everyone understood that trade-off. Now the same dynamic is compressing into a much shorter timeline because AI-assisted workflows can remove the hard thinking from a day's work without removing the appearance of productivity.
Two Ways to Use a Tool
There are two fundamentally different relationships with AI tools, and they produce opposite outcomes.
Using AI to move faster through problems you understand sharpens you. You bring the judgment, the tool accelerates the execution, and the feedback loop stays intact. You're still doing the thinking. You're still debugging, still making architectural decisions, still feeling the resistance of the problem. The tool handles the parts that were always mechanical. The judgment is still yours.
Using AI to skip problems you don't want to think through does the opposite. The output still arrives. The velocity numbers still look fine. But the thinking that produced judgment in the first place is being quietly outsourced, and judgment doesn't maintain itself in storage.
The version most engineers tell themselves is that they've moved up the value chain. They're doing higher order thinking now. Directing rather than implementing. That framing can be true. The question is whether the directing is backed by the judgment that only comes from having done the implementing long enough to internalize what the hard parts actually are.
The tricky part is that both approaches look identical from the outside. PRs get merged either way. Features ship either way. The decay happens below the line of what any dashboard is measuring.
The relationship you have with AI tools determines whether they make you better or make you dependent.
What Atrophy Actually Looks Like
I started tracking the signals in myself years ago, long before AI was part of this conversation. The 1:1s that had slowly become status reports because I was asking "what's going on" instead of "walk me through the approach." Nodding through a technical explanation I was only partially following. Going three weeks without being genuinely surprised by something in the codebase.
None of those signals announce themselves loudly. They're a slight dulling. The gap between where you think your instincts are and where they actually are.
For engineers in mostly AI-directed workflows, the same signals appear earlier and faster. The days that feel productive because the output is high but the actual thinking has been thin. The growing hesitation before jumping into a debugging session without an assist. The way complex problems start to feel unfamiliar not because they're harder but because the pattern recognition that used to make them tractable is getting rusty.
The industry hasn't built a metric for this yet. Which means most people won't notice until they can't ignore it.
Staying Close
I still read pull requests. Not to approve syntax, and not to micromanage the decisions engineers are making. I read them to understand how the team thinks. Where connections are getting made across the codebase, where they're not, whether the patterns we aligned on are holding under real delivery pressure.
Staying that close to the system means that when something matters, the instinct is there.
A bug came in a few months ago that was creating a bad customer experience. Instead of routing it through the normal support path, I checked the logs. Found a data issue. Had the customer's experience fixed in thirty minutes. That wasn't remarkable engineering. It was just what's possible when you haven't let the thread between your brain and the system go slack.
That's the thing about technical instincts. They don't disappear all at once. They degrade gradually through disuse, and they come back gradually through deliberate reps. The engineers who will navigate the AI era best are not the ones who resist the tools. They're the ones who stay intentional about which parts of the thinking they keep.
For an engineer, that might mean taking on at least one problem per sprint that requires real debugging without an AI assist. Not as a purity exercise but as maintenance. For a leader, it means staying close enough to the work that the gap between what you think you know and what's actually happening stays narrow. Reading the PRs. Getting into the logs. Letting yourself be surprised by the codebase sometimes.
The tools should be doing the mechanical work faster. They should not be doing the thinking that produces judgment.
The Question Worth Asking Right Now
If the AI stopped working tomorrow, could you still do the job you were hired for?
Not a version of it. The actual job. Could you debug a production issue without an assistant? Could you architect a system with just your own reasoning? Could you review a PR and have a real opinion about the approach, not just whether the tests pass?
For engineers, this question is already uncomfortable in some orgs. For leaders who have been directing AI workflows from a distance, it may be more uncomfortable still. For either, the discomfort is the signal.
Not that you're failing. Not that you need to abandon the tools. Just that the drift has been happening and you haven't been watching for it.
AI is writing code now. What it can't write is the years of being wrong about systems and learning from it, the instinct built from having felt the weight of bad architectural decisions, the judgment that only exists because someone did the hard thinking long enough to make it automatic.
That part is still yours. The question is whether you're keeping it.
