"I can't explain how it works," he said.
Not because he hadn't tried. Not because he was incapable. He had stared at the function for twenty minutes before our meeting, tracing the logic, trying to reconstruct the reasoning that had produced it. The code compiled. The tests passed. But when he tried to walk through it line by line, the understanding wasn't there. The logic felt borrowed. Alien. Like reading your own handwriting and not recognizing it.
The code was his. He had written it. But he hadn't written it.
The Friction Is Gone
Here's what he told me happened. He hit a problem he didn't fully understand, so he described it to an AI assistant. The AI generated plausible-looking code. It compiled. Tests passed. He shipped it. The whole cycle took a fraction of the time it would have taken him to wrestle with the problem himself, to get stuck, to unstick himself, to actually understand what he was building.
The friction that used to force understanding was gone.
I heard someone say it plain in a conversation that stuck with me: weak devs plus AI equals weak output ... faster. The junior engineers feeling this most acutely aren't struggling because AI is too hard. They're struggling because AI made it too easy to skip the work that builds judgment.
What Gets Lost When Speed Wins
This isn't a story about one engineer having a bad week. Once I knew to look for it, I started noticing the pattern everywhere. PRs getting bigger. Review comments getting thinner. The energy behind the work shifting, and not in a direction that leads to better engineering.
I watched a thread of over 500 experienced engineers describe what happened when AI tools rolled out without thoughtful guardrails. The stories were remarkably consistent. PRs ballooned as AI generated more code than humans could reasonably review. Codebases filled with inconsistent patterns, each engineer prompting their way to a slightly different solution for the same problem. Service boundaries blurred. Error handling became a patchwork. The codebase started feeling less like an intentional system and more like a junk drawer with a CI/CD pipeline.
One engineer described a progression I haven't been able to shake. The first sign wasn't that the team missed a date. It wasn't that quality cratered overnight. It started earlier and quieter than that. Engagement dropped first. People started pulling back. Then the quality of the thinking started to shift. Problem-solving got thinner. Engineers who used to wrestle hard with tradeoffs started giving quicker, flatter responses.
By the time performance looked off in the metrics, the team had already been drifting for a while. The breakdown started in trust and attention long before it showed up in delivery.
I Didn't See It at First
I have to admit something I didn't see clearly when we first gave the team access to these tools. I thought the risk was that engineers wouldn't use AI enough. That they would resist change and stick to old patterns out of fear or habit. I spent energy on encouragement and permission when I should have been paying attention to what happened when the tools became invisible.
The junior engineer in my conference room wasn't resisting AI. He was using it constantly. That was the problem. He had become fluent in describing problems to a machine but less fluent in solving them himself. The judgment that comes from wrestling with hard problems was atrophying because he wasn't being forced to wrestle anymore.
He had become fluent in describing problems to a machine but less fluent in solving them himself.
I saw it in myself too, if I'm honest. There were moments when I prompted my way through something I should have understood more deeply, when I accepted working output without accepting the understanding that should have come with it. The convenience was seductive. The cost was invisible until it wasn't.
The Exponential Nature of the Problem
A leader gave me the cleanest explanation I've heard of what's actually happening. AI isn't a multiplier. It's an exponent.
A multiplier suggests the tool gives every team the same fixed boost. An exponent is different. It magnifies whatever is already there. If a team has clear standards, strong review habits, shared judgment, and disciplined engineering patterns, AI makes those things more powerful. If a team is loose, inconsistent, and already carrying weak habits, AI doesn't smooth that out. It amplifies the instability.
That's why two teams can buy the same tools under the same pressure and end up in completely different places. The tool didn't create the difference. The foundation did.
I watched this play out in real time. Teams that had strong standards before AI arrived were producing more code at the same quality bar. Teams that were already inconsistent became inconsistent faster. The junk drawer codebases became junk drawer codebases with more volume. The engineers who were already struggling didn't get rescued by the tool. They got buried by it.
AI isn't a multiplier. It's an exponent.
What the Dashboard Can't See
Your AI adoption dashboard is probably showing you participation metrics right now. How many engineers have access. How many prompts are being sent. Usage rates and feature adoption and all the numbers that make leadership feel like the rollout is succeeding.
Here's what those numbers can't tell you: whether your engineers understand the code they're shipping. Whether the judgment that used to be built through friction is still being built at all. Whether your team's accumulated wisdom is growing or eroding while everyone moves faster.
The junior engineer who couldn't explain his own function wasn't an outlier. He was a canary. The kind of signal that shows up in behavior and conversation long before it shows up in velocity charts or defect rates.
The Question That Matters
Ask your engineers what they wish had been different about how the tools were introduced. Don't defend the rollout. Just listen.
What they tell you is what your dashboard can't see. The moments where they accepted generated code without understanding it. The times they shipped something that worked but couldn't explain why. The creeping sense that they were getting faster at the wrong things.
Some of them will tell you they noticed the drift in themselves and pulled back, forced themselves to slow down and understand. Others will admit they haven't found the discipline yet. They're still riding the speed wave, hoping the understanding will come later, knowing somewhere underneath that it probably won't.
What you hear will tell you whether your AI adoption is actually succeeding or just moving faster toward a future where fewer people on your team know how the system works.
The code compiles. The tests pass. The dashboard shows green.
But somewhere in your organization right now, a junior engineer is realizing he can't explain his own code. He's exhausted and a little bit lost. And he's waiting to see if anyone notices.
