I was on the phone with a friend who runs a CMS platform. We were talking about AI adoption across his customer base when he cut through the hype in ten seconds.
"Sht in, sht out," he said. "AI doesn't solve the decades of issues that distributed teams present."
That was it. The conversation shifted. He'd been watching companies make the same bet ... ship work to lower-rate markets with the expectation that AI would cover the gap. The tool doesn't fix coordination problems. It doesn't fix unclear ownership. It doesn't fix architectural decisions that get revisited every three months because nobody ever aligned on the tradeoffs.
AI just produces output faster. Good or bad, it comes out faster.
Speed Without Foundation
My friend sees the pattern across his customer base. Companies that struggled with architectural decisions before AI haven't found a shortcut. They've found a way to compress the same gaps into a tighter window. The teams that were already shipping inconsistent patterns, unclear ownership boundaries, and technical debt that accumulates silently ... those teams are now doing all of that faster.
If your team already struggles to make sound architectural decisions, AI doesn't rescue you. It just helps you make more bad decisions faster.
I've seen this pattern enough times now to recognize it. Teams adopt the tooling, see initial velocity gains, and mistake speed for health. The metrics look good for a sprint or two. Then the accumulated weight of unchecked decisions starts showing up. Rework that should have been caught in review. Patterns that diverged across the codebase. Technical debt that formed silently because everyone was moving too fast to notice.
The tool didn't create the problem. It revealed how little structure was there to begin with.
The Judgment Gap
What separates teams that thrive with AI from teams that struggle isn't the AI. It's judgment.
Teams with strong judgment can evaluate what the model produces. They know their patterns. They understand their tradeoffs. They can look at generated code and recognize when it fits and when it's a mismatch. AI becomes a force multiplier for people who already know what good looks like.
Teams without that judgment can't evaluate what they're getting. They're outsourcing decisions they never learned to make themselves. The result isn't better engineering. It's faster execution of uncertain choices.
This is the uncomfortable truth about AI tooling in engineering. It doesn't level the playing field. It steepens the curve. The gap between teams with strong technical judgment and teams without it gets wider, not narrower. The strong teams move faster and build better. The weak teams move faster and build more of what they already had.
The Oracles We Build
I was the oracle on a team once.
Decisions ran through me. The projects that worked were the ones I was close to. I read that as a signal that I was adding value. It was actually proof that I'd built dependency, not capability. The engineers weren't deferring to me because my judgment was better. They were deferring because I had never built a culture where their judgment was tested. When I stepped back, the decisions didn't get easier. They just got slower and more uncertain.
That same pattern is what worries me about AI tooling in weak engineering cultures. When you stop making decisions yourself, you stop building the judgment that lets you evaluate decisions made by others. Including decisions made by models.
A senior engineer told me a story that still sits with me. He had spent years building systems, switched to mostly directing AI agents, then later hit a production memory issue and realized the instinct to debug was gone. Not degraded. Gone.
When ChatGPT arrived, teams like the one I used to run had an obvious replacement oracle. Different interface. Same problem underneath.
What Actually Matters
The teams that thrive with AI have done the work before the tool arrived. They don't need AI to tell them what good looks like. They already know.
They have clear standards. Not just lint rules and style guides ... real standards that describe how decisions get made, what tradeoffs matter, when to follow the pattern and when to break it. Standards that live in documentation and in practice. The same person can explain why something was built that way and why it shouldn't have been. That's the sign of a healthy standard.
They have a review culture that interrogates before approving. Reviews that ask "why" before checking the boxes. Reviews that create space for pushback without making it personal. Where junior engineers can question senior decisions and senior engineers can admit when they missed something. The authority isn't in the title. It's in the reasoning.
They have engineers who can defend decisions in their own words. Not quote a recommendation. Not cite a benchmark someone else ran. Construct the argument. Weigh the tradeoffs. Say "here's what I considered, here's what I chose, here's what I'm watching to know if I was wrong." That capability is what makes AI output useful instead of dangerous.
The Work Before The Tool
If you're leading a team that's adopting AI tooling, the question to ask isn't about usage rates or productivity metrics. It's about judgment.
Can your engineers evaluate what the model produces? Do they have the framework to distinguish a good recommendation from a bad one? Can they explain why they're accepting or rejecting what AI suggests, or are they just accepting what looks plausible?
The work that matters happens before anyone opens the tool. It's the standards you set. The review culture you build. The time you spend teaching engineers to think instead of just execute. AI doesn't replace any of that. It requires it.
I had my own version of that moment with Cursor. Opened it, used it for ten minutes, shut it down. The suggestions arrived faster than I could evaluate them. Every keystroke generated a new option to consider, a new pattern to question, a new decision to make. It wasn't helping. It was flooding.
Later I recognized what that was. Not that AI was bad. That I needed to be clearer about what I was looking for before I could use it well. The teams that will thrive in this transition are the ones who recognize that same signal.
That's The Real Question
My friend on the phone wasn't worried about whether companies were using AI. He was worried about what they were expecting it to fix. Decades of coordination problems don't disappear because the tool got better.
AI doesn't fix weak engineering. It just speeds it up.
The question for every team is whether that's something you want. Whether your foundation can handle the acceleration. Whether your engineers can evaluate faster without losing the thread of what actually matters.
If they can, AI is a multiplier. If they can't, it's just faster output of the same problems you already had.
That's the conversation worth having. Not whether to use AI. Whether you're ready for what it will amplify.
