AI Is Shipping Your Blind Spots

Every prompt is one perspective.

[Image: black and white illustration of a bearded man in a baseball cap inside a gear icon; half his face is lit, half lost in shadow, suggesting a blind spot.]

I shipped a spec that missed three edge cases a QA engineer would have caught in five minutes.

Not buried edge cases. Not obscure branching logic deep in a data model. Surface stuff. The kind of thing that lives at the front of the test plan, the first page of the test run, so obvious in hindsight that explaining it in a retrospective would be the most embarrassing part. There was no incident. No postmortem. Just a dev cycle quietly bleeding out through holes that should never have made it past the spec.

The QA team's test cases weren't ready yet. That happens in spec-driven development ... teams don't always move in lockstep, and some functions are still prepping when others are already building. I know this. I've been managing that coordination for years across teams on three continents. But knowing the gap existed didn't stop me from walking straight into it. The happy path was solid. The spec covered the main flow perfectly. It just never got the perspective of someone whose entire job is to find what the happy path misses.

I talk about engineering as a team sport. I wrote that spec alone.

The Problem Nobody Is Naming

AI is exceptional at amplifying output. It is considerably less exceptional at amplifying perspective.

When you write a prompt, you bring your mental model, your assumptions, your frame. The model responds inside that frame. It does not volunteer the QA engineer's instinct that a happy path spec almost always breaks on state management. It does not bring the DevOps engineer's quiet read on what happens to this flow under load at 2am. It gives you a thorough, considered answer to the question you asked.

The question you asked was incomplete.

The feedback loop is the problem. And moving faster with a broken feedback loop is just moving faster toward the wrong thing. If you are using AI to accelerate delivery without fixing the loop, you are not shipping product. You are shipping your assumptions.

AI doesn't have blind spots. You do. And it will ship yours at scale if you let it.

The AI productivity conversation tends to live at the tool layer. Are we using the right model? Are we structuring prompts well? Are we reviewing output carefully? Those questions matter. But they all assume the bottleneck is technical. Most of the time, the bottleneck isn't technical.

What I Built Instead

After that spec, I stopped treating AI as a solo collaborator. The QA timing gap wasn't going away, and I had no system for bringing the QA perspective into a spec when QA wasn't in the room yet. So I built one: a system of personas.

A business analyst. A technical writer. A product owner. A QA engineer. A DevOps engineer. Not generic role labels ... personas built from years of being in rooms where things went sideways. Each one carrying a specific lens, a specific vocabulary, a specific set of questions that function inside a given layer of the work.

When I work through a spec now, I run it through each persona before it ever touches a dev. The QA engineer asks about the edge cases. The DevOps engineer asks about the failure modes. The product owner asks whether the spec actually solves the user's problem or the problem I described. The business analyst asks what we're assuming about the customer that we haven't verified.

Each angle surfaces something the previous one missed.
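
Here's roughly what that loop looks like sketched as code. This is illustrative only: the persona lenses are abbreviated stand-ins for fuller ones, and ask_model is a placeholder for whatever model call you already use.

```python
from collections.abc import Callable

# Abbreviated persona lenses. The real ones carry far more context;
# these are stand-ins to show the shape of the loop.
PERSONAS = {
    "QA engineer": (
        "You break happy paths for a living. List the edge cases, state "
        "transitions, and invalid inputs this spec does not account for."
    ),
    "DevOps engineer": (
        "You get paged at 2am. Name the failure modes and load concerns "
        "this spec ignores."
    ),
    "Product owner": (
        "Does this spec solve the user's actual problem, or the problem "
        "the author described? Name the difference."
    ),
    "Business analyst": (
        "What is this spec assuming about the customer that nobody has "
        "verified? List each unverified assumption."
    ),
}

def review_spec(spec: str, ask_model: Callable[[str], str]) -> dict[str, str]:
    """Run the spec past every persona and collect what each lens surfaces."""
    findings: dict[str, str] = {}
    for name, lens in PERSONAS.items():
        # ask_model is whatever client you use; only the framing matters here.
        findings[name] = ask_model(f"{lens}\n\n--- SPEC ---\n{spec}")
    return findings
```

The order matters less than the coverage. Nothing moves on until every lens has had its pass.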

Each persona carries the failure experience of someone who has watched that layer break. You cannot build that from one discipline. A specialist knows what strong looks like in one layer. They don't know what the person two layers above or below them is watching for. The system works because it does not stay in one lane.

Expertise is real. Depth still matters. But depth in one function is not a substitute for a system that crosses functions.

The Loop That Actually Matters

Everyone is focused on making their prompts better. Structuring outputs, iterating on instructions, getting sharper at the technical mechanics of working with models. That is not wrong. It is also not the leverage point.

The feedback loops that make AI powerful aren't technical loops. They're leadership loops.

Knowing where your judgment needs to be in the loop and where you can let the model move without you. Knowing how to look at a piece of output and give precise feedback that actually improves the next iteration ... not just "this isn't right" but "this missed the failure mode that happens when the user doesn't complete step two."
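
The difference is concrete enough to sketch. Using the example above ... the prompt shape here is illustrative, not a prescribed format:

```python
def revision_prompt(draft: str, feedback: str) -> str:
    """Compose the next iteration from the previous draft plus precise feedback."""
    return f"Previous draft:\n{draft}\n\nFeedback to address:\n{feedback}\n\nRevise."

# Vague feedback gives the model nothing to correct against:
vague = "This isn't right. Try again."

# Precise feedback names the failure mode and what the revision must cover:
precise = (
    "This missed the failure mode that happens when the user doesn't "
    "complete step two. Revise to state what happens to the flow's "
    "partial state in that case."
)
```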

That kind of feedback requires a mental model that spans the whole system. You cannot shape AI output well if you only understand one layer of what you're building.

The engineers I've seen struggle most with AI are not the ones who lack technical skill. They're the ones who have never had to advocate for the whole system. They have spent their careers going deep in one place, and now they are prompting from that one place ... and getting output that is technically complete and strategically incomplete.

Staying close enough to the codebase to have an informed view has always mattered. But the ability to hold multiple functional perspectives at once matters more now than it ever did. Because AI will faithfully execute whatever perspective you bring.

The quality of your perspective is the constraint.

Prompting Is a Delegation Skill

Every prompt is a delegation decision. You're deciding what problem to hand off, how to frame it, what constraints to provide, and what you're going to do with the output. The brief determines the outcome.

Poor delegation produces rework. Strong delegation produces leverage.

The persona system makes strong delegation mandatory. Before you hand work off, you check the frame. You ask what the QA engineer would see, what the business is actually trying to accomplish, what breaks under pressure. Then you brief.
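
For illustration, here is the brief sketched as a structure. The fields are one reading of the delegation decisions above ... what to hand off, how to frame it, what constraints apply, what you'll accept ... not a canonical format:

```python
from dataclasses import dataclass

@dataclass
class Brief:
    """One possible shape for a delegation brief. Illustrative only."""
    problem: str            # what you're handing off
    frame: str              # the perspective the work should be done from
    constraints: list[str]  # hard requirements the output must respect
    acceptance: list[str]   # how you'll judge the output before you use it

    def to_prompt(self) -> str:
        lines = [f"Problem: {self.problem}", f"Frame: {self.frame}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("Accept only if:")
        lines += [f"- {a}" for a in self.acceptance]
        return "\n".join(lines)
```

The structure is not the point. The point is that the persona questions get answered before the handoff, not during the rework.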

AI lets you do this at speed. The catch is you have to already have the range to model those perspectives. If you don't have the range, the speed works against you.

The Blind Spots Are Ours

Blaming AI for your blind spots is like blaming the mirror for what you look like.

The model isn't hiding your gaps. It's reflecting your frame back at you with precision. When the output misses something important, the miss was almost always present in the input. In the assumption you didn't surface. In the edge case you didn't think to specify. In the angle you left out of the room.

The spec I shipped missed three edge cases because I wrote it alone, from one perspective, with one set of mental models. The QA team wasn't in the room. AI was. It helped me write that spec faithfully. It did exactly what I asked. The tool wasn't the problem.

The solution was to build a system that forces multiple perspectives into every consequential piece of output before it ships. To do with AI what strong teams do with humans ... bring more than one angle to the table, on purpose, every time.

That's a leadership problem. Not a technology problem. And it's one that grows in proportion to how much AI you're putting into the loop.

Every prompt you write is one perspective. The question isn't whether AI can do the work. It's whether you have the range to catch what it misses.

The blind spots aren't AI's fault. They're yours.