'AI Wrote It' Is Just the New 'Steve Wrote It'

On blame culture, accountability, and why the author never mattered

[Illustration: black-and-white drawing of a bearded man shrugging in front of cracked servers and flames, holding a card labeled 'STEVE AI' to symbolize blame-shifting from one author to another.]

The bug report made no sense. A guard clause that should have caught the edge case wasn't there. I traced the logic back through three refactorings, four renames, and finally hit git blame. The commit was from 2019. The author wasn't someone I recognized. Let's call him Steve. He left the company in 2021.

The engineer beside me looked at the screen, sighed, and said the phrase I'd heard a dozen times before: "Oh, Steve wrote it." Said with a shrug, like it was weather. Unavoidable. Someone else's problem.

Steve's code was still there, still breaking, still accumulating comments in code reviews like barnacles on a ship nobody wanted to claim. We had a name for it. Steve's code. Not legacy ... legacy implies something earned its keep. This was just... tolerated. Code that arrived with a built-in excuse.

I used to think I was different from the people who shrugged and moved on. I was the careful one, the thoughtful one, the person who asked hard questions in architecture reviews. Then I remembered The Daily Deal.

I used to think I was different from the people who shrugged and moved on. Then I remembered The Daily Deal.

When I Was Steve

Early in my career, hungry and ambitious and convinced I could solve anything, I got put in charge of a project I should have declined. A client needed a daily deal feature on a platform that didn't support it. The "right" answer was telling them no, or building a proper microservice outside the platform, or waiting for platform support. The answer I chose was a hack. A custom application bolted onto a SaaS system that was never designed for it. We took it on the chin and shipped the bolt-on.

We got it live. That was the metric that mattered at the time. Feature shipped, client happy, invoice paid.

Six months later I learned the truth. The feature never quite worked. Support had been quietly patching it, working around its failures, apologizing to the client for issues that kept resurfacing. The code I'd written was still there, still breaking, still nobody's job to fix properly. The total cost of ownership ... support hours, client frustration, accumulated technical debt ... was probably ten times what a proper solution would have cost.

I was Steve. My code had my fingerprints on it, my architectural decisions, my shortcuts. But in the retelling, it became "that Daily Deal thing" or "the legacy system" or "the client's mess." The language shifted. The author became abstract. The failure became weather.

The language shifted. The author became abstract. The failure became weather.

The Pattern in Practice

Steve's code isn't about competence. Steve was a senior engineer with decades of experience. The issue was consistency ... he would fix something in response to feedback, then reintroduce the same pattern two PRs later. Guard clauses forgotten. Error handling inconsistent. Abstractions that were almost right but leaked implementation details in ways that propagated through the system.

The code did the bare minimum. It compiled. It passed tests. It was just... blah. Lifeless. Code written by someone going through the motions of engineering without caring about the craft.

And here's the thing: we kept shipping it. Review after review with fifty-plus comments, and it still merged. Still deployed. Still became someone else's problem when it broke. The process existed ... reviews, comments, feedback ... but the accountability didn't. The code had a built-in excuse, so the organization learned to work around it rather than fix it.

AI Enters the Room

Now I hear the same tone about generated code. "The model hallucinated that regex." "AI wrote this function." Said the same way, like the code appeared through no human action. Like nobody approved it, merged it, deployed it to production.

I understand the impulse. When Copilot suggests a solution that looks plausible and you're under deadline pressure and the ghost text feels like magic, it's easy to stop thinking. You tab-complete your way through a feature. The code works in your manual test. You ship it.

The failure modes are different from Steve's ... AI doesn't forget guard clauses, it invents them. It doesn't duplicate logic, it fabricates utility functions that don't exist. But the excuse structure is identical. Code that arrives with built-in absolution. Not my fault. The machine did it.
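To make the fabrication failure mode concrete, here's a hypothetical sketch (the helper name and amounts are invented for illustration, not from any real codebase or library): a generated snippet that looks plausible but calls a utility that doesn't exist, and the small amount of human work it takes to replace it with something real.

```python
# Hypothetical example of a fabricated call. The generated code read:
#
#     total = utils.sum_currency(amounts)   # no such function anywhere
#
# It compiles in your head, not in the interpreter. The reviewer's job is the
# same as with Steve's code: verify every call actually exists and is sound.
from decimal import Decimal

def sum_currency(amounts):
    """Sum monetary amounts exactly, the way the fabricated helper implied."""
    # Route through str() so float inputs keep their displayed value.
    return sum((Decimal(str(a)) for a in amounts), Decimal("0"))

print(sum_currency([19.99, 0.01]))  # Decimal("20.00")
```

The point isn't that the model is careless; it's that the fix required a human to notice, decide on semantics (exact decimal arithmetic here), and own the result.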

The failure modes are different from Steve's. But the excuse structure is identical.

What Process Actually Means

The question isn't whether code came from a human or a model. The question is whether there's a process to catch problems before they ship.

For The Daily Deal, the process failed in the design phase. We should have recognized that "can we hack this together" was the wrong question. Real process starts with architectural tests ... can we explain why this approach is sound? Can we articulate the failure modes? Can we prove this won't become tomorrow's Steve code?

For Steve's code, the process failed in the moment we let it ship. Fifty-three comments on a PR isn't a review ... it's a complaint that got ignored. Real process requires blocking merges that don't meet standards, not just documenting their failures. It requires accountability for the reviewer who approves, not just the author who submits.

For AI-generated code, the process fails when we skip the thinking. Real process means treating generated code like any other contribution: subject to the same architectural tests, the same standards, the same requirement that someone can explain why it works and owns the consequences when it doesn't.
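One way to picture that requirement is as a merge gate that never looks at attribution. This is an illustrative sketch under loose assumptions (the `PullRequest` shape, field names, and policy are invented for the example, not any real CI system's API):

```python
# Illustrative sketch: a merge gate that applies the same bar to every PR,
# regardless of whether a human or a model wrote the code.
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    author: str                      # "steve" or "copilot" -- irrelevant below
    tests_passed: bool
    approvers: list = field(default_factory=list)
    design_note: str = ""            # someone must explain why the approach is sound

def can_merge(pr: PullRequest) -> bool:
    """Block merges that don't meet standards; the author's name never appears."""
    has_independent_approval = any(a != pr.author for a in pr.approvers)
    return pr.tests_passed and has_independent_approval and bool(pr.design_note.strip())

ai_pr = PullRequest(author="copilot", tests_passed=True,
                    approvers=["dana"],
                    design_note="Guard clause covers the empty-input case.")
print(can_merge(ai_pr))  # True -- the gate held, so attribution doesn't matter
```

Notice what the function checks and what it doesn't: tests, an independent approval, a written rationale ... and nothing about who typed the code.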

Real process means treating generated code like any other contribution.

The Cost of Ownership

The Daily Deal taught me something about cost I didn't understand at the time. The "savings" of shipping fast ... of accepting the hack, of becoming Steve ... compound in reverse. Support tickets. Client frustration. The slow erosion of trust that comes from systems that "mostly work."

We talk about technical debt as if it's a ledger entry, something to pay down later when we have time. But some debt is structural. Some code is so poorly fitted to its context that fixing it properly means rebuilding what it touches. The total cost of ownership becomes so high that tolerating the broken thing feels cheaper than fixing it. So the Steve code persists.

AI doesn't change this equation. Generated code that bypasses review, that ships because it compiles and passes surface-level tests, that accumulates in the system without coherent architecture ... that code has the same cost profile. The author is different. The math is the same.

Accountability Over Attribution

Code doesn't get a pass because of its author. Steve's bugs weren't acceptable because he was human. AI's bugs aren't acceptable because it isn't. The question was never who wrote it. The question was always whether we had the process and the will to catch problems before they became someone else's weather.

The organizations that get this right don't obsess about attribution. They invest in the design phase. They block merges that don't meet standards. They accept that velocity measured in features shipped is often velocity toward a future where everything is Steve code, just with different authors.

The AI is Steve. Steve is the AI. Both are just excuses for skipping accountability. And excuses are only as powerful as the culture that accepts them.