I spent time in a thread with over 500 experienced engineers talking about how AI destroyed their love of coding. People who've been writing code since childhood. The grief is real and specific. Not burnout. Not boredom. Something that mattered got quietly taken away.
I read every comment (almost). A pattern emerged early.
It wasn't the tool.
It was the rollout.
Leadership sends an email. All teams will integrate AI tools by end of quarter. Approved vendors attached. Adoption metrics to follow. No pilot. No feedback. No conversation about which types of work actually benefit and which don't.
Just a mandate and a dashboard.
What happens next looks fine for months. Adoption metrics climb. Velocity holds. The quarterly review slide looks clean. And then, quietly, something starts to degrade. Decisions that used to come from judgment start coming from autocomplete. The people who were exceptional at the hard parts start to feel like the hard parts don't matter anymore. Engineers can't debug their own systems because they never fully built them.
The failure curve is slow and invisible until it isn't. By the time it shows up in production, leadership has moved on to the next initiative.
Why Mandates Backfire
Adults do the exact same thing teenagers do when you mandate something. They do the opposite. We laugh about it when we're talking about teenagers. "Of course they pushed back, you told them they had to." But nobody admits that dynamic doesn't stop at 18.
Put someone in a corner, tell them this is how they work now, measure whether they're complying ... you haven't driven adoption. You've driven compliance.
Compliance means engineers will use the tool on the tasks that get credit, and quietly stop applying full judgment everywhere else. Real adoption ... the kind where someone teaches a colleague something they found on their own at 10pm ... requires a different entry point.
There's an incentive problem underneath this too. Mandates work for leadership visibility. They create visible progress without ownership of the outcome. When adoption metrics are green, the initiative looks successful. When the codebase quietly degrades six months later, that's an engineering problem. The person who sent the email is rarely in the room for that conversation.
The mandate is an incentive design problem disguised as a communication failure. The person who writes it rarely lives with how it lands.
What I Did Instead
We were given a mandate too. It stopped with me.
Before I could give my team anything, I had to give myself time with it. The first time I opened Cursor, I used it for ten minutes and shut it down. It felt foreign. I recognized the feeling. That's fear of change dressed up as productivity skepticism. I'd seen it in other people. I hadn't expected to see it in myself.
I came back. Built workflows. Trained agents. Built pipeline tools. Now I live in it ... not because I'm inside my team's sprint, but because I want to be where they are.
When I introduced AI to my team at Converse, I didn't send an email.
I gave myself a week with it first. That put me one step ahead, and one step is all a leader needs to be. Not a hundred. Just one.
We blocked time specifically for exploration. Not meetings, not deliverables ... just room to try things without a sprint deadline breathing down their necks. I worked with one of my engineers to curate training materials so the team could go deep on their own terms. We had real conversations about where AI creates leverage and where it doesn't. We took quarterly planning ... one of the most time-consuming cycles in most engineering orgs ... and figured out together how AI could cut the overhead without hollowing out the thinking.
The frame wasn't "we need to adopt this." We wanted to go see what this thing could do.
When the goal is adoption, you're measuring whether people clicked the buttons. When the goal is curiosity, the signal is different. It's the spontaneous "you have to try this" that shows up in a Slack thread before the workday starts. Engineers comparing notes across workstreams because they found something that changed how they work. You can't dashboard that. But you know it when you see it.
My team uses AI constantly now. They also love their work. Those two things aren't in conflict. But they would be if I'd just sent that email.
The Work That Benefits and the Work That Doesn't
Part of what mandates miss is that they treat every engineering task the same. And they're not.
AI is exceptional for boilerplate. Test scaffolds. Exploring an unfamiliar API. Generating the skeleton of something you already know how to build but don't feel like typing for three hours. That's real leverage. Real time returned to the things that require your full brain.
The picture changes on nuanced system design. Security-critical paths. Code that needs to survive five years of edge cases from customers you haven't met yet. The AI productivity mirage ... the gap between dashboards that look healthy and codebases that are quietly falling apart ... lives in this blind spot. When you stop asking where the unplanned work comes from, you've already paid the cost.
When you mandate usage without making those distinctions explicit, engineers stop making them too. The boilerplate and the critical path start to blur. And the engineer who was great at knowing the difference stops being asked to.
Getting in the Dirt
I showed my team how AI changed my own workflow. Not a demo. Not a slide deck. An actual walkthrough of something I was working on ... what I tried, what surprised me, what I still don't trust it with. I made space for them to bring discoveries back. I let it be visible when I was learning from what they found.
When you're measuring people, they optimize for the metric. When you're curious alongside them, they start optimizing for the thing itself.
The difference between a team running on curiosity and one running on compliance isn't the tooling. It's the leader's posture. The same AI produces completely different outcomes depending on who showed up to lead it. The tool is a mirror. It reflects how the team already works.
Adults won't admit they're running the teenage playbook. But the moment you stop mandating and start exploring alongside them ... the whole energy shifts.
Protecting What Actually Matters
The mental model is the asset. Not the output.
Engineers who built systems from scratch carry something that can't be captured in a metric. They know why things are the way they are. They can debug not just the failure but the conditions that produced it. That knowledge takes years of sitting with hard problems until they break open.
It erodes fast when every hard problem gets outsourced before the engineer fully lives with it.
Strong teams still make engineers sit with the problem before handing it to the tool. They do second passes on critical paths without the tool. They build systems for making understanding visible ... PRs that document the why, not just the what. They treat the mental model as something you protect intentionally, because nobody defaults to protecting it when velocity is what you're measuring.
The engineers in that thread who still love their work ... when you read carefully, they're on teams where someone made those distinctions out loud. Where the leader had a conversation instead of sending an email. Where they got curiosity as a frame instead of compliance as a requirement.
The mental model is the asset. Not the output.
The mandate isn't just a rollout mistake. It's a signal about what you believe the work is.
When you reduce your engineers to an adoption metric, you've told them exactly how you see them. They'll confirm they got the message ... not with words, but with the code they ship six months later.
