He said it almost as an aside, somewhere around the twenty-minute mark of our call. He was running a team of several dozen engineers, and we had been talking through where his AI rollout stood. Then the real thing surfaced. "If I'm being honest," he said, "all I've done is buy the licenses." And then he paused. I asked him what his next step was. He didn't answer right away, and that pause held everything at once ... the announcement he'd already sent, the budget he'd already committed, the expectation he'd already set in motion, and the gap between all of that and what was actually happening on his team.
The licenses were live. Nobody knew how to fly.
The Procurement Assumption
This isn't one CTO. I've watched the pattern repeat across engineering organizations at every scale over the last two years. The sequence is almost always the same. Leadership sees where AI is going, someone in procurement gets involved, licenses are negotiated and purchased, an announcement goes out. And then the organization waits for adoption to follow, as though signing the contract was the work.
Hope takes over where planning should be.
I tracked a thread last year in which more than five hundred engineers described what happened inside their organizations when leaders treated procurement as adoption. The details varied. The shape didn't. Tools sat unused. Teams felt no clear permission to experiment, no structure for learning, no signal that leadership had done the work themselves. After a few months of low usage numbers, the mandate arrived. Use this. Show us adoption metrics. The tool that might have become a genuine productivity multiplier became a compliance exercise instead. Nobody built anything real with it, and by the time leadership noticed, the window for genuine capability building had mostly closed.
The announcement is not the flight plan. The license is not the training.
I Was Not Immune to This
I want to tell you about my own entry into AI tools, because it is relevant here.
I opened Cursor for the first time, spent about ten minutes with it, and shut it down. The story I told myself was that it didn't fit my workflow. I was moving too fast for this. The integration felt foreign. I convinced myself I was being pragmatic, that I had evaluated it and found it lacking.
I hadn't evaluated anything. I was scared of something I didn't understand yet, and I dressed that up as productivity skepticism because "this doesn't fit my workflow" is easier to say than "this is unfamiliar and I don't know what I'm doing with it."
Fear is very good at finding professional clothing.
Over the month that followed, my tech lead kept pulling me back. He had been building real workflows with Cursor, training agents, shipping things faster. He wasn't preaching about it. He was just a step ahead of me, and he kept saying "try it again" ... not as a directive but as an invitation from someone I trusted. I came back. I stayed with it long enough for it to feel natural. I built workflows. I built pipeline tools. The relationship changed completely.
What changed wasn't the tool. What changed was that I had someone who had been through the discomfort ahead of me, and a reason to stay past the part where it felt awkward.
That experience shaped how I brought AI to my team. I didn't want anyone shutting the tab after ten minutes because nobody gave them a reason to stay.
What a Flight Plan Actually Looks Like
When I introduced AI tooling at Converse, I didn't start with an announcement. I started with time.
I gave the team two weeks of blocked calendar. No delivery pressure, no meetings stacked on top of it, no expectation that they would come out the other end with something to demo for leadership. The ask was simple: explore the tools, build something small, break something, bring back whatever surprised you. That was it.
My tech lead started a weekly call where engineers shared what worked, what failed, and what caught them off guard. I deliberately didn't sit in on every session. My presence in every session would have changed what people were willing to say out loud, because teams perform for leaders even when the leaders are trying to create safety. The point was shared curiosity, not managed performance. Engineers talked to each other about what they were actually experiencing, and they learned at the pace that actually sticks.
Then I walked my team through my own workflow. Not a polished demo. The real thing ... what I was using AI for, what I was still doing without it, where I had gotten it wrong, what I still didn't trust it with. Not as proof that I had figured it out. As proof that figuring it out was normal and ongoing, and that a leader learning in the open, alongside the team, was what the learning was supposed to look like.
A flight plan is time, structure, and a leader who visibly went first.
What Happens When You Skip It
We did not always get this right, and I want to be clear about that.
When we first rolled out AI coding tools without enough structure underneath them, things degraded quietly. PRs got bigger. Review times stayed flat, which meant each change was getting less scrutiny, not the same amount. The codebase filled with inconsistencies that had not been there before. Error handling varied file to file. State management looked different across services. We had built a junk drawer with a CI/CD pipeline.
The problem was not the tool. The problem was that AI is very good at scaling whatever you give it, and what we gave it was a team that had not yet aligned on what good looked like. The tool did not create the inconsistency. It amplified what was already underneath the surface.
We had to go back and do the work we had skipped. We documented patterns and decision records. We got the humans aligned on standards before we tried to encode those standards into guardrails. We built lint rules, architectural tests, and AI workflows trained on our patterns instead of generic training data. Teams that build the system before they automate it move faster later because they are not cleaning up a junk drawer six months in. The codebase stabilized within weeks. Not because the tool got better. Because the foundation it was building on finally made sense.
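To make "encode those standards into guardrails" concrete, here is a minimal sketch of the kind of architectural test that work can produce. The directory layout, the test runner (Vitest), and the specific rule ... that services may not import each other's internals ... are hypothetical stand-ins for whatever standards a team has actually written down, not a description of our exact setup.

```ts
// service-boundaries.test.ts ... a hypothetical architectural test.
import { readdirSync, readFileSync } from "node:fs";
import { join, relative, sep } from "node:path";
import { describe, expect, it } from "vitest";

const SERVICES_ROOT = "src/services";

// Recursively collect every .ts file under a directory.
function sourceFiles(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) return sourceFiles(path);
    return path.endsWith(".ts") ? [path] : [];
  });
}

describe("service boundaries", () => {
  it("no service imports another service's internals", () => {
    const offenders = sourceFiles(SERVICES_ROOT).filter((file) => {
      const ownService = relative(SERVICES_ROOT, file).split(sep)[0];
      const source = readFileSync(file, "utf8");
      // Flag import paths shaped like "services/<other>/internal/...".
      const crossInternal = /from\s+["'][^"']*services\/([\w-]+)\/internal\//g;
      for (const match of source.matchAll(crossInternal)) {
        if (match[1] !== ownService) return true;
      }
      return false;
    });
    expect(offenders).toEqual([]);
  });
});
```

Off-the-shelf tools like dependency-cruiser or ESLint's no-restricted-imports rule can cover the same ground with less custom code. The mechanism matters less than the shift it represents: the standard lives in the repository and fails a build, instead of living in one senior engineer's head.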
What the Leaders Who Got It Right Did Differently
The common thread I've seen in leaders who actually moved their teams through this, not just past the announcement phase but into genuine capability, is that they went first.
They didn't wait for their teams to build fluency and then check in on the metrics. They used the tools themselves, visibly, and they talked about it. They named what surprised them. They created protected time for exploration before the expectation to perform arrived. And when the early experiments were clumsy or the initial numbers were low, they treated that as the work being done, not the work being missed.
The leaders who didn't get there sent the announcement, watched the dashboard, and escalated when the numbers didn't move. That escalation turned the tool from an invitation into an obligation. Teams that feel obligated to perform adoption will show you the metrics without showing you the change. The numbers go up. The capability doesn't follow. The mandate has no return address because nobody ever owned what came after it.
The jet with keys and no trained pilots doesn't sit on the tarmac forever. Eventually someone is asked to fly it because the budget has been spent and the announcement has already gone out. That is when the real cost shows up.
Back to That Pause
The CTO asked me, after the silence stretched out, what I thought he should do. I told him to go use the tools himself for two weeks before he asked anyone else to. Not because he needed to become an expert first. But because his team would know whether he had or hadn't, and that knowledge would shape everything about how seriously they took what came next.
He said he had been meaning to.
Meaning to is not a flight plan either.
