A Diagnostic Workshop for Engineering Leadership Teams
Your AI Rollout Is Generating Output. Your Delivery Confidence Isn't.
Output is up. Delivery confidence isn't. Review quality is softer. Commitments are harder to stand behind.
Those gaps have a cost. Rework. Slipped timelines. Roadmap commitments that start requiring explanation. The teams paying that cost right now had weak foundations before AI arrived. AI just made those weaknesses expensive.
This is a facilitated half-day or full-day session that identifies exactly where AI pressure is creating execution risk in your org ... and gives your leadership team a concrete plan to fix it before it becomes a missed launch, a budget conversation, or a resignation letter.
Sessions available starting July 13, 2026. Half-day sessions are limited to 15 participants; full-day intensives to 25.
Early validation
Pressure-tested in working conversations with VPs and directors of engineering who are accountable for AI adoption and delivery outcomes.
- Enterprise retail
- Digital commerce
- Global platform orgs
What AI Pressure Is Actually Exposing
AI didn't create the execution problems running in most engineering orgs right now.
It accelerated them. Raised the stakes. Made them more expensive.
Leaders making decisions about systems they no longer fully understand. Metrics rewarding throughput while quality drifts underneath. Engineers producing more output while losing the fundamentals that made the output worth shipping. Review gaps hiding behind velocity numbers. Standards nobody aligned on before the tools arrived.
The result is faster output with more rework. Velocity without delivery reliability. Decisions being made with less confidence than the dashboard suggests.
AI tools, training, and adoption programs don't create value if the operating model underneath them is inconsistent. Every dollar spent on AI tooling leaks through the gaps this workshop identifies.
The teams where AI is working had strong foundations before AI showed up. Shared standards. Leaders who still understand the systems they're deciding about. Accountability that doesn't depend on a dashboard reading green.
The teams where AI is creating friction had cracks before AI arrived. AI turned the pressure up.
This workshop identifies which patterns are running in your org and what to fix first.
AI didn't create most of the leadership problems on your engineering floor. It's exposing them faster and making them more expensive.
If this matches what you are seeing in delivery reviews, the queue is where the fit conversation starts.
Get in the Queue

Six Patterns Already Running on Your Engineering Floor
Six patterns that show up in every engineering org navigating AI adoption under real delivery pressure. Each one is invisible on a dashboard. Each one is obvious to the engineers living with it. In retail and eCommerce environments ... where go-to-market dates are fixed, release quality is non-negotiable, and platform stability directly affects revenue ... these patterns compound faster.
Mandate vs. Adoption
Leadership rolls out AI with an email and a dashboard. No pilot. No feedback loop. No conversation about where AI helps and where it creates risk in your release process. Adoption becomes compliance. The engineers who care most about what ships to production are the first ones to disengage.
The best AI adoption I've led started with a week of exploration and zero deliverables. I was one step ahead of my team. That's all a leader needs to be. The worst adoption I've seen started with a mandate and ended with a team that stopped caring about the quality of what they shipped ... two weeks before a major launch.
How did your team adopt AI, and who decided how it would be used?
Technical Drift and Judgment Decay
Leaders who stopped building can't evaluate AI-generated output. They approve PRs they can't fully read. They commit timelines based on assumptions they can no longer pressure-test. In retail engineering, this means launch dates get promised on code nobody in the leadership chain actually understands. The gap between leadership decisions and engineering reality widens every quarter.
I still read pull requests. Not to approve them. To understand how my teams think about problems. I still prototype before I promise a timeline to a VP. The moment I stop understanding how our systems break, I lose the ability to make good decisions about them. In a retail org, those decisions land on a customer-facing storefront with revenue attached. Most leaders lost that ability months ago and their teams already know it.
When was the last time your leadership team touched the system they're making release decisions about?
Measurement Distortion
Someone saw a headline about AI making developers 3x more productive. Now that's the target. Nobody measured what 1x looked like. Velocity became a growth metric instead of a planning signal. Story points are being inflated to survive expectations that were never grounded in reality.
A 3x improvement in code generation means nothing if your deployment pipeline takes four hours. A 3x improvement in ticket throughput means nothing if half those tickets are rework from last sprint. In retail engineering, inflated velocity creates a specific danger ... leadership builds roadmaps and launch commitments on throughput numbers that don't reflect actual release readiness. The best engineering leaders I've worked with never asked for 3x anything. They asked what was in the way and removed it.
What are you actually measuring, and does it tell you whether your next release is safe to ship?
Inflated velocity is the most dangerous metric in retail engineering. It builds launch commitments on throughput numbers that don't reflect release readiness.
Skill Atrophy
Engineers directing AI agents all day are producing more output than ever. They're also losing the muscle memory for debugging, for thinking through edge cases, for understanding why certain patterns break under load.
AI is a multiplier. But a million times zero is still zero. The engineers who use AI best are the ones who built deep skills first and use AI to extend them. The engineers who skipped the fundamentals are crashing at a higher altitude with a bigger smile. In retail and eCommerce ... where Black Friday traffic, flash sale spikes, and launch-day load don't forgive shallow understanding ... atrophy shows up as production incidents at the worst possible time. Your team has both kinds of engineers. Your leaders need to know the difference.
If AI stopped working tomorrow, could your team still debug a production issue during a peak traffic event?
Cognitive Fatigue and Recovery
Before AI, engineering had natural downtime built into the day. Waiting for builds. Writing mechanical tests. Refactoring that was almost meditative. Those moments weren't wasted time. They were recovery time.
AI eliminated all of it. Every minute is now high-stakes evaluation. Did the model get this right? Does this fit the architecture, or does it just pass the tests? The mental model most leaders carry is that AI saves time, so engineers should have more capacity. The reality is that AI compresses effort, and engineers hit cognitive walls earlier. In a retail org with seasonal peaks, mandatory overtime around launches, and a release calendar that never slows down ... the fatigue compounds until it shows up as missed edge cases, sloppy reviews, and quality breaks during the windows that matter most.
Where in your team's day is nobody asking them to evaluate anything?
Culture and Standards Misalignment
Five engineers using AI five different ways. No shared standards for how to evaluate output. No agreement on when to trust AI and when to override it. Engineers citing ChatGPT in architecture reviews instead of defending decisions in their own words.
AI didn't create this problem. It exposed teams that never had a shared decision-making framework. The teams where AI is working well are the ones that had strong engineering culture before AI showed up. The teams where AI is creating chaos are the ones that were already running on convenience instead of standards. In retail engineering, where platform consistency directly affects customer experience and every team's code ends up in the same production environment ... inconsistent standards become inconsistent releases. AI just made it visible to everyone at the same time.
Does your team have a shared framework for when and how to use AI, or did everyone just figure it out on their own?
Business Case
Why finance and delivery both care about this diagnostic
Attrition cost avoided
When senior engineers lose trust in how work gets done, they leave. The all-in cost of replacing one often lands between $200K and $400K. Surfacing the leadership conditions behind churn beats trying to recruit your way out of a pattern problem.
Stalled adoption unblocked
Rollouts stall on leadership gaps more often than tooling gaps: mandates without feedback, misaligned metrics, no shared framework for using AI. The diagnostic names which gap is binding so you stop solving the wrong problem.
Incident risk reduced
Fatigue, atrophy, and inconsistent standards show up as production issues during the windows that matter. Naming those conditions early is cheaper than a postmortem on the launch your roadmap depended on.
AI investment ROI recovered
Tooling, training, and enablement spend leaks when the operating model underneath is inconsistent. This session maps where that leakage is happening so budget and attention go to fixes that compound.
How the Workshop Works
Before
Pre-work: the AI Leadership Audit
Each participant receives the AI Leadership Audit: scenarios, self-assessment prompts, and questions designed to surface what most leadership teams avoid discussing. They arrive having already sat with the uncomfortable parts.
This is not an AI tools demo. This is not a keynote. This is a diagnostic session for engineering leaders who know that speed without standards creates production risk.
What Your Leadership Team Walks Out With
Shared Language.
Your leadership team leaves with specific names for the patterns running in your org. When everyone can name the same problem, you stop managing symptoms and start fixing the system.
Honest Visibility.
The conditions driving rework, accountability slippage, and inconsistent delivery surface where they're productive. You see what's creating execution risk ... not only what the dashboard claims.
More Predictable Delivery.
Each leader leaves with one concrete system change tied to the acute patterns on their team. Something that changes how work is reviewed and shipped within days, not quarters.
Reduced Exposure.
Slipping commitments, executive escalation, and attrition start as unnamed leadership dynamics. This workshop surfaces them while you can still act on your timeline.
What's Actually at Stake
The patterns in this diagnostic show up in release quality, incident rate, roadmap accuracy, and retention. They do not stay confined to a slide deck.
The point is not to dramatize AI. It is to surface leadership dynamics early enough that your team can act on its own timeline instead of someone else's incident review.
Who This Is For
You’re a CTO, VP of Engineering, or Senior Director accountable for delivery outcomes that are getting harder to trust.
Your team is shipping more. You’re less confident in what’s shipping. Rework is up. The ratio of output to reliable delivery is moving in the wrong direction.
Adoption is uneven across teams. Some move. Others stall. The variance is hard to audit and harder to explain to executives.
Review quality, accountability, and alignment are softer than they were a year ago. You’re spending more time managing inconsistency than building toward the roadmap.
Orgs of 30 to 150 engineers are where these patterns get expensive fast. If AI pressure is real in your org, this session was built for your team.
Who Facilitates This
Jono Herrington
Jono built and led Converse's global digital engineering org at Nike, scaling the team across North America, Europe, and Asia. He's spent 15+ years building eCommerce platforms and the engineering teams behind them ... from the era when Salesforce Commerce Cloud was still called Demandware through today's composable and headless architectures.
He still writes code. He still reads pull requests. He still prototypes before he promises a timeline. He led AI adoption across a distributed engineering team without a single mandate ... and watched 500+ engineers in an online thread describe what happens when their leaders chose mandates instead.
Every diagnostic pattern in this workshop comes from a failure he experienced first or documented from the front lines of engineering leadership during the AI transition.
Every mirror in this workshop comes from a mistake he made first.
The patterns running on your team right now are already affecting your release quality, your roadmap accuracy, and your best engineers' patience. The question is whether you surface them on your terms or discover them in a post-incident review.
Investment
The cost of this workshop is less than one month of a senior engineer's salary.
The cost of not running it is a preventable production incident during the launch your roadmap was built around.
| What's included | Half-day: $15,000* (up to 15 participants) | Full-day intensive: $25,000* (up to 25 participants) |
|---|---|---|
| Pre-session AI Leadership Audit (each participant) | ✓ | ✓ |
| Facilitated session with Jono Herrington | ✓ | ✓ |
| 30-day follow-up check-in | ✓ | ✓ |
| Post-session summary: findings & recommendations | ✓ | ✓ |
| Extended work on all six patterns | — | ✓ |
| Team-level AI workflow assessment | — | ✓ |
| Individual action plans | — | ✓ |
*Travel and lodging billed at cost for all engagements.
This workshop is typically funded through engineering enablement, platform operations, or digital engineering budgets.
Session dates by email
Not ready to book? Add your email and we'll send new session dates only. To book a session, use Get in the Queue at the bottom of this page.
Looking for Individual Development?
Join the monthly cohort with engineering leaders from other companies. Same diagnostic. Built for individual leaders instead of a full staff session.
Learn about the monthly cohort

Questions Before You Reach Out?
Is this virtual, on-site, or hybrid?
Any of the three. Most teams start virtual; on-site is available when it fits your incident calendar and leadership offsite plans. Travel and lodging are billed at cost.
How do we book a session?
Email with a short note on your org size, AI rollout stage, and who would join from leadership. Expect a 20-minute fit call before dates are confirmed.
What prep is required?
Each participant receives the AI Leadership Audit in advance. The session lands harder when leaders read it and show up having already sat with the uncomfortable parts.
How many people should attend?
Half-day sessions support up to 15 leaders; full-day supports up to 25. The goal is the right decision-makers in the room, not everyone with a title.
What do we receive after the workshop?
A post-session summary of diagnostic findings and recommendations, plus a 30-day follow-up check-in to test what held under real delivery pressure.
How does pricing work?
Half-day and full-day tiers are listed above under Investment. Engagements are typically funded through engineering enablement, platform operations, or digital engineering budgets.
Get in the Queue
The first session runs July 13, 2026. Half-day sessions are limited to 15 people (25 for the full-day intensive). Send a note and we'll have a 20-minute conversation to confirm the diagnostic is the right fit for your engineering org.