This is part 1 of an ongoing series on setting up AI the right way. Up next ... From Standards to Structure.
You can't train a non-deterministic tool on standards that don't exist.
Most teams setting up AI are configuring rules, hooks, and workflows before they've written down how their team actually works. The AI produces something different every time because there's no shared foundation to anchor to ... and a new engineer walking in on their first day would be just as lost.
What We Got Wrong
When we introduced AI coding tools at Converse, we jumped straight to configuration. Custom instructions. Lint rules. Workflow automation. We had strong opinions about how code should be written on our team, but those opinions lived in people's heads. They lived in code review comments that never made it into a doc. They lived in the decisions made by the engineers who had been there longest and had absorbed the context through sheer proximity.
The AI had none of that. So it defaulted to its own training data. And every engineer gave it different signals. Over months, we got every version of good running simultaneously. Services calling external APIs in conflicting ways. Error handling that varied depending on who opened the file that week. State management patterns nobody could reason through consistently when an incident hit.
We tried to fix it by tightening the AI configuration. Wrong move. The problem wasn't the tool. The humans hadn't aligned first.
Once we saw that, we stopped touching the AI setup and started writing things down.
What to Write Down
Not a wiki. Not a Confluence page that got updated once in 2022. Markdown files. Flat, versioned, living in the same repo as the code. Specific enough that a new engineer could open them and understand how decisions get made on your team, not how they get made in general.
Here's the scope of what matters.
- Code review standards ... what you push back on, what you approve with a comment, and what stops a PR from merging
- Pull request expectations ... size limits, description format, and what needs a linked ticket
- Coding style beyond the linter ... naming conventions, file structure, when a pattern is appropriate and when it isn't
- Service patterns ... how you call external systems, handle retries, timeouts, and circuit breaking
- Logging standards ... what gets logged, at what level, and with what metadata
- State management conventions ... so the approach is the same regardless of who touches the file
- Error handling decisions ... what you surface, what you swallow, and why
- Testing expectations ... what deserves a unit test, what needs an integration test, what you don't mock
- Definition of ready and definition of done
- Architecture decision records (ADRs) ... so future engineers know not just what was decided but the reasoning behind it
- API contracts ... your approach to versioning and backward compatibility
- Security patterns ... what applies across every service without exception
- Deployment and rollback standards
- Incident response and triage process
Some of this already exists on your team in scattered form. The work is to centralize it, put it in a format the AI can actually read, and make it the thing you point to when someone asks "how do we do this here."
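As a concrete sketch of what "flat, versioned, living in the same repo" can look like, here's one possible layout. The file names are illustrative, not prescriptive ... your team's list above dictates what actually goes in:

```
docs/
  standards/
    code-review.md        # what blocks a merge vs. what gets a comment
    pull-requests.md      # size limits, description format, ticket links
    service-patterns.md   # external calls, retries, timeouts, circuit breaking
    error-handling.md     # what we surface, what we swallow, and why
    logging.md            # levels and required metadata
    testing.md            # unit vs. integration, what we don't mock
    security.md           # patterns that apply to every service, no exceptions
  adr/
    0001-example-decision.md   # one record per architecture decision
```

The point isn't this exact tree. It's that the docs sit next to the code, change through the same review process, and are trivially discoverable by both a new engineer and an AI tool.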
Why You Start Here
When you onboard a new engineer, you don't hand them access to the codebase and say "figure it out." You walk them through how things work here. You build their mental model of your standards before you expect consistent output from them.
The AI is no different. It learns from what you give it. If you give it inconsistent signals, it produces inconsistent output. If you give it documented standards, it has something real to anchor against. The rules, hooks, and custom workflows you build later are only as good as the foundation underneath them.
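Once the standards exist as files, pointing the tool at them is the easy part. A minimal sketch, assuming a tool that reads a repo-root instructions file (Claude Code's CLAUDE.md is one example ... the file name and mechanism vary by tool):

```markdown
<!-- CLAUDE.md (file name varies by tool; contents are illustrative) -->

Before writing or reviewing code, read the team standards:

- docs/standards/service-patterns.md ... external calls, retries, timeouts
- docs/standards/error-handling.md ... what we surface, what we swallow
- docs/standards/logging.md ... levels and required metadata
- docs/adr/ ... the reasoning behind past architecture decisions

When a change would conflict with a standard, flag it rather than
silently deviating.
```

Notice how thin this file is. All it does is point. The substance lives in the standards docs themselves, which is exactly why writing those down has to come first.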
What We Noticed Once We Did This Work
Onboarding got faster. Engineers who joined the team after we documented our standards came up to speed in weeks rather than months. The documentation was built for the AI, but it served the humans too. That's not a coincidence. Good standards documentation is good standards documentation. The AI just makes it more urgent to have.
You have to lay the foundation before the framework. The AI Leadership Audit maps what that foundation looks like across all six dimensions ... jonoherrington.com/leadership-audit.