Issue #003

From Standards to Structure

April 14, 2026 · 4 min read

This is part 2 of an ongoing series on setting up AI the right way. Part 1 ... Before the AI Setup.

You wrote the markdown. You put it in the repo. You pointed at it in onboarding.

The AI still doesn't know it exists.

That's the gap most teams don't see until the inconsistency is already baked in. The documentation lives in one place. The AI's operating context lives somewhere else entirely. Closing that gap is the work this edition is about.

After we documented our engineering standards at Converse ... error handling, service patterns, logging conventions, ADRs ... the codebase got cleaner. Engineers referenced the docs. Onboarding improved. But when we went back to the AI-generated code six months later, the inconsistency was still there.

The AI hadn't read the markdown.

It wasn't pulling from our repo documentation when it generated code. It was pulling from its default training data. Every suggestion came back technically valid and contextually wrong for our team. Two engineers prompting the same problem still got two different approaches, because the AI had no way to prefer our way.

The documentation was a human win. We needed a different step to make it an AI win.

We took the standards markdown and broke it into four categories the AI tool could actually use.

Rules

The constraints that always apply. Things like "every external API call must have a circuit breaker" or "all errors surface with a correlation ID." These go into the AI tool's global configuration. No exceptions. No prompting required. The tool starts from these.
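As a sketch, the always-on constraints can be treated as data the tool loads before any prompt ever runs. The file layout, rule wording, and function name here are illustrative assumptions, not any specific tool's configuration format:

```python
# Hypothetical sketch: global rules as data loaded at tool startup.
# The rule list echoes the examples above; the structure is invented.
GLOBAL_RULES = [
    "Every external API call must have a circuit breaker.",
    "All errors surface with a correlation ID.",
]

def system_preamble(rules: list[str]) -> str:
    """Prepend the non-negotiable constraints to every generation request."""
    return "Team rules (non-negotiable):\n" + "\n".join(f"- {r}" for r in rules)
```

The point of the shape is that no engineer types these; they are simply present in every request.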

Hooks

Conditional triggers. They fire when a specific pattern appears. If an engineer starts working in the payments module, the hook surfaces the payments-specific security standards. If a new service file is created, the hook injects the service template. Hooks are context-aware. They bring the right standard to the right moment without the engineer having to remember to ask.
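The trigger logic can be sketched as a path-to-standard lookup, assuming glob-style triggers; the patterns and doc paths below are invented for illustration:

```python
import fnmatch

# Hypothetical sketch: hooks that map a file-path trigger to the
# standards doc injected into the AI's context when it fires.
HOOKS = {
    "src/payments/*": "docs/standards/payments-security.md",
    "src/services/*_service.py": "docs/standards/service-template.md",
}

def standards_for(path: str) -> list[str]:
    """Return every standards doc whose trigger pattern matches the path."""
    return [doc for pattern, doc in HOOKS.items() if fnmatch.fnmatch(path, pattern)]
```

So opening a file under the payments module would pull in the payments security standard without anyone asking for it.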

Skills

Specialized workflows the AI can run on command. A skill for generating a new API endpoint that follows your contract standards. A skill for writing a PR description that includes your required fields. A skill for creating an ADR that follows the decision record format your team uses. Skills are the intersection of your standards and a repeatable task.
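One way to picture a skill is as a prompt template that bakes the standard into the repeatable task. The section names below are hypothetical required fields, not our actual PR template:

```python
# Hypothetical sketch: a PR-description skill. The required sections
# are illustrative stand-ins for a team's actual template.
PR_SKILL_PROMPT = """Write a PR description for the diff below.
It must include these sections, in order:
## Summary
## Risk
## Rollback plan
## Linked ticket

Diff:
{diff}
"""

def build_pr_prompt(diff: str) -> str:
    """Combine the standard with the task so every run starts compliant."""
    return PR_SKILL_PROMPT.format(diff=diff)
```

The engineer invokes the skill; the standard rides along for free.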

Agents

The autonomous ones. They run on triggers or schedules. An agent that reviews open PRs against the team's size standards and flags anything over four hundred lines. An agent that checks whether new files follow your naming convention before they hit review.
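The PR-size check above reduces to a small scheduled pass. The PR record shape is assumed for illustration; the four-hundred-line limit comes from the text:

```python
# Hypothetical sketch: an agent pass that flags open PRs over the
# team's size standard. The dict shape of a PR is an assumption.
SIZE_LIMIT = 400

def flag_oversized(prs: list[dict]) -> list[str]:
    """Return the ids of PRs whose changed-line count exceeds the limit."""
    return [pr["id"] for pr in prs if pr["lines_changed"] > SIZE_LIMIT]
```

The same skeleton works for the naming-convention check: swap the predicate, keep the trigger.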

The decomposition doesn't need to be perfect on the first pass. It needs to be started.

The part that surprised us ... the AI is actually good at helping you figure out how to break this up.

We opened a planning conversation and gave it the standards markdown we'd written. We asked it to tell us which standards would work best as rules, which as hooks, which as skills, and which needed agent level enforcement. Its reasoning was solid. It surfaced things we hadn't thought about ... which standards were too context dependent to be rules, which ones required knowing what file type was being edited, which ones were genuinely ambient versus genuinely reactive.
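The planning conversation can be seeded with a prompt along these lines; the wording is illustrative, not the exact prompt we used:

```python
# Hypothetical sketch of the decomposition prompt described above.
DECOMPOSE_PROMPT = """Here are our engineering standards:

{standards_markdown}

For each standard, classify it as one of: rule (always applies),
hook (fires on a file or pattern trigger), skill (on-demand workflow),
or agent (scheduled or event-driven enforcement). Explain the tradeoffs
for any standard that could fit more than one category."""
```

Asking for the tradeoffs, not just the labels, is what surfaces the ambiguous cases.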

This is where using Opus matters. You want the model to think through the decomposition, not just sort it. The distinction between a hook and a skill is subtle. The distinction between a rule and an agent matters for where the enforcement lands. A weaker model will categorize. A stronger model will reason about tradeoffs.

You already have the knowledge in the markdown. The AI helps you figure out what to do with it.

The payoff was different than I expected. The immediate win wasn't consistency, though that came. The first win was that engineers stopped surfacing the same questions in code review. The standard was already in the room before anyone opened a file.

When your standards live in the tool's operating context, the tool stops generating the wrong thing. The code review conversation moves from "this is how we handle errors" to "does this pattern make sense here." The standard becomes ambient. It's the default, not the exception.

That's what you're building when you decompose the markdown. Not a ruleset. A way for your standards to travel with the work.

The AI Leadership Audit's Chapter 6 maps the full framework across the dimensions teams most often skip ... jonoherrington.com/leadership-audit.