We shipped a release, something broke, customer service called, and we fixed it ... then the same issue hit the next release and we applied the same fix again. That is the moment this became clear for me. Pager rotation answers who gets alerted. Reliability answers why this happened again.
Good response, bad reliability. Paging did what it was supposed to do. Follow-through did not. The missing step was simple and expensive: we did not update the runbook with the failure mode and the fix after the first incident. That omission looks small when everyone is tired and the incident is closed. It looks huge when customer service calls again with the exact same failure.
When people say "reliability," most teams jump straight to mechanics.
- who is on call
- escalation trees
- alert routing
- shift coverage
All of that matters. None of that prevents recurrence by itself.
You can have perfect pager coverage and still rerun the same incident loop every release if leadership habits are weak.
A pager distributes pain. Leadership habits remove repeated pain.
The Habit Gap
The gap between response and reliability usually lives in four places.
- Runbook discipline. Fixes do not become operational memory.
- Decision traceability. Context stays in people, not artifacts.
- Ownership clarity. Everyone responds, nobody owns prevention.
- Follow-through cadence. Post-incident actions are discussed, not shipped.
In our case, runbook discipline and follow-through were the miss. We responded fast, but we failed to convert a solved incident into system behavior. So the system did what systems do when nothing changes. It repeated, and repetition is what separates random failure from leadership failure.
Why Teams Keep Repeating Incidents
Because response feels productive and follow-through feels like overhead. In the moment, fixing production is urgent. Updating documentation, assigning preventative ownership, and verifying the change before the next release feel secondary. That secondary work is exactly where reliability is built.
This is why teams often brag about incident response quality while quietly carrying recurrence debt.
Fast response can coexist with a high repeat rate, and that should scare you more than one slow response. A single slow response can be bad luck. A repeated incident is a system telling you exactly where your leadership discipline is weak.
The hard part is that repeat incidents often look different at surface level. Different trigger, different file, different person on call. Leaders convince themselves it is "new." Underneath, the behavioral miss is usually identical ... knowledge did not move into system memory, so the team keeps paying rediscovery tax.
The Leadership Job Between Releases
If you want fewer repeat incidents, leadership has to install non-negotiable habits between releases.
- every material incident updates the runbook before close-out
- every fix includes "how we prevent this exact class again"
- every action has an owner and due date
- every next release checks whether those actions were completed
No owner means no action. No due date means no urgency. No pre-release check means the same bug gets a second chance.
Teams call this "unlucky timing." The incident log usually calls it "same root cause, different Tuesday."
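If you want the pre-release check to be mechanical instead of heroic, it can be a small script in the release pipeline. Here is a minimal sketch in Python, assuming prevention actions are tracked as simple records ... the record shape and field names are illustrative, not tied to any particular tracker.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PreventionAction:
    """One recurrence-prevention action from a past incident."""
    incident_id: str
    description: str
    owner: str | None   # no owner means no action
    due: date | None    # no due date means no urgency
    completed: bool

def release_blockers(actions: list[PreventionAction], today: date) -> list[str]:
    """Surface open recurrence risk before a release.

    The gating decision stays human; this just makes the open risk
    impossible to miss.
    """
    blockers = []
    for a in actions:
        if a.completed:
            continue
        if a.owner is None:
            blockers.append(f"{a.incident_id}: prevention action has no owner")
        elif a.due is None:
            blockers.append(f"{a.incident_id}: prevention action has no due date")
        elif a.due < today:
            blockers.append(
                f"{a.incident_id}: '{a.description}' owned by {a.owner} is past due"
            )
    return blockers
```

Wire something like this into the release checklist so a non-empty list fails the gate, instead of relying on someone remembering to ask.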
Many teams think the missing ingredient here is more process. It usually is not. It is enforcement.
You do not need a 17-step incident framework. You need leadership that treats recurrence-prevention actions like release-critical work, not optional cleanup.
That means changing what gets socialized in leadership forums. Most updates cover three things: the incident happened, how long customers were impacted, and that mitigation is complete. Add these two every time ... recurrence-prevention owner and recurrence-prevention verification date. If those fields are missing, the update is incomplete.
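One way to enforce that completeness is to treat the leadership update like a schema. A minimal sketch, with illustrative field names:

```python
# Required fields for an incident update in a leadership forum.
# Names are illustrative; map them to whatever your tooling calls them.
REQUIRED_FIELDS = (
    "incident_summary",
    "customer_impact_duration",
    "mitigation_status",
    "recurrence_prevention_owner",        # the two additions
    "recurrence_prevention_verify_date",
)

def missing_fields(update: dict) -> list[str]:
    """Return the names of absent or empty fields; empty list means complete."""
    return [f for f in REQUIRED_FIELDS if not update.get(f)]
```

An update with anything in that list does not get presented. That is the whole enforcement mechanism.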
The Reliability Review That Actually Works
Keep it small and brutal. Run a 20-minute reliability follow-through review before each release.
1. What failed last release?
2. What did we change in runbook and ownership?
3. Which action is still open?
4. Should this release be gated until that is done?
If you never ask Question 4, you are doing ceremony, not reliability. If you ask Question 4 but never hold the line, you are doing theater.
A lot of teams get stuck here because they confuse speed with discipline. "We have to move fast" becomes the reason gates get waived. Then the same incident comes back with higher cost and lower trust.
Speed without recurrence control is fake speed. You are borrowing from future stability.
The Common Lie
The common lie in operations is "we learned a lot." Learning without behavior change is just narrative.
We "learned a lot" after the first incident too. Then the same issue happened next release. That line should be permanently disallowed in leadership meetings unless someone can point to shipped prevention changes.
If "we learned a lot" keeps showing up every week, congratulations ... you built a book club, not a reliability program.
A better standard is simple. Ask "What did we change in the system that makes this failure less likely next release?" No change means no learning, just memory, and memory expires the moment pressure returns.
If you want a leadership red flag, watch for this sentence after incidents ... "Let's make sure we remember this next time." That sentence usually means nobody owns codifying it. Memory is not a control. Ownership is a control.
The Runbook Standard That Prevents Repeat Pain
A runbook should not be a generic troubleshooting wiki. For repeat-failure prevention, every incident update should include all of these.
- Failure signature ... how to recognize the issue quickly
- Immediate mitigation ... what stabilizes customer impact now
- Root cause summary ... what actually failed
- Permanent prevention step ... what changes before next release
- Owner and due date ... who is accountable for completion
If any one of these is missing, the runbook is incomplete for reliability purposes, and incomplete runbooks create confident responders with inconsistent outcomes.
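If your runbook lives somewhere scriptable, the five-section standard is easy to lint. A sketch, assuming each incident update can be represented as a plain record whose sections mirror the list above:

```python
from dataclasses import dataclass, fields

@dataclass
class RunbookUpdate:
    """One incident's runbook entry. All five sections are mandatory."""
    failure_signature: str      # how to recognize the issue quickly
    immediate_mitigation: str   # what stabilizes customer impact now
    root_cause_summary: str     # what actually failed
    permanent_prevention: str   # what changes before next release
    owner_and_due_date: str     # who is accountable for completion

def incomplete_sections(entry: RunbookUpdate) -> list[str]:
    """Return the names of empty sections; empty list means the entry is usable."""
    return [f.name for f in fields(entry) if not getattr(entry, f.name).strip()]
```

Running a check like this on every runbook change turns "the runbook is incomplete" from an opinion into a failing build.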
One more practical standard. Write runbook language so a person who was not in the incident can execute it. If your update requires tribal context from the original responders, it is not ready.
If the same incident happens twice, treat recurrence like a leadership signal ... not bad luck dressed up as engineering mystery.
Pager rotations are necessary, but they are not the strategy. Reliability is the strategy, and runbooks plus follow-through are where that strategy becomes real.
What to Measure for 30 Days
If you want proof that leadership habits are changing reliability, track these three numbers for one month:
- Repeat incidents by service area
- Percent of incidents with runbook updates completed before next release
- Percent of follow-through actions closed by due date
Do not overcomplicate this. If repeat rate is flat and closure rate is low, your incident program is still response-heavy. If repeat rate drops while closure rate rises, your leadership habits are improving, and reliability is finally compounding.
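None of these three numbers needs an analytics platform. Here is a sketch of the arithmetic, assuming a month of incident records with illustrative field names:

```python
from collections import Counter

def thirty_day_numbers(incidents: list[dict]) -> dict:
    """Compute the three tracking numbers from a month of incident records.

    Each record is assumed to carry 'service', 'root_cause',
    'runbook_updated_before_next_release' (bool), and 'actions',
    a list of dicts each with a 'closed_by_due_date' bool.
    """
    # Repeat incidents: same service and root cause seen more than once.
    seen = Counter((i["service"], i["root_cause"]) for i in incidents)
    repeats = {key: n for key, n in seen.items() if n > 1}

    runbook_rate = sum(
        i["runbook_updated_before_next_release"] for i in incidents
    ) / max(len(incidents), 1)

    actions = [a for i in incidents for a in i["actions"]]
    closure_rate = sum(a["closed_by_due_date"] for a in actions) / max(len(actions), 1)

    return {
        "repeat_incidents_by_service": repeats,
        "runbook_update_rate": runbook_rate,
        "action_closure_rate": closure_rate,
    }
```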
The Cost You Do Not See on the Incident Report
Repeat incidents do not just cost engineering hours. They quietly tax every team touching the customer journey. Customer service loses trust because they keep escalating a "new issue" that feels suspiciously familiar. Product pads plans with defensive timeline buffers. Engineering credibility drops because "resolved" starts to sound like "temporarily quiet."
That is why recurrence has to be treated like a leadership signal, not only a technical metric. A repeated incident is proof that your organization can respond under pressure but cannot consistently learn under pressure. Those are two different capabilities.
One practical move that changed this for us was splitting incident closure into two states:
- mitigated ... customer impact is controlled
- closed ... prevention actions are verified and shipped
Most teams collapse those into one state, and that is where follow-through dies. The ticket gets closed when the pager quiets down. The harder prevention work becomes a "later" task, and later rarely survives the next priority wave.
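The split is trivial to encode, which is part of why it works. A minimal sketch, with the evidence artifact deliberately left up to the team:

```python
from enum import Enum

class IncidentState(Enum):
    OPEN = "open"
    MITIGATED = "mitigated"  # customer impact is controlled
    CLOSED = "closed"        # prevention actions are verified and shipped

def close_incident(state: IncidentState, prevention_evidence: str | None) -> IncidentState:
    """Refuse closure unless prevention evidence is attached.

    The evidence might be a link to the shipped change plus its
    verification run; what counts as evidence is a team decision.
    """
    if state is not IncidentState.MITIGATED:
        raise ValueError("close only from MITIGATED; mitigate customer impact first")
    if not prevention_evidence:
        raise ValueError("cannot close: no prevention evidence attached")
    return IncidentState.CLOSED
```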
Separating those states forced better behavior fast:
- no one could claim completion without prevention evidence
- release leads could see open risk in plain language
- leadership reviews stopped rewarding fast mitigation alone
If this feels strict, good. Reliability is supposed to be strict around recurrence. Speed is still valuable, but speed without memory is just expensive repetition. Teams do not burn out only from big incidents. They burn out from the demoralizing feeling of solving the same incident twice and calling both wins.
Another forcing function that works is to make recurrence visible at the same level as availability. Most dashboards celebrate uptime and MTTR. Add a recurrence trend by service and review it in the same leadership forum. When recurrence is invisible, leaders can tell themselves the system is healthy because incident volume looks manageable. When recurrence is visible, the conversation shifts from "we handled it fast" to "why are we still paying this tax?" That shift is where reliability behavior starts compounding.
If you only add one line to your weekly ops review, make it this ... "Which recurrence risk did we reduce this week?" You cannot fake that answer for long, and that is exactly why it works.
This Week
Pick your last incident and run this test. Is the runbook updated with failure mode + fix? Is there an owner for recurrence prevention? Is the action complete before next release? If any answer is no, you are still operating in response mode, not reliability mode.
Add one more check before your next release. Is the runbook update linked in the release checklist? If it is not linked, it is probably not done. Then add this gate ... "Has recurrence prevention been verified in staging or pre-release validation?" Documentation without verification is optimism. Optimism alone is weak cover in production.
If this resonates, pair it with Your Team Is One PTO Away from Missing the Quarter for the dependency side of the same operational risk.
