The Bainbridge Warning — Book on AI Agent Governance Infrastructure

58 pages. Five documented failures. Four capabilities. One argument about the infrastructure gap that is already installed — and what to build before it becomes expensive.

Status: Live
Price: Bundle

The Argument

Most organisations building AI agent systems right now are moving faster on capability than they are on the infrastructure that makes capability safe to depend on.

The gap between those two speeds is a debt. The debt is not visible yet, because it has not been called in. When it is called in, the cost will be larger than what it would have taken to prevent it.

That is the whole argument. The rest of the book is the evidence for it, the explanation of why it happens, and the description of the four things you need to build to prevent it.

Read the Introduction free →

What the Book Covers

The Automation Irony and Its Modern Heir

Lisanne Bainbridge’s 1983 paper described what happens when automation becomes reliable enough that humans stop practicing the skills they need when it fails. The same structure applies to AI agent deployment. The irony is not approaching. It is already installed.

At the core of the argument: automation does not merely introduce unpredictability. It performs a deeper substitution. It replaces mortal measurement — measurement by someone who cannot be separated from the consequences of being wrong — with something reversible. The agent can be retrained. Its errors can be attributed elsewhere. At every level, the consequences of being wrong become redistributable. That redistribution is the mechanism.

Five Things That Already Happened

Five documented failures, three of which predate AI agents. Air France 447 (2009). The Flash Crash (2010). Mata v. Avianca (2023). Replit (July 2025). Anthropic interpretability research (April 2026). The structure of the failure is the same across all five. The technology just makes it faster.

The Four Things You Have to Build

  • Bounded Verifiability Latency — classifying every action by how reversible it is, and using that classification to determine where validation is placed and what form it takes
  • Explicit Compositional Contracts — specifying what each agent in a pipeline actually does in production, so you can test the combination before deploying it and halt cascades before they complete
  • Continuous Deterministic Layer Regression — treating the instructions governing agent behaviour as code: version-controlled, reviewed before change, and tested at four levels including production
  • Dual Ownership — naming the two kinds of authority that AI agent governance requires, defining the boundary between them explicitly, and building a resolution path for when they disagree
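
The book defines these primitives in prose, not code. Purely as an illustrative sketch of the first one — not an implementation from the book, and with all class names, categories, and policies invented here for the example — a reversibility classification might map each action class to where validation is placed:

```python
from enum import Enum

class Reversibility(Enum):
    """Hypothetical taxonomy: how costly is it to undo this action?"""
    REVERSIBLE = "reversible"      # cheap to undo (e.g. a drafted reply)
    COSTLY = "costly"              # undoable, but at real expense (e.g. a refund)
    IRREVERSIBLE = "irreversible"  # cannot be undone (e.g. funds wired, data deleted)

# The classification determines where validation sits: reversible actions
# can be audited after the fact; irreversible ones must be checked, or
# human-approved, before they execute. Policy strings are placeholders.
VALIDATION_POLICY = {
    Reversibility.REVERSIBLE: "post-hoc audit",
    Reversibility.COSTLY: "pre-commit automated check",
    Reversibility.IRREVERSIBLE: "pre-commit human approval",
}

def validation_for(action_class: Reversibility) -> str:
    """Look up where validation is placed for a given action class."""
    return VALIDATION_POLICY[action_class]
```

The point of the sketch is the shape of the primitive: the classification is explicit and machine-readable, so validation placement is a function of it rather than an ad-hoc decision per deployment.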

When Governance Becomes a Performance

Eight signals that a governance framework has become theater. And the harder variant — Detection Theater — where the monitoring is genuine but structurally insufficient: correctly measuring the layer it is pointed at while the failure lives at a layer it is not pointed at.

Why This Window Is Closing

The regulatory case (EU AI Act enforcement, August 2026), the market case (enterprise buyers now asking governance questions before procurement), and the evidence case (233 documented AI safety incidents in 2024, up 56% year-on-year) for building governance infrastructure now rather than retrofitting it under deadline pressure.

The Bundle

This book is designed to be read before the companion workbook. The bundle includes all three:

  • The Bainbridge Warning v1.3 — current edition (58 pages), incorporating Anthropic interpretability research published April 2026 and a fifth case specimen on functional state monitoring
  • The Bainbridge Warning v1.1 — original March 2026 edition (74 pages), with extended annotations and alternative framings
  • Cognitive Infrastructure Readiness v2.0 — the practitioner workbook that translates the book into checklists, templates, and measurement tools with named owners, tooling requirements, and completion criteria for each primitive

Read the book first. Then use the workbook. That sequence matters.

Common questions

What is in the book?

58 pages. Five documented governance failures across three decades — Air France 447 (2009), the Flash Crash (2010), Mata v. Avianca (2023), the Replit incident (2025), and Anthropic interpretability research published April 2026. Four capabilities you have to build. Eight signals that governance has become theater. And the functional state monitoring question that sits upstream of all four capabilities.

What is Detection Theater?

Governance Theater is when monitoring is performative. Detection Theater is the harder variant: monitoring that is genuinely working — correctly measuring the layer it is pointed at — while the failure lives at a layer it is not pointed at. The book names this and describes when it appears.

What is in the bundle?

The Bainbridge Warning v1.3 (58 pages, includes April 2026 interpretability research), The Bainbridge Warning v1.1 (74 pages, original March 2026 edition), and the Cognitive Infrastructure Readiness v2.0 workbook. Read the book first. Then use the workbook. The sequence matters.

Who is this for?

Engineering leaders, governance leads, and operations directors deploying AI agents into consequential workflows. Also useful for executive teams who need to understand the structural risk, not just the capability profile. Written for people who have watched something automated fail in a way they could not explain to a non-technical person.

How does this relate to the Bainbridge Warning assessment service?

The book installs the diagnostic frame. The assessment service deploys that frame against your organisation's specific infrastructure. Reading the book first makes the assessment work faster and go deeper.

Is there a free preview?

Yes. The Introduction and opening argument are available to read in full on this site. No email required.