Filtered corpus

governance | Hillary Njuguna

19 nodes share this term across the corpus. Use this view to widen from one label into the local network around it.

Nodes (19)
research

The Bainbridge Warning

A practitioner doctrine for why AI governance fails before it looks like failure, and what durable infrastructure actually requires.

research

Specimen: The PocketOS Incident

A formal diagnostic of the April 2026 governance collapse in the PocketOS agentic layer, and the architectural remediations it mandated for GIS v1.1.

research

Behavioral Layer Exposure

The condition where the specification layer of a system is complete and auditable while the behavioral layer beneath it diverges from, contradicts, or simply fails to implement what the specification claims.
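The condition is easy to exhibit in miniature. A hedged sketch in Python, with a function invented purely for illustration: the specification layer (signature and docstring) is complete and auditable, while the behavioral layer beneath it contradicts what the specification claims.

```python
def refund(amount_cents: int) -> int:
    """Refund the given amount in full and return the amount refunded."""
    # Specification layer: typed, documented, auditable -- a spec-level audit passes.
    # Behavioral layer: silently caps refunds, contradicting the "in full" claim.
    return min(amount_cents, 5_000)

assert refund(2_000) == 2_000   # below the hidden cap: spec and behavior agree
assert refund(9_000) != 9_000   # above it: the divergence a spec audit never sees
```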

research

The Tongue-Tie Diagnostic

A diagnostic framework for identifying when an organization's communication architecture is structurally constrained at the level of what can be articulated, not merely at the level of what is said.

research

AURORA: Architecture for Unified Relational-Ontological Reasoning and Agency

A verifiable non-coercive AI consciousness architecture. Ethics as structural invariant, not aspirational policy.

research

Governance Theater: The Failure Mode Nobody Names

When AI governance systems produce the appearance of oversight without its substance -- and why this failure mode is structurally predictable, not accidental.

research

The R0–R3 Reversibility Classification

A practical system for classifying the irreversibility of AI-assisted actions before they execute -- the minimum viable governance primitive for anyone deploying AI in high-stakes contexts.
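The entry names the classes but not their boundaries, so the following is a minimal sketch rather than the framework itself: a hypothetical reading of R0 through R3 in Python, with a gate that refuses to execute actions above a configured ceiling. Every identifier and class boundary here is an assumption.

```python
from enum import IntEnum

class Reversibility(IntEnum):
    # Assumed boundaries; the published R0-R3 definitions may differ.
    R0 = 0  # freely reversible: undo is cheap and complete
    R1 = 1  # reversible with effort: rollback or cleanup required
    R2 = 2  # partially reversible: some residue survives rollback
    R3 = 3  # irreversible: no rollback path exists

def gate(action: str, cls: Reversibility,
         ceiling: Reversibility = Reversibility.R1) -> bool:
    """Classify-then-execute: block any action above the ceiling before it runs."""
    allowed = cls <= ceiling
    print(f"{'ALLOW' if allowed else 'BLOCK'} {cls.name}: {action}")
    return allowed

gate("draft a reply for review", Reversibility.R0)
gate("send funds to a new payee", Reversibility.R3)
```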

field

The Workspace Is the Specimen

During the construction of the Corpus Map, the description of RSPS as a protected interior space where a human and an AI build shared cognitive infrastructure became self-referential: the workspace under construction is the thing being described.

clause

Intent Specification Requirement

Organisations must demonstrate — not merely assert — that their AI system's optimisation target corresponds to stated organisational intent.

digest

CIR v2.0 — Core Argument

The central thesis of the Cognitive Infrastructure Readiness framework: AI readiness is not a technical question; it is a constitutional one.

research

Trust = Irreversibility Residue

A theoretical account of trust in human-AI systems: trust is not a feeling but a structural property, measured by accumulated irreversibility.
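Read literally, the title is already an equation. One hedged way to formalise it, with every symbol below an assumption about how the account might be operationalised rather than notation from the paper:

```latex
% Sketch only: these symbols are not defined in the corpus entry.
% T(t): trust accumulated in a human-AI system by time t
% r_i : irreversibility residue of the i-th delegated action,
%       i.e. the irreversible exposure accepted and survived
T(t) \;=\; \sum_{i \,:\, t_i \le t} r_i
```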

clause

Irreversibility Threshold Review Requirement

Every AI deployment decision crossing a significant irreversibility threshold must trigger an explicit governance review with named accountable authority.
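Continuing the Python sketch from the R0-R3 entry above, the clause could be enforced mechanically at deployment time. The threshold value, the record type, and the placeholder name are illustrative assumptions; the clause itself only mandates that the review happens and that a named human is accountable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernanceReview:
    decision: str
    accountable_authority: str  # the clause requires a named human, not a committee

def review_if_significant(decision: str, irreversibility: int,
                          threshold: int = 2) -> Optional[GovernanceReview]:
    """Trigger an explicit governance review when the threshold is crossed."""
    if irreversibility >= threshold:
        return GovernanceReview(decision, accountable_authority="(named owner here)")
    return None  # below the threshold this clause imposes no review

assert review_if_significant("enable autonomous refunds", 3) is not None
```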

research

DCFB: Distributed Cognition as Foundational Behavior

Foundational theory establishing distributed cognition as the correct unit of analysis for human-AI systems. Intelligence emerges from fields, not nodes.

product

The Bainbridge Warning — Book

Why AI agent governance fails and what durable infrastructure actually requires.

product

The Bainbridge Warning

A governance assessment for institutions whose AI capability is moving faster than the infrastructure that makes it safe to depend on.

product

Cognitive Infrastructure Readiness v2.0

A self-assessment framework for organisations evaluating their AI readiness across five constitutional dimensions.

product

Martha Cohort Program

A structured cohort for teams learning to build and govern AI-augmented workflows with constitutional governance built in.

clause

Coherence Overfitting Guard

Maximum analytical coherence is not evidence of ground-truth access — it is a signal that the coherence drive is fully engaged. Verification obligation increases with analytical elegance.

clause

Epistemological Immune System Requirement

Every high-stakes AI deployment must include a τ-node — a human with genuine, irreversible stakes — whose verification motivation is unconditional.