Research
Frameworks, essays, and theoretical architecture from active work across distributed cognition, constitutional AI governance, and institutional intelligence.
Trust = Irreversibility Residue
A theoretical account of trust in human-AI systems: trust is not a feeling but a structural property, measured by accumulated irreversibility.
Key claims
- Trust is not a feeling or a confidence level; it is the structural residue of accumulated irreversibility.
- Governance must be calibrated to irreversibility profiles, not confidence metrics.
- Constitutional AI governance must account for the irreversibility profile of a deployment, not merely the confidence profile of the model.
DCFB: Distributed Cognition as Foundational Behavior
Foundational theory establishing distributed cognition as the correct unit of analysis for human-AI systems. Intelligence emerges from fields, not nodes.
Key claims
- Intelligence is field-emergent, not node-located; the unit of analysis must shift from agent to field.
- Constitutional design must precede capability optimisation; governance is infrastructure, not constraint.
- Distributed accountability requires distributed monitoring, proportional to the actual distribution of intelligence.
Featured instruments
The corpus has a live demonstration layer
These three instruments make the theoretical frameworks interactive. Each one runs live in the browser against the same corpus that generated the research above.
Bainbridge Diagnostic
A live 4-stage intake diagnostic that maps your organisation's capability-governance gap and returns a structured risk report.
Governance Gate Demo
An interactive state machine demonstrating how timing constraints and relational integrity checks combine to govern AI action execution in real time.
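The gating logic described above can be sketched as a small state machine. This is a minimal illustration, not the demo's actual implementation; all names (`GovernanceGate`, `GateState`, the review-window and integrity-check parameters) are hypothetical, chosen only to show how a timing constraint and an integrity check combine to gate execution.

```python
import time
from enum import Enum, auto

class GateState(Enum):
    PROPOSED = auto()   # action submitted, review window still open
    EXECUTED = auto()   # both checks passed; action ran
    BLOCKED = auto()    # integrity check failed; action refused

class GovernanceGate:
    """Gates an AI action behind a timing constraint and an integrity check."""

    def __init__(self, min_review_seconds, integrity_check):
        self.min_review_seconds = min_review_seconds
        self.integrity_check = integrity_check  # callable: action -> bool
        self.state = GateState.PROPOSED
        self.proposed_at = time.monotonic()

    def try_execute(self, action):
        # Timing constraint: the action must age past the review window.
        if time.monotonic() - self.proposed_at < self.min_review_seconds:
            return self.state  # still PROPOSED; too early to act
        # Relational integrity check: e.g. proposer and approver must differ.
        if not self.integrity_check(action):
            self.state = GateState.BLOCKED
            return self.state
        self.state = GateState.EXECUTED
        return self.state
```

For example, a gate whose integrity check requires that the actor and the approver be distinct parties will execute `{"actor": "model", "approver": "human"}` but block `{"actor": "model", "approver": "model"}`, while any gate with a nonzero review window leaves a freshly proposed action in the PROPOSED state.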
Orchestra in Operation
Route a governance question through seven cognitive instruments. Watch differentiated epistemic roles expose what single-model synthesis leaves in shadow.