Archive

Research

Frameworks, essays, and theoretical architecture from active work across distributed cognition, constitutional AI governance, and institutional intelligence.

Entries: 2
Mode: Orient into doctrine
Depth: Crystallised frameworks

Trust = Irreversibility Residue

A theoretical account of trust in human-AI systems: trust is not a feeling but a structural property, measured by accumulated irreversibility.

Key claims

  • Trust is not a feeling or a confidence level; it is the structural residue of accumulated irreversibility.
  • Governance must be calibrated to irreversibility profiles, not confidence metrics.
  • Constitutional AI governance must account for the irreversibility profile of a deployment, not merely the confidence profile of the model.

DCFB: Distributed Cognition as Foundational Behavior

Foundational theory establishing distributed cognition as the correct unit of analysis for human-AI systems. Intelligence emerges from fields, not nodes.

Key claims

  • Intelligence is field-emergent, not node-located; the unit of analysis must shift from agent to field.
  • Constitutional design must precede capability optimisation; governance is infrastructure, not constraint.
  • Distributed accountability requires distributed monitoring proportional to the actual distribution of intelligence.