DLP Specification
Part I: Foundational Concepts

§3 AI-Native Architecture

AI-native = world model architecture. Derivation chain. Five properties. Trust architecture. Standards subsumption.


v2.0.0 · Locked · L1 · March 19, 2026

Purpose

This section establishes why the substrate is architecturally AI-native — not retrofitted for AI, but designed from first principles for AI participation. The core claim: an AI-native governance infrastructure must be an organizational world model. This is not aspiration; it is derived from established mathematical results. The section specifies what the derivation requires, what it produces, and how it operates at the trust boundaries where AI and human governance intersect.

Foundation

The Gap AI Makes Urgent

Every existing governance framework — COSO, NIST AI RMF, ISO 42001, EU AI Act — specifies what organizations should do. None provides infrastructure for how organizations represent what they are actually doing. The gap is not one of policy but of architecture: there is no shared, machine-readable model of organizational reality against which governance claims can be evaluated.

AI makes this gap urgent. When AI systems participate in decision chains — drafting recommendations, processing evidence, routing approvals — the organizational surface area for accountability expands beyond what human oversight can cover through inspection. AI does not reduce the need for governance. It increases the variety of states that governance must represent — the same Ashby's Law constraint identified in §1.3, now amplified by the speed, scale, and opacity of machine participation.

An "AI-native" architecture is not a traditional framework modified to accommodate AI. It is infrastructure designed from first principles to construct an organizational world model (OWM) — where AI participation contributes to a more complete and dynamically updated representation of organizational reality. This OWM construction defines what it means to model organizational decision-making at a fidelity sufficient for both human and machine participants. The definition is derived from established mathematical results (§3.2), not from analogy or aspiration. The central claim: an AI-native governance infrastructure must be an organizational world model.

Governance

This section is owned by Cam (founder, GrytLabs). Changes to architectural derivation or trust architecture stages require explicit decision with patent-impact assessment. The five properties are derived; they cannot be removed or substantially modified.

Substance

§3.1 One World Model, Multiple Views

Previous approaches to organizational intelligence treat different stakeholders as inhabiting different systems: the compliance team uses a compliance tool, the finance team uses a financial model, project managers use project management software, auditors reconstruct from artifacts. Each system captures a partial view, in a different ontology, with no shared underlying model.

The architectural implication of AI-native governance is the inverse: one world model, multiple views. The substrate maintains a single, semantically structured representation of organizational state — built from nine irreducible primitives (§4), enforced by ten behavioral invariants (§5), classified by a three-level truth type system (§6), and specified to a minimum viable record for every primitive (§7). Different stakeholders query this model through different lenses:

Stakeholder | View | What They Query | Primitives Engaged
Auditor | Compliance | "Was this decision authorized? Can I trace its lineage?" | Authority, Evidence, Account, Constraint
Executive | Strategic | "Are commitments aligned with intent? Where are we deviating?" | Intent, Commitment, Work, Capacity
Operations | Execution | "What work is in progress? What capacity is available?" | Work, Capacity, Evidence
AI System | Orchestration | "What is the current state? What actions are within my delegation?" | All nine — full state transformation context
Regulator | Accountability | "Can this organization demonstrate how decisions were made?" | Full lineage query across all primitives

Table 3.1.1: Stakeholder views as projections from the organizational world model.
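The projection pattern is straightforward to express in code. The sketch below is illustrative only: every identifier is hypothetical and no primitive schema is implied (the full set of nine primitives is specified in §4). It shows only that each stakeholder view is a filter over one shared state, not a separate store.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class WorldModel:
    """Single organizational state. Record contents are illustrative;
    only primitives named in this section appear here."""
    records: List[Dict] = field(default_factory=list)

    def view(self, primitives: Set[str]) -> List[Dict]:
        """A stakeholder view is a projection of the one state,
        not a separate database reconciled after the fact."""
        return [r for r in self.records if r["kind"] in primitives]

owm = WorldModel(records=[
    {"kind": "Authority", "holder": "CFO", "scope": "spend <= 50k"},
    {"kind": "Evidence", "ref": "invoice-123", "verified": True},
    {"kind": "Work", "item": "migration", "state": "in-progress"},
    {"kind": "Capacity", "team": "platform", "available": 0.4},
])

# Two views, one state: the auditor and operations lenses of the stakeholder table.
auditor_view = owm.view({"Authority", "Evidence", "Account", "Constraint"})
operations_view = owm.view({"Work", "Capacity", "Evidence"})
```

Adding a record once makes it visible to every view that engages its primitive; no reconciliation step exists because there is nothing to reconcile.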

                           ┌──────────┐
                           │ Auditor  │
                           │Compliance│
                           └────┬─────┘
                                │
            ┌──────────┐        │        ┌──────────┐
            │Executive │        │        │Regulator │
            │Strategic │        │        │Account-  │
            │          │        │        │ability   │
            └─────┬────┘        │        └────┬─────┘
                  │             │             │
                  │     ┌───────┴───────┐     │
                  └─────┤               ├─────┘
                        │  WORLD MODEL  │
                        │               │
                  ┌─────┤  Nine         ├─────┐
                  │     │  Primitives   │     │
                  │     │  One State    │     │
                  │     └───────┬───────┘     │
                  │             │             │
            ┌─────┴────┐        │        ┌────┴─────┐
            │Operations│        │        │AI System │
            │Execution │        │        │Orchestr- │
            │          │        │        │ation     │
            └──────────┘        │        └──────────┘
                                │
                        (State Transformations)

Figure 3.1.1: One world model, five stakeholder projections. These are not different databases reconciled after the fact. They are different projections of the same underlying model — the way a blueprint, an elevation drawing, and an electrical plan are different views of the same building.

§3.2 The Derivation Chain: Four Steps

The claim that the substrate is an organizational world model is not aspirational. It is derived from four established results spanning 1956–2022, each building on the previous. The full formal treatment appears in §8. Here is the accessible version.

§3.2.1 Step 1: Governance Must Be a Model (Conant-Ashby 1970)

Conant and Ashby (1970) proved that every good regulator of a system must contain a model of that system. §1.3 introduced this result and applied it to the three-study convergence. The architectural consequence is direct: governance that operates externally to the system it governs — through periodic audits, after-the-fact reviews, policy manuals that describe aspirational behavior — is structurally insufficient. Effective governance infrastructure must BE a model of the organization's operational reality.

This is not a design recommendation. It is a mathematical constraint. An organization whose governance does not model its operations is, by the Conant-Ashby theorem, a poorly regulated system — regardless of how many policies it publishes.

§3.2.2 Step 2: The Model Must Have Sufficient Variety (Ashby 1956)

Ashby (1956) established the Law of Requisite Variety: only variety can absorb variety. §1.3 applied this to the organizational variety gap; §2.5 showed the variety gap is the empowerment gap. The architectural consequence: a governance model with fewer independent dimensions than the organizational reality it governs will fail to represent some decision contexts. Those unrepresented contexts become ungoverned — not through neglect, but through architectural incapacity.

This explains why traditional frameworks fall short. A compliance checklist with 50 items applied to an organization making thousands of daily decisions across hundreds of contexts does not fail because the checklist is wrong. It fails because its variety is structurally insufficient for the variety it must govern. A project management system that tracks work state but not authority, evidence, or constraint does not fail because it tracks the wrong dimension — it fails because it tracks too few dimensions. The nine irreducible primitives (§4) are the requisite variety claim: nine independent governance dimensions, each answering a question no other primitive addresses, validated through removal, merger, and sufficiency testing (§4.2).
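The arithmetic behind this claim can be made concrete. In Ashby's formulation, unabsorbed variety is bounded below by H(D) − H(R), the difference between disturbance and regulator entropy. The numbers below are illustrative, not measurements:

```python
import math

def residual_variety_bits(disturbance_states: int, regulator_states: int) -> float:
    """Ashby's lower bound on unabsorbed variety, in bits:
    max(0, H(D) - H(R)) for uniform state distributions."""
    return max(0.0, math.log2(disturbance_states) - math.log2(regulator_states))

# A checklist distinguishing ~50 governance contexts, applied to an
# organization with ~5000 distinct decision contexts (illustrative numbers):
gap = residual_variety_bits(disturbance_states=5000, regulator_states=50)
# gap is ~6.64 bits of organizational variety with no governed representation.
```

The point of the sketch is structural: no refinement of the 50 items closes the gap; only additional independent governance dimensions do.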

§3.2.3 Step 3: Specific Structural Subsystems Are Required (Beer 1972–1985)

Beer (1972–1985) derived from neurophysiology and cybernetic principles that organizational viability requires five specific subsystems: operations (S1), coordination (S2), control (S3), intelligence (S4), and policy (S5) — plus an audit channel (S3*) and emergency signaling (algedonic channel). These are structural necessities whose absence produces specific, diagnosable pathologies.

The DLP's nine primitives map to these subsystems structurally, not analogically — both frameworks solve the same control problem (maintaining viability in complex environments) and arrive at functionally equivalent components from independent starting points. §8, Table 8.2.1 specifies the full structural correspondence. The convergence extends further: no published work prior to DLP maps COSO's five internal control components to Beer's five VSM systems, yet the correspondence is direct (§8, Table 8.4.1). Two traditions — compliance arising from accounting fraud, cybernetics arising from neurophysiology — converge on the same five structural requirements.

§3.2.4 Step 4: The Model Predicts State Transitions (LeCun 2022)

LeCun (2022) formalized the world model as a system that predicts future states from current states and actions: S(t+1) = predict(S(t), A(t)). The DLP substrate's state transformation formalism is structurally identical: State(t) → [Trigger + Action] → State(t+1), with full primitive context preserved at each transition.

This is not metaphor. The substrate records organizational state transitions in a formally structured representation — who decided what, based on what evidence, under what authority, subject to what constraints, and how the outcome compared to the intent. As this representation accumulates, it becomes the basis for organizational prediction: given current state and proposed action, what future state is likely? Where will deviations from intent emerge? What capacity constraints will bind? This is what budgets, forecasts, and scenario plans attempt informally — projecting forward from current state to anticipated outcomes. The substrate provides the infrastructure to do it formally, over the same primitives that capture current state. §8.7 specifies the Predictive Composition Pattern that formalizes this capability.
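The transformation formalism State(t) → [Trigger + Action] → State(t+1) can be sketched as a record type. This is a sketch under stated assumptions: field names and the example action are hypothetical, not part of the specification.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class Transition:
    """One recorded state transformation:
    State(t) -> [Trigger + Action] -> State(t+1), with context preserved."""
    state_before: Dict
    trigger: str
    action: str
    state_after: Dict

def apply_action(state: Dict, trigger: str,
                 action: Callable[[Dict], Dict]) -> Transition:
    """Apply an action and record the complete transition."""
    return Transition(state_before=dict(state), trigger=trigger,
                      action=action.__name__, state_after=action(dict(state)))

# Hypothetical action: an approval moves a commitment from proposed to committed.
def approve_commitment(s: Dict) -> Dict:
    s["commitment"] = "committed"
    return s

t = apply_action({"commitment": "proposed"}, "approval-decision", approve_commitment)
# An accumulated log of such Transition records is the substrate for
# prediction: given state_before and action, anticipate state_after.
```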

§3.2.5 The Chlon Extension: Symmetry Preservation (Chlon 2026)

A fifth result extends this chain. Chlon (2026) proves mathematically that log-loss optimization — the dominant training objective for modern AI systems — systematically breaks the very symmetries that governance must preserve, and that scaling makes this worse, not better. The symmetry-preserving representation is penalized under standard training because the symmetry-breaking representation achieves shorter description length.

The implication is architectural: the organizational world model cannot be learned from data. It must be given. Each of the nine primitives preserves a specific organizational symmetry — purpose invariance (Intent), obligation invariance (Commitment), verification invariance (Evidence), delegation invariance (Authority), boundary invariance (Constraint), and so on — whose violation produces a diagnosable governance failure. A log-loss-trained model will learn cheaper, context-dependent shortcuts that break these invariances: "authorized-when-asked-from-this-system, unauthorized-when-asked-from-that-system." The invariances must be architecturally imposed. §9 provides the formal treatment — the symmetry-breaking proof, the nine organizational invariances, and the conservation laws that the behavioral invariants (§5) enforce.
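What delegation invariance means operationally can be shown in a few lines. The sketch is hypothetical: it imposes the invariance architecturally (the decision reads only the Authority record), which is precisely what a log-loss-trained shortcut would not guarantee.

```python
def authorized(request: dict, querying_system: str) -> bool:
    """Delegation invariance, architecturally imposed: the decision reads
    only the Authority record; querying_system cannot influence it."""
    return request["actor"] in request["authority"]["delegates"]

request = {"actor": "agent-7", "authority": {"delegates": {"agent-7", "cfo"}}}

# The invariance: same request, any querying surface, same answer.
answers = {authorized(request, system)
           for system in ("chat-ui", "api", "batch-job")}
assert answers == {True}  # never "authorized from this system, not from that one"
```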

§3.3 Five Properties of AI-Native Architecture

An AI-native governance architecture — one that satisfies the derivation chain — exhibits five necessary properties. Each is a consequence of a specific derivation step, not a design preference:

Property | Why Necessary | DLP Implementation
Ontology-First | Conant-Ashby: governance must be a model → the model must have defined semantics | Nine primitives with formal composition rules (§4); five-tier hierarchy (§4.5)
Event-Driven | Ashby: variety must be absorbed in real time → after-the-fact reconstruction loses variety | State transformation model captures transitions at source; prospective capture thesis (§1.1)
Decision-Centric | Beer: S3/S4 homeostat requires decision visibility → decisions are the governance control signal | Decision lineage as core query path; Account provides state context (B4, §5)
Learning-Capable | LeCun: world models accumulate predictive capacity → governance variety must scale | Graduation model (Option A→B→C, §A1); evidence-based autonomy progression
Human-Governed | Beer S5 + Conant-Ashby: the model serves human governors, not the reverse | AI outputs enter as Derived, never Authoritative (§6); human promotion required at every stage
Table 3.3.1: Five properties derived from the formal chain. An architecture missing any one fails to satisfy the corresponding derivation step.

The five properties are individually necessary and collectively sufficient for an organizational world model architecture. Ontology-First without Event-Driven produces a static knowledge base. Event-Driven without Decision-Centric produces a transaction log. Decision-Centric without Learning-Capable produces a decision archive. Learning-Capable without Human-Governed produces autonomous AI governance. All five together produce an organizational world model with the structural properties the derivation chain requires.

§3.4 Four Contributions

The DLP contributes four architectural shifts not present in existing governance approaches:

From (Existing) | To (DLP) | Gap Filled
AI Governance Frameworks — specify what organizations should do | AI Governance Infrastructure — provides the semantic substrate for operational compliance | Organizations can describe what they should do but cannot model what they ARE doing
Explainable AI — explains model outputs (LIME, SHAP) | Explainable Organizations — explains why the organization decided X | Model-level XAI cannot answer organizational-level accountability questions
Human-in-the-Loop — isolated decision checkpoints | Human-AI Decision Continuity — longitudinal tracking across decision chains | Isolated checkpoints lose the context that makes oversight meaningful
Compliance Checkbox — point-in-time verification | Continuous Compliance — queryable evidence, always current | Periodic verification misses what happens between audits

Table 3.4.1: Four contributions. Each shift corresponds to an architectural gap that the derivation chain identifies.

The first shift (frameworks → infrastructure) follows from Steps 1–2: governance must be a model with requisite variety, not a specification applied from outside. The second (XAI → explainable organizations) follows from Step 3: Beer's S3/S4 requires organizational-level intelligence, not component-level explanation. The third (checkpoints → continuity) follows from Step 4: state transitions form chains, and governance must operate across the chain. The fourth (checkbox → continuous) follows from Step 2: requisite variety requires continuous governance capacity, not periodic sampling.

§3.5 Trust Architecture: Three Stages

The trust architecture operationalizes the Human-Governed property (§3.3) as a three-stage model:

Stage | What Happens | Architectural Guarantee
Advisory (Option A) | AI produces outputs; humans make all decisions; substrate captures lineage | AI outputs enter as truth type Derived (§6) — never Authoritative or Declared
Staging (Option B) | AI proposes actions; humans review and promote; substrate captures both proposal and decision | Human promotion gate: no AI-generated content reaches Authoritative status without explicit human action
Human-Confirmed Routing (Option C) | AI routes within established delegation bounds; humans review by exception | Delegation constraints (Constraint primitive) define bounds; Authority primitive governs escalation; continuous evidence enables delegation review
Table 3.5.1: Trust architecture stages. Each stage increases AI participation while preserving the invariant below.

The invariant across all three stages: AI cannot modify authoritative organizational records. AI outputs are always epistemically marked — they carry truth type (§6), source attribution, and confidence context. The architecture does not assume AI outputs are wrong; it ensures they are always verifiable. This is Beer's S5 (policy) function formalized: the non-negotiable boundary conditions that define organizational identity, implemented as behavioral invariants (B1–B10, §5) that no automation level can override.
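A minimal sketch of the promotion gate, assuming the three truth types named in this section (Derived, Declared, Authoritative); class and field names are hypothetical:

```python
from typing import Optional

class PromotionError(Exception):
    """Raised when AI-derived content attempts promotion without a human gate."""

class Record:
    def __init__(self, content: str, source: str):
        self.content = content
        self.source = source  # "human" or "ai"
        # AI outputs always enter epistemically marked as Derived.
        self.truth_type = "Derived" if source == "ai" else "Declared"

    def promote(self, approved_by: Optional[str] = None) -> None:
        """No AI-generated content reaches Authoritative status
        without explicit human action."""
        if self.source == "ai" and approved_by is None:
            raise PromotionError("human promotion gate: explicit approval required")
        self.truth_type = "Authoritative"

draft = Record("Q3 forecast draft", source="ai")  # enters as Derived
try:
    draft.promote()                # no human in the loop: the gate holds
except PromotionError:
    pass
draft.promote(approved_by="controller")           # explicit human action
```

In this sketch the gate is an invariant of the record type itself, not a policy layered on top, mirroring the claim that no automation level can override it.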

§A1–§A2 carry the operational specification — cognitive work type taxonomy, delegation authority model, routing space, and graduation mechanics. §3 establishes the principle: trust is earned through evidence, within invariant bounds.

§3.6 Standards Alignment Through World Model Framing

Existing standards do not conflict with the world model architecture — they are subsumed by it. Each standard addresses a subset of the governance variety that the substrate provides in full:

Standard | What It Requires | How the World Model Subsumes It
NIST AI RMF | Documentation of AI decision processes | Full decision lineage with AI participation tracked via truth types and source attribution
ISO 42001 | Audit trails for AI management | Continuous Evidence primitive with verification states (§6.4); audit is a query, not a reconstruction
EU AI Act Art. 11–14 | Technical documentation, human oversight, record-keeping | State transformation log with human gate enforcement and authority-anchored lineage
IEEE 7000 | Decision basis discoverable | Authority + Evidence + Intent primitives compose the discoverable basis; lineage query surfaces the full chain
COSO | Five components of internal control | Maps to VSM subsystems (§8, Table 8.4.1): S5→Control Environment, S4→Risk Assessment, S3→Control Activities, S2→Information & Communication, S3*→Monitoring

Table 3.6.1: Standards as partial projections of the world model.

The pattern: each standard specifies a governance surface. The substrate provides the underlying model from which all these surfaces can be projected. Compliance becomes a query against the world model, not a separate process running alongside operations. An auditor does not reconstruct decision lineage from scattered artifacts — they query the model for the lineage that was captured at decision time. A regulator does not request documentation — they request a projection from the model that already contains the documented state.

This subsumption is not a claim that existing standards are unnecessary. It is a claim about their architectural relationship to the substrate: they specify governance surfaces, and the substrate provides the model from which those surfaces are derived. Each standard in Table 3.6.1 covers "partial" variety — a specific governance concern (AI decisions, management audit trails, regulatory documentation, ethical decision basis, internal control). The substrate covers the full governance variety from which each partial view is projected. Organizations that implement the substrate satisfy existing standards as a consequence of operating the world model, not as a separate compliance activity.
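"Compliance becomes a query" can be illustrated directly. The log schema below is hypothetical; the point is only that the auditor's lineage question runs against captured state rather than reconstructed artifacts.

```python
# Minimal sketch: an auditor's lineage question as a query over captured
# transitions.  Record fields are illustrative, not a specified schema.
log = [
    {"decision": "d-42", "authority": "delegation-9", "evidence": ["inv-123"]},
    {"decision": "d-43", "authority": None, "evidence": []},
]

def lineage(decision_id: str) -> dict:
    """Was this decision authorized, and what evidence backed it?"""
    rec = next(r for r in log if r["decision"] == decision_id)
    return {"authorized": rec["authority"] is not None,
            "evidence": rec["evidence"]}

print(lineage("d-42"))  # {'authorized': True, 'evidence': ['inv-123']}
print(lineage("d-43"))  # {'authorized': False, 'evidence': []}
```

The second query surfaces an unauthorized decision the moment it is asked, which is the difference between continuous compliance and periodic reconstruction.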

§3.7 AI-Native Readiness: Scope Boundary

Not all organizations need formal world model infrastructure. The substrate becomes valuable when organizational complexity exceeds what traditional governance can represent — which is precisely the condition AI participation creates. Four factors determine when that threshold is crossed:

Factor | Weight | What It Measures
Human Judgment Invariant | 0.35 | Degree to which decisions are structured enough for substrate capture but complex enough to need governance infrastructure
Regulatory Burden | 0.25 | Weight of compliance requirements — continuous compliance (§3.4) provides measurable value when regulatory demands are high
Data Infrastructure | 0.25 | Maturity of semantic standards and interoperability — substrate integration requires sufficient data variety to model
Operations Mode | 0.15 | Digital vs. physical operational footprint — digital-native operations generate more capturable state transitions
Table 3.7.1: Readiness factors. Organizations with low scores across these factors may not need formal world model infrastructure; their governance variety is manageable through traditional means.

The readiness model is an architectural scope boundary, not a qualification threshold. It identifies the conditions under which the variety gap (§2.5) between organizational complexity and governance capacity becomes large enough that infrastructure — rather than process, training, or cultural intervention — is the necessary response. AI participation in decision chains is the primary driver: it simultaneously increases disturbance variety (more states to govern) and enables the infrastructure response (machine-readable state transformations become possible).
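The four weighted factors compose into a single readiness score. The aggregation below is a sketch under stated assumptions: the weights come from the readiness table, while the factor scores are illustrative and no threshold value is specified here.

```python
# Weighted readiness score over the four factors (weights from the readiness
# table; the factor scores in the example are illustrative).
WEIGHTS = {
    "human_judgment_invariant": 0.35,
    "regulatory_burden": 0.25,
    "data_infrastructure": 0.25,
    "operations_mode": 0.15,
}

def readiness(scores: dict) -> float:
    """Each factor scored 0.0-1.0; returns the weighted readiness score."""
    assert set(scores) == set(WEIGHTS), "all four factors required"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

score = readiness({
    "human_judgment_invariant": 0.8,
    "regulatory_burden": 0.9,
    "data_infrastructure": 0.6,
    "operations_mode": 0.7,
})
# 0.35*0.8 + 0.25*0.9 + 0.25*0.6 + 0.15*0.7 = 0.76
```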

Boundaries

This section specifies why the architecture is AI-native and what the derivation requires. It does NOT specify: the primitives themselves (§4), the invariants that constrain them (§5), the implementation mechanisms (§26+), or the specific semantic ontology (separate artifact). The derivation chain is mathematical; applications are architectural. The readiness boundary is a scope declaration; it does not disqualify organizations.

Positions

Locked. Derivation chain from Conant-Ashby, Ashby, Beer, LeCun, and Chlon. Five necessary properties. Three trust architecture stages. One world model, multiple views. Four contributions over existing approaches. Standards subsumption through world model framing. Readiness boundary specifies scope, not qualification.

Lineage

v1.0.0 (February 25, 2026): Base lock. Derivation chain. Five properties. Trust architecture. v2.0.0 (March 19, 2026): Locked as AI-native architecture specification. Chlon extension added. Formatting converted to fspec.

Commitments

SDK implementations MUST maintain a single, semantically structured organizational state accessible to all stakeholder views. SDK implementations MUST implement the five properties (Ontology-First, Event-Driven, Decision-Centric, Learning-Capable, Human-Governed). SDK implementations MUST enforce the trust architecture invariant: AI cannot modify authoritative organizational records.

Coverage

Derivation chain fully specified with mathematical grounding. Five properties formally derived from the chain. Trust architecture operationalized as three stages with invariant preservation. Standards subsumption mapped systematically. Readiness boundary specified as scope declaration.

Addressing

Document ID: s03-ai-native-architecture. Part: I (Foundational Architecture). Sections addressable by number (§3.1–§3.7). Cross-references use [s03-ai-native-architecture.{section-id}] notation.

Attribution

Primary author: Cam (founder, GrytLabs). Mathematical grounding: Conant & Ashby (1970), Ashby (1956), Beer (1972–1985), LeCun (2022), Chlon (2026). Standards references: NIST AI RMF, ISO 42001, EU AI Act, IEEE 7000, COSO. Composition support: Claude (Anthropic).

Situational Frame

This section was composed after establishing the problem (§1) and design paradigm (§2) to answer the architectural question: how should governance infrastructure be designed to support AI participation? The answer is: as an organizational world model, derived from established mathematical results. The architecture is not retrofitted for AI; it is native to AI-human collaboration from first principles.

Scope Governance

Core namespace: dlp. Architecture applies to all subsequent specifications. The derivation chain, five properties, and trust architecture constrain protocol design (§4–§7), substrate implementation (§26+), and governance operations (§8–§9).

Framing

This section frames AI-native architecture as a mathematical requirement, not an aspiration. The derivation chain shows why governance must be a world model, why it must have sufficient variety, why it requires specific structural components, and how it predicts organizational outcomes. AI participation makes this necessary; world model architecture makes it possible.

Adaptation

Chlon extension (2026) sharpened the understanding that organizational symmetries cannot be learned from data — they must be architecturally imposed. This explains why the nine primitives are fixed: they preserve organizational symmetries that log-loss optimization would destroy. No other post-lock adaptations to derivation or architecture.

Readiness

This section is ready for all uses: architecture specification, derivation documentation, standards alignment justification, stakeholder communication, implementation guidance. The mathematical grounding makes it suitable for regulatory documentation and technical specification.

Meaning Resolution

No meaning resolution for this document.

Perception Surface

No perception surface for this document. This section specifies architecture; it does not interface with external systems. External organizational events enter through the substrate primitives specified in §4–§7, and AI systems operate within the trust boundaries specified in §3.5.

Temporal Governance

No temporal governance for this document. Architecture is persistent. The graduation mechanism (§A1) operates over time, but the architectural principles themselves do not change.