DLP Specification
Part II: The Core Protocol

§5 Behavioral Invariants

Ten invariants (B1–B10). Three groups. SHACL enforcement. Two-pass validation. Constraint conflict detection.

§5 Protocol Behavioral Invariants

v2.0.0 · Locked · L1 · March 19, 2026

Purpose

Ten behavioral invariants constrain how primitives may relate to each other. Each invariant enforces a relationship between primitives that must hold for governance to be structurally sound. If any single invariant is removed, a specific governance failure occurs that no other invariant or combination of invariants can prevent.

The invariants are the mechanical enforcement of the conservation laws identified in [s09-symmetry-conservation.decision.positions]. They define the rules the protocol guarantees; substrate implementations determine how those guarantees are met.

Foundation

Research Grounding

Each invariant is independently necessary: the governance failure it prevents cannot be compensated for by any other invariant.

| ID | Governance Failure Class | Why No Other Invariant Compensates | Research Grounding |
| --- | --- | --- | --- |
| B1 | Unbounded effort without agreement | B2 checks feasibility but not authorization — capacity-verified work can still be unauthorized | COSO control activities (controls require formal commitment); Beer VSM S3 (resource allocation requires negotiated agreements) |
| B2 | Impossible promises without detection | B1 checks authorization but not feasibility — authorized work can still be infeasible | Ashby's Law of Requisite Variety (controller capacity must match disturbance); Simon satisficing (bounded rationality skips availability checks) |
| B3 | Unmarked AI outputs corrupting canonical record | B4 provides decision context but not epistemic classification — you know when evidence was recorded but not whether to trust it | SAS 142 digital evidence standards (audit evidence requires epistemic provenance); AI governance literature (distinguishing AI from human output is the foundational requirement) |
| B4 | Opaque decisions without verifiable context | B3 marks evidence quality but doesn't require decision-time state — you know the evidence quality but not the circumstances of the decision | Decision provenance (accountability requires reconstructable decision context); process accountability (WHO/WHAT/WHEN/WHY requires state at decision time) |
| B10 | AI outputs promoted without human epistemic judgment | B3 classifies evidence by origin but does not enforce the promotion gateway — evidence carries its truth type, but nothing prevents a system pathway from advancing Derived to Declared without human review. B10 is a specialization of B3 for AI actors: it makes explicit that the truth type boundary between Derived and Declared/Authoritative requires human action, not just human policy. | EU AI Act transparency requirements (AI outputs must be identifiable and reviewable); NIST AI RMF (human oversight of AI-generated content); SAS 142 (digital evidence requires human professional judgment for epistemic classification) |
| B5 | Unlegitimized binding actions | B6 binds rules to objects but doesn't trace who has the right to act — you know what rules apply but not who is authorized | Principal-Agent Theory (agency requires verifiable delegation); Beer VSM S5 (organizational identity requires traceable authority to root) |
| B6 | Rules disconnected from governed objects | B5 traces authority chains but doesn't bind rules to objects — you know who is authorized but not what rules govern them | Conservation law enforcement (same rule must produce same prohibition); Beer VSM S2 (coordination requires constraints that bind, not suggest) |
| B7 | Problems invisible to governance | B8 routes signals but cannot create them — routing is useless without a mechanism to raise signals in the first place | Edmondson psychological safety (raising concerns requires a legitimate channel); Morrison organizational silence (architecture must not be the barrier to surfacing) |
| B8 | Signals captured but never reaching authority | B7 enables flagging but doesn't ensure delivery — signals exist but reach no one with jurisdiction | Beer algedonic channel (emergency signals bypass hierarchy to reach whoever can act); IIA Three Lines Model (risk signals require defined routing pathways) |
| B9 | Emergence captured but never operationalized | None of B1–B8 converts emergence into organizational action — signals, evidence, and authority function but insights remain inert | Klein RPD (recognition must convert to action); Nonaka SECI (explicit knowledge must be acted upon, not merely recorded) |

Collective Properties

Mutual independence. Each row produces a distinct failure class. The "Why No Other Invariant Compensates" column confirms that no invariant's function can be performed by another.

B10 relationship to B3. B10 is a specialization of B3 for AI actors. B3 requires every Evidence instance to carry a truth type — it classifies epistemic origin. B10 requires that the transition from Derived (AI-generated) to Declared or Authoritative requires explicit human action — it enforces the promotion gateway. Without B10, B3 could be trivially satisfied by a system that labels AI output as Derived but then auto-promotes it to Declared through a workflow that does not require human epistemic judgment. B10 closes this gap. The two invariants have distinct enforcement shapes: B10-AIOutputDerived (Pass 1, always Blocking — AI output must enter as Derived) and B10-PromotionRequiresHuman (Pass 2, SPARQL — promotion transitions on AI-originated Evidence must have a human actor as the promoting decision-maker).

Group completeness. The three groups — resource integrity (B1–B2), epistemic integrity (B3–B4, B10), and governance routing (B5–B9) — cover the three dimensions of governance failure: doing the wrong work, trusting the wrong data, and routing governance incorrectly. No fourth dimension has been identified.

S5 correspondence. The ten invariants collectively formalize Beer's S5 (Policy) function. S5 defines the invariances that cannot be violated regardless of operational pressures. B1–B10 are the mechanical specification of that policy function: the relationships between primitives that must hold for governance to be structurally sound.

Conservation Law Correspondence

Each behavioral invariant is the enforcement mechanism for a conservation law identified in [s09-symmetry-conservation]. The conservation laws define what must be preserved across every governance state transformation. The invariants define how that preservation is mechanically enforced.

| Conservation Law (§9) | What Is Preserved | Invariant | Enforcement |
| --- | --- | --- | --- |
| Organizational direction (Intent) | Purpose persists through governance actions | B9 | Emergence converts to Decision, closing the loop to organizational purpose |
| Binding force (Commitment) | Responsibility cannot vanish | B1, B2 | Work requires Commitment (B1); Commitment requires Capacity (B2) |
| Feasibility truth (Capacity) | Feasibility cannot be wished away | B2 | Commitments are backed by verified capacity |
| State transformation fidelity (Work) | Transformations are authorized | B1 | Work traces to a commitment — no unauthorized state changes |
| Epistemic status (Evidence) | Proof quality is intrinsic to the record | B3 | Every evidence artifact carries its epistemic classification |
| Option space completeness (Decision) | Full state is visible at decision time | B4 | Decisions link to the account providing state context |
| Scope preservation (Authority) | Delegation scope is recorded | B5 | Authority chains terminate at a traceable root |
| State fidelity (Account) | State is verifiable, not narrative | B4 | Decision context is a recorded account, not a post-hoc reconstruction |
| Rule universality (Constraint) | Same rule produces same prohibition | B6 | Constraints bind to governed objects mechanically |
| Emergency signaling (Algedonic) | Problems can always reach authority | B7, B8 | All objects are flaggable (B7); signals route to the authority chain (B8) |
| AI epistemic boundary (Evidence) | AI-generated content cannot self-promote to canonical status | B10 | Promotion from Derived to Declared/Authoritative requires human action — the epistemic boundary between machine inference and organizational knowledge is architecturally enforced |

The relationship between layers: Organizational symmetry (§9.1) → Conservation law (§9.2) → Behavioral invariant (§5, this section) → SHACL shape (shapes graph) → Substrate enforcement (§26).

Governance

This section is owned by Cam (founder, GrytLabs). Changes to invariant definitions require explicit decision with patent-impact assessment. Adding or removing an invariant changes the conservation law correspondence and must be reflected in [s09-symmetry-conservation].

Substance

§5.1 Invariant Definitions

The ten invariants partition into three functional groups. Each group addresses a distinct class of governance failure.

Group 1: Resource Integrity — Ensure work and commitments are grounded in reality.

| ID | Rule | Constraint | Failure Without |
| --- | --- | --- | --- |
| B1 | Work requires Commitment | Every Work instance must link to exactly one Commitment | Shadow work — effort expended on activities no one has formally agreed should happen |
| B2 | Commitment requires Capacity | Every Commitment must link to at least one Capacity allocation | Impossible promises — commitments made without verifying feasibility |
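The Group 1 constraints reduce to single-link cardinality tests. The following is an illustrative sketch of what Pass 1 verifies, assuming hypothetical dict-based records (`commitment_ids` and `capacity_ids` are invented field names); the normative form is the SHACL shapes of §5.3:

```python
# Illustrative sketch only; the normative constraints are the SHACL shapes
# B1-WorkRequiresCommitment and B2-CommitmentRequiresCapacity (§5.3).
# Field names are hypothetical substrate details, not protocol terms.

def check_b1(work: dict) -> bool:
    """B1: every Work instance must link to exactly one Commitment."""
    return len(work.get("commitment_ids", [])) == 1

def check_b2(commitment: dict) -> bool:
    """B2: every Commitment must link to at least one Capacity allocation."""
    return len(commitment.get("capacity_ids", [])) >= 1
```

Both are O(1) single-node checks, which is why they belong to Pass 1 and can run on every write.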

Group 2: Epistemic Integrity — Ensure evidence and decisions are grounded in truth.

| ID | Rule | Constraint | Failure Without |
| --- | --- | --- | --- |
| B3 | Evidence requires Truth Type | Every Evidence instance must carry exactly one truth type from the controlled vocabulary: Authoritative, Declared, Derived, Opaque | Epistemic corruption — AI-generated outputs enter the canonical record without marking, and the system can no longer determine which entries can be trusted at what level |
| B4 | Decision requires Account | Every Decision must link to exactly one Account | Context-free decisions — the organization knows that a decision was made but not against what state, making post-hoc audit impossible |
| B10 | AI output requires human review for epistemic promotion | Every Evidence instance with truth type Derived that originated from an AI actor (§22.2) must receive explicit human review before promotion to Declared or Authoritative. No architectural path exists for AI-generated content to bypass human epistemic judgment. | Unreviewed AI promotion — AI-generated outputs advance to canonical status without human epistemic judgment, collapsing the trust boundary between machine inference and organizational knowledge. B3 classifies evidence by epistemic origin; B10 enforces that the promotion gateway between AI-generated and human-verified knowledge requires human action. |
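A sketch of the Group 2 Pass 1 checks follows. It is illustrative only (`truth_type`, `account_ids`, and `producer_type` are hypothetical field names); the normative form is the SHACL shapes of §5.3. Note that B10's entry check and its promotion check are distinct; only the entry check is a Pass 1 single-node test:

```python
# Illustrative sketch; normative forms are B3-EvidenceRequiresTruthType,
# B4-DecisionRequiresAccount, and B10-AIOutputDerived (§5.3).
# Field names are hypothetical.

TRUTH_TYPES = {"Authoritative", "Declared", "Derived", "Opaque"}  # B3 vocabulary

def check_b3(evidence: dict) -> bool:
    """B3: exactly one truth type, drawn from the controlled vocabulary."""
    return evidence.get("truth_type") in TRUTH_TYPES

def check_b4(decision: dict) -> bool:
    """B4: every Decision must link to exactly one Account."""
    return len(decision.get("account_ids", [])) == 1

def check_b10_entry(evidence: dict) -> bool:
    """B10 (entry): AI-produced Evidence must enter as Derived."""
    if evidence.get("producer_type") == "AI":
        return evidence.get("truth_type") == "Derived"
    return True
```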

Group 3: Governance Routing — Ensure authority, constraints, signals, and emergence flow correctly.

| ID | Rule | Constraint | Failure Without |
| --- | --- | --- | --- |
| B5 | Authority must be traceable | Every Authority instance must either be a root authority or link to a parent authority through a delegation chain that terminates at a root | Untraceable authority — binding actions occur without verifiable legitimacy; authority becomes de facto rather than de jure |
| B6 | Constraint binds primitives | Every Constraint must target at least one primitive instance and specify an enforcement mode (Blocking, Warning, Logging, or Advisory) | Unbound rules — constraints exist as documentation but do not operate on governed objects; rules are aspirational rather than operational |
| B7 | All objects flaggable | Every primitive class must have a signal attachment surface — the architectural guarantee that any governed object can be flagged | Invisible problems — humans cannot attach signals to the objects they observe; organizational silence becomes architectural, not cultural |
| B8 | Signals route to authority | Every Signal must route to an authority on the governance chain for the flagged object, at any depth in the delegation hierarchy | Unrouted signals — problems are captured but structurally cannot reach anyone with the power to act; the system appears to listen but cannot respond |
| B9 | IQ resolution creates Decision | Every resolved Investigative Query must produce a Decision or Work item | Inert emergence — patterns are captured but never operationalized; the gap between what the organization knows and what it does about it grows silently |
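B5's transitive form (B5-T in §5.3) walks the delegation chain and must both terminate at a root and detect circular delegation. A sketch of that traversal, assuming a hypothetical adjacency-map representation (`is_root` and `parent` are invented field names); the normative form is a SHACL-SPARQL shape:

```python
# Illustrative sketch; the normative form is B5-AuthorityChainReachesRoot
# (SHACL-SPARQL, §5.3). Representation is a hypothetical adjacency map.

def check_b5_traceable(authority_id: str, authorities: dict) -> bool:
    """B5-T: the delegation chain from this authority must terminate at a
    root without revisiting any node (circular delegation)."""
    seen = set()
    current = authority_id
    while current is not None:
        if current in seen:
            return False  # circular delegation detected
        seen.add(current)
        node = authorities.get(current)
        if node is None:
            return False  # dangling parent reference
        if node.get("is_root"):
            return True   # chain terminates at a root
        current = node.get("parent")
    return False          # chain ends without reaching a root
```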

§5.3 Enforcement Architecture

Invariants are specified as SHACL constraint shapes — declarative definitions of what must hold, independent of how the substrate enforces them.

Shapes Graph Separation

Constraint shapes live in a dedicated shapes graph, separate from the class ontology (§4). The ontology defines what primitives are (Open World Assumption — what is not stated may still be true). The shapes graph defines what must be validated (Closed World Assumption — what is not present is a violation). This separation is architectural: profile layering (§21) applies different shapes graphs to the same primitive classes; shapes can be updated without modifying class definitions; the substrate layer (shapes, CWA) and advisory layer (ontology, OWA) remain architecturally distinct.

Shape Classification

Each invariant shape is classified by the SHACL feature level required to express it:

| Shape | Invariant | SHACL Tier | Rationale |
| --- | --- | --- | --- |
| B1-WorkRequiresCommitment | B1 | Core | Single-node property check: commitment link exists |
| B2-CommitmentRequiresCapacity | B2 | Core | Single-node property check: capacity link exists |
| B3-EvidenceRequiresTruthType | B3 | Core | Single-node property check: truth type value in controlled vocabulary |
| B4-DecisionRequiresAccount | B4 | Core | Single-node property check: account link exists |
| B5-AuthorityTraceable | B5 (immediate) | Core | Disjunctive check: isRootAuthority OR authoritySource present |
| B5-AuthorityChainReachesRoot | B5-T (transitive) | SPARQL | Transitive closure: delegation chain terminates at root; detects circular delegation |
| B6-ConstraintBindsPrimitives | B6 (binding) | Core | Single-node property check: target and enforcement mode exist |
| B6-ConstraintUniversality | B6-U (universal) | SPARQL | Cross-instance: universal constraint targets all instances of governed class |
| B7-SignalSurfaceCoverage | B7 | Schema | Deployment-time: ontology structure guarantees signal attachment for all primitive classes |
| B8-SignalsRouteToAuthority | B8 | Core + SPARQL | Core: routing target exists. SPARQL: target is on the flagged object's authority chain at any depth |
| B9-IQResolutionCreatesDecision | B9 | SPARQL | Conditional: when status = Resolved, resolution output must exist |
| B10-AIOutputDerived | B10 (entry) | Core | Single-node property check: Evidence with AI actor as producer must have truth type = Derived |
| B10-PromotionRequiresHuman | B10 (promotion) | SPARQL | Graph traversal: any promotion transition on AI-originated Evidence must reference a Decision where made_by is a human actor (§22.2 actor type = Human) |
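B10-PromotionRequiresHuman is the only shape whose rationale names a specific traversal: promotion transition, referenced Decision, `made_by`, actor type. A sketch of the equivalent check, assuming hypothetical dict records for transitions, decisions, and actors; the normative form is the SHACL-SPARQL shape:

```python
# Illustrative sketch of B10-PromotionRequiresHuman; the normative form is
# a SHACL-SPARQL shape. Record shapes are hypothetical; `made_by` and the
# Human actor type follow §22.2.

def check_b10_promotion(transition: dict, decisions: dict, actors: dict) -> bool:
    """B10 (promotion): a promotion transition on AI-originated Evidence
    must reference a Decision made by a human actor."""
    decision = decisions.get(transition.get("decision_id"))
    if decision is None:
        return False  # promotion without a referenced Decision
    actor = actors.get(decision.get("made_by"))
    return actor is not None and actor.get("type") == "Human"
```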

Two-Pass Validation Strategy

Pass 1: SHACL Core (always Blocking). B1, B2, B3, B4, B5 (immediate), B6 (binding), B10 (entry). Single-node property checks — no graph traversal required. Run on every write operation at O(1) per instance. Violations prevent the operation. No profile variability — universal across all substrate profiles.

Pass 2: SHACL-SPARQL (profile-configurable). B5-T, B6-U, B8 (routing), B9, B10 (promotion). Graph traversal required — transitive closure, cross-instance joins, conditional evaluation. Configurable schedule (per-operation, batch, or periodic). Default enforcement mode varies by shape; profiles configure per organizational context.

Schema Pass: B7 (deployment-time). Validates ontology structure, not instance data. Shapes-on-shapes pattern: ontology loaded as data, schema shape verifies signal surface coverage. Runs once at deployment and again on schema changes. Must pass before any instance data is loaded.
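The difference between the two instance-data passes can be sketched as a driver: Pass 1 raises and blocks, Pass 2 reports for enforcement-mode handling. Illustrative only; the check-function interface and record shapes are hypothetical, and the normative mechanism is SHACL validation over the shapes graph:

```python
# Hypothetical validation driver; the normative mechanism is SHACL
# validation over the shapes graph. Checks take an instance dict -> bool.

class InvariantViolation(Exception):
    """Raised when a Pass 1 (always Blocking) shape fails."""

def run_pass1(instance: dict, checks) -> dict:
    """Pass 1: O(1) single-node checks on every write; any failure
    prevents the operation."""
    failed = [name for name, check in checks if not check(instance)]
    if failed:
        raise InvariantViolation("blocked by " + ", ".join(failed))
    return instance  # write proceeds

def run_pass2(instances, checks):
    """Pass 2: graph-level checks on a configurable schedule. Violations
    are reported, not raised; the enforcement mode (5.4) decides the action."""
    return [(name, inst["id"])
            for name, check in checks
            for inst in instances
            if not check(inst)]
```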

Constraint Conflict Detection

The validation architecture also detects conflicts between Constraint instances. At constraint creation or merge time, the engine checks whether a new constraint contradicts existing constraints on the same target. Three conflict classifications:

| Conflict Type | Description | Example |
| --- | --- | --- |
| Same-scope contradiction | Two constraints on the same primitive instance with incompatible enforcement modes or requirements | A Blocking constraint requires field X; another Blocking constraint prohibits field X — on the same governed object |
| Nested-scope contradiction | A parent-scope constraint conflicts with a child-scope constraint on the same target | Parent namespace requires minimum 3 approvers; child namespace sets maximum 2 approvers |
| Enforcement-mode conflict | Two constraints target the same governed relationship but specify contradictory enforcement modes | One constraint marks a relationship as Blocking; another marks the same relationship as Advisory |

Detected conflicts surface as B8 Signals routed to the authority governing the conflicting constraints. The protocol detects and reports; it does not resolve. Resolution is a policy decision — the authority decides which constraint prevails, producing a Decision (B4) with rationale. Conflict resolution patterns are DESIGN SPACE.

Conflict detection runs during Pass 2 (SPARQL). Default enforcement mode is Warning — a conflict does not block constraint creation but produces a signal that must be resolved through the authority chain.
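A sketch of the detection step for two of the three classes (same-scope contradiction and enforcement-mode conflict); nested-scope detection additionally requires the namespace hierarchy and is omitted. Field names (`requires`, `prohibits`, `target`, `mode`) are hypothetical, and the normative form is the Pass 2 SPARQL check:

```python
# Illustrative pairwise scan; the normative form is a Pass 2 SPARQL check.
# Field names are hypothetical. Detection only: conflicts surface as B8
# Signals, and resolution is a policy Decision (B4).

def detect_conflicts(constraints):
    conflicts = []
    for i, a in enumerate(constraints):
        for b in constraints[i + 1:]:
            if a["target"] != b["target"]:
                continue
            contradictory = (
                (a.get("requires") and a["requires"] == b.get("prohibits"))
                or (b.get("requires") and b["requires"] == a.get("prohibits"))
            )
            if contradictory:
                conflicts.append(("same-scope contradiction", a["id"], b["id"]))
            elif a["mode"] != b["mode"]:
                conflicts.append(("enforcement-mode conflict", a["id"], b["id"]))
    return conflicts
```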

B8 Routing Scope

B8 validates that the signal's routing target is somewhere on the flagged object's authority chain — the immediate governing authority or any ancestor at any depth in the delegation hierarchy. This implements Beer's algedonic channel: emergency signals bypass intermediate hierarchy to reach whoever has the power to act. The violation is routing to an authority entirely outside the governance chain, not routing to a higher-level authority on the correct chain.
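The routing-scope rule can be sketched as an ancestor walk: the target passes if it appears anywhere on the chain from the object's immediate authority up to the root. Illustrative only (`routed_to`, `governing_authority`, and `parent` are hypothetical field names); the normative form is the B8 SPARQL check:

```python
# Illustrative sketch of B8's SPARQL-tier scope rule; field names are
# hypothetical. Passes if the routing target is the immediate governing
# authority or any ancestor (Beer's algedonic channel: higher is allowed,
# off-chain is the violation).

def check_b8_routing(signal: dict, obj: dict, authorities: dict) -> bool:
    target = signal["routed_to"]
    current = obj["governing_authority"]
    seen = set()
    while current is not None and current not in seen:
        if current == target:
            return True
        seen.add(current)
        current = authorities.get(current, {}).get("parent")
    return False  # target is outside the object's governance chain
```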

§5.4 Enforcement Modes

| Mode | Behavior | Invariant Application |
| --- | --- | --- |
| Blocking | Prevents the operation entirely. The violating write/create is rejected. | All Pass 1 shapes (B1, B2, B3, B4, B5, B6 binding, B10 entry). Default for B5-T, B10 promotion. B7 schema test. |
| Warning | Allows the operation but produces an alert. The governance record includes the violation. | Default for B6-U, B8 routing, B9. Profiles may upgrade any Warning to Blocking. |
| Logging | Records the violation silently. No alert, no block. Visible only in audit queries. | Available as profile downgrade for any SPARQL shape during organizational transitions. |
| Advisory | Suggests an alternative without enforcement. | Reserved for non-invariant Constraints (§4.1). Not used for B1–B10 invariants. |

Pass 1 shapes are always Blocking — no profile may downgrade below Blocking. Pass 2 shapes default to Warning or Blocking depending on the invariant; profiles may adjust but may not downgrade below Logging.
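A sketch of mode dispatch, including the §5.5 rule that every violation is recorded regardless of mode. The log and alert structures are hypothetical substrate details:

```python
# Illustrative enforcement-mode dispatch; structures are hypothetical.
# Every violation is recorded regardless of mode (5.5 Audit Trail).

def apply_mode(mode: str, violation: str, audit_log: list, alerts: list) -> str:
    audit_log.append(violation)      # recorded in every mode
    if mode == "Blocking":
        return "rejected"            # operation prevented entirely
    if mode == "Warning":
        alerts.append(violation)     # operation allowed, alert produced
        return "allowed-with-alert"
    if mode == "Logging":
        return "allowed"             # silent record only
    return "allowed"                 # Advisory: suggestion, no enforcement
```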

§5.5 Violation Handling

Both validation passes produce SHACL Validation Reports persisted as Evidence records with truth type Authoritative. This closes an architectural loop: Constraint (shapes) → validation → Evidence (reports) → Account (audit trail), exercising B3 and B4 on the invariant enforcement mechanism itself.

Each violation record captures: invariant violated (B1–B10 identifier), shape that fired (URI from shapes graph), focus node (violating primitive instance), severity (sh:Violation), enforcement action (Blocked/Warned/Logged), validation pass (Pass 1/Pass 2/Schema), and timestamp.
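The fields above map naturally onto an immutable record. A sketch as a frozen dataclass (frozen mirrors the append-only rule; the string typing and field names beyond those listed in the text are hypothetical):

```python
# Illustrative record shape for a persisted violation; type choices are
# hypothetical. Frozen to mirror the append-only audit rule (5.5).
from dataclasses import dataclass

@dataclass(frozen=True)
class ViolationRecord:
    invariant: str        # "B1" through "B10"
    shape: str            # URI of the shape that fired (shapes graph)
    focus_node: str       # violating primitive instance
    severity: str         # sh:Violation
    action: str           # "Blocked" / "Warned" / "Logged"
    validation_pass: str  # "Pass 1" / "Pass 2" / "Schema"
    timestamp: str        # ISO 8601
```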

Override Protocol

When a Warning-mode violation requires an authorized override: the overriding actor must hold Authority (B5) governing the primitive instance in violation; the override is itself a Decision (B4) recording the specific invariant, rationale, and exception scope; the override produces Evidence (B3) with truth type Declared; overrides create new records (append-only — the original violation Evidence is never modified).
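The override preconditions can be sketched as a single predicate (field names are hypothetical; the normative requirements are the B5, B4, and B3 obligations just listed, and append-only means the original violation record is never touched):

```python
# Illustrative precondition check for a Warning-mode override; field names
# are hypothetical. The original violation Evidence is never modified:
# an override only creates new records.

def validate_override(override: dict, violation: dict, authorities_held: set) -> bool:
    """An override must (1) come from an actor holding Authority over the
    violating instance (B5), (2) be a Decision recording invariant,
    rationale, and exception scope (B4), and (3) produce Evidence with
    truth type Declared (B3)."""
    holds_authority = violation["governing_authority"] in authorities_held
    is_decision = bool(override.get("invariant")) \
        and bool(override.get("rationale")) \
        and bool(override.get("scope"))
    declares = override.get("evidence_truth_type") == "Declared"
    return holds_authority and is_decision and declares
```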

Audit Trail

Every violation is recorded regardless of enforcement mode. Blocking violations record the rejected operation. Warning violations record the alert and any subsequent override. Logging violations record silently. The violation audit trail is queryable through the same lineage mechanisms as any other Evidence record (§28).

Boundaries

This section specifies behavioral invariants — relationships between primitives that must hold. It does NOT specify: the primitives themselves (§4), the conservation laws the invariants enforce (§9), the implementation schema for invariant enforcement (§26), or the specific SHACL shape definitions (shapes graph, separate artifact).

Constraint conflict detection detects and reports conflicts; resolution patterns are DESIGN SPACE for substrate implementations.

Positions

Locked. Ten behavioral invariants (B1–B10). Three functional groups. SHACL enforcement via two-pass validation + schema pass. Four enforcement modes. B10 is a specialization of B3 for AI actors. Constraint conflict detection surfaces conflicts as B8 Signals.

Post-lock additions (v2.0.0). B10 (AI epistemic promotion gateway). Opaque added to B3's controlled vocabulary. Constraint conflict detection (three types). All post-lock changes composed from POST-LOCK-BREADCRUMBS.md decided items.

Lineage

v1.0.0 (February 25, 2026): Base lock. Nine invariants (B1–B9), three groups, SHACL enforcement architecture. v2.0.0 (March 19, 2026): Post-lock composition. B10 added to Group 2 (Epistemic Integrity). Opaque truth type added to B3 vocabulary. Constraint conflict detection added to enforcement architecture.

Commitments

SDK implementations MUST enforce all Pass 1 shapes as Blocking. SDK implementations MUST support the four enforcement modes. SDK implementations MUST persist violation records as Evidence with truth type Authoritative. SDK implementations MUST support the override protocol for Warning-mode violations.

Coverage

All ten invariants fully specified with: rule definition, constraint expression, failure-without analysis, research grounding, SHACL shape classification, and enforcement mode. Conservation law correspondence complete. Constraint conflict detection specified at protocol level; resolution patterns are DESIGN SPACE.

Addressing

Document ID: s05-behavioral-invariants. Part: II (Core Protocol). Individual invariants addressable as B1–B10. Shapes addressable by shape name (e.g., B10-AIOutputDerived). Cross-references use [s05-behavioral-invariants.{section-id}] notation.

Attribution

Primary author: Cam (founder, GrytLabs). Research and composition support: Claude (Anthropic).

Situational Frame

Invariants were derived through iterative research across 17 sprints, with B1–B9 locked at base specification (February 2026) and B10 composed from post-lock decisions (OI-29, March 2026). Constraint conflict detection composed from OI-17.

Scope Governance

Core namespace: dlp. All invariant shapes reside in the core shapes graph. Profile-specific shape configuration operates within the core namespace via enforcement mode overrides, not shape modification.

Framing

Invariants are the protocol's policy layer — Beer's S5 function mechanized. They are not operational rules (those are Constraints configured per-instance). They are structural guarantees that hold regardless of operational context, profile configuration, or graduation stage.

Adaptation

B10 was added post-lock when the gap between B3 (truth type classification) and enforcement of the AI-to-human promotion gateway became apparent. B3 classifies; B10 enforces. The two invariants are architecturally complementary — B10 could not exist without B3, and B3 is incomplete without B10 in an AI-native context. Constraint conflict detection was added when OI-17 identified that the protocol specified constraints but had no mechanism for detecting when constraints contradicted each other.

Readiness

This section is ready for SDK implementation. All invariants have SHACL shape classifications and enforcement mode assignments. The two-pass validation strategy is implementable with standard SHACL tooling. B10 shapes require actor type classification (§22.2) to be implemented first.

Meaning Resolution

No meaning resolution for this document.

Perception Surface

No perception surface for this document. Invariants are internal protocol constraints; they do not interface with external systems directly. External system outputs enter through Environment Interface (§4.6.3) and are subject to invariant enforcement once inside the substrate.

Temporal Governance

No temporal governance for this document. Invariants are not time-bounded; they persist as long as the protocol is in effect.