OpLogica Framework Documentation
Complete technical reference for the triadic verification framework for accountable AI decision systems.
OpLogica is a triadic verification framework that unifies three dimensions of AI accountability:
- Operational integrity — proving computations executed correctly
- Logical justification — proving decisions follow from premises and rules
- Ethical alignment — proving decisions satisfy pre-declared constraints
Unlike post-hoc explanation methods (LIME, SHAP), OpLogica provides formal justifications: verifiable demonstrations that specific decisions followed necessarily from declared constraints and observed data.
Currently under peer review at AI and Ethics (Springer Nature).
📜 License: Apache 2.0
2. The Triadic Framework
OpLogica's core innovation is unifying three proof types into a single, cryptographically bound verification bundle. To our knowledge, no other framework integrates all three.
2.1 Proof of Operation (PoO)
Cryptographic verification that computational processes executed as specified — without tampering or corruption.
Purpose: Ensure no unauthorized modification occurred during decision computation.
Mechanism:
- State serialization: All inputs (patient data D, policy P, timestamp T) are deterministically serialized
- SHA3-256 hashing of the serialized state (matching Definition 3.1)
- Digital signature Σ = Sign(sk, H ∥ T) for authenticity
- Timestamp binding to establish temporal ordering
What it proves: The decision record is intact and was produced by the authorized system at the claimed time.
Paper reference: Definition 3.1, Layer L0–L1
2.2 Proof of Reason (PoR)
Formal demonstration that decisions follow necessarily from premises and rules. Not just "what features mattered," but why the output was the right one given declared policies and observed data.
Mechanism:
- Directed Acyclic Graph (DAG) called a "Reason Graph"
- Three node types: Premises (observed data), Rules (from policy), Conclusions (derived)
- Edges represent inference relationships (input, entails, determines)
- Δ-Logic Engine evaluates rules deterministically
- Graph is hashed and signed for integrity
What it proves: The decision is the unique logical consequence of the given data and the declared policy.
Paper reference: Definition 3.3–3.4, Layer L3
2.3 Proof of Intent (PoI)
A priori declaration of ethical constraints, cryptographically signed before any decision is made.
Purpose: Prevents post-hoc rationalization. The system proves that constraints existed before operations and that decisions satisfied them.
Mechanism:
- Policy declaration with constraints in a formal DSL
- Cryptographic signing at declaration time (before any decisions)
- Temporal Precedence (Axiom 3.1): PoI timestamp < Decision timestamp
- Automated constraint verification at decision time
- Each constraint marked as: satisfied, violated, or triggered (warning)
What it proves: The ethical rules were not invented after the fact. They were committed to in advance.
Paper reference: Definition 3.5–3.6, Layer L3
3. Formal Definitions
A Proof of Operation is a tuple PoO = (H, T, Σ) where:
• H = SHA3-256(serialize(D, P, T)) — cryptographic hash of the decision state
• T ∈ ISO-8601 — UTC timestamp
• Σ = Sign(sk, H ∥ T) — digital signature
V(PoO) = 1 iff Verify(pk, Σ, H ∥ T) = 1 ∧ T ≤ T_now
A Reason Graph is a directed acyclic graph G = (V, E) where:
• V = V_premise ∪ V_rule ∪ V_conclusion (disjoint partition)
• E ⊆ V × V with labels ∈ {input, entails, determines}
• For every v ∈ V_conclusion, there exists a path from some v′ ∈ V_premise through some v′′ ∈ V_rule to v
PoR = (G, Δ, H_G) where:
• G — valid Reason Graph
• Δ — delta-logic evaluation trace
• H_G = SHA3-256(serialize(G))
A constraint C = (id, rule, severity) where:
• rule — predicate over decision variables
• severity ∈ {mandatory, warning}
PoI = (P, C_set, T_decl, Σ_decl) where:
• P — policy name
• C_set = {C₁, ..., Cₙ} — constraint set
• T_decl — declaration timestamp, satisfying T_decl < T_decision (Temporal Precedence)
• Σ_decl = Sign(sk, serialize(P, C_set, T_decl))
VB = (PoO, PoR, PoI, M_root) where:
• M_root = MerkleRoot(H_PoO, H_PoR, H_PoI)
• V(VB) = 1 iff V(PoO) ∧ V(PoR) ∧ V(PoI) ∧ VerifyMerkle(M_root)
For any valid verification bundle VB:
T_PoI < T_PoO — Intent must precede Operation
4. Architectural Layers
OpLogica implements five architectural layers, from foundational event recording to external verification interfaces:
5. Constraint DSL
OpLogica uses a domain-specific language for declaring constraints in the Proof of Intent:
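The grammar itself is not reproduced in this section; the fragment below is a purely illustrative sketch of what a policy declaration could look like. All syntax, keywords, and constraint names here are assumptions, not the framework's actual DSL.

```text
policy "triage-v1" {
  constraint no_age_cutoff {
    rule:     decision != DEFER when severity == CRITICAL
    severity: mandatory
  }
  constraint latency_budget {
    rule:     latency_ms < 100
    severity: warning
  }
}
```

Whatever the concrete syntax, each constraint carries an identifier, a predicate over decision variables, and a severity in {mandatory, warning}, matching the formal definition above.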
6. Interactive Demos
Try the OpLogica framework with live, interactive demonstrations:
Deterministic patient triage with full triadic verification. Five patient parameters produce a priority decision with PoO, PoR, PoI, and Merkle root.
Try Medical Triage Demo →
Deterministic loan evaluation with full triadic verification. Five financial parameters produce an approval decision with PoO, PoR, PoI, and Merkle root.
Try Credit Assessment Demo →
Deterministic candidate evaluation with full triadic verification. Five applicant parameters produce a hiring recommendation with PoO, PoR, PoI, and Merkle root.
Try Employment Screening Demo →
Deterministic permit evaluation with full triadic verification. Five project parameters produce a permit decision with PoO, PoR, PoI, and Merkle root.
Try Building Permit Demo →
Ask the OpLogica chat for a triage assessment with patient parameters. The system intercepts the request and runs the deterministic engine automatically.
Open Chat →
Coming Soon:
- Legal Compliance Evaluation
7. Empirical Results
Validated in a medical triage simulation with 1,000 synthetic patient cases, in a comparative evaluation against LIME and an unverified baseline:
| Metric | OpLogica | LIME | Unverified |
|---|---|---|---|
| Verification Coverage | 100% | 87% | 0% |
| External Auditability | 98% | 71% | 12% |
| Policy Compliance | 99.2% | 94% | 89% |
| Mean Latency | 47 ms | 134 ms | 23 ms |
| Constraint Satisfaction | 99.8% | N/A | N/A |
Ablation Study: Each proof type contributes non-redundantly:
- Removing PoO → integrity verification drops to 0%
- Removing PoR → auditability drops from 98% to 31%
- Removing PoI → compliance drops from 99.2% to 76%
All differences are statistically significant at p < 0.001.
8. Regulatory Alignment
EU AI Act (2024)
- Article 14 — Human oversight: Decision Ledger provides full audit trail
- Article 13 — Transparency: Reason Graphs provide formal justifications
- High-risk systems: triadic verification supports the traceability, record-keeping, and documentation requirements
GDPR Article 22
"Meaningful information about the logic involved" — OpLogica provides exactly this: not just feature importance, but formal proof that the decision followed from declared rules.
NIST AI Risk Management Framework
- Map: Constraint DSL maps ethical requirements to formal, checkable constraints
- Measure: Empirical validation with 1,000 cases
- Manage: Verification bundles for ongoing compliance
9. API Reference
POST /api/triage-demo
Run a medical triage assessment with full triadic verification.
Request:
Response:
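The exact request and response schemas are not reproduced in this section; the shapes below are illustrative assumptions based on the triage demo described above (all field names are hypothetical). A request supplies the five patient parameters:

```json
{
  "age": 64,
  "spo2": 91,
  "heartRate": 118,
  "respiratoryRate": 24,
  "consciousness": "alert"
}
```

and the response would carry the decision together with the full verification bundle:

```json
{
  "decision": { "priority": "HIGH" },
  "verification": {
    "poo": { "hash": "…", "timestamp": "…", "signature": "…" },
    "por": { "graphHash": "…" },
    "poi": { "policy": "triage-v1", "declaredAt": "…" },
    "merkleRoot": "…"
  }
}
```

See `server/index.js` for the actual endpoint implementation.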
10. Source Code & Reproducibility
The implementation is open-source under the Apache 2.0 license.
Key Files:
- `server/triageEngine.js` — Deterministic triage engine with triadic verification
- `server/index.js` — API endpoints and chat integration
- `public/demo.html` — Interactive demo interface
- `public/chat.html` — Chat interface with triage intercept
Reproducibility: All results in the paper can be reproduced using the provided code. The triage engine is fully deterministic: same inputs always produce the same decision and cryptographic proofs.