Lattice · The governance faculty

Policy
as code.

Every decision gated against rules. Every denial logged with the rule that triggered it.

Lattice is the Governance faculty of the Ginnung cognitive runtime. It sits between your agent's intent and its action — evaluating each proposed step against policies you author in plain code. Allowed actions pass through with a signed receipt. Denied actions produce a SonderEvent naming the policy, the input, and the rationale. The audit trail writes itself.

What is Lattice

Guardrails are not a prompt.

The dominant guardrail pattern is to ask the model nicely. “You are a helpful assistant. Do not...” A prompt is a poor place to enforce policy: it is invisible, untestable, and trivially bypassed. When auditors ask which rules applied to a decision, “we asked the model to be careful” is not an answer.

Lattice moves policy out of the prompt and into code. Rules are typed, versioned, unit-testable functions over a structured proposal — “agent X wants to do Y with payload Z under context C.” Each rule returns allow, deny, or escalate, with a reason. The evaluator is deterministic. The decision is reproducible.

You write the rules. Lattice runs them on every proposal, signs the outcome, and ships the decision into the same SonderEvent log as memory and reasoning. One trail. One source of truth.

What it does

Capabilities

Author

Rules as typed code

Write policies in TypeScript or Python. Unit-test them. Diff them in code review. No DSL to learn, no UI to wrestle.

Evaluate

Deterministic verdicts

Allow, deny, or escalate — with a reason. Same input, same verdict, every time. Replay any historical decision against the policy that ruled it.
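A deterministic evaluator can be sketched in a few lines. This is an assumed design, not the Lattice internals: every rule runs, deny outranks escalate, escalate outranks allow, and the decisive rule supplies the reason.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Verdict:
    outcome: str   # "allow" | "deny" | "escalate"
    reason: str

Rule = Callable[[dict], Optional[Verdict]]

def evaluate(proposal: dict, rules: list[Rule]) -> Verdict:
    # Collect every rule's opinion; rules returning None abstain.
    verdicts = [v for rule in rules if (v := rule(proposal)) is not None]
    # Fixed precedence makes the outcome a pure function of (proposal, rules).
    for severity in ("deny", "escalate"):
        for v in verdicts:
            if v.outcome == severity:
                return v
    return Verdict("allow", "no rule objected")

def deny_large(p: dict) -> Optional[Verdict]:
    return Verdict("deny", "amount over limit") if p.get("amount", 0) > 1000 else None
```

Same input, same verdict: replaying a historical proposal against the rule list that governed it reproduces the decision exactly, which is what makes audits cheap.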

Sign

Cryptographic receipts

Allowed actions carry a signed receipt naming the rules that passed. Downstream systems verify the receipt before acting. No receipt, no execution.
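What a signed receipt buys you can be shown with a toy scheme. This sketch uses HMAC-SHA256 over the canonicalized decision; the key, field names, and signing scheme are demo assumptions, not the runtime's actual cryptography.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-shared-key"   # assumption: symmetric key, for illustration only

def sign_receipt(decision: dict) -> dict:
    """Attach an HMAC over the canonical JSON form of the decision."""
    body = json.dumps(decision, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {**decision, "signature": sig}

def verify_receipt(receipt: dict) -> bool:
    """Recompute the HMAC and compare in constant time."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    expected = hmac.new(SIGNING_KEY, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt.get("signature", ""), expected)

receipt = sign_receipt({"proposal_id": "p-42", "outcome": "allow",
                        "rules_passed": ["deny_external_email"]})
```

A downstream executor that checks `verify_receipt` before acting enforces "no receipt, no execution": any tampering with the outcome or the rule list invalidates the signature.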

Escalate

Human-in-the-loop, gracefully

Some decisions shouldn't be automated. Lattice escalates to your queue, channel, or approval system — and resumes the agent exactly where it paused.
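The pause-and-resume mechanics can be illustrated with a generator: the agent step yields the escalated proposal, waits for the human verdict, and resumes at the exact point it stopped. The mechanics and names here are illustrative, not the Lattice API.

```python
def agent_step():
    """A hypothetical agent step that escalates a risky action."""
    # Yield hands the proposal to the approval queue and suspends here.
    decision = yield {"action": "wire_transfer", "amount": 250_000}
    if decision == "approved":
        return "transfer executed"
    return "transfer cancelled"

step = agent_step()
pending = next(step)          # evaluator escalates; the agent pauses mid-step
# ... `pending` lands in a queue or channel; a human approves or rejects ...
try:
    step.send("approved")     # resume the agent exactly where it paused
except StopIteration as done:
    print(done.value)         # prints "transfer executed"
```

The point of the sketch: escalation is not a crash or a restart. The agent's state survives the wait, and the human verdict flows back in as ordinary data.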

Version

Policy as artifact

Every policy change is a versioned event. Auditors get a timeline: when the rule changed, who shipped it, which decisions ran under each version.
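A minimal sketch of policy-as-artifact, with an append-only version log and decisions stamped with the version they ran under. The log shape, field names, and data are illustrative assumptions.

```python
# Append-only log of policy changes (hypothetical shape and data).
policy_log = [
    {"version": 1, "rule": "deny_external_email", "author": "dana", "shipped": "2026-01-10"},
    {"version": 2, "rule": "deny_external_email", "author": "sam",  "shipped": "2026-03-02"},
]

def current_version() -> int:
    """The latest shipped policy version."""
    return policy_log[-1]["version"]

# Each decision records which policy version governed it, so an auditor
# can join decisions to the log and see exactly which rule text applied.
decision = {"proposal_id": "p-181", "outcome": "deny",
            "policy_version": current_version()}
```

Joining `decision["policy_version"]` back to `policy_log` gives the auditor's timeline from the capability blurb: when the rule changed, who shipped it, and which decisions ran under each version.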

Compose

A faculty, not a gateway

Lattice gates the action and emits the decision to Engram. Reasoning, memory, audit — one runtime, one event stream. No bolt-on policy engine.

Why policy as code

August 2, 2026.

EU AI Act Article 14 takes effect August 2, 2026. High-risk AI systems must be designed for effective human oversight — operators must be able to interpret outputs, intervene on decisions, and stop the system at any point. Article 15 adds that the system must be accurate, robust, and resilient by design, with errors documented.

A system whose only governance is “the prompt told it not to” fails both articles. There is nothing to interpret. There is no intervention point. There is no record of what rule applied.

Lattice gives you the intervention points and the record. Each decision is a structured proposal evaluated against versioned rules, with deny and escalate as first-class outcomes. Auditors get the policy diff and the decision log. Operators get a queue. Engineers get unit tests.

Get started

Two ways in.

Self-host

github.com/heybeaux/lattice

TypeScript + Python SDKs. Write rules as code, run the evaluator in your process or as a sidecar. MIT-licensed.

Compose

As a Ginnung faculty

Running Engram or Parliament? Lattice drops into the same SonderEvent bus. Decisions land in the same audit log as memories and deliberations.

Built by

Lattice is the Governance faculty of Ginnung, the cognitive runtime built by heybeaux. It rests on a simple bet: agents that pass auditable rules will outlive agents that don’t.