
729 Hallucinations


Legal AI has a governance problem. Not a technology problem.


729 documented incidents of AI-generated fake citations reaching judges. 66 sanctions in 2025 alone. Fines from $100 to $31,000. Three federal courts sanctioned lawyers in a single two-week period last August.

The profession responded with policies. The ABA published Formal Opinion 512. Thirty states issued guidance. Three hundred judges adopted disclosure orders.

The hallucinations kept coming.


The Architecture Problem

Every legal AI tool works the same way:

1. Lawyer asks a question
2. Tool searches a database
3. Tool generates an answer
4. Lawyer is supposed to verify it

Step 4 is the failure point. It asks the same person who used AI to save time to spend that time checking every citation. When the output looks authoritative, verification feels redundant.

The architecture makes non-compliance the path of least resistance.
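The four steps above can be sketched in code. This is a hypothetical illustration, not any vendor's actual implementation; the function names (`search_db`, `generate_answer`, `legal_ai_tool`) are invented for the example. The point is structural: verification is a separate, optional, human-driven step bolted on after generation.

```python
# Hypothetical sketch of the four-step pipeline described above.
# All names here are illustrative, not a real legal-AI API.

def search_db(question: str) -> list[str]:
    # Step 2: retrieve candidate sources (stubbed for the example).
    return ["Smith v. Jones, 123 F.3d 456"]

def generate_answer(question: str, sources: list[str]) -> str:
    # Step 3: generate an answer. Nothing in the architecture forces the
    # model to cite only what `sources` actually contains.
    return f"See {sources[0]}."

def legal_ai_tool(question: str, verify: bool = False) -> str:
    sources = search_db(question)                 # step 2
    answer = generate_answer(question, sources)   # step 3
    if verify:
        # Step 4: left to the lawyer, outside the system, off by default.
        pass
    return answer

print(legal_ai_tool("Is this point settled?"))
```

Because `verify` defaults to off and changes nothing when on, skipping it costs the user nothing: the code makes the path of least resistance visible.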


Policy vs Construction

| Approach | How it works | Track record |
| --- | --- | --- |
| Governance-by-policy | Write rules. Train people. Sanction violations. | 729+ failures |
| Governance-by-construction | Build systems where non-compliance is architecturally difficult. | MammoChat: 0 incidents |

The legal profession is trying to solve an architecture problem with behavioral interventions. The data is unambiguous.
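To make the contrast concrete, here is a minimal sketch of governance-by-construction, assuming a closed corpus of verified citations. The names (`VERIFIED_CORPUS`, `emit_answer`, `UnverifiedCitationError`) are invented for illustration and do not describe MammoChat's or LawChat's actual internals; the idea is that an unverified citation cannot reach the output at all.

```python
# Hypothetical governance-by-construction: the system can only emit
# citations that exist in a verified corpus; fabrication fails closed.

VERIFIED_CORPUS = {
    "Smith v. Jones, 123 F.3d 456",
    "Doe v. Roe, 789 U.S. 1",
}

class UnverifiedCitationError(Exception):
    """Raised when an answer tries to cite outside the verified corpus."""

def emit_answer(text: str, citations: list[str]) -> str:
    # Non-compliance is architecturally difficult: an unverified citation
    # raises here, before anything reaches the lawyer or the judge.
    for cite in citations:
        if cite not in VERIFIED_CORPUS:
            raise UnverifiedCitationError(cite)
    return f"{text} [{'; '.join(citations)}]"

print(emit_answer("This point is settled.", ["Smith v. Jones, 123 F.3d 456"]))
```

The behavioral approach asks people to run the check; the constructive approach makes the check a precondition of producing output at all.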


What Lawyers Actually Need

This is what we are building: LawChat, governance-by-construction for legal AI.

MammoChat is free for patients. LawChat is built for the profession.



CANONIC FOUNDATION