Mike Brown is a California attorney. His friend builds backyard cottages and spends months fighting permit rejections. So Brown built CrossBeam, an AI-powered ADU permit assistant, in six days and took first place at Anthropic’s Opus 4.6 hackathon. Thirteen thousand applied, 500 got in, and most of them were developers. The other winners: a Brussels cardiologist who built postvisit.ai, turning visit transcripts into personalized health guidance, and Kyeyune Kazibwe, a Ugandan road technician who built TARA, a dashcam-to-economic-appraisal pipeline tested on an actual road under construction. None of them had shipped software before. The lesson everyone drew was that domain expertise beats coding. The lesson nobody drew was that the hackathon proved exactly why most AI products fail.
The Hackathon Proved Two Things
First, domain experts can now build software. That is real and it is permanent. The tools have crossed a threshold where a lawyer who understands California permit law can express that understanding as a working application without writing traditional code. Brown’s CrossBeam includes 28 reference files covering the HCD ADU Handbook and California Government Code sections, a corrections interpreter, and a three-mode city research system. A cardiologist can turn clinical knowledge into patient guidance. A road technician can turn field expertise into infrastructure investment recommendations. This is not a trend; it is an irreversible shift.
Second, and this is what the breathless coverage missed: a hackathon demo is not a product. Six days of building proves you can express domain knowledge as software. It proves nothing about whether that software will still work correctly six months from now, whether it will handle the edge cases that only surface in production, whether its outputs can be audited, whether its reasoning can be traced, whether its governance can be verified.
CrossBeam works because Brown understands permit law. California ADU permits have a 90%+ rejection rate on first submission, and most rejections are bureaucratic: missing signatures, incorrect code citations, incomplete forms. The average six-month permit delay costs homeowners $30,000. Brown knows this because he is a lawyer, not because he is a developer. But what happens when California revises the ADU code again? What happens when the app gives advice that a contractor relies on and the structure fails inspection? What happens when a municipality asks why CrossBeam recommended approval? The domain expertise that built the app is the same domain expertise that knows these questions are not hypothetical. They are liabilities.
Domain Expertise Is Necessary but Not Sufficient
The hackathon winners understood something that most developers do not: the hard part of building useful AI is not the code, it is knowing what the system should do in the first place. A developer who does not understand permit law will build a permit app that hallucinates plausible nonsense. A lawyer who understands permit law will build one that gives correct answers, at least on demo day.
But correctness on demo day is the easy version of the problem. The hard version is correctness under governance: correctness that persists, correctness that can be audited, correctness that survives the departure of the domain expert who built it. The hard version is what institutions need, and hackathons do not test for it.
Every hospital, every law firm, every government agency that has tried to deploy AI has run into this wall. The prototype works. The demo impresses. Then the domain expert who built it moves on, the regulations change, the edge cases accumulate, and nobody can explain why the system is doing what it is doing. The institution is left with a black box that was once brilliant and is now merely dangerous.
What Contract AI Changes
CANONIC [1] is contract AI. That means every capability the system has is declared in a contract, and every contract is governed, versioned, and auditable [2].
When a domain expert builds with CANONIC, their expertise does not live in prompts that drift or in code that nobody else can read. It lives in governed contracts that declare what the system knows, what it can do, and what it must not do. The contract is the product. The contract is the audit trail. The contract is what survives when the expert who wrote it moves to a different project.
This is not abstraction for abstraction’s sake. Consider what the lawyer’s permit app would look like as contract AI:
INTEL contracts [3] declare what the system knows about California permit law, with every claim traced to a source. Brown’s 28 reference files become governed INTEL, not loose documents in a repo. When California revises the Government Code, the contract updates and every downstream recommendation updates with it. When a municipality asks why CrossBeam recommended approval, the INTEL contract answers with a citation chain, not a shrug.
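CANONIC’s actual contract format is not shown here; as a purely illustrative sketch, the idea of claims-traced-to-sources can be modeled as a registry where every entry carries its citation and contract version, so an audit question resolves to a citation chain rather than an opaque model output. All names, values, and citations below are hypothetical.

```python
# Illustrative sketch of an INTEL-style claim registry (hypothetical names).
# Each claim stores its source citation alongside its content, so asking
# "why did the system say this?" returns the chain, not a shrug.
intel = {
    "adu_side_setback_max": {
        "value": "no more than 4 feet may be required",
        "source": "Cal. Gov. Code ADU provisions (illustrative citation)",
        "contract_version": "2024.1",
    },
}

def answer_with_citation(claim_id: str) -> str:
    # Resolve a claim to its value plus the governed provenance trail.
    entry = intel[claim_id]
    return (
        f"{entry['value']} "
        f"(source: {entry['source']}, contract v{entry['contract_version']})"
    )

print(answer_with_citation("adu_side_setback_max"))
```

The point of the sketch is only that provenance is data: when the code section is revised, updating the entry updates every downstream answer that cites it.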
COVERAGE contracts declare what the system can and cannot do. Not “the AI handles permits” but “the AI processes residential ADU permits in California jurisdictions that adopted the 2024 revised code, and it does not process commercial permits, variance requests, or jurisdictions still operating under the prior code.” The boundary is the product. Brown already built this knowledge into CrossBeam’s three-mode city research system, but it lives in code that only Brown understands. Under contract AI, the boundary is declared, versioned, and readable by the next attorney who inherits the system.
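As with INTEL, the real COVERAGE format is not documented in the source. A minimal sketch, assuming nothing about CANONIC’s internals: the declared boundary becomes data plus a single guard, so a refusal is a readable fact of the contract rather than behavior buried in code. Every name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CoverageContract:
    """Hypothetical sketch: a declared, versioned capability boundary."""
    version: str
    permit_types: frozenset   # what the system handles
    excluded_types: frozenset # what it explicitly refuses
    required_code_year: int   # jurisdiction must have adopted this code

    def in_scope(self, permit_type: str, adopted_code_year: int) -> bool:
        # Because the boundary is declared data, a request is either inside
        # the contract or rejected for a reason the next attorney can read.
        return (
            permit_type in self.permit_types
            and permit_type not in self.excluded_types
            and adopted_code_year >= self.required_code_year
        )

adu_coverage = CoverageContract(
    version="2024.1",
    permit_types=frozenset({"residential_adu"}),
    excluded_types=frozenset({"commercial", "variance_request"}),
    required_code_year=2024,
)

print(adu_coverage.in_scope("residential_adu", 2024))   # inside the boundary
print(adu_coverage.in_scope("variance_request", 2024))  # refused by declaration
```

The design choice the sketch gestures at: the boundary lives in a versioned declaration, not in one person’s mental model of a three-mode research system.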
LEARNING contracts [4] capture what the system has learned from production use, governed and versioned. Not unstructured logs that nobody reads, but structured patterns that feed back into the system’s intelligence. When postvisit.ai encounters a cardiac case it handles poorly, that encounter becomes a governed learning event that improves every future interaction. When TARA misclassifies road damage on a Ugandan highway, the correction enters the governed record and propagates. The system gets smarter under governance, not despite it.
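One more illustrative sketch, again with hypothetical names rather than CANONIC’s actual format: a governed learning event is a structured, versioned record appended to a ledger, so a later behavior change can be traced back to the specific corrections that caused it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LearningEvent:
    """Hypothetical sketch: one governed record of a production correction."""
    event_id: str
    observed: str          # what the system produced
    correction: str        # what review determined it should have produced
    contract_version: str  # the contract in force when the event occurred

ledger: list[LearningEvent] = []

def record(event: LearningEvent) -> None:
    # Append-only by convention: corrections accumulate as structured,
    # versioned events rather than unstructured logs.
    ledger.append(event)

# Example in the spirit of the TARA misclassification above (values invented).
record(LearningEvent(
    event_id="evt-001",
    observed="classified pothole as surface wear",
    correction="pothole, severity grade 3",
    contract_version="2024.1",
))
print(len(ledger))  # 1
```

The contrast with raw logs is the whole point: each event names the contract version it occurred under, which is what lets the correction propagate under governance.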
The Real Competitive Advantage
The hackathon coverage concluded that domain expertise is the new competitive advantage in AI. That is half right. Domain expertise is the raw material. The competitive advantage is domain expertise that compounds, domain expertise that survives the people who hold it, domain expertise that can be audited by regulators and trusted by institutions.
A lawyer can build a permit app in six days. A law firm needs a permit system that works for six years. The distance between those two things is governance, and governance is what CANONIC provides.
The hackathon winners proved that the era of developer gatekeeping is over. Domain experts can build. The question that remains is whether what they build can be trusted at institutional scale, whether it can be governed, whether it can be held accountable. That question is not answered by better prompts or faster models. It is answered by contracts.
CANONIC is where domain expertise becomes institutional intelligence.
Sources
| Claim | Source | Reference |
|---|---|---|
| Mike Brown, California attorney, won first place at Anthropic hackathon with CrossBeam | CrossBeam GitHub repository | github.com/mikeOnBreeze/cc-crossbeam |
| 13,000 applied, 500 selected, non-developers won | Jing Hu, “I Studied Every Anthropic AI Hackathon Winner,” 2nd Order Thinkers, Feb 2026 | 2ndorderthinkers.com |
| CrossBeam: 28 reference files, HCD ADU Handbook, three-mode city research | CrossBeam repository README and skill files | github.com/mikeOnBreeze/cc-crossbeam |
| California ADU permits 90%+ first-submission rejection rate, $30K average delay cost | CrossBeam project description | github.com/mikeOnBreeze/cc-crossbeam |
| Kyeyune Kazibwe built TARA, dashcam-to-economic-appraisal pipeline, won “Keep Thinking” Prize | Marco Kotrotsos, “Anthropic Hackathon Results,” Medium, Mar 2026 | kotrotsos.medium.com |
| Brussels cardiologist built postvisit.ai for patient follow-up | Sandy Carter, LinkedIn post on hackathon winners | linkedin.com/sandyacarter |
| Hackathon was Built with Opus 4.6: Claude Code Hackathon, Feb 10-16 2026 | Anthropic / Claude AI official announcement | threads.com/@claudeai |
| “A lawyer, a road inspector and a cardiologist walk into a coding competition” | Digital Digging, Substack, Mar 2026 | digitaldigging.org |
Figures
| Context | Type | Data |
|---|---|---|
| post | gauge | value: 255, max: 255, label: CONTRACT COVERAGE |
References
1. [I-25] Governance as Compilation.
2. [I-28] MAGIC Specification.
3. [G-1] FOUNDATION/LANGUAGE.md.
4. [G-11] MAGIC/SERVICES/LEARNING/CANON.md.