Prompts are recipes. Governance is theory. The entire industry confused the two.
AI Is Like Cooking
Anthony Chang, founder of AIMed and the American Board of AI in Medicine, put it simply: “AI is like a recipe. Healthcare is like a meal. Some clinicians can cook well without recipes, but some cannot.”
That analogy cuts deeper than it sounds. Theory says fish is marinated in acid. Acid denatures protein. It does not drift, does not depend on who is in the kitchen or what model of stove they use. Recipes say use lemons. Recipes drift, because vinegar works just as well and lime works better for ceviche, and the next chef who walks in has no idea why you chose lemons in the first place.
Prompts are recipes. Every prompt is an oral tradition: useful to the person who wrote it, invisible to the institution that depends on it, gone the moment the context window fills. Prompt libraries are folklore, and folklore degrades with every retelling.
Governance is theory. It declares the invariant. The implementation can vary, but the contract is fixed, reproducible, and transferable. Elections are the feedback; amendments are the fix. It applies in the kitchen and in the operating room: for any domain, the theory of governance is the same. What the recipe books are missing, as Chang observed, is feedback from users. A recipe that learns from every meal it produces is a learning health system, and that continuous loop is what separates a prompt library from an institutional capability.
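To make the contrast concrete, here is a minimal sketch of what a governed contract might look like in code. Every name here is hypothetical, and this is an illustration of the idea, not a reference implementation: the invariant is declared once, every output is checked against it, and feedback arrives as a versioned amendment rather than a silent edit to a prompt.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """A governed invariant: versioned, reproducible, transferable."""
    name: str
    version: int
    invariant: str   # the theory, e.g. "acid denatures protein"
    checks: tuple    # executable predicates over outputs

    def validate(self, output: dict) -> bool:
        """Every output must satisfy every check. No drift, no exceptions."""
        return all(check(output) for check in self.checks)

    def amend(self, new_check) -> "Contract":
        """Feedback becomes an amendment: a new version, never a silent edit."""
        return Contract(self.name, self.version + 1,
                        self.invariant, self.checks + (new_check,))

# A hypothetical clinical-summary contract.
summary_v1 = Contract(
    name="discharge-summary",
    version=1,
    invariant="every cited medication must appear in the source record",
    checks=(lambda o: set(o["medications"]) <= set(o["source_medications"]),),
)

output = {"medications": ["metformin"],
          "source_medications": ["metformin", "lisinopril"]}
assert summary_v1.validate(output)

# A flagged failure becomes feedback; feedback becomes version 2.
summary_v2 = summary_v1.amend(lambda o: len(o["medications"]) > 0)
```

The point of the sketch is the shape, not the specifics: the prompt that produced the output can change daily, but the contract it must satisfy is fixed, versioned, and survives the person who wrote it.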
The Treadmill
On March 26, Fortune revealed that Anthropic’s unreleased Mythos model, codenamed Capybara, had leaked through an unsecured data cache. The model is described as “currently far ahead of any other AI model in cyber capabilities” and a “step change” beyond anything previously deployed. Three days later it is already old news, because the treadmill never stops.
At what point do these new models saturate? How does medicine, still running on fax machines and routinely targeted by cyberattacks, keep up? It cannot keep up by chasing models. A hospital that deploys GPT-4 with prompt engineering, migrates to Claude with different prompt engineering, then migrates again has rebuilt its AI surface three times with nothing to show for it but three sets of ungoverned prompts and zero audit trails. The model changed. The institution learned nothing.
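The alternative to rebuilding the surface with every migration is to put the contract, not the prompt, at the institutional boundary. The sketch below assumes hypothetical stand-in models and a toy check; the structure, not the names, is the point: the model behind the call is swappable, while the contract and the audit trail are what the institution keeps.

```python
import hashlib
from datetime import datetime, timezone

def governed_call(model_fn, prompt: str, contract_checks, audit_log: list) -> dict:
    """Run any model behind the same contract. The model is interchangeable;
    the contract and the audit trail persist across migrations."""
    output = model_fn(prompt)
    passed = all(check(output) for check in contract_checks)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": getattr(model_fn, "__name__", "unknown"),
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest()[:12],
        "passed": passed,
    })
    if not passed:
        raise ValueError("output rejected: contract violation")
    return output

# Stand-in models; in practice these would wrap GPT-4, Claude, or whatever ships next.
def model_a(prompt): return {"answer": "42", "citations": ["chart"]}
def model_b(prompt): return {"answer": "42", "citations": ["chart"]}

checks = [lambda o: bool(o.get("citations"))]   # invariant: no uncited answers
log = []
governed_call(model_a, "summarize this admission", checks, log)
governed_call(model_b, "summarize this admission", checks, log)  # model changed; contract did not
```

After two migrations, the institution in this sketch has one contract and one audit log, instead of two prompt libraries and nothing.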
Meanwhile, the outputs of one model must govern the inputs of another for a coordinated system to function. Nothing like that exists today. Models are proprietary. Data is held hostage. Laws are ignored. Fines are levied. Rinse and repeat.
The Liability
A Nature study found LLMs are “highly vulnerable to adversarial hallucination attacks during clinical decision support.” A separate npj Digital Medicine study measured hallucination rates up to 64% without mitigation in clinical text summarization. Even with structured prompting, the best models still hallucinated potentially harmful information 2.3% of the time. In healthcare, 2.3% is a body count.
Clinicians cannot rely on LLM outputs they cannot prove, because a shrewd lawyer will destroy them in a malpractice suit. The first major HIPAA Security Rule update in 20 years landed in January 2025, eliminating the distinction between “required” and “addressable” safeguards. 67% of healthcare organizations admit they are not ready. The FDA cleared 295 AI/ML medical devices in 2025 alone, each requiring data lineage, bias analysis, and a Software Bill of Materials. Prompts satisfy none of these requirements. Contracts do.
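What does an evidence chain that satisfies an auditor actually require? At minimum, tamper-evidence: a record of every output and every override that cannot be quietly rewritten after the fact. Here is one minimal way to get that property, using a hash chain; the event names are hypothetical and this is a sketch of the principle, not a compliance product.

```python
import hashlib
import json

def append_evidence(chain: list, record: dict) -> list:
    """Append a record to a tamper-evident evidence chain. Each entry hashes
    its predecessor, so any retroactive edit breaks every link after it."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps(record, sort_keys=True)
    chain.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    })
    return chain

def verify(chain: list) -> bool:
    """Recompute every link; a single altered record fails the whole chain."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_evidence(chain, {"event": "model_output", "validated": True})
append_evidence(chain, {"event": "clinician_override", "reason": "dose check"})
assert verify(chain)

chain[0]["record"]["validated"] = False   # tamper with history
assert not verify(chain)
```

A prompt library cannot produce this property at all; a governed contract produces it as a side effect of every call.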
Meanwhile, the government itself cannot decide what AI should be allowed to do. The Pentagon blacklisted Anthropic after the company refused to remove guardrails against autonomous weapons and mass domestic surveillance. A federal judge blocked the order in a 43-page ruling, calling it “Orwellian.” If the makers of the models and the government that regulates them cannot agree on what guardrails should exist, what exactly is a hospital supposed to prompt its way out of?
The Hackathon Proof
A lawyer won Anthropic’s hackathon by building a permit app in six days. A cardiologist built clinical follow-up guidance. A road technician built an infrastructure appraisal tool. None were developers. All three proved that domain expertise beats coding skill.
A lawyer can build a permit app in six days. A law firm needs a permit system that works for six years. The distance between those two is not more prompting. It is governance.
The Close
The DRGPT 2026 AI Healthcare Index analyzed over 150 companies and found 74% of healthcare AI tools lack clinical validation. The market is projected to reach $543 billion by 2035 on a foundation of ungoverned prompts and unvalidated outputs. That is not an industry. It is a liability waiting for a plaintiff.
Healthcare does not reward disruption. It rewards alignment. Enterprise readiness is the differentiator: contracts that survive the people who wrote them, evidence chains that satisfy the regulators who audit them, learning systems that compound institutional knowledge instead of losing it at the end of every session.
Entropy guarantees that any system devolves into chaos unless its governance evolves with it. Chaos burns tokens, and tokens are dollars for big tech.
Stop optimizing your prompts. Start governing your AI.
Sources
| Claim | Source |
|---|---|
| Anthropic Mythos (Capybara) model leak via unsecured data cache | Fortune, Mar 26 2026 |
| LLMs vulnerable to adversarial hallucination in clinical decision support | Nature Communications Medicine, 2025 |
| Hallucination rates up to 64% in clinical text summarization | npj Digital Medicine, 2025 |
| Best models hallucinate harmful info 2.3% of the time | Suprmind AI Hallucination Benchmarks |
| First HIPAA Security Rule update in 20 years; 67% of orgs not ready | Foley Hoag, Feb 2026 |
| 295 FDA AI/ML device clearances in 2025 | Innolitics 2025 Year in Review |
| Pentagon blacklisted Anthropic over AI safety guardrails | CNN Business, Feb 24 2026 |
| Federal judge blocked Pentagon order as “Orwellian” | Washington Post, Mar 26 2026 |
| Hackathon proof: domain expertise vs governance gap | The Lawyer Who Won, Hadley Lab |
| 74% of healthcare AI tools lack clinical validation | DRGPT 2026 AI Healthcare Index |
Figures
| Context | Type | Data |
|---|---|---|
| post | comparison | left: Prompt (ephemeral, unversioned, unauditable), right: Contract (governed, versioned, auditable) |