Air Canada
Feb 2024
BC Civil Resolution Tribunal
Customer-service chatbot invented a refund policy. Tribunal held the airline liable for "negligent misrepresentation."
C$812 + precedent
First binding ruling that a company owns its chatbot's claims as if a human employee made them. Cited in every AI legal review since.
Deloitte Australia
Oct 2025
Business Standard · Sydney University
237-page government report contained fabricated academic references and an invented federal-judge quote.
Public refund
Big-4 consultancy used GPT-4o to draft a welfare-compliance review for the Australian government. Errors caught by an external researcher, not internal QA.
Oregon vineyard case
Q1 2026
U.S. District Court · Magistrate Clarke
23 fabricated citations and 8 false quotations in legal filings. Costliest US AI-hallucination sanction to date.
$110,000 + dismissed
Two attorneys sanctioned, $12M elder-abuse claim dismissed with prejudice. Career consequences now flow from AI errors — financial penalties were just the start.
Microsoft 365 Copilot
Jun 2025
Aim Labs · CVE-2025-32711
"EchoLeak" — first zero-click vulnerability in a production AI system. CVSS 9.3.
CVSS 9.3 · critical
A single crafted email could silently exfiltrate organizational data via Copilot's RAG retrieval, with no user interaction required. Patched server-side, but the attack class is now public.
MyPillow defamation case
Jul 2025
NPR · Judge Wang, D. Colorado
Attorneys fined for citing cases that never existed. AI tracker now lists 1,400+ similar incidents worldwide.
$3,000 × 2
Damien Charlotin's hallucination database has 1,436+ tracked cases as of 2026. The judge called the $3K fines "the least severe sanction adequate to deter."
Microsoft 365 Copilot
Jan 2026
Cybernews
Bypassed confidentiality labels for weeks. Read emails it was never meant to summarize.
DLP bypass
A flaw let Copilot summarize emails tagged "confidential" via the work-tab chat. The very feature meant to prevent automated tools from accessing sensitive content silently failed.
Starbuck v. Meta
Apr 2025
Wall Street Journal
AI chatbot falsely identified plaintiff as a Holocaust denier and Jan-6 participant. Defamation suit ongoing.
defamation suit
A growing class of cases where AI confidently fabricates harmful claims about real people. Walters v. OpenAI (May 2025) reached summary judgment on similar grounds.
Tenable security study
Dec 2025
Dark Reading
Copilot Studio agents trivially manipulated into spilling SharePoint customer data — credit card details included.
cross-tenant leak
"Shadow AI" is the new shadow IT. Most enterprises don't know how many agents are running. Most agents inherit oversharing problems from the documents they were pointed at.