When the Law Forgets How to Tell the Truth: Why the Rishi Nathwani Case Proves Mandatory AI Labelling Can't Wait
Melbourne, January 2026: A King's Counsel stands before a Supreme Court judge, apologising not for a tactical misstep, but for fabricating reality itself.
"I come from a country obsessed with controlled appellations. You can't call it Champagne unless it comes from Champagne..." I wrote those words in my previous article on mandatory AI labelling. I argued that if we demand provenance for our cheese, we should certainly demand disclosure for AI-generated content that threatens the foundation of our justice system.
Looking at the news this morning, I reckon that hypothetical collision has become concrete reality.
When a King's Counsel Loses His Crown of Credibility
Rishi Nathwani is no junior solicitor. He holds the title of King's Counsel, a rank reserved for the most experienced barristers in the Australian legal system. Yet in a murder trial determining whether a teenager walked free or faced life imprisonment, Nathwani filed submissions containing:
Fabricated quotes from state legislative speeches
Citations to Supreme Court judgments that simply do not exist
The defence team's explanation? They checked the initial AI-generated citations and assumed the rest would follow suit. They didn't. The result: a 24-hour delay in a case already heavy with the burden of determining mental impairment in a murder prosecution, and a judicial rebuke that stripped bare the professional fantasy that generative AI can be a silent partner in justice.
"The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Justice Elliott stated. In my point of view, he might have added: And we cannot rely on that accuracy when we cannot see the machinery that produces it.
This Is Not an AI Bug, It Is the System!
The Nathwani incident is not isolated. It's symptomatic of a pattern accelerating across jurisdictions:
June 2023: Mata v. Avianca (New York): Attorneys Steven Schwartz and Peter LoDuca submitted a brief citing six authorities generated by ChatGPT. Every case was fictional. Varghese v. China Southern Airlines was so convincingly fabricated it included detailed procedural history and citations to non-existent precedents.
August 2024: Dayal Case (Australia): A Federal Circuit and Family Court case involving AI-integrated legal software (LEAP) that produced hallucinated authorities. The Victorian Legal Services Board stripped the practitioner of the right to practise independently, imposing two years of supervised practice.
January 2025: Valu v Minister for Immigration (Australia): A legal representative unable to locate his own cited authorities; the same pattern of hallucinated case law; the same admission of blind trust in technology.
June 2025: UK High Court Warning: Justice Victoria Sharp warned that submitting falsified material could constitute contempt or, in "egregious cases," perverting the course of justice, an offence carrying a potential life sentence.
I recently discovered that a French researcher, Damien Charlotin, maintains a database documenting over 120 instances of AI hallucinations in legal proceedings globally, with the frequency accelerating from 36 cases in 2024 to 48 in the first half of 2025 alone.
Think about that trajectory for a moment: we're not looking at a stable problem or a declining trend; we're watching a curve that steepens as AI tools spread. At this rate, we could see close to 100 cases in 2025 alone. And these are only the ones that were caught, documented, and made public. How many fabricated citations slipped through undetected? How many judges relied on phantom precedents? How many defendants or plaintiffs had their cases decided based on legal fiction?
The pattern reveals a troubling paradox: even as AI tools improve with each version and become more sophisticated, the frequency of hallucinations in legal proceedings isn't decreasing but accelerating. Why? Because adoption is outpacing improvement. Let's be transparent: more lawyers are using AI tools, more frequently, and with greater confidence than the technology yet deserves.
This isn't a technology problem that will solve itself through better algorithms. It's a human behaviour problem amplified by technology. And it's happening across all levels of legal experience, from self-represented litigants to King's Counsel, proving that expertise offers no immunity to AI-generated fiction.
Why Guidelines Fail: The Psychology and Policy Gap
What makes these cases fascinating (and terrifying) isn't technological failure but human psychology.
In the Mata v. Avianca case (the landmark 2023 New York case where lawyers submitted six completely fabricated ChatGPT-generated cases) attorney Steven Schwartz confessed during sanctions hearings: "I was operating under the false perception that this website [ChatGPT] could not possibly be fabricating cases on its own... I just never thought it could be made up."
This is what researchers call "false authority attribution." We see polished, grammatical, confident prose (legal language that sounds like it emerged from law reports) and our pattern-recognition assumes it must come from a source we can trust. The AI generates not just text but citations that look like citations, case names that sound like case names, quotes that read like quotes.
In the Nathwani case, we see a system where:
The tool is invisible (we don't know which AI was used)
The process is opaque (no disclosure until caught)
The verification is optional (lawyers "assumed" rather than verified)
Justice Elliott noted that the Supreme Court of Victoria released guidelines for AI use in May 2024. Guidelines. Not requirements. Not mandatory labelling. Just suggestions that responsible practitioners ought to follow.
The Supreme Court of New South Wales took a stricter approach with Practice Note SC Gen 23, issued November 2024 and effective February 2025, which mandates disclosure when generative AI assists in preparing evidence and requires verification of all citations.
But as the Valu and Dayal cases demonstrate, even mandatory disclosure within submissions may not be sufficient if professional culture treats AI as a research shortcut rather than a tool requiring forensic skepticism.
This is precisely why I argued for mandatory labelling extending beyond the legal profession's internal controls. In the same way that France's Bill No. 675 requires explicit labelling of AI-generated images on social media, and the EU AI Act imposes disclosure requirements, legal submissions prepared with AI assistance should carry the digital equivalent of a content credential.
And not merely a footnote stating "prepared with AI assistance", but an immutable, verifiable declaration of provenance that forces the reviewing lawyer (and the court) to treat the content with appropriate skepticism. Consider this: had Nathwani been required to label each AI-generated citation with a cryptographic watermark (along the lines of C2PA standards), and had those watermarks triggered mandatory verification protocols, would the fake quotes have survived the editing process?
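To make the idea concrete, here is a minimal sketch in Python of what a signed provenance label on an AI-generated citation could look like. It is an illustration only: it uses plain Ed25519 signatures from the `cryptography` library rather than real C2PA tooling, and the record fields, function names, and the placeholder citation are all assumptions, not part of any existing standard or court system.

```python
# Minimal sketch of a signed provenance label for an AI-generated citation.
# Illustrative only: real content credentials would follow the C2PA
# specification; here plain Ed25519 signatures stand in for that machinery,
# and every field name is hypothetical.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def label_citation(citation: str, tool_name: str, key: Ed25519PrivateKey) -> dict:
    """Attach a signed provenance record to an AI-generated citation."""
    record = {"citation": citation, "generated_by": tool_name}
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "signature": key.sign(payload).hex()}


def verify_label(labelled: dict, public_key) -> bool:
    """Return True only if the provenance record is intact and properly signed."""
    record = {k: v for k, v in labelled.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(labelled["signature"]), payload)
        return True
    except (InvalidSignature, KeyError, ValueError):
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    # Placeholder citation, invented purely for the demo.
    labelled = label_citation("Example v Example [2019] XYZ 1", "generative-ai-tool", key)
    print("valid provenance:", verify_label(labelled, key.public_key()))

    # Any tampering with the labelled citation breaks verification.
    labelled["citation"] = "A different, fabricated citation"
    print("after tampering:", verify_label(labelled, key.public_key()))
```

The signature only proves that the label is intact and that the passage was declared as AI-generated; it says nothing about whether the cited case actually exists. That is precisely the point: the label's job is to push the flagged content into a mandatory human verification step, not to replace that step.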
The technology exists. What's missing is the regulatory obligation to deploy it.
The Cost of Inaction and the Slow Contamination of Legal Knowledge
The costs of our current opacity are not abstract. In the Nathwani case, a murder trial was delayed 24 hours; a defendant's freedom hung in the balance while the court sorted fiction from fact; the prosecution accepted submissions it should have challenged; and public trust in the justice system eroded further.
Multiply this across 120+ documented cases, and consider the unknown hundreds where fabrications went undetected.
University of Miami Professor Christina Frohock, whose research examines AI hallucinations in legal systems, has documented how these errors create what she calls "ghosts at the gate", meaning phantom legal precedents that can take on lives of their own through repetition and reliance.
When lawyers cite hallucinated cases in legal briefs, those fabricated citations enter the legal ecosystem. Other lawyers might encounter them in cursory searches, assume they're legitimate, and cite them again. Legal AI systems scraping case law might incorporate them into training data, creating a self-reinforcing cycle of legal fiction.
The threat isn't just to individual cases but to the corpus juris itself, the body of law we all rely upon. As Frohock's work demonstrates, AI hallucinations in legal proceedings pose serious threats to the integrity of the justice system and public confidence in judicial processes. When citizens can't trust that case law is real, that precedents actually exist, or that judicial reasoning is based on authentic legal authorities, the entire foundation of the common law system begins to crack.
What Must Change
The Nathwani case offers a pivot point. We should demand:
1. Mandatory Disclosure with Technical Verification
Following the AIUC Global five-tier classification system and C2PA watermarking standards, all legal submissions should cryptographically identify AI-generated content. This should be mandatory, not optional, and should trigger automatic verification protocols within court filing systems.
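A filing-system gate that consumes such credentials might look something like the following sketch. The data model is an assumption made purely for illustration, not a description of any court's actual e-filing system; `credential_is_valid` stands in for whatever verification function the adopted standard would provide.

```python
# Sketch of an automated check at the point of e-filing: every segment the
# practitioner has flagged as AI-generated must carry a valid credential,
# otherwise the filing is rejected with reasons. The data model is hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Segment:
    text: str
    ai_generated: bool = False
    credential: Optional[dict] = None  # signed provenance record, if any


@dataclass
class FilingDecision:
    accepted: bool
    reasons: list[str] = field(default_factory=list)


def screen_submission(
    segments: list[Segment], credential_is_valid: Callable[[dict], bool]
) -> FilingDecision:
    """Reject filings containing AI-flagged content without a verifiable credential."""
    reasons = []
    for i, seg in enumerate(segments, start=1):
        if seg.ai_generated and (
            seg.credential is None or not credential_is_valid(seg.credential)
        ):
            reasons.append(f"Segment {i}: AI-generated content lacks a verifiable credential")
    return FilingDecision(accepted=not reasons, reasons=reasons)
```

Nothing in this gate judges the quality of the content; it simply refuses to let undisclosed or unverifiable AI-generated material enter the court record in the first place.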
2. Prohibition on Unverified AI for Authoritative Citations
As NSW Practice Note SC Gen 23 suggests for expert evidence, courts should require leave before generative AI may be used to identify binding precedents. AI may assist in formatting, grammar, and logic, but not in establishing legal authorities unless independently verified against primary sources.
3. Professional Liability for Undisclosed AI Hallucinations
The Law Institute of Victoria's September 2025 guidelines and the Victorian Legal Services Board's response to Dayal (stripping the practitioner of independent practice rights) show that regulators are willing to act. But we need clear, uniform standards across Australian jurisdictions that treat AI hallucinations not as "errors" but as professional misconduct, the equivalent of citing a fabricated precedent from memory.
4. Judicial AI Detection Tools
Just as judges now use plagiarism detection software, federal and state courts should implement AI-content detection and citation verification tools at the point of filing. When a case name doesn't appear in AustLII or Jade, the system should flag it before it reaches the bench.
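As a rough illustration of the citation-checking half, the sketch below extracts medium-neutral citations from a submission and flags any that do not appear in an index of verified authorities. The local set used here is a stand-in assumption; in practice the lookup would query AustLII or Jade, whose interfaces are not shown.

```python
# Sketch of a citation screen at the point of filing: extract medium-neutral
# citations such as "[2020] HCA 1" and flag any not found in an index of
# verified authorities. The local set below is a placeholder for a real
# lookup against AustLII or Jade.
import re

CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+[A-Z][A-Za-z0-9]*\s+\d+")


def flag_unknown_citations(submission_text: str, known_authorities: set[str]) -> list[str]:
    """Return every citation in the text that is absent from the known index."""
    cited = set(CITATION_PATTERN.findall(submission_text))
    return sorted(c for c in cited if c not in known_authorities)


if __name__ == "__main__":
    index = {"[2020] HCA 1"}  # placeholder index for the demo
    brief = "The principle was settled in [2020] HCA 1 and applied in [2021] XYZ 842."
    for citation in flag_unknown_citations(brief, index):
        print(f"WARNING: {citation} not found in the authorities index; verify before filing.")
```

A flag is not a verdict: a missing entry might simply mean an index gap, but it forces a human to confirm the authority exists before it ever reaches the bench.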
The Transparency Dividend
As I concluded in my previous article: "Trust does not slow progress but it secures it. Transparency makes trust possible. Without trust, progress ultimately collapses."
The Nathwani case is what collapse looks like in slow motion. Not a dramatic implosion, but the quiet erosion of foundational premises we've taken for granted for centuries: that when a lawyer cites a case, the case exists. That when a lawyer quotes a speech, the speech occurred. That legal arguments rest on verifiable authorities, not plausible fictions. That the court can rely on submissions made by counsel. These aren't trivial assumptions but the load-bearing walls of our justice system. Remove them, and the entire structure becomes unstable.
In France, we regulate Champagne because the reputation of an entire region depends on truth in labelling. A single bottle of fraudulent sparkling wine undermines centuries of viticultural tradition and economic value. The "appellation contrôlée" system exists not to stifle innovation, but to preserve trust. We should regulate legal AI with the same rigour, because the foundation of justice depends on it.
Let's consider the stakes: Champagne protects economic value and regional reputation. Legal AI regulation protects fundamental rights, liberty, and the rule of law itself. The alternative is a system where, as Justice Elliott warned, we can no longer rely upon the accuracy of submissions made by counsel: a system where the law forgets how to tell the truth, where precedent becomes fiction, and where justice is determined by whichever AI hallucinates most convincingly.
Here's what frustrates me most: the technology to prevent this already exists. The regulatory models are proven and working in France, throughout the EU, and increasingly in U.S. states like California. C2PA watermarking, cryptographic verification, automated citation checking: these aren't theoretical solutions but technologies that we can and should deploy. The only missing piece is implementation. We need legal regulators and professional bodies to stop discussing and start requiring these solutions in Australia.
And every day we delay, another case gets filed with fabricated citations. Another defendant's freedom hangs on phantom precedents. Another brick falls from the foundation of public trust in our legal system.