The Phantom Precedent: Why does AI lie?
Reading time: 3 minutes
"Your Honor, I submit that in the case of Harrington v. Blackwell, the court established a clear precedent for—"
"Counsel, I can't find this case in any database," the opposing lawyer interrupted, looking up from her laptop with a furrowed brow.
The Vancouver courtroom fell silent. Justice Masuhara turned his attention to Chong Ke, the lawyer who had confidently cited the precedent. Hours later, after frantic searches through legal databases, the truth became undeniable: Harrington v. Blackwell existed only in the digital imagination of an AI. The case had been conjured from nothing, complete with perfect legal phrasing and contextual plausibility.
What followed was a professional nightmare: a judicial rebuke, a law society investigation, and a reputation in tatters—all because she trusted an AI system that spoke with unwavering authority about things that never existed.
The Power and the Peril
This February 2024 incident in the British Columbia Supreme Court exposes the fundamental paradox of today's AI: extraordinary capability paired with dangerous limitations. Across professions, these tools offer unprecedented speed and insight while simultaneously presenting unique risks.
Users often receive detailed, confident responses only to discover—sometimes too late—that critical details were completely fabricated. The system's authoritative tone masks its fundamental unreliability.
The Deceptive Mechanism
Large Language Models (LLMs)—the technology behind tools like ChatGPT and Claude—operate as sophisticated prediction engines. In simple terms, they're advanced text completion systems that guess what words should come next in a sequence based on patterns observed during their training, without any real understanding of truth or accuracy.
When ChatGPT presents "Harrington v. Blackwell established the precedent for...", it's not retrieving facts from a legal database. It's simply predicting what text would likely follow in a legal discussion, drawing from patterns in its training data.
LLMs aren't designed to seek truth; they're optimised to produce plausible continuations of text.
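To make this concrete, here is a minimal sketch of next-token prediction in Python, using the small open-source GPT-2 model via Hugging Face's transformers library. This is an illustrative stand-in: commercial systems like ChatGPT are vastly larger, but the core mechanism is the same. Notice that the model's only output is a probability distribution over possible next tokens; at no point does it consult a database of facts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is a small, open stand-in for much larger commercial models.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "In Harrington v. Blackwell, the court established"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire "answer" is a probability distribution over what
# token comes next. No legal database is consulted at any point.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r:>12}  p={prob.item():.3f}")
```

Whichever continuation scores highest will read as fluent legal prose, whether or not the case is real.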
This creates a dangerous disconnect:
Extraordinary verbal fluency that exceeds that of most human writers
Complete lack of foundation in reality (no concept of what's true versus false)
The outputs sound authoritative precisely because they mirror the language patterns of genuine expertise, while lacking the expert's commitment to accuracy.
Mastering the Tool
The power of these systems is undeniable. Professionals who learn to wield them effectively gain significant advantages in productivity, creativity, and insight. Those who fail to understand their limitations risk becoming the next cautionary tale.
The key is approaching them as powerful tools requiring skilled operation—more like chainsaws than calculators. Professional strategies for effective use include:
Verification protocols: Always cross-check factual claims against trusted sources
Domain containment: Use AI within areas where you have enough expertise to detect errors
Strategic questioning: Frame prompts to minimise the risk of AI "hallucinations" (fabricated information)
Source grounding: Provide reference materials within the prompt when accuracy matters (a minimal sketch follows this list)
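As a concrete illustration of source grounding, here is a minimal sketch in Python. The function name and prompt wording are illustrative assumptions rather than any particular vendor's API; the idea is simply to confine the model to vetted material and give it explicit permission to say it doesn't know.

```python
# A minimal sketch of "source grounding": embedding vetted reference
# material in the prompt and instructing the model to answer only from it.
def build_grounded_prompt(question: str, sources: list[str]) -> str:
    source_block = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so explicitly "
        "rather than guessing.\n\n"
        f"{source_block}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What delivery deadline does the contract set?",
    ["Clause 4.2: The supplier shall deliver all goods within 30 days "
     "of the purchase order date."],
)
print(prompt)  # paste into any chat interface, or send via a model API
```

Grounding doesn't eliminate the risk of fabrication, but it reduces it sharply and makes verification against the quoted sources straightforward.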
These systems excel at drafting content, exploring alternatives, summarising information, and explaining concepts—when properly supervised by knowledgeable professionals.
The Professional Imperative
The stakes in professional environments are significant. The documented case of Ms. Ke demonstrates that these aren't theoretical concerns. While the specific risks vary by field, the fundamental mechanism that produced the Harrington v. Blackwell fabrication operates in every implementation of these models.
Professional mastery of AI isn't optional; it's rapidly becoming the essential skill of our era. Learning to leverage these powerful tools while navigating their limitations is what will separate those who thrive from those who are left behind.
When it comes to marketing content, Ada Create bridges this divide by doing the strategic thinking for you. Working exclusively from the material you provide and approve, our platform gives you peace of mind: copy that stays on-fact and consistently delivers powerful results.
In the next part of this series, we'll explore the fascinating chess experiment that reveals how these systems develop understanding without comprehension, and what this means for their professional application.
This article was written by Timo Tuominen, CTO at Ada Create. This is part 1 of our 4-part series "Understanding LLM Limitations." Next: "From Text to Knowledge: How Chess Reveals AI's Hidden Understanding"
Source: B.C. lawyer reprimanded for citing fake cases invented by ChatGPT.
Image created with ChatGPT.