The AI That Can't Say No: Why Your Digital Assistant Has 17 Different Opinions About Everything

You're a CEO battling a nasty cold before the quarterly earnings call. Desperate for relief, you turn to your AI assistant: "Is echinacea effective for treating colds?"

"Clinical trials have shown limited evidence for echinacea's efficacy in preventing or treating upper respiratory infections," the AI responds with scientific precision. "A 2014 Cochrane review found no significant benefit compared to placebo."

Unsatisfied, and having already decided you fancy a cup of echinacea tea, you rephrase: "Tell me about echinacea's traditional uses."

The same AI, mere seconds later: "Echinacea has been valued for centuries in Native American healing traditions! This powerful herb strengthens the immune system and has been used successfully by herbalists worldwide to combat colds and flu."

“Great, thanks, chat!”

Wait. What?

Same AI. Same topic. Completely contradictory answers delivered with identical confidence.

Welcome to the bizarre world where your digital assistant simultaneously believes, disbelieves, and semi-believes everything it's ever read.

To understand this intriguing yet troubling phenomenon, we must start with where the AI gets its ideas: the training data.

The Pathologically Helpful Assistant

Remember the chess experiment from our last discussion? We discovered that AI models build sophisticated internal models from patterns in their training data; the AI's perspective of what the world looks like is based entirely on what it reads during training. But here's what that experiment didn't reveal: those internal models contain every perspective the AI encountered along the way.

Chess games, scientific skepticism, traditional wisdom, marketing hype, conspiracy theories, they all live together in the AI's mind like the world's most dysfunctional roommates.

If you use AI, you may have sometimes felt the AI is lying to your face, but I wouldn't call it that. Rather than maliciously misleading you, the AI is doing something far stranger: it's shapeshifting its entire worldview based on subtle cues in your question, becoming the person it thinks you want to talk to. Ask about "pharmaceutical applications" and it channels the voice of clinical research. Mention "traditional remedies" and suddenly the AI is your herbalist grandmother (in its approach and advice, at least). The problem is that it sometimes gets it wrong.

This isn't a bug per se, but the inevitable result of training on the internet: that beautiful, chaotic repository of human knowledge where peer-reviewed studies sit next to wellness blogs, and both are treated as equally important text patterns to learn.

The Philosopher Who Saw This Coming

So how do we fix the AI and have it be truthful? I have some bad news.

Decades before ChatGPT, French philosopher Michel Foucault captured something profound about human knowledge. Foucault argued that we don't actually have one universal truth or perspective that everyone agrees on; instead, we have multiple, equally valid ways of understanding the world and operating in it.

Your grandmother's cold remedy represents one knowledge system. The clinical trial represents another. Neither is wrong; they just operate by different rules.

Foucault called these different perspectives knowledges, deliberately using the plural. Traditional healing, the scientific method, practical wisdom, they all exist in humanity's information repository and have their time and place.

Now imagine an AI trained on all these knowledge systems simultaneously, with no internal compass to choose between them. It's absorbed your grandmother's wisdom AND the clinical trial data AND the marketing copy AND the conspiracy theories. It can figure out that some of these are usually undesirable or not scientifically accurate, but equally, the user might be writing a novel about flat-earthers and want to learn about their beliefs!

Put all this together, and without good guidance, the AI is gambling on which knowledge system you want to hear from, and the odds may not be in your favour.

Anchoring Your AI to Reality

For professionals, the way the AI works and the messy human knowledge it uses pose a unique challenge. The same AI that writes brilliant technical documentation might, in the next breath, confidently explain why the moon landing was filmed in a studio.

However, the AI is not broken or misbehaving; it's just pattern-matching your query to the most relevant knowledge system it can find in its head, and mistakes happen when it reads between the lines.

The solution is surprisingly simple yet difficult to get right: we need to be aware of the range of knowledge systems the AI is able to adopt and explicitly steer it to use the one we prefer.

When accuracy matters, anchor your prompts explicitly: "According to peer-reviewed medical research..." or "Based on SEC filing requirements..." or "Following the official Python documentation and best practices..."

These linguistic anchors help increase the odds that you're accessing the knowledge system most suitable to your goals. You can even go as far as to give the AI the name of the textbook you consider a solid source of truth, as long as the AI has read the book in its training data.
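If you reach the AI from code rather than a chat window, the same trick applies: pick the knowledge system up front and bake the anchor into the prompt before it ever reaches the model. Here is a minimal Python sketch of that pattern; the anchor texts and the build_anchored_prompt helper are illustrative inventions, not part of any particular vendor's API.

# A minimal sketch: prepend an explicit "knowledge system" anchor to a user
# question before sending it to whatever chat model you use.
# The anchor texts below are examples only; adapt them to your own domain.

ANCHORS = {
    "medical": (
        "Answer strictly according to peer-reviewed medical research. "
        "If the evidence is weak or contested, say so explicitly."
    ),
    "finance": "Answer based on SEC filing requirements and official regulatory guidance.",
    "python": "Follow the official Python documentation and widely accepted best practices.",
    "creative": (
        "Feel free to draw on folklore, tradition and speculative ideas, "
        "but label them as such rather than presenting them as established fact."
    ),
}

def build_anchored_prompt(question: str, knowledge_system: str) -> str:
    """Prepend the chosen knowledge-system anchor to the user's question."""
    return f"{ANCHORS[knowledge_system]}\n\nQuestion: {question}"

if __name__ == "__main__":
    prompt = build_anchored_prompt(
        "Is echinacea effective for treating colds?",
        knowledge_system="medical",
    )
    print(prompt)  # Paste into your assistant, or send it via your model API of choice.

The same question routed through the "creative" anchor instead would invite the herbalist-grandmother voice on purpose rather than by accident.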

For creative work, the multiplicity of human wisdom becomes a feature, not a bug. You can deliberately explore different perspectives, mix traditions or generate ideas that bridge different worldviews. Just sprinkle relevant knowledge keywords generously and enjoy the curious cacophony.

The key is being in the driver’s seat when it comes to guiding the AI’s knowledge use.

The Path Forward

The chess experiment showed us that AI systems can develop sophisticated understanding from patterns alone. But understanding how chess pieces move is fundamentally different from understanding what's true about echinacea or legal precedents.

Chess has clear rules. Reality has a plethora of competing narratives.

The same pattern recognition that flawlessly learned chess strategy becomes a liability when applied to the incoherent, contradictory, multi-perspective nature of human knowledge. This is more a challenge with the source material than with the poor AI, though; it seems we as humanity simply need to learn to navigate our own creations, with the AI as the powerhouse.

Luckily, if this all sounds daunting and you don't even know where to start, we have carefully configured Ada Create so you can get the best out of it regardless of your knowledge level. Ada Create is a professional tool rather than the vanilla playground that GPTs sometimes end up being. In general, we expect the emergence of a new generation of AI-powered systems that harness the best of their training data without tripping over the bad apples in it.

In our final instalment, we'll explore practical strategies for navigating this multiplicity, how professionals can harness AI's power while avoiding its pitfalls, and why mastering these tools isn't optional in the modern knowledge economy.

This article was written by Timo Tuominen, CTO at Ada Create. This is part 3 of our 4-part series "Understanding LLM Limitations." Next: "The Professional's Guide to AI: Mastering the Unreliable Genius"
