Why Hallucinations Can Never Be Eliminated in LLMs


*But we CAN develop technologies to minimize them.*

There are software companies promising hallucination-free LLMs. Unfortunately, hallucination-free LLMs are impossible. I’ll explain below why hallucinations are inherent in how LLM technology works.

Types of Hallucinations

1. Data Quality Hallucinations: These occur when LLMs are trained on incorrect information. While using only trusted sources can mitigate this, it doesn’t address the root cause of all hallucinations.

2. Inherent Hallucinations: These arise from the way LLMs function, even with 100% accurate data. This is our focus today.

Understanding Inherent Hallucinations

The core mechanism of an LLM is predicting the next word in a sequence: given the context so far, the model assigns a probability to every word in its vocabulary and then picks (or samples) a continuation.
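To make this concrete, here is a minimal toy sketch in Python of that next-word step. The vocabulary and the scores are invented purely for illustration; a hand-made table of logits stands in for a real model, softmax turns the scores into probabilities, and one word is sampled.

```python
# Toy sketch of next-word prediction (illustrative only, not a real LLM).
import math
import random

vocab = ["sky", "is", "blue", "green", "loud"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign for the word after "the sky is".
logits = [0.1, 0.2, 4.0, 1.0, -2.0]
probs = softmax(logits)

for word, p in zip(vocab, probs):
    print(f"{word:>6}: {p:.3f}")

# Sampling picks a word in proportion to its probability, so even a
# low-probability (wrong) continuation like "green" is sometimes chosen.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print("sampled next word:", next_word)
```

Note that sampling in proportion to probability is exactly why a plausible-but-wrong word can still come out: the model never "knows" an answer, it only ranks continuations.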

In English, there are about 40,000 common words. Considering just one context word and the next word already gives 1.6 billion combinations (40,000 x 40,000). Add one more word and the count soars to 64 trillion. For a 100-word paragraph, the number of possible word sequences is roughly 1.6 x 10^460 (a 1.6 followed by 460 zeros), a number far exceeding the total grains of sand on Earth’s beaches or the stars in the observable universe.
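These counts are easy to verify with a few lines of Python (using the same assumed 40,000-word vocabulary as above):

```python
# Quick arithmetic check of the combination counts above.
import math

VOCAB = 40_000

print(f"{VOCAB**2:,}")   # 1,600,000,000 -- ~1.6 billion two-word pairs
print(f"{VOCAB**3:,}")   # 64,000,000,000,000 -- ~64 trillion three-word sequences

# 40,000^100 is too large to print directly, so count its digits instead.
digits = int(100 * math.log10(VOCAB)) + 1
print(f"40,000^100 has about {digits} digits")  # ~461 digits, i.e. ~1.6 x 10^460
```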

Explicitly computing or storing a probability for every one of these sequences is impossible with today’s technology, or any foreseeable technology.

Compression and Information Loss

LLMs use the transformer architecture (the “T” in Generative Pre-trained Transformer, GPT) to “compress” this information into a fixed number of parameters. However, this compression inevitably discards information. Current research suggests that lossless compression of all these combinations isn’t possible.

As a result, LLMs always operate on partial information, which leads to inherent hallucinations. If an LLM could always provide correct answers without hallucinating, it would imply that lossless compression is achievable, which it isn’t.
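A back-of-envelope comparison shows the gap. The figures below (a 100-billion-parameter model at 16 bits per parameter) are illustrative assumptions, not measurements of any particular model:

```python
# Illustrative comparison: information in a full next-word table
# vs. the storage available in a large model's parameters.
import math

VOCAB = 40_000
CONTEXT_WORDS = 100

# Number of distinct 100-word contexts, expressed as a logarithm
# (the number itself is far too large to write out).
log2_contexts = CONTEXT_WORDS * math.log2(VOCAB)  # ~1,529 bits just to index ONE context

# Assumed model size: ~100 billion parameters at 16 bits each.
params = 100e9
bits_in_model = params * 16

print(f"bits to merely index one context:     {log2_contexts:,.0f}")
print(f"total bits in a 100B-parameter model: {bits_in_model:,.0f}")
# A full table would need ~2^1529 entries; no realistic parameter count
# comes close, so the model must store a lossy, compressed summary instead.
```

The point is not the exact numbers but the mismatch in scale: whatever the parameter count, the model is a lossy summary of the space of possible sequences.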

Scaling LLMs

Increasing the size of LLMs doesn’t solve the problem. The sheer number of possible sequences makes it physically impossible to capture all of the information, no matter how many parameters are added. Moreover, it’s unclear whether making LLMs ever larger continues to reduce hallucinations.

Mitigation Strategies

Since we must accept that hallucinations will occur in every LLM, we can focus on minimizing them. Techniques include:

Eight-Step Framework for Controlling Hallucinations: https://healthcareconsumer.ai/controlling-hallucinations-in-llms/.
Collaborative LLMs: Having multiple LLMs cross-check one another’s answers (a minimal sketch follows this list): https://arxiv.org/html/2406.03075v1.
Human Feedback: Fine-tuning LLMs with human feedback on their outputs.
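As promised above, here is a minimal sketch of the cross-checking idea. The `ask_model_a` and `ask_model_b` functions are hypothetical placeholders for whatever LLM APIs you actually use; the agreement threshold is likewise an arbitrary illustrative choice.

```python
# Sketch of cross-checking answers from multiple LLMs (placeholders, not real APIs).
from collections import Counter
from typing import Optional

def ask_model_a(question: str) -> str:
    # Placeholder: call your first LLM here.
    return "Paris"

def ask_model_b(question: str) -> str:
    # Placeholder: call your second LLM here.
    return "Paris"

def cross_checked_answer(question: str, n_rounds: int = 3) -> Optional[str]:
    """Ask several models repeatedly and return an answer only if they agree."""
    answers = []
    for _ in range(n_rounds):
        answers.append(ask_model_a(question))
        answers.append(ask_model_b(question))
    answer, count = Counter(answers).most_common(1)[0]
    # Require a clear majority; otherwise flag for human review
    # instead of returning a possibly hallucinated answer.
    if count >= 0.7 * len(answers):
        return answer
    return None

print(cross_checked_answer("What is the capital of France?"))
```

Agreement between models does not guarantee correctness, but disagreement is a cheap, useful signal that an answer deserves human review.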

By understanding and applying these strategies, we can better manage hallucinations in LLMs, improving their reliability and accuracy.

