Hallucinations in ChatGPT

These days you can’t attend a talk or read an article about ChatGPT without someone asking a question about hallucinations or saying “Beware of Hallucinations”. Sadly, in most cases the warning is presented without an explanation of what hallucinations are and what to do about them.

So let’s discuss what hallucinations actually are and what concrete steps we can take to manage them in AI models like ChatGPT.

What are hallucinations?

Hallucinations are responses from AI models like ChatGPT that sound plausible but are factually incorrect.

Example: “George Washington was born in 1809”.

The actual answer, from the National Archives: “George Washington was born in Virginia on February 11, 1731, according to the then-used Julian calendar. In 1752, however, Britain and all its colonies adopted the Gregorian calendar which moved Washington’s birthday a year and 11 days to February 22, 1732.”

Why do hallucinations happen?

Fundamentally, AI models work by learning relationships between different data elements.

In the above case, the AI model mistakenly built a relationship between George Washington and 1809 as his birth year. This could be because George Washington is a famous US president and another famous US president, Abraham Lincoln, was born in 1809.

There are many reasons why these relationships can become incorrect. Some of them are:

  1. The data provided is flawed: either it is wrong or it is incomplete. Examples:
    • If you feed in data from a biased source, or simply opinions, the AI model will assume the data is factual.
    • If you feed data to an AI model about only electric cars made by Ford, it may come to the wrong conclusion that Ford makes only electric cars.
  2. Bugs in the model. The logic used to train the model and calculate the relationships may be buggy.
  3. The reinforcement learning feedback is incorrect. If, during reinforcement learning, users keep affirming a wrong conclusion, the model will accept it. In the above example, if enough users keep affirming that George Washington was born in 1809, the AI model will think that’s true.
  4. Other reasons, such as the answer legitimately being different in different contexts.

How to minimize hallucinations?

It is, at least currently, not possible to eliminate hallucinations completely, but here are some things you can do to minimize them and their impact in AI models.

  1. Ensure that you are using data that is not inherently biased.
    • Establish a review process for data being fed into your AI model.
    • During this review process, experts should determine that the data is both factual AND complete.
  2. Transparency
    • Show the user what data was used to arrive at an answer and enable them to go to the data sources to double-check it.
  3. Grounding
    • You can “ground” your answer by using a few techniques:
      • Limit the data used to answer the question. Use the context of the question to choose the data used to determine the answer.
      • Use multiple LLMs to answer the question and compare the answers. If the answers are not consistent, then your answer is not trustworthy (a sketch of this cross-checking appears after this list).
      • Automatically fact check your answer against the most authoritative source. In the example above, I checked my answer about George Washington’s birthday against the National Archives, the authoritative source.
      • Note: This is a rapidly advancing area of AI, so new techniques are being developed all the time.
  4. Human Feedback
    • Incorporate human feedback on answers. Give your users the ability to accept or reject answers; this feedback can then be fed back into the model for reinforcement learning (see the sketch after this list).
  5. Reduce the scope of your model
    • In general, the wider the scope of your model, the more likely it is to hallucinate. Rather than one model that answers lots of questions, you can create specific models for specific domains.
    • For example, you can have one model that answers healthcare questions and another model that answers financial questions (a simple routing sketch follows below).
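
To make the multi-LLM grounding idea from step 3 concrete, here is a minimal Python sketch. The ask_model_* functions are hypothetical stand-ins for whatever client libraries you use to call real models, and the agreement threshold is an arbitrary value you would tune for your use case.

```python
# Minimal sketch of the "compare multiple LLMs" grounding technique.
# The ask_model_* functions below are hypothetical stand-ins; replace them
# with calls to your actual model clients.

from collections import Counter
from typing import Callable, List


def normalize(text: str) -> str:
    """Crude normalization so trivially different phrasings still match."""
    return " ".join(text.lower().split()).rstrip(".")


def cross_check(question: str, models: List[Callable[[str], str]],
                min_agreement: float = 0.66) -> dict:
    """Ask several models the same question and flag disagreement."""
    answers = [normalize(model(question)) for model in models]
    top_answer, top_votes = Counter(answers).most_common(1)[0]
    agreement = top_votes / len(answers)
    return {
        "answer": top_answer,
        "agreement": agreement,
        "trustworthy": agreement >= min_agreement,  # below threshold => don't trust it
        "all_answers": answers,
    }


# Hypothetical model callables, for illustration only.
def ask_model_a(q: str) -> str: return "February 22, 1732"
def ask_model_b(q: str) -> str: return "February 22, 1732"
def ask_model_c(q: str) -> str: return "1809"


if __name__ == "__main__":
    result = cross_check("When was George Washington born?",
                         [ask_model_a, ask_model_b, ask_model_c])
    print(result)  # two of three models agree, so this passes the 0.66 threshold
```

The same pattern extends naturally to the fact-checking technique: instead of another LLM, one of the “voters” can be a lookup against an authoritative source.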
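For step 4, here is a minimal sketch of capturing accept/reject feedback so it can later feed a reinforcement learning step. The log file name and the record fields are assumptions for illustration; your feedback pipeline will have its own schema.

```python
# Minimal sketch of logging user accept/reject feedback on answers.
# The JSONL file name and record fields are illustrative assumptions.

import json
import time

FEEDBACK_LOG = "feedback.jsonl"  # hypothetical location for collected feedback


def record_feedback(question: str, answer: str, accepted: bool) -> None:
    """Append one user judgment to a log that a training pipeline can consume."""
    record = {
        "timestamp": time.time(),
        "question": question,
        "answer": answer,
        "accepted": accepted,  # True = user accepted the answer, False = rejected it
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: a user rejects the hallucinated birth year from earlier in the post.
record_feedback("When was George Washington born?", "1809", accepted=False)
```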
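And for step 5, a small sketch of routing questions to narrower, domain-specific models instead of one broad model. The keyword-based router and the answer_* handlers are illustrative assumptions; a production system would more likely use a classifier, but the principle of refusing out-of-scope questions is the same.

```python
# Minimal sketch of routing questions to domain-specific models.
# The keywords and answer_* handlers are placeholders for illustration.

def answer_healthcare(question: str) -> str:
    return f"[healthcare model] answering: {question}"


def answer_finance(question: str) -> str:
    return f"[finance model] answering: {question}"


ROUTES = {
    "healthcare": (("symptom", "diagnosis", "medication"), answer_healthcare),
    "finance": (("loan", "interest", "portfolio"), answer_finance),
}


def route(question: str) -> str:
    q = question.lower()
    for domain, (keywords, handler) in ROUTES.items():
        if any(word in q for word in keywords):
            return handler(question)
    # Refuse out-of-scope questions rather than letting a broad model guess.
    return "Sorry, this assistant only covers healthcare and finance questions."


print(route("What interest rate should I expect on a car loan?"))
```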
