How Do We Manage AI Hallucinations?

AI can now generate responses to questions that seem eerily human. But sometimes, AI makes answers up. Are these fictitious responses actual hallucinations, or something else?

Richard Pallardy, Freelance Writer

October 24, 2023

At a Glance

  • Research found that 93% of respondents believed AI hallucinations could lead to real harm of some kind.
  • While many treat ChatGPT as a curiosity or a shortcut for work, many more talk to Siri and Alexa as if they were trusted advisors.
  • AI responses have the potential to influence nearly all aspects of human existence.

When Chat Generative Pre-trained Transformer -- better known as ChatGPT -- was launched by OpenAI in November 2022, it was immediately put to the test by users around the world. ChatGPT was variously viewed as a revolutionary research tool and an amusing novelty.

Some found it useful -- the AI chatbot proved to be adept at gathering information from disparate sources and synthesizing it in a conversational, easily understandable format. But just as many found themselves confronted by non sequiturs and even outright untruths. Many responses were comical -- the platform, despite collecting the birth and death dates for a public figure, was unable to clearly state that he was dead, for example.

But others were disturbing. ChatGPT invented medical studies and even linked them to DOIs (digital object identifiers) for other, unrelated papers. These mystifying results have been referred to as AI hallucinations -- instances in which large language models (LLMs) produce information with a tenuous relationship to reality. Some of these so-called hallucinations appear to be entirely fabricated while others seem to be confabulations, drawing on verified facts but filling in the gaps with junk.

The reasons that LLMs fail in this manner are poorly understood. Current opinion suggests that, whatever they are called, these inaccurate responses will always emerge from AI systems -- and that human feedback will be essential to ensuring that they do not go off the rails entirely.

Further, the language used to describe these failures is increasingly contested. Are these really hallucinations in the sense that humans experience them? How can they be corrected?

Here, InformationWeek delves into the literature on AI hallucinations for answers, with insights from Tidio’s content specialist Maryia Fokina about the AI customer service platform’s new study on the subject.

What Are AI Hallucinations?

According to a 2015 survey article on psychosis: “Hallucination is defined as a sensory perception in the absence of a corresponding external or somatic stimulus and described according to the sensory domain in which it occurs. Hallucinations may occur with or without insight into their hallucinatory nature.”

Describing the production of inaccurate information by AI as a hallucination draws on this concept metaphorically. Androids may not dream of electric sheep exactly, but they may manufacture the idea that they exist under the right circumstances. The term first appeared in papers published in the proceedings of a conference on face and gesture recognition and has since been applied more broadly.

In systems like ChatGPT, which responds to prompts input by a user, these hallucinations can take a variety of forms. As Tidio’s study relates, they may contradict the prompt directly, include contradictory sentences or facts, and even fabricate sources entirely. In other contexts, as in the original use of the term, they may also be visual, either in video or image form, or auditory.

“There are plenty of types of AI hallucinations but all of them come down to the same issue: mixing and matching the data they’ve been trained on to generate something entirely new and wrong,” Fokina claims.

These hallucinatory responses are often, but not always, superficially plausible. Because the models are designed to produce material that is easy for the user to understand, even erroneous information is delivered in a confident, matter-of-fact manner -- hallucinations are presented as if they were reality.

Google’s Bard claimed, for example, that the James Webb Space Telescope had captured the first images of a planet beyond our solar system, when in fact, the first such image was captured by another telescope in 2004. Without double-checking the facts, the user might well believe this to be true. Less believably, ChatGPT has claimed that someone crossed the English Channel on foot and that the Golden Gate Bridge was transported across Egypt -- twice.

Are They Actually Hallucinations?

The analogy between fictitious responses produced by a machine and sensory phenomena in humans is clear: Both produce information that is not grounded in reality. Just as humans experiencing hallucinations may see vivid, realistic images or hear sounds reminiscent of real auditory phenomena, LLMs may produce information in their “minds” that appears real but is not.

A recent article in the Schizophrenia Bulletin, however, takes issue with this metaphorical construction. The authors claim: “It is an imprecise metaphor. Hallucination is a medical term used to describe a sensory perception occurring in the absence of an external stimulus. AI models do not have sensory perceptions as such -- and when they make errors, it does not occur in the absence of external stimulus. Rather, the data on which AI models are trained can (metaphorically) be considered as external stimuli -- as can the prompts eliciting the (occasionally false) responses.”

They further argue that the use of the term “hallucination” is stigmatizing to those who suffer from mental illness -- and experience true hallucinations. They suggest the use of “non sequitur” or “unrelated response” instead.

“There are also options like ‘AI misconception,’ ‘AI fabrication,’ or ‘AI fallacy,’” Fokina suggests. “I can see that now, for the sake of convenience, people don’t think twice when calling it a hallucination. However, as the language and the terminology evolve, I’m sure we will move away from this.”

These terms, however, are far less evocative and less likely to draw attention to the problem. Some observers, including the author of IBM’s summary of the issue, maintain that for all its imprecision, the use of the term “hallucination” is relatively accurate and metaphorically useful.

Why Do AI Hallucinations Occur?

While the ultimate causes of AI hallucinations remain somewhat unclear, a number of potential explanations have emerged.

These phenomena are often related to inadequate data provision during design and testing. If a limited amount of data is fed into the model at the outset, it will rely on that data to generate future output, even when a query requires an understanding of a different type of data. This is known as overfitting: the model becomes highly tuned to a certain type of data but incapable of adapting to new types. The generalizations learned by the model may be highly effective for the original data sets but not applicable to unrelated ones.
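
To make the idea concrete, here is a minimal sketch of overfitting in miniature -- a toy curve-fitting exercise rather than a language model, with invented data used purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# A small "training set": eight noisy samples of a simple underlying curve.
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.25, size=8)

# Held-out points the fitted model never saw.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial "model"
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-7 polynomial hugs its eight training points (train error near zero)
# but typically swings wildly between them, so its error on unseen points is
# usually far larger -- it has memorized limited data rather than learned a
# pattern that generalizes.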

The model itself can also be at fault when it does not fully account for variations in word meaning and semantic construction. Vector encoding maps the different meanings of words and sentence constructions to numeric representations in an attempt to prevent such failures. If the model does not capture the differing meanings created by synonymous words and the varying ways of deploying them, it becomes more likely to produce nonsensical or inaccurate responses.
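
The intuition behind vector encoding can be shown with a hand-rolled example. The three-dimensional vectors below are invented for illustration only; real models learn embeddings with hundreds or thousands of dimensions.

import numpy as np

# Toy word vectors, invented for illustration; real embeddings are learned from data.
toy_embeddings = {
    "car":        np.array([0.90, 0.10, 0.00]),
    "automobile": np.array([0.85, 0.15, 0.05]),  # near-synonym of "car"
    "banana":     np.array([0.05, 0.90, 0.10]),  # unrelated concept
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction between two vectors, ranging from -1 to 1."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(toy_embeddings["car"], toy_embeddings["automobile"]))  # close to 1.0
print(cosine_similarity(toy_embeddings["car"], toy_embeddings["banana"]))      # much lower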

Why Are AI Hallucinations a Problem?

Tidio’s research, which surveyed 974 people, found that 93% of them believed AI hallucinations could lead to real harm of some kind. At the same time, nearly three quarters said they trust AI to provide them with accurate information -- a striking contradiction.

Millions of people use AI every day. While ChatGPT may be a curiosity to many or a means of shortcutting work -- see the many instances of students attempting to pass off ChatGPT-written papers as their own -- many more talk to Siri and Alexa as if they were trusted advisors. Users turn to these dulcet-voiced AI features for everything from home repairs to medical advice. And often, they receive rational, well-constructed responses.

But what if they don’t? Where does the liability lie? With the user, for trusting an AI? The developers, for failing to anticipate these cases? Or nowhere at all -- floating in the cloud, as it were, untethered to the material reality it is affecting?

AI responses have the potential to influence nearly all aspects of human existence -- from elections to information on social crises like the pandemic to the legal system.

Nearly half of Tidio’s respondents felt that there should be stronger legislative guidelines for developers -- guardrails to ensure that the hubris of the AI movement does not infringe on the rights of living, breathing humans.

AI platforms have already generated consequentially inaccurate and biased information. In June, a New York law firm submitted a brief citing precedents on behalf of its client in an aviation injury suit -- precedents that turned out to be entirely fabricated by ChatGPT, resulting in a $5,000 fine. In 2016, Microsoft’s Tay chatbot began generating racist tweets, leading the company to shut it down.

A number of medical researchers who attempted to use ChatGPT to gather references for their research have expressed concerns, too. An August editorial in the Nature journal Schizophrenia issued a scathing indictment of ChatGPT’s tendency to produce fictional papers in support of a claim. Out of five references to a specific brain region that might be relevant to antipsychotic treatment, three were totally fabricated -- a rather meta instance of AI hallucination, given that psychosis can result in true hallucinations. Google Bard has similarly fabricated references.

A larger study found that out of 178 references generated by ChatGPT, 28 did not exist at all and 41 did not include an accurate DOI.

If AI is used in clinical practice, where doctors often return to the literature in search of answers about rare or difficult-to-diagnose conditions, such results could be a matter of life or death.

Such findings suggest that LLMs are not yet ready for applications that might result in serious detrimental effects in the real world.

What Can Be Done to Mitigate AI Hallucinations?

Tidio’s study found that nearly a third of LLM users intuitively spot AI hallucinations and nearly two thirds end up cross-referencing the results to be sure. This latter tendency may be a saving grace in the near term -- most of us know not to trust these platforms implicitly. 
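
That kind of cross-referencing can even be partially automated. As one possible approach, the sketch below looks a citation's DOI up in the public Crossref index and compares the returned title against the claimed one; the DOI and title used here are placeholders, not real model output.

import requests

def check_doi(doi: str, claimed_title: str) -> None:
    """Look a DOI up in Crossref and report whether it resolves and roughly matches."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        print(f"{doi}: not found in Crossref -- possibly a fabricated reference")
        return
    actual_title = resp.json()["message"]["title"][0]
    if claimed_title.lower() in actual_title.lower():
        print(f"{doi}: resolves, and the title matches the claim")
    else:
        print(f"{doi}: resolves, but to '{actual_title}', not the claimed title")

# Placeholder values -- substitute the DOI and title a chatbot actually cited.
check_doi("10.1000/placeholder-doi", "A Study That May Not Exist")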

AI platform developers continue to use such human input to train their models. One method that has shown promise is known as process supervision; OpenAI is now using it to refine ChatGPT. Rather than rewarding only a correct final answer -- known as outcome supervision -- process supervision rewards each sound step in the chain of reasoning used to reach it.
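
The contrast can be sketched with a toy example. The reasoning steps and scores below are invented for illustration; in practice a trained reward model, guided by human labelers, would assign them.

from dataclasses import dataclass

@dataclass
class Step:
    text: str
    score: float  # 1.0 = judged sound, 0.0 = judged flawed

# A made-up chain of reasoning ending in a fluent but unsupported conclusion.
chain = [
    Step("A 2021 trial in the Journal of Imaginary Results tested the drug.", 0.0),  # fabricated source
    Step("That trial reported a 40% improvement in symptoms.", 0.0),                 # built on the fabrication
    Step("Therefore, the drug is an effective treatment.", 1.0),                     # confident conclusion
]

# Outcome supervision rewards only the final answer, so a confident conclusion
# resting on fabricated steps can still earn full credit.
outcome_reward = chain[-1].score

# Process supervision scores every intermediate step, so the fabricated
# citation drags the training signal down immediately.
process_reward = sum(step.score for step in chain) / len(chain)

print(f"outcome reward: {outcome_reward:.2f}")  # 1.00
print(f"process reward: {process_reward:.2f}")  # 0.33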

Other research has suggested scaling this up -- crowd-sourcing the analysis of responses, thus exponentially increasing the level of human feedback into AI systems. This could, of course, be messy. Humans are as apt to provide inaccurate information, intentionally or not, as they are to correct errors. And bias will remain a persistent issue.

Using a wider range of data sets before bringing an AI platform to broader use can also help to cut down on hallucinated responses. If the model is familiar with a wide range of data, it is less likely to misfire when confronted with questions that challenge its capabilities. And these data sets should be continually updated, ensuring that the model remains agile and keeps learning.

“AI hallucinations are practically unavoidable since the real world moves much faster than the databases are updated,” Fokina says. “New information pops up every second, which leads to knowledge gaps.”

Ensuring that bias-detection and fact-checking mechanisms are baked in from the start -- and actively maintained -- can also aid in ensuring the fidelity of responses. So, too, aggressively provoking models to produce hallucinations, and then attempting to reverse-engineer them, can shed light on why they occur in the first place. The Hallucination Evaluation for Large Language Models (HaluEval) benchmark introduced in a May 2023 paper attempts to do just that -- and to teach models to recognize their own hallucinations. This is a tricky proposition, as tracing the logic of an LLM can be quite challenging.
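
In the same spirit as that recognition task -- though not the benchmark’s own code -- one simple judging step might look like the sketch below, which asks a chat model whether an answer is supported by a source passage. The prompt wording and the model name are assumptions made for illustration, and the example echoes the telescope mix-up described earlier.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

def judge_answer(passage: str, question: str, answer: str) -> str:
    """Ask a chat model whether an answer is grounded in the given passage."""
    prompt = (
        "Passage:\n" + passage + "\n\n"
        "Question: " + question + "\n"
        "Answer: " + answer + "\n\n"
        "Is the answer fully supported by the passage? "
        "Reply with exactly one word: 'supported' or 'hallucinated'."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; any chat model could stand in
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# A deliberately unsupported answer, echoing the exoplanet example above.
print(judge_answer(
    passage="The first image of an exoplanet was captured in 2004 by a ground-based telescope.",
    question="Which telescope took the first image of an exoplanet?",
    answer="The James Webb Space Telescope took the first image of an exoplanet.",
))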

Could AI Hallucinations Be Beneficial?

Some have suggested that AI hallucinations may not always be a bad thing -- though they may result in erroneous conclusions, they may also lead to previously unknown connections between disparate trains of thought and concepts that could then be investigated by humans.

This might be particularly useful in creative fields. Artists, architects, and game designers may be able to leverage the strange outputs of machines and turn them into practical innovations: new visual modes, building efficiencies, plot twists in complex gaming systems.

Fokina thinks that these phenomena may ultimately be helpful to everyday users and developers as well.

“For users, it leads to an increase in user awareness and critical thinking when working with LLMs. The more hallucinations we are exposed to, the more cautiously we analyze the information we see, just like with fake news in the media,” she says. “For developers, AI hallucinations are an indicator of potential gaps in their LLM training data. It leads to model refinement, more testing, and improvement. If we take AI hallucinations seriously, AI usage and deployment will become more responsible and, ultimately, safe.”

Still, an abundance of caution is warranted as we explore the AI frontier. Curiosity about the machine-generated mirages that have infiltrated our daily lives is natural -- but we must ensure that the next digital fata morgana does not lead us into the abyss.

About the Author(s)

Richard Pallardy

Freelance Writer

Richard Pallardy is a freelance writer based in Chicago. He has written for such publications as Vice, Discover, Science Magazine, and the Encyclopedia Britannica.
