
ChatGPT creator seeking to eliminate chatbot ‘hallucinations’

Despite all of the excitement around ChatGPT and similar AI-powered chatbots, the text-based tools still have some serious issues that need to be resolved.

Among them is their tendency to make things up and present them as fact when they don’t know the answer to a query, a phenomenon that has come to be known as “hallucinating.” As you can imagine, presenting falsehoods as fact to someone using one of the new wave of powerful chatbots could have serious consequences.


Such trouble was highlighted in a recent incident in which an experienced New York City lawyer cited legal cases suggested by ChatGPT that turned out not to exist. The lawyer may face sanctions as a result.


Another incident received widespread attention in April when ChatGPT apparently rewrote history by saying that an Australian mayor had been jailed for bribery while working for a bank, when in fact he had been the whistleblower in the case.

To make its chatbot technology more reliable, OpenAI’s engineers have revealed that they’re currently focusing on improving the software to reduce, and hopefully eliminate, these problematic occurrences.

In a research paper released on Wednesday and picked up by CNBC, OpenAI said that chatbots “exhibit a tendency to invent facts in moments of uncertainty,” adding: “These hallucinations are particularly problematic in domains that require multi-step reasoning since a single logical error is enough to derail a much larger solution.”

To tackle these missteps, OpenAI’s engineers are working on ways for the company’s AI models to reward themselves for each correct step of reasoning on the way to an answer, instead of rewarding themselves only for the final conclusion. The approach could lead to better outcomes because it incorporates more of a human-like chain-of-thought procedure, according to the engineers.
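That step-by-step reward scheme is what researchers generally call process supervision, as opposed to outcome supervision, which only scores the final answer. The toy sketch below is purely illustrative and is not OpenAI’s code; it assumes a hypothetical verifier that checks each intermediate step, and simply shows how the two reward schemes treat the same flawed solution differently.

```python
# Illustrative sketch only: contrasts outcome supervision (reward the final
# answer) with process supervision (reward every correct intermediate step).
# The step format and the toy verifier are assumptions for this example.

from typing import Callable, List


def outcome_reward(steps: List[str], final_answer_correct: bool) -> List[float]:
    """Reward only the end result; intermediate steps get nothing."""
    rewards = [0.0] * len(steps)
    if steps:
        rewards[-1] = 1.0 if final_answer_correct else 0.0
    return rewards


def process_reward(steps: List[str], step_is_valid: Callable[[str], bool]) -> List[float]:
    """Reward each step a verifier judges correct, so one logical error
    is penalized where it happens rather than derailing the whole solution."""
    return [1.0 if step_is_valid(step) else 0.0 for step in steps]


if __name__ == "__main__":
    solution = [
        "48 / 2 = 24",       # valid step
        "24 + 10 = 35",      # arithmetic error
        "therefore x = 35",  # conclusion built on the error
    ]

    def toy_verifier(step: str) -> bool:
        """Toy check: verify arithmetic in lines of the form 'expr = result'."""
        try:
            expr, result = step.split("=")
            return abs(eval(expr) - float(result)) < 1e-9
        except Exception:
            return False

    print(outcome_reward(solution, final_answer_correct=False))  # [0.0, 0.0, 0.0]
    print(process_reward(solution, toy_verifier))                # [1.0, 0.0, 0.0]
```

Under outcome supervision the model learns nothing about where its reasoning went wrong; under process supervision the first step still earns credit while the faulty second step is flagged, which is the behavior the researchers say should help with multi-step reasoning.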

But some experts expressed doubt about the work, telling CNBC it’s of little use until it’s incorporated into ChatGPT, which in the meantime will carry on hallucinating. OpenAI hasn’t said if or when it might bring the research into its generative AI tools.

While it’s good to know that OpenAI is working on resolving the issue, it could be a while before we see any improvements. In the meantime, as OpenAI itself says, ChatGPT may occasionally generate incorrect information, so be sure to confirm its responses if you’re relying on them for any important tasks.

Trevor Mogg
Contributing Editor