Lawyer says sorry for fake court citations created by ChatGPT

There has been much talk in recent months about how the new wave of AI-powered chatbots, ChatGPT among them, could upend numerous industries, including the legal profession.

However, judging by what recently happened in a case in New York City, it seems like it could be a while before highly trained lawyers are swept aside by the technology.

The bizarre episode began when Roberto Mata sued a Colombian airline after claiming that he suffered an injury on a flight to New York City.

The airline, Avianca, asked the judge to dismiss the case, so Mata’s legal team put together a brief citing half a dozen similar cases in an effort to persuade the judge to let their client’s case proceed, the New York Times reported.

The problem was that the airline’s lawyers and the judge were unable to find any evidence of the cases mentioned in the brief. Why? Because ChatGPT had made them all up.

The brief’s creator, Steven A. Schwartz — a highly experienced lawyer in the firm Levidow, Levidow & Oberman — admitted in an affidavit that he’d used OpenAI’s much-celebrated ChatGPT chatbot to search for similar cases, but said that it had “revealed itself to be unreliable.”

Schwartz told the judge he had not used ChatGPT before and “therefore was unaware of the possibility that its content could be false.”

When creating the brief, Schwartz even asked ChatGPT to confirm that the cases really happened. The ever-helpful chatbot replied in the affirmative, saying that information about them could be found on “reputable legal databases.”

The lawyer at the center of the storm said he “greatly regrets” using ChatGPT to create the brief and insisted he would “never do so in the future without absolute verification of its authenticity.”

Looking at what he described as a legal submission full of “bogus judicial decisions, with bogus quotes and bogus internal citations,” and describing the situation as unprecedented, Judge P. Kevin Castel has ordered a hearing for early next month to consider possible penalties.

While impressive in the way they produce flowing text of high quality, ChatGPT and other chatbots like it are also known to make stuff up and present it as if it’s real — something Schwartz has learned to his cost. The phenomenon is known as “hallucinating,” and it remains one of the biggest challenges facing the developers behind these chatbots.

In another recent example of a generative AI tool hallucinating, an Australian mayor accused ChatGPT of creating lies about him, including that he was jailed for bribery while working for a bank more than a decade ago.

The mayor, Brian Hood, was actually a whistleblower in the case and was never charged with a crime, so he was rather upset when people began informing him about the chatbot’s rewriting of history.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…