
GPT-4 vs. ChatGPT: just how much better is the latest version?


GPT-4 is the latest language model for the ChatGPT AI chatbot, and despite just being released, it’s already making waves. The new model is smarter in a number of exciting ways, most notably its ability to understand images, and it can also process over eight times as many words as its predecessor. It’s a lot harder to fool now as well.

You’ll need to pay to use the new version, though; for now, it’s locked behind the ChatGPT Plus subscription.


How do you use GPT-4 and ChatGPT?


The easiest way to access ChatGPT is through the official OpenAI ChatGPT website. There’s a lot of interest in it at the moment, and OpenAI’s servers regularly hit capacity, so you may have to wait for a spot to open up. If that happens, refresh the page a few times and you should be able to get in.

If you don’t want to wait, you can sign up for a ChatGPT Plus subscription. That gives you priority access, and you should be able to use ChatGPT whenever you want if you’re a paid member. However, there is a waitlist for new subscribers at this time, so you may have to wait a little while anyway.

You’ll also need to sign up if you want to use GPT-4. The default, free version of ChatGPT currently runs GPT-3.5, a modified version of the GPT-3 model that’s been in use since 2020. GPT-4 is, for now, a subscriber-only feature, though as development continues, it may well become more widely available.
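If you’d rather skip the chat interface entirely, GPT-4 can also be reached through OpenAI’s API. Below is a minimal sketch using OpenAI’s official Python SDK; it assumes you have an API key set in your environment and that your account has been granted GPT-4 API access, which is handled separately from ChatGPT Plus.

```python
# Minimal sketch: querying GPT-4 via OpenAI's Python SDK (pip install openai).
# Assumes an OPENAI_API_KEY environment variable and GPT-4 API access on
# your account, which is granted separately from a ChatGPT Plus subscription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # use "gpt-3.5-turbo" for the free-tier model instead
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is new in GPT-4?"},
    ],
)
print(response.choices[0].message.content)
```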

What can GPT-4 do better than ChatGPT?

GPT-4 is a next-generation language model for the AI chatbot, and though OpenAI isn’t being specific about what changes it’s made to the underlying model, it is keen to highlight how much improved it is over its predecessor. OpenAI claims that it can process up to 25,000 words at a time — that’s eight times more than the original GPT-3 model — and it can understand much more nuanced instructions, requests, and questions than GPT-3.5, the model used in the existing ChatGPT AI.
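A quick aside on those word counts: under the hood, these models actually measure text in tokens rather than words. A token is roughly three-quarters of an English word, so 25,000 words works out to a context window on the order of 32,000 tokens. If you want to see how your own text tokenizes, OpenAI’s tiktoken library will show you; a small sketch, assuming pip install tiktoken:

```python
# Counting tokens with OpenAI's tiktoken library (pip install tiktoken).
# Models budget context in tokens, not words; roughly 0.75 words per token
# is a common rule of thumb, so 25,000 words is about 32,000 tokens.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
text = "GPT-4 can process far longer inputs than its predecessors."
tokens = enc.encode(text)
print(f"{len(text.split())} words -> {len(tokens)} tokens")
```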

OpenAI also assures us that GPT-4 will be much harder to trick, won’t spit out falsehoods as often, and is more likely to turn down inappropriate requests or queries that could see it generate harmful responses.

But GPT-4 also has some exciting new abilities that early adopters are already putting to good use.

GPT-4 can understand images

GPT-4 is a multimodal language model, which means it can understand media other than text, like images. This might sound familiar if you’ve had a go with AI art generators like Stable Diffusion, but GPT-4 works the other way around: rather than generating images, it can analyze them and answer questions about what they contain. This has already led to some exciting uses, like GPT-4 creating a website based on a quick sketch, or suggesting recipes after analyzing a photo of the ingredients a user has to hand.

Now let's get into the details.

GPT-4 is multimodal and it now accepts the images as inputs and generates captions, classifications, and analyses. 🔥

Below is one such example of giving an input image of ingredients and asking GPT-4 to generate a list of recipes. pic.twitter.com/mJMq8zLgkk

— Sumanth (@Sumanth_077) March 15, 2023
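Image input wasn’t broadly available when GPT-4 first launched, but OpenAI’s chat API later gained a format for exactly this kind of request. Here’s a rough sketch of the recipes-from-a-photo idea; the image URL is a placeholder, and you’d need access to an image-capable GPT-4-class model such as gpt-4o:

```python
# Sketch of the recipes-from-a-photo idea: send an image alongside a text
# prompt. Requires an image-capable GPT-4-class model; the URL below is a
# placeholder, not a real image.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any image-capable GPT-4-class model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Suggest three recipes using only the ingredients shown."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/my-fridge.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```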

It’s getting much better at programming

ChatGPT had already shown itself to be a capable programmer, but GPT-4 takes things to a whole new level. Early users have managed to get it to build basic games in just a few minutes; both Snake and Pong have been recreated from scratch by people with next to no programming experience.
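To give a flavor of that workflow, here’s a hedged sketch of the kind of one-shot “write me a game” request those early users describe, sent through the same API as before. The prompt wording is invented for this example, and in practice the returned code often needs light cleanup, such as stripping markdown fences, before it runs:

```python
# Illustrative one-shot "write me a game" request, in the spirit of the
# Snake and Pong experiments described above. The prompt is invented for
# this sketch; responses may need light cleanup before they run.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a complete, runnable Snake game in Python using "
                   "pygame. Return only the code, with no explanation.",
    }],
)

with open("snake.py", "w") as f:
    f.write(resp.choices[0].message.content)
# Then: pip install pygame && python snake.py
```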

It can pass exams

ChatGPT was good at acting like a human, but put it under stress and you could often see the cracks and seams. With GPT-4, that’s much less likely to happen. In fact, it performs so well on tests designed for humans that it passed the Uniform Bar Exam in the 90th percentile of test takers and the Biology Olympiad in the 99th percentile, an exam on which GPT-3.5 only managed the 31st percentile.

GPT-4 can create its own lawsuits

The combination of improved reasoning and text comprehension has a lot of potential for the DoNotPay team, which is working on using GPT-4 to generate “one-click lawsuits” against robocallers who spam you. A similar system could scan medical bills to identify errors or compare prices with other hospitals to help bring bills down, then draft legal paperwork citing the No Surprises Act.

It can understand humor

GPT-4 is much better at understanding what makes something funny. Not only can it tell better jokes when asked, but if you show it a meme or another funny image, it can work out what’s going on and explain the joke to you.

GPT-4 limitations

Like ChatGPT before it, GPT-4 isn’t perfect. It’s certainly a worthy competitor for Google Bard, but it still has a way to go before it’s reliably accurate and capable across the board.

At the time of writing, GPT-4 is trained on data collected up to September 2021, so it has no knowledge of events beyond that date. That’s a real limitation on what the AI can do, and it means the model grows less accurate over time as it falls further behind current information.

Like its predecessor language models, GPT-4 is also prone to “hallucinations,” where it presents inaccurate information as fact. This reportedly happens far less often with the new model, but it’s not immune, which raises concerns about its use in accuracy-sensitive settings. It’s also quite limited in its ability to learn from experience, so it may keep making the same errors even after they’re pointed out to it.

GPT-4 is currently limited to 100 messages every four hours, even for ChatGPT Plus subscribers, and if you aren’t already a member, you’ll have to join the waitlist and state your reasons for wanting to use it.

Jon Martindale