

No, generative AI isn’t taking over your PC games anytime soon

Cyberpunk 2077 running on the Samsung Odyssey OLED G8.
Jacob Roach / Digital Trends

Surprise — the internet is upset. This time, it’s about a recent article from PC Gamer on the future of generative AI in video games. It’s a topic I’ve written about previously, and something that game companies have been experimenting with for more than a year, but this particular story struck a nerve.

Redditors used strong language like “pro-AI puff piece,” PC Gamer itself issued an apology, and the character designer for BioShock Infinite’s Elizabeth called the featured image showing the character reimagined with AI a “half-assed cosplay.” The original article’s intent was to offer a glimpse at what games could look like with generative AI in the future, but it did so without the tact or clear recognition of how this shift affects people’s jobs and their creative work.


But don’t worry. The generative AI future the internet has dreamed up from this story isn’t coming any time soon. And rather than operate in a binary of pro-AI or anti-AI sentiment, I want to look at how AI is being used in games today, and how it could be used in the future, to offer better performance and visuals and to allow developers to push the envelope on what they’re able to deliver.


It ain’t so simple

Half-Life reimagined with ultra-realistic graphics through Runway ML’s Gen-3 video-to-video AI.

Before getting to AI in PC games today, we need to define some terms, because that’s the whole crux of this fiasco. In the original article, the author looked at several videos that reimagine old games with AI through the Runway ML tool. The model is fed a final frame from the game, and it then generates a realistic-looking video based on that input.

It looks terrible, as you might suspect, but it’s not hard to watch a video like this and imagine a future where this kind of tech looks far more realistic. And make no mistake: at some point, we will have generative AI that produces far more convincing results.

This is generative AI. You give the model an input and it spits out an output based on its training data, in a content-agnostic manner. It doesn’t have a set of rules or algorithms; it attempts to understand the input based on its training and produce an output, no matter how flawed the result may be. Predictive AI is slightly different: it trains on a body of data and predicts the most likely outcome for new data. Generative AI is ChatGPT; predictive AI is a Netflix recommendation.
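If the distinction still feels abstract, here’s a toy sketch in Python. The bigram sampler and the watch-count recommender below are made-up stand-ins, nowhere near how ChatGPT or Netflix actually work, but they show the difference in kind: one produces open-ended output from patterns in its training data, the other ranks known outcomes.

```python
import random
from collections import defaultdict

# Toy illustration only; real generative and predictive systems are
# vastly more complex than these stand-ins.

training_text = "the cat sat on the mat the cat ate the rat".split()

# "Generative": learn bigram transitions, then sample new text from them.
transitions = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(transitions.get(word, training_text))
        out.append(word)
    return " ".join(out)

# "Predictive": given a user's history, predict the most likely next
# pick. A ranking over known outcomes, not open-ended output.
watch_counts = {"sci-fi": 12, "drama": 3, "horror": 7}

def recommend(counts):
    return max(counts, key=counts.get)

print(generate("the"))          # e.g. "the cat sat on the mat the"
print(recommend(watch_counts))  # "sci-fi"
```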

Where does that leave tools like Nvidia’s DLSS? There really isn’t a clean line. Is it actually generating new frames, as Nvidia suggests? Or is it just applying better prediction mechanisms to long-standing frame interpolation algorithms? There are people much more qualified than I am to argue the semantics, but it doesn’t take an AI scientist to see that what Runway ML and DALL-E are doing is not the same as what DLSS Frame Generation is doing. Not even close.
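To see why the two get conflated, consider the crudest possible frame interpolator, sketched below with numpy. Real interpolators add motion vectors and occlusion handling, and DLSS Frame Generation layers a trained network over that class of signal; this baseline is an illustration of the long-standing technique, not Nvidia’s method.

```python
import numpy as np

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5):
    """Linearly blend two rendered frames; t=0.5 yields the midpoint."""
    return ((1.0 - t) * frame_a.astype(np.float32)
            + t * frame_b.astype(np.float32)).astype(np.uint8)

# Two fake 1080p RGB frames standing in for real rendered output.
frame_a = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame_b = np.full((1080, 1920, 3), 255, dtype=np.uint8)
print(interpolate(frame_a, frame_b)[0, 0])  # [127 127 127]
```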

Photo mode in Nvidia GeForce Experience.
Jacob Roach / Digital Trends

There’s certainly some future where Nvidia could apply a Freestyle filter over an existing game to offer more realistic visuals, as the original article suggests, but that’s not going to be how the game is meant to be played. The game still needs to be rendered; the filter, in this case, is nothing more than a filter. And going backward by rendering only primitive objects and relying on AI to fill in the texture, lighting, and shadow details gives the AI model less information to work with, not more. If this is the future we’re heading toward, it’s a long way off.
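For illustration, here’s what “nothing more than a filter” means in practice: a minimal post-process pass, sketched in numpy, that only remaps pixels the game has already rendered. The channel weights are arbitrary, and a real Freestyle filter runs on the GPU, but the structural point holds: rendering happens first, the filter after.

```python
import numpy as np

def warm_filter(frame: np.ndarray) -> np.ndarray:
    """Boost reds, trim blues: a cosmetic remap of finished pixels."""
    out = frame.astype(np.float32)
    out[..., 0] *= 1.15  # red channel (arbitrary weight)
    out[..., 2] *= 0.90  # blue channel (arbitrary weight)
    return np.clip(out, 0, 255).astype(np.uint8)

# The frame must already exist before the filter can touch it.
rendered_frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
displayed_frame = warm_filter(rendered_frame)  # render first, filter after
```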

The best evidence that this style of generative AI isn’t going to take over your PC games any time soon, however, is Nvidia. The company’s CEO may wax poetic with press and analysts about how all pixels will be generated in the future, not rendered. But actions speak louder than words, and Nvidia’s investments in tools like RTX Remix and the Half-Life 2 RTX project paint a much different picture.

AI has a ton of applications in game development, but like most tech, the best approaches are targeted at solving pain points in development and gameplay.

Oh yeah, the rendering pipeline

The Elder Scrolls III: Morrowind, auto-enhanced with Nvidia RTX Remix AI.

The problem with these videos of “remastered” games using Runway ML is that they get how games are actually rendered completely wrong. They use the final frame of a game as an input, ignoring the lengthy rendering pipeline that actually produces that image. Even AI tools like DLSS that come before the final frame is presented sit very far down the rendering chain, after most of the work is already done. The exciting developments in generative AI are in how you can apply the tech to different parts of rendering and game development.
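To make that ordering concrete, here’s a drastically simplified pipeline sketch. The stage names are generic stand-ins rather than any engine’s real API; the point is that an upscaler like DLSS plugs in near the end, while Runway-style tools only ever see the final presented image.

```python
# Generic stand-in stages; real pipelines have many more steps.
def geometry_pass(scene):   return f"rasterized({scene})"
def lighting_pass(gbuffer): return f"lit({gbuffer})"
def post_process(image):    return f"tonemapped({image})"
def upscale(image):         return f"upscaled({image})"  # DLSS-style tools sit here
def present(image):         return image                 # the final frame

frame = present(upscale(post_process(lighting_pass(geometry_pass("level")))))
print(frame)  # everything above already happened before this image exists
```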

That brings us back to RTX Remix, a project Nvidia has heavily pushed over the last couple of years with its AI features. It can up-res textures, convert materials for use with ray-traced lighting, and so much more. It’s a remarkable tool bolstered by AI, and you don’t have to take my word for it. Just download Portal with RTX on Steam and see RTX Remix in action. We’re not talking about some AI-driven future where every game is robbed of creative liberty. We’re talking about tools using AI to enable more creativity.
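As a rough illustration of what “up-res” means, here’s the most naive possible texture upscale in numpy. RTX Remix uses a trained super-resolution model to invent plausible detail rather than duplicating pixels; this sketch shows only the shape of the operation, not the AI that makes it useful.

```python
import numpy as np

# Nearest-neighbor 4x upscale: every texel becomes a 4x4 block. An AI
# up-res model would instead hallucinate plausible high-frequency detail.
texture = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
upscaled = texture.repeat(4, axis=0).repeat(4, axis=1)  # 64x64 -> 256x256
print(texture.shape, "->", upscaled.shape)
```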

According to AMD’s Chris Hall, the exciting applications of AI in games come through in the most mundane places.

“If you look at Epic Games and the Unreal Engine, they were showing off their ML [machine learning] cloth simulation technology just a couple of months ago,” Hall said. “Seems like a very mundane, uninteresting use case, but you’re really saving a lot of compute by using a machine learning model.”
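The pattern Hall describes can be shown in miniature: run an expensive simulator offline, train a cheap model on its input/output pairs, then call the model at runtime. In the toy below, a damped spring stands in for cloth and plain least squares stands in for a neural network; it’s a sketch of the idea, not Epic’s actual ML cloth tech.

```python
import numpy as np

def spring_step(pos, vel, dt=0.016, k=20.0, damp=0.98):
    """Ground-truth simulator: one explicit integration step."""
    vel = (vel - k * pos * dt) * damp
    return pos + vel * dt, vel

# Collect training pairs (state -> next state) from the simulator.
states = np.random.uniform(-1, 1, (5000, 2))               # [pos, vel]
targets = np.array([spring_step(p, v) for p, v in states])

# This step is linear in the state, so least squares recovers it exactly;
# a neural network would play this role for nonlinear cloth dynamics.
W, *_ = np.linalg.lstsq(states, targets, rcond=None)

pos, vel = 0.5, 0.0
print(spring_step(pos, vel))     # the expensive simulator
print(np.array([pos, vel]) @ W)  # the cheap learned model, same answer
```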

AI animations running on a laptop.
Luke Larsen / Digital Trends

I published an in-depth interview with Hall a few months ago that goes into detail about these applications of AI in games, but we’re seeing it everywhere already. Just last month, we saw the debut of GenMotion.AI, which promises to deliver high-quality animation from text prompts using AI.

Nvidia already has its Ray Reconstruction feature available, which applies AI to one of the most troublesome areas of ray-traced games: denoising. True to Hall’s word, if AI is being used in game development or rendering, “it really needs to solve a problem that exists.”
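For context on why denoising is such a pain point: low sample counts leave ray-traced images speckled with noise, and a denoiser reconstructs the clean signal. The box blur below is the crudest hand-tuned baseline imaginable; Ray Reconstruction’s pitch is replacing filters of this family with a trained network. This is a toy sketch, not Nvidia’s code.

```python
import numpy as np

def box_denoise(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """Average each pixel with its neighbors to suppress sample noise."""
    out = np.zeros_like(img, dtype=np.float32)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
            count += 1
    return out / count

clean = np.full((64, 64), 0.5, dtype=np.float32)       # true radiance
noisy = clean + np.random.normal(0, 0.2, clean.shape)  # 1-sample render
print(abs(noisy - clean).mean())               # error before denoising
print(abs(box_denoise(noisy) - clean).mean())  # error after denoising
```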

The exciting developments in AI for games aren’t about throwing an old game at an AI model to see whatever jank it spits out, regardless of how many clicks that drums up. They’re about targeting AI at problem areas. Maybe it provides better physics simulations; maybe it cleans up the lighting effects delivered by ray tracing. Maybe it improves performance by shortcutting traditional rendering techniques. For both developers and gamers, these are tools to get excited about.

Let’s talk about people’s jobs

The Blizzard Entertainment booth at Chinajoy China Digital Interactive Entertainment Expo.
Xing Yun / Getty Images

I’m not blind to the reality here, nor to the narrative that greedy publishers will use generative AI to rob workers of their creativity and livelihood. It’s hard not to be cynical when you see things like GameNGen, which shows a fully playable version of Doom running at 20 frames per second (fps) solely through generative AI. It is, in effect, an AI-driven game engine. But it’s important to recognize that these are research projects, the work of AI data scientists, not of executives at the head of game publishers.
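Stripped to its loop, the architecture behind GameNGen-style demos looks something like the sketch below: there is no engine, just a model predicting the next frame from the current frame and the player’s input. The FramePredictor class here is a hypothetical stub, not the diffusion model those projects actually use.

```python
import numpy as np

class FramePredictor:
    """Hypothetical stub standing in for a trained next-frame model."""
    def predict(self, frame: np.ndarray, action: int) -> np.ndarray:
        # A real model would sample a plausible next frame conditioned
        # on the action; this stub just shifts pixels to stay runnable.
        return np.roll(frame, shift=action, axis=1)

model = FramePredictor()
frame = np.zeros((240, 320, 3), dtype=np.uint8)

for step in range(3):
    action = step % 2                     # pretend keyboard input
    frame = model.predict(frame, action)  # the model is the game engine
```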

Game publishers, which are in the business of selling games, will try to leverage generative AI to shortcut the process and squeeze out higher profits. They already have: recent layoffs at Activision Blizzard and Xbox hit 2D artists the hardest, and Activision Blizzard reportedly sold a bundle of items for Call of Duty: Modern Warfare 3 featuring AI-generated images. The AI infiltration is happening, and it will continue to happen.

Ever wanted to play Counter-Strike in a neural network?

These videos show people playing (with keyboard & mouse) in 💎 DIAMOND's diffusion world model, trained to simulate the game Counter-Strike: Global Offensive.

💻 Download and play it yourself → https://t.co/vLmGsPlaJp

— Eloi Alonso (@EloiAlonso1) October 11, 2024

Still, it’s important to recognize where we are today with AI in games and the future that tools like Runway ML paint. We’re probably talking about decades, not years, before a fully AI-generated game is possible, even accounting for the rapid pace of AI development. And even at that point, is an AI-generated game practical? Or, more importantly for game publishers, is it profitable? And if it is, will we have safeguards in place to distinguish AI content and protect the rights of workers?

You can’t throw the baby out with the bathwater here. AI is already in PC games, and it’s only going to become more prominent. As Hall put it, AI is “inevitable.” There’s a middle ground where you can recognize the great things AI is doing in games while also advocating for the rights of workers displaced by haphazard use of the tech. Even outside the digital realm, the use of AI-generated images in Magic: The Gathering marketing material prompted such fierce backlash that Wizards of the Coast, which makes Magic, removed the images and doubled down on its policy that its art be made entirely by humans.

Striking that middle ground is not only aligned with reality, it also helps push this evolving technology in the right direction. Going to the extremes serves no one. Dreaming up a future where AI magically spits out games is just as harmful as burying your head in the sand about the active harm that generative AI is doing in game development.

In Wired’s investigation of Activision Blizzard and Xbox, which revealed the above details about 2D artists being laid off, a veteran AAA developer going by the pseudonym Violet said the following: “[AI] is bad when the end goal is to maximize profits. AI can be extremely helpful to solve complex problems in the world, or do things no one wants to do — things that are not taking away somebody’s job.” A veteran AAA developer can recognize the nuance because they live it every day. We should be able to recognize that nuance, too.

Maybe then the game industry can move on to solving the real problems with generative AI in game development: the use of copyrighted works, the use of works created by human authors to train AI models, and the rights of workers displaced by AI. That’s certainly a more productive discussion about the future of AI in games than splitting into pro- and anti-AI camps.

Jacob Roach
Lead Reporter, PC Hardware