
How generative AI will create games with ‘broader, bigger, and deeper worlds’

This story is part of Jacob Roach's ReSpec series, covering the world of PC gaming and hardware.

Of all the use cases for generative AI out there, I can’t think of one more significant than video games. Sure, we’ve seen people create simple games with GPT-4, but I assumed a technology this powerful was also being discussed at the higher levels of game development.

To get an idea of how big of a shift this could be, I wanted to talk to someone who actually understands how games are made on a technical level. Marc Whitten, Senior Vice President and General Manager of Unity Create, is surely one such person. He’s particularly excited about how AI could transform game development, and we spoke about how the tools that could enable that revolution are already making their way to creators.


Faster create time

Ziva Face Trainer in Unity.

Games take a huge amount of time and effort to develop, and most of that time is dedicated to creating all of the content for the game. Whitten says that in a typical 300-person AAA studio, somewhere around 80% of those people are dedicated to creating content. AI can speed up that process drastically.


Whitten provided a clear example of that: Ziva Face Trainer. Ziva is a company Unity acquired in early 2022, and it has been working on its Face Trainer tool for a little over two years. It takes a model, trains it on a large set of emotions and movements, and generates something usable.

How much time does this save? Whitten says high-end rigging of a character can take a team of four to six artists four to six months: “Candidly, [that’s] why the state-of-the-art quality of characters has not actually progressed that much in the last ten years or so.”


With Ziva Face Trainer, developers “give it a mesh and we train that mesh against a large set of data… so you get back in five minutes a rig model that allows you to then run it in real-time.” Ziva tech is being used a lot, too. It’s behind the suit deformation in Spider-Man: Miles Morales, as well as the Troll in the Senua’s Saga: Hellblade 2 trailer. You’ve probably seen it in a few movies and TV shows even — Captain Marvel, John Wick 3, and Game of Thrones are on the list.

That shouldn’t come as a surprise. Machine learning and procedural techniques (think tools like SpeedTree) aren’t exactly new in the world of game development. More research into AI models can keep making creation pipelines more efficient, but generative AI marks a bigger shift. We’re talking about large language models (LLMs) like GPT-4 and diffusion models like Midjourney, and they can radically change the games we see.

Changing the game


Whitten says the hope with AI is to make games “ten to the third better,” which means games that are ten times faster, ten times easier, and ten times cheaper to develop. The result isn’t a flood of the same games we already have, though. Whitten believes it’s “broader, bigger, deeper worlds.”

I asked for an example, and Whitten pondered what Skyrim would look like if it had a generative AI model behind it. We’ve all heard the “arrow to the knee” meme from the game, but Whitten imagined a game where that throwaway line meant something more.

“Well, what if each of those guards actually had a Myers-Briggs-type chart? A little bit of a backstory and frankly, a backstory that could have been impacted by that. What has happened with the character along the way? And then an AI model to generate what would be a rational response coming out of that, given all of those particular events.”
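As a rough illustration of what Whitten is describing, the data attached to a single guard and the prompt handed to a language model might look something like the sketch below. Everything here is my own illustration, not anything from Unity or Bethesda: the field names, the hypothetical guard, and the placeholder model call are all assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class GuardNPC:
    # Hypothetical data model -- the fields are mine, chosen to mirror Whitten's
    # "Myers-Briggs chart plus backstory" idea, not any real game's schema.
    name: str
    personality: str                # a Myers-Briggs-style type, e.g. "ISTJ"
    backstory: str
    memory: list[str] = field(default_factory=list)  # events from this playthrough


def build_prompt(npc: GuardNPC, player_line: str) -> str:
    """Bundle personality, backstory, and in-game events into one prompt for a language model."""
    events = "\n".join(f"- {e}" for e in npc.memory) or "- nothing notable yet"
    return (
        f"You are {npc.name}, a city guard. Personality type: {npc.personality}.\n"
        f"Backstory: {npc.backstory}\n"
        f"Events you remember from this playthrough:\n{events}\n"
        f'The player says: "{player_line}"\n'
        "Reply in one or two sentences, staying in character."
    )


guard = GuardNPC(
    name="Hroki",
    personality="ISTJ",
    backstory="A former adventurer whose career ended with an arrow to the knee.",
    memory=["The player returned a stolen amulet to the temple yesterday."],
)

prompt = build_prompt(guard, "Any trouble in town lately?")
# reply = language_model.generate(prompt)  # hypothetical call to whatever model the game ships with
print(prompt)
```

The specific fields don’t matter; the point is that the guard’s reply would be generated from state the game already tracks, so the same throwaway line could come out differently for every player.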

We’re seeing some effort there with games like The Portopia Serial Murder Case, which, bluntly, haven’t made the best case for AI in games. It’s not hard to see the potential, though, especially in larger games with NPCs that don’t have set quests or exhaustive dialogue.

A player converses with an NPC in The Portopia Serial Murder Case.

There’s a lot of potential in sandbox-style games, as well. Whitten imagined a GTA-style game where you “go into the pawn shop and recruit the person behind the desk and, you know, with maybe the game creator never even thinking about that as a possibility because of something else that happened in the game.” Whitten also thought about Scribblenauts, except in a world where you could truly make anything and assign it any properties.

The problem right now is getting that to actually work, as evidenced by The Portopia Serial Murder Case. Whitten was one of the founding members of the Xbox team at Microsoft, and he helped lead the Kinect push. About Kinect, Whitten said: “I would tell everybody it works amazing if I’m sitting next to you.” You needed to prompt it in a specific way, and if you deviated, it wouldn’t work.

That’s the big problem that’s faced AI as a whole, with smart assistants like Alexa only operating within a narrow range. LLMs change that dynamic and allow for any prompt, and that’s what’s exciting about creating deeper game worlds. There’s still a road to get there, though.

“If you put the tool out there … [creators will] hit whatever the boundaries are and say, ‘Well, that’s not fun.’ But then they’re going to actually go find the space that no one’s even thinking about,” Whitten said.

With more tools rolling out, we could see some early experiments with AI within the next year. We already have in some cases, such as the wildly popular AI Dungeon 2. But to make this sort of immersive world possible at scale, you need a middleman. And for Unity, that middleman is Barracuda.

The Barracuda

An image from Unity's Book of the Dead.

Unity includes a neural network inference library called Barracuda. As Whitten explains, “It’s an inference engine that allows you to drive either diffusion or other forms of generative content at runtime on the device without hitting the cloud and at a highly performant pace.”

Oh yeah, performance. As much as we like to talk about how AI can change content forever, there’s a massive computational cost (there’s a reason it took tens of thousands of GPUs to build ChatGPT). Barracuda allows those models to run on your CPU or GPU so you don’t have to go out to the cloud, which, for the record, would be a huge money-sink for developers.
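Barracuda itself is a C# library that lives inside a Unity project, so the snippet below is not its actual API. It’s a minimal Python sketch of the same idea — load an exported ONNX model and run it locally on the CPU — using the onnxruntime package as a stand-in, with a made-up model file and dummy inputs.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# "npc_emotion.onnx" is a hypothetical exported model, standing in for whatever a game would ship.
session = ort.InferenceSession("npc_emotion.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
features = np.random.rand(1, 32).astype(np.float32)  # dummy per-frame features

# Inference happens entirely on the local machine -- no network round-trip to a cloud service.
outputs = session.run(None, {input_name: features})
print(outputs[0].shape)
```

Barracuda’s job is the equivalent inside the engine: take a trained model as a project asset and schedule the work on the player’s CPU or GPU, so each query costs local compute rather than a trip to someone else’s servers.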

Unity is working on more features for Barracuda, and Whitten says the “interest back from the game creator community has been extraordinarily high.” It’s the key that makes generative AI possible in game development and design, especially without requiring any specific hardware.

Whitten says the team wants to start “building techniques that allow creators to start really targeting a large and core part of their game design, not ‘Oh, this is going to really diminish my audience if I design for it.'” Unreal Engine, for its part, has a similar tool (the aptly-named NeuralNetworkInference tool, or NNI).

These libraries, combined with large generative AI models and an acceleration in content development, can lead to an “explosion of creativity,” according to Whitten. And that’s something to get excited about for the future of games.

This article is part of ReSpec – an ongoing biweekly column that includes discussions, advice, and in-depth reporting on the tech behind PC gaming.
