
AMD just revealed a game-changing feature for your graphics card

AMD logo on the RX 7800 XT graphics card.
Jacob Roach / Digital Trends

AMD is set to reveal a research paper about its technique for neural texture block compression at the Eurographics Symposium on Rendering (EGSR) next week. It sounds like some technobabble, but the idea behind neural compression is pretty simple. AMD says it’s using a neural network to compress the massive textures in games, which cuts down on both the download size of a game and its demands on your graphics card.

We’ve heard about similar tech before. Nvidia introduced a paper on Neural Texture Compression last year, and Intel followed up with a paper of its own that proposed an AI-driven level of detail (LoD) technique that could make models look more realistic from farther away. Nvidia’s claims about Neural Texture Compression are particularly impressive, with the paper asserting that the technique can store 16 times the data in the same amount of space as traditional block-based compression.
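To put that claim into rough perspective (these figures are an illustration, not numbers from either company’s paper): a 4K-by-4K texture stored with a standard block format like BC7 uses 8 bits per texel, or roughly 16MB before mipmaps. Storing 16 times the data in that same footprint would mean either squeezing the existing texture down to around 1MB or packing in dramatically more detail without increasing the memory cost.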


AMD hasn’t published its research yet, so there aren’t many details about how its method works. The key to Nvidia’s approach is that it leverages the GPU to decompress textures in real time, easing the pressure on video memory. VRAM has been an issue in several games released over the past couple of years, from Halo Infinite to The Last of Us Part I to Redfall. In all of these games, you’ll notice low-quality textures if you run out of VRAM, which is particularly noticeable on 8GB graphics cards like the RTX 4060 and RX 7600.


One detail AMD did reveal is that its method should be easier to integrate. The tweet announcing the paper reads, “unchanged runtime execution allows easy game integration.” Nvidia hasn’t said whether its technique is particularly hard to integrate, or whether it will require specific hardware to work (though the latter is probably a safe bet). AMD hasn’t mentioned any particular hardware requirements, either.

We'll present "Neural Texture Block Compression" @ #EGSR2024 in London.

Nobody likes downloading huge game packages. Our method compresses the texture using a neural network, reducing data size.

Unchanged runtime execution allows easy game integration. https://t.co/gvj1D8bfBf pic.twitter.com/XglpPkdI8D

— AMD GPUOpen (@GPUOpen) June 25, 2024

At this point, neural compression for textures isn’t a feature available in any game. These are just research papers, and it’s hard to say if they’ll ever turn into features on the level of something like Nvidia’s DLSS or AMD’s FSR. However, the fact that we’re seeing AI-driven compression from Nvidia, Intel, and now AMD suggests that this is a new trend in the world of PC gaming.

It makes sense, too. Features like DLSS have become a cornerstone of modern graphics cards, serving as an umbrella for a large swath of performance-boosting features. Nvidia’s CEO has said the company is looking into more ways to leverage AI in games, from generating objects to enhancing textures. As features like DLSS and FSR continue to become more prominent, it’s natural that AMD, Nvidia, and Intel would look to expand their capabilities.

If neural texture compression does arrive as a marketable feature, it will likely show up with the next generation of graphics cards. Nvidia is expected to reveal its RTX 50-series GPUs in the second half of the year, AMD could showcase its next-gen RDNA 4 GPUs in a similar time frame, and Intel’s Battlemage architecture is arriving in laptops in a matter of months through Lunar Lake CPUs.

Jacob Roach
Lead Reporter, PC Hardware