The death of Moore’s Law is finally starting to stink

The back of the Core Ultra 9 285K CPU.
Jacob Roach / Digital Trends

For more than two decades we’ve heard about the death of Moore’s Law. It was an observation from the late Intel co-founder Gordon Moore, positing that the number of transistors in a chip would double about every two years. In 2006, Moore himself said it would end in the 2020s. MIT Professor Charles Leiserson said it was over in 2016. Nvidia’s CEO declared it dead in 2022. Intel’s CEO claimed the opposite a few days later.

There’s no doubt that the concept of Moore’s Law — or rather observation, lest we treat this like some law of physics — has led to incredible innovation among desktop processors. But the death of Moore’s Law isn’t a moment in time. It’s a slow, ugly process, and we’re finally seeing what that looks like in practice.

Creative solutions

The Ryzen 9 9900X sitting on its box.
Jacob Roach / Digital Trends

We have two brand new generations from AMD and Intel, neither of which really came out of the gate swinging. As you can read in my Core Ultra 9 285K review, Intel’s latest attempt pulls off a lot of impressive feats with its radically new design, but it still can’t hold up to the competition. And the Ryzen 9 9950X, although a clear upgrade over its Zen 4 counterparts, doesn’t deliver the generational improvements we’ve become accustomed to.

Consider this — looking at Cinebench R23, the multi-core jump from the Ryzen 9 5950X to the Ryzen 9 7950X was 36%. Between the Ryzen 9 7950X and Ryzen 9 9950X? 15%. In the span of a single generation, the gain was cut by more than half. In Handbrake, the Ryzen 9 7950X sped up transcoding by 34% compared to the Ryzen 9 5950X. With the Ryzen 9 9950X, the improvement shrank to just 13%.
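The generational math here is straightforward: divide the new score by the old one. A quick sketch, using hypothetical benchmark scores chosen only to reproduce the percentages above (they are not measured results):

```python
def gen_gain(new_score: float, old_score: float) -> float:
    """Percent improvement of the new chip's score over the old one."""
    return (new_score / old_score - 1) * 100

# Hypothetical multi-core scores, picked purely for illustration.
scores = {"5950X": 25000, "7950X": 34000, "9950X": 39100}

print(f"{gen_gain(scores['7950X'], scores['5950X']):.0f}%")  # 36%
print(f"{gen_gain(scores['9950X'], scores['7950X']):.0f}%")  # 15%
```

Run against real benchmark numbers, the same two lines are how the 36% and 15% figures fall out.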

This isn’t just one odd generation, either. Looking at the single-core performance of the Core i9-10900K and Core i9-12900K, Intel delivered a 54% improvement. Even comparing the Core i9-12900K, which is three generations old at this point, to the latest Core Ultra 9 285K, we see just a 20% improvement. Worse, the new Core Ultra series from Intel shows oddly high results in Cinebench, and if you branch out to other applications, you can actually see some regressions compared to a generation or two back.

AMD Ryzen 7 7800X3D sitting on a motherboard.
Jacob Roach / Digital Trends

Even within just a few years, the rate of performance improvements has slowed considerably. Moore’s Law doesn’t directly talk about performance improvements — it’s simply concerned with the number of transistors on a chip. But that has clear performance implications. Throwing more transistors at the problem isn’t practical like it once was — read up on the death of Dennard scaling if you want to learn more why that’s the case.
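The Dennard scaling story can be sketched in a few lines. The toy model below uses the classic dynamic power relation P = C·V²·f: under ideal Dennard scaling, shrinking a feature by a factor s also scales capacitance and voltage by s and speeds the transistor up by 1/s, so power density stays flat. Once voltage stops scaling (as it did in the mid-2000s, largely due to leakage), smaller transistors start running hotter per square millimeter. The numbers are illustrative, not real silicon:

```python
def power_density(s: float, voltage_scales: bool = True) -> float:
    """Relative power density after a node shrink by factor s (0 < s < 1)."""
    C = s                              # capacitance shrinks with feature size
    V = s if voltage_scales else 1.0   # ideal Dennard vs. stalled voltage scaling
    f = 1 / s                          # smaller gates switch faster
    area = s ** 2                      # transistor area shrinks quadratically
    return (C * V**2 * f) / area

s = 0.7  # a classic "full node" linear shrink
print(power_density(s, voltage_scales=True))   # ~1.0: density holds constant
print(power_density(s, voltage_scales=False))  # ~2.0: the chip runs hotter
```

That second result is, in miniature, why simply throwing more transistors at the problem stopped being free.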

AMD and Intel may not talk about it publicly, but both companies clearly see the writing on the wall. That’s likely why Intel pivoted to a hybrid architecture in the first place, and why it’s introduced a radical redesign with its Arrow Lake CPUs. And for AMD’s part, it’s no secret that 3D V-Cache has become a defining technology for the company’s CPUs, and it’s a clear way to skirt the bottleneck of Moore’s Law. A large chunk of the transistors on any CPU die are dedicated to cache — somewhere in the range of 40% to 70% — and AMD is literally stacking extra cache on top of the die that it couldn’t otherwise fit.

A function of space

One important factor to keep in mind when looking at Moore’s Law and Dennard scaling is space. You can build a massive chip with a ton of transistors, sure, but how much power will it draw? Will it be able to stay under a reasonable temperature? Will it even be practical to place in a PC or, in the enterprise, a server? You cannot separate the number of transistors from the size of the die.

I’m reminded of a conversation I had with AMD’s Chris Hall, where he told me: “We were all enjoying Moore’s Law for a long time, but that’s sort of tailed off. And now, every square millimeter of silicon is very expensive, and we can’t afford to keep doubling. We can, we can build those chips, we know how to build them, but they become more expensive.”

Nvidia GeForce RTX 4090 GPU.
Jacob Roach / Digital Trends

I’m not here to defend Nvidia’s insane pricing strategy, but the company has reportedly seen higher pricing from TSMC for its RTX 40-series GPUs than it saw from Samsung for its RTX 30-series GPUs. And the RTX 4090 does deliver more than twice the transistor count of the RTX 3090 at a very similar die size. If there’s a commitment to Moore’s Law across chips, I’m not sure we as consumers will like the outcome when it comes time to upgrade a PC.
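The density jump between those two dies is easy to check with back-of-the-envelope math. The figures below are the commonly cited public specs for the GA102 and AD102 dies (approximate, from spec sheets, not my own measurements):

```python
# Transistor density from commonly cited die specs (approximate).
chips = {
    "GA102 (RTX 3090)": (28.3e9, 628),  # transistors, die area in mm^2
    "AD102 (RTX 4090)": (76.3e9, 609),
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / area_mm2 / 1e6
    print(f"{name}: {density:.0f}M transistors per mm^2")
```

By these figures, density jumped roughly 2.8x on a marginally smaller die — which is exactly the kind of generational leap that, per Hall’s point above, now comes with a much bigger bill from the foundry.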

That’s not to mention the other problems a card like the RTX 4090 has faced — high power requirements, an insane cooler size, and a melting power connector. Not all of these problems are a function of doubling the number of transistors, not even close, but it plays a role. Bigger chips mean more transistors and more heat, usually at a higher cost, especially as the price of silicon continues to increase.

The shortcut

Moore’s Law is dead, PC hardware is getting more expensive, and everything sucks — that’s not how I want to leave this. There will be more ways to deliver performance improvements year over year that don’t rely solely on packing more transistors onto a chip of the same size. The way we’re getting there now is just different. I’m talking about AI.

Wait, don’t click off the article. Tech companies are excited about AI because it represents a lot of money — cynical as that perspective is, it’s just the way trillion-dollar corporations like Microsoft and Nvidia work. But AI also represents a way to bring about a new form of computing. I’m not talking about a slew of AI assistants and hallucinatory chatbots, but rather applying machine learning to a problem to approximate results that we would previously get through pure silicon innovation.

Ray Reconstruction in Star Wars Outlaws.
Jacob Roach / Digital Trends

Look at DLSS. The idea of using upscaling to maintain a certain level of performance is controversial, and it’s a nuanced conversation when it comes to individual games. But DLSS is enabling better performance without a strict hardware improvement. Add on top of that frame generation, which we now see from DLSS, FSR, and third-party tools like Lossless Scaling, and you have a lot of pixels that are never rendered by your graphics card.
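Just how many pixels go unrendered is worth spelling out. A rough sketch, with resolutions and modes chosen for illustration rather than tied to any specific game or vendor preset:

```python
def rendered_fraction(render_res, output_res, generated_per_rendered=0):
    """Fraction of displayed pixels the GPU actually renders, given an
    internal render resolution, an output resolution, and how many
    generated frames accompany each rendered frame."""
    rw, rh = render_res
    ow, oh = output_res
    upscale_share = (rw * rh) / (ow * oh)        # pixels rendered per frame
    frame_share = 1 / (1 + generated_per_rendered)  # frames rendered
    return upscale_share * frame_share

# 4K output from a 1080p internal render, plus one generated frame
# for every rendered frame:
print(rendered_fraction((1920, 1080), (3840, 2160), 1))  # 0.125
```

In that scenario, only one in every eight displayed pixels came off the traditional render pipeline — the rest were inferred.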

A less controversial angle is Nvidia’s Ray Reconstruction. It’s no secret that ray tracing is demanding, and part of getting around that hardware demand is denoising — casting a limited number of rays, then cleaning up the noisy result. Ray Reconstruction delivers an image that would otherwise require far more rays and much more powerful hardware, and it does so without limiting performance — once again, through machine learning.

It really doesn’t matter if Moore’s Law is dead or alive and well — if companies like AMD, Intel, and Nvidia want to stay afloat, they’ll continually need to think of solutions to address rising performance demands. Innovation is far from dead in PC hardware, but it might start to look a little different.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…