Nvidia’s latest A.I. results prove that ARM is ready for the data center

Nvidia just published its latest MLPerf benchmark results, and they have some big implications for the future of computing. In addition to maintaining the lead over other A.I. hardware that it has held for the last three rounds of results, the company showcased the power of ARM-based systems in the data center, with results nearly matching traditional x86 systems.

In the six tests MLPerf includes, ARM-based systems came within a few percentage points of x86 systems, with both using Nvidia A100 A.I. graphics cards. In one of the tests, the ARM-based system actually beat its x86 counterpart, showing how far alternative instruction sets have come in A.I. applications.

MLPerf results with Arm processors.

“The latest inference results demonstrate the readiness of ARM-based systems powered by ARM-based CPUs and Nvidia GPUs for tackling a broad array of A.I. workloads in the data center,” David Lecomber, senior director of HPC at Arm, said. Nvidia only tested the ARM-based systems in the data center, not with edge or other MLCommons benchmarks.

MLPerf is a series of benchmarks for A.I. that are designed, contributed to, and validated by industry leaders. Although Nvidia has led the charge in many ways with MLPerf, the leadership of the MLCommons consortium is made up of executives from Intel, the Video Electronics Standards Association, and Arm, to name a few.

The latest benchmarks pertain to MLCommons’ inference tests for the data center and edge devices. A.I. inference is the stage where a trained model produces results; it comes after the training phase, where the model is still learning and which MLCommons also has benchmarks for. Nvidia’s Triton software, which handles inference serving, is in use at companies like American Express for fraud detection and Pinterest for image segmentation.
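
To give a rough sense of what inference serving looks like in practice, here is a minimal sketch of sending a request to a Triton Inference Server with its Python HTTP client. The server address, model name, and tensor names (resnet50, INPUT__0, OUTPUT__0) are assumptions for the sake of the example and depend entirely on how the server and its model repository are configured.

```python
# Minimal Triton inference request (sketch). Assumes a Triton server is
# running on localhost:8000 and serving a model named "resnet50" whose
# input/output tensors are called INPUT__0 / OUTPUT__0 -- adjust these
# names to match your model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single dummy image batch as the request payload.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

# Ask for the model's output tensor and run the request.
requested_output = httpclient.InferRequestedOutput("OUTPUT__0")
response = client.infer("resnet50", inputs=[infer_input], outputs=[requested_output])

print(response.as_numpy("OUTPUT__0").shape)  # e.g. (1, 1000) class scores
```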

Nvidia also highlighted its Multi-Instance GPU (MIG) feature when speaking with the press. MIG lets the A100 and A30 graphics cards be partitioned from a single A.I. processor into several smaller accelerators. The A100 can split into up to seven separate accelerators, while the A30 can split into four.

By splitting up the GPU, Nvidia was able to run the entire MLPerf suite at the same time with only a small loss in performance. Nvidia says it measured 95% of per-accelerator performance with all of the tests running simultaneously, compared to a single-instance baseline, allowing one GPU to handle multiple A.I. workloads at the same time.
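
For readers curious what that partitioning looks like from software, below is a minimal read-only sketch that enumerates MIG instances on GPU 0 using the pynvml bindings for Nvidia’s management library. It assumes MIG mode has already been enabled and instances created by an administrator (typically via nvidia-smi); it only reports the current state.

```python
# Enumerate MIG instances on GPU 0 (read-only sketch). Assumes the
# nvidia-ml-py package (pynvml) is installed and that MIG mode has already
# been enabled and partitioned on the device, e.g. an A100 split into up
# to seven instances or an A30 split into up to four.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

current_mode, pending_mode = pynvml.nvmlDeviceGetMigMode(gpu)
print("MIG enabled:", current_mode == pynvml.NVML_DEVICE_MIG_ENABLE)

# Walk every possible MIG slot and report the instances that exist.
max_instances = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)
for index in range(max_instances):
    try:
        mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, index)
    except pynvml.NVMLError:
        continue  # no MIG device in this slot
    name = pynvml.nvmlDeviceGetName(mig)
    memory = pynvml.nvmlDeviceGetMemoryInfo(mig)
    print(f"Instance {index}: {name}, {memory.total / 1024**3:.1f} GiB")

pynvml.nvmlShutdown()
```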

Jacob Roach
Lead Reporter, PC Hardware