Perhaps to the surprise of few, the next generation of NVIDIA GPUs dominated the latest MLPerf performance benchmarking tests for artificial intelligence (AI) workloads.
The takeaway is obvious: use the latest NVIDIA GPUs in your large-scale AI systems if you can afford to do so. But given the flexibility of the tests, other vendors, notably VMware and Red Hat, demonstrated results that should interest AI system builders as well.
Managed by the MLCommons AI engineering consortium, this latest round of MLPerf performance benchmarking (version 4.0) adds two new tests that approximate the inferencing workloads of generative AI applications.
This round includes a test for large language model (LLM) performance using Meta’s 70-billion-parameter Llama 2 70B, an order of magnitude larger than the last model added to the suite, EleutherAI’s GPT-J, in version 3.1. About 24,000 samples from the Open Orca dataset were used as the sample data.

How the MLPerf benchmark for inferencing has grown over the years.
The benchmark also includes for the first time Stability AI’s Stable Diffusion XL, with 2.6 billion parameters, to test the performance of a system that creates images from text, using metrics based on latency and throughput.
How NVIDIA Swept the MLPerf Performance Tests
For these speed tests, NVIDIA entered a number of configurations based on its soon-to-be-released NVIDIA H200 Tensor Core GPUs (built on the NVIDIA Hopper architecture). The GPUs were augmented with NVIDIA TensorRT-LLM, software that streamlines LLM processing.
In the benchmark tests, the H200 GPUs, along with TensorRT-LLM, produced up to 31,000 tokens/second, setting a record to beat in this first round of LLM benchmarking. The next-generation GPUs showed roughly a 43% performance improvement over the currently available (though still scarce) H100s, which were also tested for comparison.
The chip company also used a “custom thermal solution” to keep the chips cooler, which yielded a further 14% performance gain.

NVIDIA’s new GPUs showed nearly a 3x improvement since the last round of tests in September (NVIDIA).
NVIDIA’s H200 GPUs, which will be available later this year, are equipped with 141GB of Micron high-bandwidth memory (HBM3e) running at 4.8TB/s of throughput, a considerable jump over the H100’s 80GB of memory and 3.35TB/s of throughput.
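A bit of arithmetic puts those published specs in perspective; this sketch simply compares the H100 and H200 figures quoted above (the variable names are illustrative, not from NVIDIA):

```python
# Published memory specs for the two GPU generations, as quoted in the article.
H100 = {"memory_gb": 80, "bandwidth_tbs": 3.35}
H200 = {"memory_gb": 141, "bandwidth_tbs": 4.8}

# Fractional gains of H200 over H100.
mem_gain = H200["memory_gb"] / H100["memory_gb"] - 1          # ~76% more HBM
bw_gain = H200["bandwidth_tbs"] / H100["bandwidth_tbs"] - 1   # ~43% more bandwidth

print(f"Memory capacity gain:  {mem_gain:.0%}")
print(f"Memory bandwidth gain: {bw_gain:.0%}")
```

Notably, the roughly 43% bandwidth gain tracks the roughly 43% benchmark improvement reported above, consistent with LLM inference being largely memory-bandwidth-bound.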
“With HBM3e memory, a single H200 GPU can run an entire Llama 2 70B model with the highest throughput,” an NVIDIA blog post boasts.
How Does MLCommons Speed-Test Inferencing?
The MLPerf Inference benchmark suite (currently v4.0) is designed to measure “how quickly hardware systems can run AI and ML models” in various configurations.
With the LLM inference test, a prompt is broken into “tokens” (small chunks of text, roughly word fragments) that are used as input, and speed is measured by how quickly the LLM generates output tokens in response.
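The throughput metric itself is simple to compute: count the output tokens and divide by wall-clock time. A minimal sketch, with a stand-in generator in place of a real model (`fake_generate` is hypothetical and not part of MLPerf or any LLM library):

```python
import time

def fake_generate(prompt_tokens, n_new_tokens):
    """Stand-in for an LLM's token-by-token decode loop."""
    # A real model would run one forward pass per generated token.
    return ["tok"] * n_new_tokens

def measure_tokens_per_second(prompt_tokens, n_new_tokens):
    """Time the generation and report output tokens per second of wall-clock time."""
    start = time.perf_counter()
    output = fake_generate(prompt_tokens, n_new_tokens)
    elapsed = time.perf_counter() - start
    return len(output) / elapsed

tps = measure_tokens_per_second(["What", "is", "MLPerf", "?"], 128)
print(f"{tps:,.0f} tokens/second")  # trivially fast here; a real 70B model does far more work per token
```

The actual benchmark adds latency constraints and standardized query patterns on top of this basic measurement, but the core unit, tokens per second, is the same.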
MLPerf is and isn’t a competition. There’s a “closed” division, in which hardware platforms compete apples-to-apples on the reference implementation, and an “open” division, which allows variance for innovation, offering flexibility in models and retraining.
Overall, the MLPerf Inference v4.0 benchmarks include 8,500 results measuring performance and another 900 measuring performance alongside data center power consumption.
Twenty-three system builders submitted results, including ASUSTeK, Azure, Broadcom/VMware, Cisco, Dell, Fujitsu, Giga Computing, Google, Hewlett Packard Enterprise, Intel, Juniper Networks, Lenovo, Oracle, Qualcomm Technologies, Red Hat and Supermicro.
In a press briefing, system builders highlighted their own achievements within MLPerf and what they may mean for potential customers.
In the Stable Diffusion tests, VMware demonstrated how a Dell/NVIDIA setup running on VMware’s vSphere virtualization and data center management technology could deliver near or even superior results compared to bare metal deployments.

VMware executes Stable Diffusion at near or even better than bare metal performance, using a setup consisting of: Dell XE9680 with 8x virtualized NVIDIA SXM H100 80GB GPUs and Dell R760 with 2x virtualized NVIDIA L40S 80GB GPUs with vSphere 8.0.
The setup should be of interest to any organization running multiple workloads in addition to the LLM work. The virtual machines used in the tests were allocated through vSphere just 32 of the 120 to 224 available CPUs and only 1TB to 1.5TB of memory, yet they performed competitively with bare metal deployments.
Likewise, Red Hat, along with Supermicro, showed how to get competitive performance results in a Red Hat OpenShift AI environment, which — thanks to Kubernetes — provides maximum flexibility in creating, scheduling and monitoring AI/ML workloads.
The post NVIDIA H200 GPUs Crush MLPerf’s LLM Inferencing Benchmark appeared first on The New Stack.