In the ever-evolving world of Artificial Intelligence (AI), benchmark tests that gauge the performance of AI models and hardware have never been more crucial. On September 12, 2023, MLCommons, an artificial intelligence benchmark group, unveiled the results of its latest benchmark tests, shedding light on how efficiently cutting-edge hardware can run AI models. The tests showcased the capabilities of Nvidia Corp and Intel Corp in running a large language model with 6 billion parameters designed to summarize CNN news articles. This benchmark, known as MLPerf, simulates the "inference" phase of AI data processing, the stage that powers generative AI tools.
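MLPerf's actual test harness is far more elaborate, but the core idea of an inference benchmark can be illustrated with a toy sketch: send a batch of queries to a model and measure throughput and latency. Everything below is illustrative, not MLPerf code; the `summarize` function is a stand-in for a real model call.

```python
import time
import statistics

def summarize(article: str) -> str:
    # Stand-in for a real model call (e.g., a 6B-parameter LLM).
    # Here we simply return the first sentence for illustration.
    return article.split(".")[0] + "."

def benchmark(queries, fn):
    """Run fn over each query, recording per-query latency and
    overall throughput, the two headline inference metrics."""
    latencies = []
    start = time.perf_counter()
    for q in queries:
        t0 = time.perf_counter()
        fn(q)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_qps": len(queries) / elapsed,   # queries per second
        "p50_latency_s": statistics.median(latencies),
    }

articles = ["AI chips are getting faster. More detail follows."] * 100
results = benchmark(articles, summarize)
```

In a real submission, the model call dominates the runtime and results are reported separately per scenario (e.g., offline throughput versus server latency); the sketch only shows the shape of the measurement.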
Nvidia Takes the Lead
In this high-stakes competition, Nvidia Corp emerged as the front-runner, its chips delivering the best results on the inference benchmark. Nvidia achieved this with its flagship H100 chips, and Dave Salvator, the company's director of accelerated computing marketing, emphasized their ability to deliver leadership performance across a variety of workloads. Nvidia's dominance in training AI models has been undisputed, but success in the inference market is a significant milestone.
Nvidia's MLPerf submission used eight of its flagship H100 chips. The result not only showcased the power of its hardware but also underscored its commitment to excelling in AI across the board: while Nvidia's success in AI training is well known, its push into inference demonstrates an ambition to compete in every phase of AI computing.
Intel’s Strong Showing
Intel Corp, another heavyweight of the tech industry, secured second place in the MLPerf benchmark. Its success rested primarily on the Gaudi2 chips produced by its Habana unit, which Intel acquired in 2019. Although the Gaudi2 system ran approximately 10% slower than Nvidia's, the result underscored Intel's capabilities in the AI hardware arena.
Eitan Medina, Habana's Chief Operating Officer, expressed pride in the results, emphasizing the price-performance advantage of the Gaudi2 chips. It is a noteworthy achievement for Intel, showcasing its commitment to cost-effective AI solutions.
One intriguing aspect of this competition is the price-performance battle between Nvidia and Intel. While Nvidia's hardware posted the stronger performance, Intel argues that its system is more cost-effective, with pricing roughly equivalent to Nvidia's last-generation A100 systems. Neither company, however, disclosed exact pricing during the presentation of the benchmark results.
Intel's focus on delivering cost-effective solutions could give it an edge in the AI hardware market: affordability can be a decisive consideration for organizations looking to adopt AI at scale.
Nvidia, for its part, hinted at further improvements in hardware and software, with plans to roll out a software upgrade that would double its performance as measured by MLPerf. That commitment to continuous improvement highlights how fierce competition in the AI hardware industry has become.
A Glimpse of the Future
In a parallel development, Alphabet's Google unit offered a glimpse of the performance of its latest custom-built chip at its August cloud computing conference, a reminder that innovation in AI hardware continues across the tech industry and promises exciting developments in the near future.
In conclusion, the MLPerf results shed light on the current state of AI hardware, with Nvidia leading the pack and Intel close behind. The price-performance dynamics between the two tech giants add an intriguing layer to the competition, with potential implications for the broader adoption of AI technologies. As the AI landscape continues to evolve, these tests serve as valuable yardsticks for progress and innovation in the field of artificial intelligence.