Intel Shares Real-World Xeon Sapphire Rapids vs EPYC Genoa Benchmarks Ahead of AMD’s Data Center Event


Intel has published more "Real-World" benchmarks of its Sapphire Rapids Xeon CPUs versus the AMD EPYC Genoa offerings. The benchmarks come just a day ahead of AMD's grand data center event where the company will be unveiling its brand-new products and talking about what's next.

In its presser, Intel pits its 32-core Sapphire Rapids Xeon chip against a 32-core AMD EPYC Genoa chip to showcase performance on mainstream server platforms. Chipzilla is also showing CPU performance comparisons between its flagship 56-core Xeon Max chip and AMD's top 96-core chip. The "Real-World" benchmarks used for the comparison focus on mainstream compute, HPC, and AI workloads.

Kicking things off with the AI performance benchmarks, Intel is touting up to a 7.11x performance increase for the Intel Xeon 8462Y (Sapphire Rapids) CPU versus the AMD EPYC 9354 (Genoa). All of the benchmarks show Intel's Sapphire Rapids leading not only in overall performance but also in performance per watt. These workloads utilize Intel AMX (Advanced Matrix Extensions), featured on Sapphire Rapids CPUs, which delivers a boost to AI-specific tasks such as Classification, Natural Language Processing, Recommender, and Detection.
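For readers who want to see what the AMX path looks like in practice, the snippet below is a minimal sketch (not Intel's actual test setup) of how bf16 inference is commonly enabled on 4th Gen Xeon using PyTorch plus Intel Extension for PyTorch; the toy model and shapes are illustrative, and whether oneDNN actually dispatches to AMX depends on the CPU and library versions.

```python
# A minimal sketch, not Intel's test setup: bf16 inference on a 4th Gen Xeon,
# letting oneDNN dispatch the heavy matmuls to AMX tile instructions where available.
import torch
import intel_extension_for_pytorch as ipex  # assumption: IPEX is installed alongside PyTorch

# Toy classifier standing in for the kinds of models in Intel's slides (BERT, DLRM, ResNet).
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1000),
).eval()

# ipex.optimize repacks weights and layouts for bf16 so the linear layers can use AMX.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(32, 1024)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    logits = model(x)
print(logits.shape)  # torch.Size([32, 1000])
```

Whether the bf16 path lands on AMX rather than plain AVX-512 is exactly the hardware distinction Intel is drawing here: Zen 4 supports AVX-512 (including BF16 and VNNI) but has no AMX equivalent.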

Moving over to a broader set of workloads, ranging from SPECint to MySQL, Cassandra, and MongoDB, and also including workloads that utilize Intel's accelerator engines such as Microsoft SQL, GROMACS, LAMMPS, NAMD, and Monte Carlo, we see up to a 2.52x improvement in overall performance and a 2.51x improvement in performance per watt versus AMD's 4th Gen EPYC Genoa chips. The biggest gains are seen in the Storage and HPC-specific benchmarks. General-purpose workloads see a sub-1x improvement on average, whereas Micro & Data Services see a 20 to 30% improvement.

Summing up the TCO figures, Intel shares that opting for Xeon CPUs from its Sapphire Rapids family can yield up to 8% savings in Database (PostgreSQL), 35% savings in Database (Microsoft SQL 2022 + QAT Backup), 38% savings in HPC (Black-Scholes), 61% savings in AI (DLRM), and up to 79% savings in AI Natural Language (BERT-Large).

In Mainstream Compute, Faster Time to Insights and Access to Data

The most commonly deployed solutions in the market are delivered on mid-range core counts, a segment where per-core performance, power and throughput are critical key performance indicators. Knowing this, Intel compared a 32-core 4th Gen Xeon against the competition’s best mainstream 32-core part.

General-purpose benchmarks like SPEC CPU are important, but don’t tell the whole performance story for customers whose workload needs continue to evolve. The reality is that on workloads that matter most to customers, such as database, networking and storage, Xeon easily beats the competition by offering greater CPU performance, higher performance per watt and lower overall total cost of ownership (TCO). Furthermore, customers see important sustainability benefits in the form of reduced server numbers, fleet power usage and CO2 emissions.

Increased AI Efficiency Improves Customer Experience, Drives Revenue Growth

Xeon is architected for AI, and Intel’s investment in software enables and optimizes AI across all major frameworks, libraries and model types. Intel’s testing demonstrates its continued CPU leadership on AI workloads leveraging its advanced hardware acceleration technology, Intel® Advanced Matrix Extensions (Intel® AMX).

More cores aren’t always the answer to achieve optimal performance. Intel® AMX allows 4th Gen Xeon to scale at an incredible rate, beyond what’s possible with core counts alone. This leading Intel AI engine is built into each Xeon core, something the competition does not have and its customers cannot benefit from.

HPC Leadership Delivers Better Performance for Modeling, Forecasting, Predictive Simulations

When testing industry-specific HPC workloads, Intel pitted its 56-core Intel® Xeon® CPU Max Series processor featuring Intel® AVX-512 against the competition’s top-bin 96-core offering. By combining the best of compute with high memory bandwidth and Intel HPC engines, Xeon Max CPUs drive a 40% performance advantage over the competition in many real-world HPC workloads, such as earth systems modeling, energy and manufacturing.

via Intel
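As a quick aside, before weighing numbers like these against your own hardware, it is worth confirming that a host actually exposes the AVX-512 and AMX features the press release leans on. The snippet below is a minimal, Linux-only sketch that reads the standard /proc/cpuinfo flag names.

```python
# Minimal, Linux-only sketch: check whether the host advertises the AVX-512 and
# AMX CPU features referenced above, using the standard /proc/cpuinfo flag names.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx512f", "avx512_bf16", "amx_tile", "amx_bf16", "amx_int8"):
    print(f"{feature:12} {'present' if feature in flags else 'absent'}")
```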

Intel also pits its flagship Xeon Max 9480 CPU, which comes with on-package HBM memory and 56 cores, against AMD's fastest EPYC 9654 96-core CPU. Pure HPC workloads were used for the comparison, and the results show the Xeon Max chip delivering over 2x the performance of the Genoa chip.

This is definitely an interesting comparison: Intel is highlighting that the extra on-package memory of its Xeon Max CPUs can offset some of the performance advantages tied to core count. AMD, on the other hand, has only recently brought its EPYC Genoa chips to market, and there is still plenty of performance left to be extracted from these high-core-count Zen 4 parts.

AMD's answer to Xeon Max will arrive with the upcoming Genoa-X chips which are expected to launch tomorrow along with the 128-core EPYC Bergamo parts. Full footnotes of these Intel real-world benchmarks can be found here.
