AMD Responds To NVIDIA’s H100 TensorRT-LLM Results, Once Again Shows MI300X GPU Leading With 30% Better Performance Using Optimized AI Software Stack

AMD has responded to NVIDIA's H100 TensorRT-LLM figures with the MI300X once again leading in the AI benchmarks when running optimized software.
Two days ago, NVIDIA published new benchmarks of its Hopper H100 GPUs to show that the chips perform much better than AMD suggested during its "Advancing AI" event. At that event, the red team had compared its brand-new Instinct MI300X GPU against the Hopper H100, a chip that is now over a year old but remains the most popular choice in the AI industry. AMD's benchmarks, however, did not use optimized libraries such as TensorRT-LLM, which provide a big boost to NVIDIA's AI chips.
With TensorRT-LLM enabled, the Hopper H100 GPU gained an uplift of almost 50% over AMD's Instinct MI300X. Now, AMD is firing back at NVIDIA on all cylinders, showing that the MI300X still outpaces the H100 even when Hopper runs its optimized software stack. According to AMD, the numbers published by NVIDIA did not reflect an apples-to-apples comparison.
So AMD opted for a fairer comparison, and in the latest figures, the Instinct MI300X running vLLM delivers 30% faster performance than the Hopper H100 running TensorRT-LLM.
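To make the headline figure concrete, here is a minimal sketch of how a "30% faster" claim is derived from end-to-end inference latencies. The latency values below are purely hypothetical placeholders for illustration, not AMD's or NVIDIA's published measurements:

```python
def speedup(latency_baseline_ms: float, latency_contender_ms: float) -> float:
    """Relative performance gain of the contender over the baseline,
    computed from end-to-end inference latency (lower is better)."""
    return latency_baseline_ms / latency_contender_ms - 1.0

# Hypothetical illustrative latencies (NOT the vendors' published figures):
h100_trt_llm_ms = 2.6   # H100 running TensorRT-LLM
mi300x_vllm_ms = 2.0    # MI300X running vLLM

print(f"MI300X lead: {speedup(h100_trt_llm_ms, mi300x_vllm_ms):.0%}")
# → MI300X lead: 30%
```

Note that because the metric is latency, a 30% lead means the contender finishes the same work in roughly 77% of the baseline's time, not 70%.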
These results again show MI300X using FP16 is comparable to H100 with its best performance settings recommended by Nvidia even when using FP8 and TensorRT-LLM.
via AMD
Admittedly, this back-and-forth is somewhat unexpected, but given how important AI has become to the likes of AMD, NVIDIA, and Intel, we can expect to see more such exchanges in the future. Even Intel recently stated that the whole industry is motivated to end NVIDIA's CUDA dominance. The fact remains that NVIDIA has years of software expertise in the AI segment, and while the Instinct MI300X offers some beastly specs, it will soon be competing with an even faster Hopper solution in the form of the H200, along with the upcoming Blackwell B100 GPUs, in 2024.
Intel is also set to roll out its Gaudi 3 accelerators in 2024, which would further heat up the AI space. In a way, this competition makes for a vibrant and more lively AI industry where each vendor continues to innovate and outdo the others, offering customers better capabilities and even faster performance. NVIDIA, despite having had no real competition for years, has continued to innovate in this segment, and with AMD and Intel ramping up their AI hardware and software, we can expect NVIDIA to respond with even better hardware and software of its own.