AMD Says Its Instinct GPUs Made NVIDIA Step On The AI Accelerator Pedal, Yearly Cadence Is A Response To NVIDIA Trying To Block Everyone Out

AMD says that the competitiveness of its Instinct GPUs pushed NVIDIA to go all out with its own AI roadmap, and the red team isn't going to go easy on its rival in the future.
The details come from CRN's interview with Forrest Norrod, AMD's Executive Vice President & General Manager of the Data Center Solutions Business Group, who says it was AMD and its Instinct GPU roadmap that pushed NVIDIA to hit the accelerator pedal hard on its data center AI roadmap, which has now shifted to a yearly cadence. It's not just NVIDIA: AMD and Intel have also adopted a yearly cadence, going all-in on AI with no letup in the momentum that began back in 2023.
Forrest states that the AI segment continues to evolve at a rapid pace, and with Lisa Su's commitment to AI, the company is making sure to meet customer demand through continued innovation on both the silicon and software side. AMD has been fine-tuning its robust ROCm software suite for data centers and consumers, and we recently saw the company unveil a vast portfolio of Instinct AI accelerators arriving through 2024-2026 in the form of the MI325, MI350, and MI400 series.
The most interesting comment from Forrest concerned NVIDIA's recently accelerated roadmap: Blackwell this year, Blackwell Ultra next year, and the next-generation Rubin accelerators and their Ultra follow-up in 2026 and 2027, respectively. Forrest says that NVIDIA stepped on the accelerator pedal after its "Holy Crap" moment, which was AMD's Instinct MI300 launch. He says that NVIDIA "deliberately" stepped on the accelerator hard, trying to block AMD and everyone else out of the AI segment, but the fight is on and AMD isn't stepping back anytime soon.
And then the other dynamic, of course, is just competitively. Nvidia, quite candidly, stepped on their accelerator pedal, and when they saw that—'holy crap. AMD has got a real part; they're going to be a real competitor’—they very deliberately stepped on the accelerator trying to block us and everybody else out. And so we're responding to that as well.
Forrest Norrod - AMD Executive (via CRN)
AMD says it is investing heavily in R&D on the data center side and has been steadily ramping up chip production. The company also doesn't want people to think it is only now reacting to NVIDIA's announcements; in fact, it has been taking the fight to NVIDIA for quite some time.
Talking about its upcoming MI325 and MI350 refreshes, AMD states that these AI accelerators will further narrow the gap with NVIDIA's launches. The MI325X, scheduled for release this year, is said to "handily" outpace NVIDIA's Hopper H200 accelerator and to be competitive in several regards with the Blackwell B100, which is expected to ship later this year. The next-gen MI350 series, built on the CDNA 4 architecture, will take the fight to the Blackwell B200. Both NVIDIA's B200 and AMD's MI350 series are planned as 2025-volume launches, so the battle is going to be close and interesting to watch.
For both of those, we think we are closing the gap, narrowing the gap between the introduction of Nvidia's part and the introduction of our same generation part. So [MI]325[X], I'd say, handily outdoes H200 and is competitive in many regards with [Nvidia’s upcoming] B100. And obviously, it'll be out a little bit behind H200.
And then [MI]350 [based on the CDNA 4 GPU architecture] I would say is a great part that we think is higher performance than what we see projected for [Nvidia’s] B200. We think B200 is really a 2025 part for any sort of volume, and so is [MI]350.
Forrest Norrod - AMD Executive (via CRN)
Talking about NVIDIA's GH200 Grace Hopper and GB200 Grace Blackwell solutions, AMD acknowledges that co-optimizing CPU and GPU architectures is a good thing and notes that it has already laid groundwork in this department with the EPYC (Trento) CPUs and Instinct MI250 GPUs powering the Frontier supercomputer, the first system to break the Exaflop barrier and also the most efficient supercomputer on the planet at such high compute capabilities.
With its MI300A series, AMD offers a tighter package in the form of an APU, but there isn't always demand for an APU, which is why the company offers both MI300X and MI300A solutions to the relevant customers. AMD also highlights its open ecosystem backed by the UE (Ultra Ethernet) and UAL (Ultra Accelerator Link) platforms, which give customers more choice and design flexibility, allowing them to mix CPUs, GPUs, accelerators, and other IPs without worrying about the proprietary lock-in that comes with using NVIDIA, for instance.
Finally, Forrest hits back at Intel and its recently disclosed prices for Gaudi 3 accelerators, saying that the list prices are a complete waste of time. In its AI-related announcements, Intel has highlighted the competitive value of its Gaudi platforms, but AMD says it would be very surprised if more than 10% of Intel's AI products were sold at the published price, which makes the figures look like marketing fluff.
All three vendors are engaged in a very heated battle for their share of the AI pie, and while NVIDIA remains dominant with a massive market-share lead, AMD and Intel look firmly committed to this segment. Overall, the interview is an interesting read, so do check out the full thing over at CRN.