AMD Reaffirms EPYC Bergamo CPUs In 1H 2023, Instinct MI300 APUs In 2H 2023

AMD has reaffirmed the launch plans for its next-generation EPYC Bergamo CPUs and Instinct MI300 APUs, both of which are set to arrive this year.
AMD already has a lead on Intel with its EPYC Genoa CPUs, which launched months ahead of the Xeon Sapphire Rapids lineup. Fast forward to 2023, and AMD is planning to launch four brand new data center products: Genoa-X, Bergamo, Siena, and the Instinct MI300. During its recent Q4 2022 earnings call, AMD once again confirmed that its EPYC Bergamo CPUs will launch in 1H 2023, followed by the Instinct MI300 APUs in 2H 2023.
The AMD Instinct MI300 will be a multi-chip, multi-IP accelerator that not only features next-gen CDNA 3 GPU cores but is also equipped with next-generation Zen 4 CPU cores.
The latest specifications unveiled for the AMD Instinct MI300 accelerator confirm that this exascale APU is going to be a monster of a chiplet design. The chip will encompass several 5nm 3D chiplet packages that combine to house an insane 146 billion transistors, covering various core IPs, memory interfaces, interconnects, and much more. The CDNA 3 architecture is the fundamental DNA of the Instinct MI300, but the APU also comes with a total of 24 Zen 4 data center CPU cores and 128 GB of next-generation HBM3 memory running across an 8192-bit wide bus, which is truly mind-blowing.
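For context on those memory figures, each HBM3 stack exposes a 1024-bit interface, so the quoted 8192-bit bus and 128 GB capacity work out as follows (a back-of-the-envelope reading of the published numbers, not an AMD-confirmed stack layout):

```latex
\frac{8192\ \text{bit}}{1024\ \text{bit/stack}} = 8\ \text{HBM3 stacks},
\qquad
\frac{128\ \text{GB}}{8\ \text{stacks}} = 16\ \text{GB per stack}
```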
AMD will be utilizing both 5nm and 6nm process nodes for its Instinct MI300 'CDNA 3' APUs. The chip will be outfitted with the next generation of Infinity Cache and feature the 4th Gen Infinity architecture, which enables CXL 3.0 ecosystem support. The Instinct MI300 accelerator will rock a unified memory APU architecture and new math formats, allowing for a massive 5x performance-per-watt uplift over CDNA 2. AMD is also projecting over 8x the AI performance versus the CDNA 2-based Instinct MI250X accelerators. The CDNA 3 GPU's unified memory APU architecture (UMAA) connects the CPU and GPU to a single HBM memory pool, eliminating redundant memory copies while delivering a lower TCO.
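To make the unified memory point concrete, here is a minimal HIP-style sketch of what "eliminating redundant memory copies" looks like from the software side: a single managed allocation is touched by both the CPU and the GPU with no explicit host-to-device copy. This is an illustrative example built on the generic HIP runtime (hipMallocManaged), not MI300-specific code or AMD's published programming model for the chip.

```cpp
// Minimal sketch, assuming a HIP-capable toolchain (hipcc). Illustrative only;
// it shows the general unified-memory idea, not MI300-specific APIs.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;

    // Unified/managed allocation: one pointer visible to both CPU and GPU,
    // so there is no separate host buffer and no explicit hipMemcpy.
    hipMallocManaged((void**)&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;       // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);   // GPU works on the same buffer
    hipDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);                // CPU reads the GPU's result
    hipFree(data);
    return 0;
}
```

On a unified-memory APU, AMD's pitch is that allocations like this are backed by the same physical HBM pool shared by the Zen 4 cores and the CDNA 3 GPU, so the usual staging buffers and copy round trips simply disappear.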
In January, we previewed our next-generation MI300 accelerator that will be used for large model AI applications in cloud data centers and has been selected to power the 2-plus exaflop El Capitan exascale supercomputer at Lawrence Livermore National Laboratories.
MI300 will be the industry's first data center chip that combines a CPU, GPU and memory into a single integrated design, delivering 8x more performance and 5x better efficiency for HPC and AI workloads, compared to our MI250 accelerator currently powering the world's fastest supercomputer. MI300 is on track to begin sampling to lead customers later this quarter and launch in the second half of 2023.
In terms of when - we've talked before about sort of our Data Center GPU ambitions and the opportunity there. We see it as a large opportunity. As we go into the second half of the year and launch MI300, sort of the first user of MI300 will be the supercomputers or El Capitan, but we're working closely with some large cloud vendors as well to qualify MI300 in AI workloads. And we should expect that to be more of a meaningful contributor in 2024. So lots of focus on just a huge opportunity, lots of investments in software as well to bring the ecosystem with us.
AMD CEO, Lisa Su (Q4 2022 Earnings Call)
The AMD EPYC Bergamo chips will feature up to 128 cores and will take aim at Intel's HBM-powered Xeon chips as well as the higher-core-count, ARM-based server silicon from Apple, Amazon, and Google. Both Genoa and Bergamo utilize the same SP5 socket; the main difference is that Genoa is optimized for higher clocks while Bergamo is optimized for higher-throughput workloads.
Bergamo will launch in the first half of the year. We are on track for the Bergamo launch, and you'll see that become a larger contributor in the second half. So as we think about the Zen 4 ramp and the crossover to our Zen 3 ramp, it should be towards the end of the year, sort of in the fourth quarter, that you would see a crossover of sort of Zen 4 versus Zen 3, if that helps you.
AMD CEO, Lisa Su (Q4 2022 Earnings Call)
It is stated that AMD's EPYC Bergamo CPUs will arrive in the first half of 2023 and will run the same code as Genoa, with their compacted Zen 4c cores coming in at roughly half the size of Genoa's standard Zen 4 cores. The CPUs are specifically mentioned to compete against the likes of AWS's Graviton CPUs and other ARM-based solutions in markets where peak frequency isn't a requirement but throughput across many cores is. One example workload for Bergamo would be Java, where the extra cores can definitely come in handy. Following Bergamo will be the TCO-optimized Siena lineup for the SP6 platform, which will play a crucial role in driving AMD's TAM growth in the server segment.
AMD's EPYC & Instinct chips are expected to push the company's server market share to 30%, and possibly beyond, by the end of this year. The company has a strong roadmap laid out for the server market segment, and we can't wait to see how things evolve over the coming quarters.