AMD Confirms Hybrid Client & Server CPUs Are On The Way, Will Keep Pushing Core Counts Forward Too

AMD has confirmed that it is working on hybrid CPUs and will keep pushing core counts higher with its next-gen designs.
In an interview with Tom's Hardware, AMD CTO Mark Papermaster spilled the beans on some of the company's plans for the future, which involve hybrid chip designs, increased core counts, and a growing reliance on AI for chip design and manufacturing.
Papermaster points out that we are reaching the point where one chip no longer fits all needs, a trend that is most prevalent in the server segment. That is why the company offers a diverse range of solutions in its Zen 4 EPYC lineup, from the classic Zen 4 Genoa to the 3D V-Cache-equipped Genoa-X, Bergamo with its Zen 4C cores, and Siena for TCO- and power-optimized platforms. Recent rumors point to an even more diverse range of EPYC products in the upcoming Zen 5 and Zen 5C lineups.
According to AMD, we won't just see variations in core density, such as Zen 4 versus Zen 4C, but also variations in the types of cores themselves. This is similar to the approach Intel and Apple take with their current-generation CPUs, mixing high-performance cores with low-power cores optimized for maximum efficiency. The hybrid approach would also allow AMD to stack multiple 3D layers containing either cache or workload-specific accelerators.
AMD also reaffirms that the technology enabling higher core counts will keep moving forward, but that isn't the only path future chips will take. Higher core counts may matter most to one customer, while another may need the exact same core count plus some added acceleration, as mentioned above. Papermaster goes on to confirm that the current Ryzen 7040 CPUs are a taste of this hybrid technology and that we'll see more of it in the future.
But what you'll also see is more variations of the cores themselves, you'll see high performance cores mixed with power efficient cores mixed with acceleration. So where, Paul, we're moving to now is not just variations in core density, but variations in the type of core, and how you configure the cores. Not only how you've optimized for either performance or energy efficiency, but stacked cache for applications that can take advantage of it, and accelerators that you put around it.
When you go to the data center, you're also going to see a variation. Certain workloads move more slowly, you might be having a business where you haven't yet adopted AI and you're running transaction processing, closing your books every cycle, you're running an enterprise, you're not in the cloud, and you might have a fairly static core count. You might be in that sweet spot of 16 to 32 cores on a server. But many businesses are indeed adding point AI applications and analytics. As AI moves from not only being in the cloud, where the heavy training and large language model inferencing will continue, but you're going to see AI applications in the edge. And you know, it's going to be in enterprise data centers as well. They're also going to need different core counts, and accelerators.
I really think I can sum it up by saying we see the technology continuing to enable core counts going forward, but that is not the sole path to meeting customer needs. It has to be application dependent, and you have to be able to provide customers with the kind of diversity of computation elements they need. And that CPUs, and different types of CPUs, along with accelerators. And you need to give them flexibility as to how they can figure that solution based on the applications they are running.
Paul Alcorn: So, it's probably safe to say that a hybrid architecture will be coming to client [consumer PCs] some time?
Mark Papermaster: Absolutely. It's already there today, and you'll see more coming.
AMD CTO Mark Papermaster (via Tom's Hardware)
On the possibility of AMD using AI to help design and develop its chips, Papermaster said the company is already using such software to assist in chip design, and while it won't necessarily replace the engineering work done by humans, it can definitely help create better designs. NVIDIA has said much the same in the past and is also leveraging AI to help build its next-gen chips and to implement new techniques that streamline and accelerate their production.
The short answer to your question is, we're going to solve all of those constraints and you'll see more and more generative AI used in the very chip design process. It is being used in point applications today. But as an industry, over the next couple of years, next really one to two years, I think that we'll have the proper constraints to protect IP and you're going to start seeing production applications of generative AI to speed the design process.
It won't replace designers, but I think it has a tremendous capability to speed design.
And will it speed future chip designs? Absolutely. But we have a few hurdles that we have to get our arms around in the short term.
AMD CTO Mark Papermaster (via Tom's Hardware)
AMD has already made AI its number one strategic priority, and with hybrid designs on the way, AI looks set to become a major focus for the company going forward. AMD has the potential to become one of the biggest names in the AI segment, and we can't wait to see what it has in store for us in the coming years.