AI Craze May Have Nerfed AMD’s & Intel’s Upcoming Chips: Strix APUs Originally Had Big Cache Which Boosted CPU & iGPU Performance

The recent AI craze may have nerfed some upcoming SoCs as chipmakers such as AMD & Intel prioritize the NPU over other core IPs.
We have recently seen an AI explosion in the PC segment, with every chipmaker talking up the AI capabilities of its chips and platforms. The segment is driven by a range of software innovations and by Microsoft's Windows Copilot, which has some hefty hardware requirements to support its AI functionality. Chipmakers are now betting heavily on the AI craze, and it looks like some have stepped outside their traditional chip development plans to prioritize AI over other parts of their newest SoCs coming to market later this year.
Over at the Anandtech forums, member Uzzi38 reports that AMD's Strix Point APUs, launching later this year, were originally planned to be quite different from the chips we will be getting soon. Allegedly, before AMD dedicated a large AI engine block to deliver that 3x "XDNA 2" NPU performance, the chip featured a large SLC (System-Level Cache) that would have boosted the performance of both the CPU (Zen 5) and the iGPU (RDNA 3+) by a great margin. However, that is no longer happening.
A follow-up comment on the matter came from adroc_thurston, who replied to Uzzi38 stating that Strix 1, or monolithic Strix Point, once had 16 MB of MALL cache before it was dropped. Intel has also invested heavily in its upcoming Arrow Lake, Lunar Lake, and Panther Lake chips, which will be aimed at the AI PC segment.
These AI blocks take up large portions of valuable die space that could have been dedicated elsewhere, such as higher core counts, larger iGPUs, bigger caches, and more, but it looks like the AI PC craze has pushed chipmakers to put standard CPU / iGPU performance in the backseat and focus on the NPU side of things. For Strix Point, AMD has touted a 3x gain with up to 50 TOPS, while Lunar Lake is set to offer 3x the NPU AI performance of Meteor Lake (~35 TOPS) and Panther Lake is going to further double it (~70 TOPS).
As of right now, it looks like until the AI bubble bursts (which doesn't seem to be happening anytime soon), chipmakers such as AMD & Intel are going to keep dedicating resources to faster NPUs. We will still see improvements on the CPU and GPU side in next-generation SoCs, but there will always be that untapped potential of what could have been had these companies focused elsewhere besides the NPU.