AMD Ryzen AI CPUs Knock Out Intel Core Ultra In AI LLMs & GenAI Benchmarks, Go Low Power & Cheaper With XDNA

AMD's Ryzen AI CPUs outshine Intel's Core Ultra chips in new AI benchmarks showcasing LLM & GenAI workloads.

AMD was the first to enter the AI PC space with its first-gen Ryzen AI CPUs, codenamed Phoenix, which were introduced last year. The company has since launched its refreshed Ryzen AI lineup, known as Hawk Point, which features an enhanced "XDNA" NPU delivering a 60% boost in AI TOPS. It looks like AMD has also put a lot of work into software optimizations for client-side, localized AI workloads, as demonstrated in new benchmarks published by the company.

In the new tests, AMD emphasizes running LLMs locally on your CPU, which is made possible with a range of models and techniques including Llama 2, Mistral, Code Llama, and retrieval-augmented generation (RAG). Running an AI model locally on your PC offers more privacy than cloud-hosted models, saves you subscription fees, and works without an internet connection. The company is pushing further into this space with its recent guide on setting up your own local AI chatbot, a rival to NVIDIA's Chat with RTX.
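For readers who want to try something similar, below is a minimal sketch of running a Q4_K_M-quantized Llama 2 7B Chat model locally on the CPU using llama-cpp-python. This is one common route for local GGUF models and not necessarily the tooling AMD's guide uses; the model file path, thread count, and prompt are placeholders.

```python
# Minimal sketch: running a quantized LLM locally on the CPU with llama-cpp-python.
# The model path below is a placeholder; any Q4_K_M GGUF build of Llama 2 7B Chat
# (or Mistral 7B Instruct) downloaded to disk would work the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # local quantized weights
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; tune to your core count
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what an NPU does in one sentence."}],
)
print(out["choices"][0]["message"]["content"])
```

Swapping in a Mistral 7B Instruct GGUF file works the same way; only the model path changes.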

For performance testing, AMD uses its Ryzen 7 7840U APU at 15W and compares it against the Intel Core Ultra 7 155H at 28W. Both chips run with 16 GB of LPDDR5-6400 memory & the latest driver packages.

First up is the Mistral Instruct 7B LLM, where the AMD Ryzen 7 7840U CPU completes the AI processing in just 61% of the time taken by the Intel offering. Llama 2 7B Chat is even faster, with the Ryzen AI chip completing the task in 54% of the time.

To simplify things, the AMD Ryzen 7 7840U (15W) CPU offers up to 14% faster performance in Llama v2 Chat 7B (Q4_K_M) and 17% faster performance in Mistral Instruct 7B (Q4_K_M). Time to first token is 79% faster in Llama v2 Chat and 41% faster in Mistral Instruct 7B.
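As a quick aside on reading these figures, a gain of X% corresponds to a task finishing in roughly 1/(1 + X/100) of the baseline time. Below is a small illustrative sketch of that conversion; the sample value is arbitrary and not one of AMD's data points.

```python
# Quick arithmetic helper: convert a "% faster" figure into the fraction of
# baseline time the same task would take. Illustrative only.
def relative_time(speedup_percent: float) -> float:
    """A task that is speedup_percent% faster takes this fraction of the baseline time."""
    return 1.0 / (1.0 + speedup_percent / 100.0)

# Example with an arbitrary value: a 25% speedup means ~80% of the baseline time.
print(f"{relative_time(25):.0%}")  # -> 80%
```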

There are two things to note here: not only is the AMD Ryzen AI CPU faster than the higher-wattage Intel Core Ultra CPU, but it achieves this with a slightly slower NPU rated at 10 TOPS versus the 11 TOPS of Intel's Core Ultra chip. That is without even comparing the 28W SKUs or the Hawk Point chips, which feature up to 16 TOPS of AI compute. AMD also highlights the cost advantage of its AI PC platforms, which start at around $899 (SEP) versus $999 (SEP) for the Intel Core Ultra 7 155H chips.

The AMD Ryzen AI PC platform has the advantage of having been on the market for some time now & that has allowed the red team to make swift AI optimizations for its existing and next-gen CPUs. These AI workloads are expected to be tuned further down the road in preparation for the Zen 5-powered Strix Point family, which is expected to launch later this year. Intel's Core Ultra platform has been out for a few months too, and the company has a bold AI strategy laid out for this space.
