AMD Preps For ROCm 6.1 Release, Now Coming With Enhanced Support & Features

AMD is preparing to release its ROCm 6.1 software stack, which brings enhanced support and extensive optimization changes.

Team Red has been ramping up ROCm development recently: back in December 2023, we saw the debut of ROCm 6.0, which brought support for AMD's Instinct MI300A/MI300X AI accelerators along with several changes to improve the state of the stack. ROCm 6.1 is similar in nature, but instead of focusing on product compatibility, the latest update delivers a wave of new additions and fixes to enhance the computing capabilities of AMD's AI offerings.

Phoronix reports that AMD has been pushing continuous updates to the ROCm software stack, suggesting that the new iteration is nearing release. Team Red has added new features to its MIOpen open-source deep learning library, which now includes an AI-based parameter prediction model along with several fixes. Moreover, MIGraphX, AMD's graph inference engine, has gained FP8 support and extended coverage for generative AI models such as Llama-2 and Stable Diffusion 2.1.

Another exciting development involves changes to AMD's HIP API, which targets code portability and programming on heterogeneous computing systems. It now comes with a new "hipother" repository that aims to provide the API's back-end implementation for platforms other than AMD's. We recently saw an example of such code porting when CUDA libraries were run on the ROCm platform through ZLUDA. While heterogeneous programming hasn't received much attention lately, that could change going forward.
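For context, here is a minimal sketch of what HIP's single-source portability looks like in practice. It is not tied to ROCm 6.1 or the hipother repository specifically; it simply assumes a system with the hipcc compiler installed, and the same vector-add source can be built for AMD GPUs under ROCm or, via HIP's CUDA back end, for NVIDIA hardware.

```cpp
// Minimal HIP vector-add sketch (assumes a working hipcc toolchain):
//   hipcc vector_add.cpp -o vector_add
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch 256-thread blocks covering the whole array.
    dim3 block(256), grid((n + 255) / 256);
    hipLaunchKernelGGL(vector_add, grid, block, 0, 0, da, db, dc, n);
    hipDeviceSynchronize();

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]); // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```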

AMD officials have stated that ROCm has reached software parity with CUDA for large language model training, and continued updates could put the stack in a position to compete head-to-head with its industry rivals.

News Source: Phoronix
