Micron Begins Producing HBM2 – Will Ship This Year

Micron's latest earnings report brings news that the company has begun production of second-generation High Bandwidth Memory (HBM2). Although a bit late to the game, Micron's entry into the HBM market should benefit end users.
Micron has announced that it will join Samsung and SK Hynix in producing High Bandwidth Memory, a high-performance, 2.5D-stacked alternative to the GDDRx memory commonly found on NVIDIA's consumer-grade GPUs and on all but AMD's top-of-the-line offerings.
HBM began as a project at AMD, developed in partnership with SK Hynix, and debuted in 2015 with AMD's Fiji-based family of GPUs; it has since been updated to what is now known as HBM2, which AMD adopted in its Vega GPUs. HBM reduces the overall footprint of a device by stacking memory dies on the same package as the GPU, connected through a silicon interposer, rather than spreading memory chips across the PCB. It also cuts power consumption by running at lower clocks and compensating with a much wider bus, which in turn increases bandwidth. Together, these traits enable smaller form-factor components and lower temperatures, and therefore smaller, less complicated cooling solutions.
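As a rough illustration of the wide-bus trade-off described above, peak memory bandwidth is simply bus width times per-pin data rate. The figures below are the published specs for first-generation HBM on the R9 Fury X (four 1024-bit stacks at 1 Gbps per pin) and GDDR5 on the R9 390X (512-bit bus at 6 Gbps per pin), not numbers taken from this article:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# First-gen HBM on the R9 Fury X: four 1024-bit stacks, 1 Gbps per pin
hbm_fury_x = bandwidth_gbs(4 * 1024, 1.0)   # 512.0 GB/s

# GDDR5 on the R9 390X: 512-bit bus, 6 Gbps per pin
gddr5_390x = bandwidth_gbs(512, 6.0)        # 384.0 GB/s

print(f"HBM (Fury X): {hbm_fury_x} GB/s, GDDR5 (390X): {gddr5_390x} GB/s")
```

Despite clocking each pin six times slower, HBM's 4096-bit aggregate bus delivers a third more bandwidth than the 390X's GDDR5, which is exactly where the power savings come from.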
AMD's Fiji GPUs showcased the improvements of HBM over GDDR5. The Radeon R9 Fury X achieved an overall length of 7.5", a notable shrink from the 12" footprint of the GDDR5-equipped R9 390X, while rated at a 275W TDP and incorporating a 120mm AiO liquid cooling solution. HBM allowed the Fiji GPU to reach a level of efficiency previously unattainable with GDDR5, and the result was the Radeon R9 Nano and Radeon Pro Duo.
The Radeon R9 Nano contained the same Fiji GPU as the R9 Fury X but was binned for efficiency, providing similar performance at a TDP of only 175W, an even shorter 6" length, and a dual-slot, single-fan cooling solution. These binned Fiji GPUs also made their way into what AMD claimed was "the world's fastest graphics card," a dual-GPU, liquid-cooled behemoth known as the Radeon Pro Duo, as well as its server-oriented variant, the passively cooled FirePro S9300 X2.
First-generation HBM was not without its drawbacks, though: each stack was limited to 1GB, capping HBM-equipped GPUs at 4GB of VRAM. Second-generation HBM2 solved this problem, and with the release of AMD's Vega-based GPUs, total capacities grew to between 8GB and 32GB.
Although HBM originated at AMD, NVIDIA has also adopted the technology, implementing HBM2 in its top-of-the-line professional and server-grade graphics cards such as the Tesla V100 and Titan V.
With Micron entering HBM production, consumers could see graphics card prices fall if memory manufacturers compete in earnest. HBM2 remains considerably expensive, but if competition drives prices down, GPUs in the same price bracket could ship with more memory.