NVIDIA Blackwell B100 GPUs Coming This Year & Upgraded B200 For 2025’s AI Data Centers, Dell Confirms

NVIDIA's Blackwell AI GPU lineup will include two major accelerators, the B100 for 2024 and the B200 for 2025, as revealed by Dell.
During its recent earnings call, Dell revealed that NVIDIA's next-generation Blackwell lineup won't just include a B100 AI accelerator; an upgraded version will be offered later on. By the looks of it, NVIDIA has already disclosed this information to its largest partners, such as Dell, who will operate servers and data centers built around NVIDIA's latest AI and compute-ready hardware.
We know that NVIDIA's Blackwell GPUs are headed for launch this year and will make their formal debut at GTC 2024 this month, just a few weeks away. Current roadmaps point to the NVIDIA B100, GB200, and GB200 NVL, with B100 being the codename for the GPU itself, GB200 the Superchip platform, and GB200 NVL the interconnected platform for supercomputing use.
Currently, there are reports that NVIDIA's Blackwell GPUs might utilize a monolithic design, though nothing is concrete yet. What is known is that, like the Hopper H200 GPUs, the first-generation Blackwell B100 will leverage HBM3e memory. The upgraded B200 variant could therefore end up using an even faster version of HBM, along with possibly higher memory capacities, upgraded specs, and enhanced features. Samsung is said to be a major memory supplier for the Blackwell GPUs.
Obviously, any line of sight to changes that... we're excited about what's happening with the H200 and its performance improvement. We're excited about what happens at the B100 and the B200, and we think that's where there's actually another opportunity to distinguish engineering confidence. Our characterization in the thermal side, you really don't need direct liquid cooling to get to the energy density of 1,000 watts per GPU.
That happens next year with the B200. The opportunity for us really to showcase our engineering and how fast we can move and the work that we've done as an industry leader to bring our expertise to make liquid cooling perform at scale, whether that's things in fluid chemistry and performance, our interconnect work, the telemetry we are doing, the power management work we're doing, it really allows us to be prepared to bring that to the marketplace at scale to take advantage of this incredible computational capacity or intensity or capability that will exist in the marketplace.
Robert L. Williams - Dell Technologies Inc. - SVP of IR
Interestingly, Dell's SVP also points out that the power density of next-gen GPUs such as the NVIDIA Blackwell B100 & B200 is going to reach 1,000W per GPU, a huge figure but not an unexpected one for some of the world's fastest AI accelerators. The NVIDIA Hopper H200 and AMD Instinct MI300X already draw up to roughly 700W and 750W respectively at peak, so given the performance uplift Blackwell is expected to deliver, the extra few hundred watts should still translate into a net gain in performance per watt.
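To make that efficiency argument concrete, here is a minimal sketch of the performance-per-watt math. All figures are hypothetical placeholders for illustration only; NVIDIA has not published Blackwell performance or power specifications.

```python
# Rough perf-per-watt comparison using hypothetical numbers -- purely
# illustrative, not official NVIDIA or Dell figures.

def perf_per_watt(relative_performance: float, board_power_w: float) -> float:
    """Return relative performance normalized by board power."""
    return relative_performance / board_power_w

# Hypothetical baseline: an H200-class accelerator at ~700 W, performance = 1.0x.
hopper = perf_per_watt(1.0, 700)

# Hypothetical Blackwell-class part at 1,000 W; if it delivered, say, 2x the
# performance, the ~300 W increase would still improve overall efficiency.
blackwell = perf_per_watt(2.0, 1000)

print(f"Hypothetical efficiency gain: {blackwell / hopper:.2f}x")  # ~1.40x
```

In other words, as long as the generational performance gain outpaces the increase in board power, a higher absolute TDP still means better efficiency at the data center level.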
Power requirements for both CPUs & GPUs are going to continue to increase in the future, as pointed out a while back in Gigabyte's roadmap. To overcome this, companies will leverage advanced process and packaging technologies to keep the cost of chip development optimal while ensuring higher power efficiency.
News Source: Barron's