NVIDIA A100 Ampere GPU Launched in PCIe Form Factor, 20 Times Faster Than Volta at 250W & 40 GB HBM2 Memory

NVIDIA has added a third variant to its growing Ampere A100 GPU family: the A100 PCIe, which is PCIe 4.0 compliant and comes in a standard full-length, full-height form factor rather than the mezzanine board we saw earlier.

Just like the Pascal P100 and Volta V100 before it, the Ampere A100 GPU was bound to get a PCIe variant sooner or later. NVIDIA has now announced that its A100 PCIe GPU accelerator is available for a diverse set of use cases, with systems ranging from a single A100 PCIe GPU to servers pairing two cards through 12 NVLink channels that deliver 600 GB/s of interconnect bandwidth.

In terms of specifications, the A100 PCIe accelerator doesn't change the core configuration. The GA100 GPU retains the specifications of the 400W variant: 6912 CUDA cores arranged in 108 SM units, 432 Tensor Cores, and 40 GB of HBM2 memory delivering the same 1.55 TB/s of bandwidth (rounded off to 1.6 TB/s). The main difference is the TDP, which is rated at 250W for the PCIe variant versus 400W for the standard variant.
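As a sanity check on the quoted memory bandwidth, the 1.55 TB/s figure works out from the memory interface width and per-pin data rate. A minimal sketch, assuming a 5120-bit bus and a ~2.43 Gbps effective pin speed (typical HBM2 values, not stated in the article):

```python
# Rough sanity check of the A100's quoted HBM2 bandwidth.
# Assumed values: 5120-bit memory bus (5 active HBM2 stacks x 1024 bits)
# and ~2.43 Gbps effective per-pin data rate -- assumptions, not from the article.
bus_width_bits = 5120
pin_rate_gbps = 2.43

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")  # ~1555 GB/s, i.e. the quoted 1.55 TB/s
```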

Now, we could guess that the card would feature lower clocks to compensate for the lower TDP, but NVIDIA has provided the peak compute numbers and those remain unaffected for the PCIe variant. FP64 performance is still rated at 9.7 TFLOPs (19.5 TFLOPs via Tensor Cores), FP32 at 19.5 TFLOPs, TF32 at 156 TFLOPs (312 TFLOPs with sparsity), FP16 at 312 TFLOPs (624 TFLOPs with sparsity), and INT8 at 624 TOPs (1,248 TOPs with sparsity).
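Those sparsity figures follow a simple pattern: Ampere's 2:4 structured sparsity doubles the dense Tensor Core peak for each format. A quick sketch of the numbers above:

```python
# Dense peak Tensor Core throughput per format, from the quoted specs
# (TFLOPs for TF32/FP16, TOPs for INT8).
dense = {"TF32": 156, "FP16": 312, "INT8": 624}

# With 2:4 structured sparsity, the Tensor Cores skip the zeroed half of
# each 4-element group, doubling effective throughput.
sparse = {fmt: rate * 2 for fmt, rate in dense.items()}
print(sparse)  # {'TF32': 312, 'FP16': 624, 'INT8': 1248}
```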

According to NVIDIA, the A100 PCIe accelerator can deliver 90% of the performance of the A100 HGX card (400W) in top server applications. This holds for bursty workloads where the card completes its tasks quickly; in complex workloads that demand sustained GPU utilization, performance can fall from 90% down to 50% of the 400W GPU in the most extreme cases. NVIDIA stated that the 50% drop will be very rare, with only a few tasks able to push the card to that extent.
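On a performance-per-watt basis, the trade-off NVIDIA describes mostly favors the PCIe card: 90% of the performance at 250W versus 400W implies a sizeable efficiency gain. A back-of-the-envelope calculation using the article's figures, not an NVIDIA-published metric:

```python
# Relative performance-per-watt of the 250W PCIe card vs. the 400W HGX card,
# using the 90% and 50% scaling figures quoted by NVIDIA.
hgx_tdp, pcie_tdp = 400, 250

for rel_perf in (0.90, 0.50):
    # Ratio of (PCIe perf / PCIe watts) to (HGX perf / HGX watts).
    perf_per_watt_gain = (rel_perf / pcie_tdp) / (1.0 / hgx_tdp)
    print(f"{rel_perf:.0%} perf -> {perf_per_watt_gain:.2f}x perf/W")
# 90% perf -> 1.44x perf/W; 50% perf -> 0.80x perf/W
```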

NVIDIA and its server partners are already enabling wide-scale adoption of the PCIe-based GPU accelerator, including:

  • ASUS will offer the ESC4000A-E10, which can be configured with four A100 PCIe GPUs in a single server.
  • Atos is offering its BullSequana X2415 system with four NVIDIA A100 Tensor Core GPUs.
  • Cisco plans to support NVIDIA A100 Tensor Core GPUs in its Cisco Unified Computing System servers and in its hyperconverged infrastructure system, Cisco HyperFlex.
  • Dell Technologies plans to support NVIDIA A100 Tensor Core GPUs across its PowerEdge servers and solutions that accelerate workloads from edge to core to cloud, just as it supports other NVIDIA GPU accelerators, software and technologies in a wide range of offerings.
  • Fujitsu is bringing A100 GPUs to its PRIMERGY line of servers.
  • GIGABYTE will offer G481-HA0, G492-Z50 and G492-Z51 servers that support up to 10 A100 PCIe GPUs, while the G292-Z40 server supports up to eight.
  • HPE will support A100 PCIe GPUs in the HPE ProLiant DL380 Gen10 Server, and for accelerated HPC and AI workloads, in the HPE Apollo 6500 Gen10 System.
  • Inspur is releasing eight NVIDIA A100-powered systems, including the NF5468M5, NF5468M6 and NF5468A5 using A100 PCIe GPUs, the NF5488M5-D, NF5488A5, NF5488M6 and NF5688M6 using eight-way NVLink, and the NF5888M6 with 16-way NVLink.
  • Lenovo will support A100 PCIe GPUs on select systems, including the Lenovo ThinkSystem SR670 AI-ready server. Lenovo will expand availability across its ThinkSystem and ThinkAgile portfolio in the fall.
  • One Stop Systems will offer its OSS 4UV Gen 4 PCIe expansion system with up to eight NVIDIA A100 PCIe GPUs to allow AI and HPC customers to scale out their Gen 4 servers.
  • Quanta/QCT will offer several QuantaGrid server systems, including D52BV-2U, D43KQ-2U and D52G-4U that support up to eight NVIDIA A100 PCIe GPUs.
  • Supermicro will offer its 4U A+ GPU system, supporting up to eight NVIDIA A100 PCIe GPUs and up to two additional high-performance PCI-E 4.0 expansion slots along with other 1U, 2U and 4U GPU servers.
NVIDIA hasn't announced a release date or pricing for the card yet, but considering the A100 (400W) Tensor Core GPU has been shipping since its launch, the A100 PCIe (250W) should follow in its footsteps soon.
