NVIDIA DGX Station 320G Features Quad Ampere A100 GPUs With 320 GB Memory, 2.5 PFLOPs For $149,000 US

NVIDIA has just announced its brand-new DGX Station 320G AI system based on the Ampere A100 Tensor Core GPUs. The DGX Station 320G features updated NVIDIA A100 Tensor Core GPUs that pack double the memory and multiple petaflops of AI horsepower.
The NVIDIA DGX Station 320G is aimed at the AI market, accelerating machine learning and data science performance for corporate offices, research facilities, labs, or home offices everywhere. According to NVIDIA, the DGX Station 320G is designed to be the fastest server in a box dedicated to AI research.
NVIDIA's DGX Station 320G Powers AI Innovation
Organizations around the world have adopted DGX Station to power AI and data science across industries such as education, financial services, government, healthcare, and retail.
Coming to the specifications, the NVIDIA DGX Station 320G is powered by a total of four A100 Tensor Core GPUs. These aren't just any A100 GPUs: NVIDIA has updated the original specifications to accommodate twice the memory.
The NVIDIA A100 Tensor Core GPUs in the DGX Station 320G come packed with 80 GB of HBM2e memory, twice the memory size of the original A100. This gives the DGX Station a total of 320 GB of available GPU memory while fully supporting MIG (Multi-Instance GPU) and 3rd Gen NVLink, which offers 200 GB/s of bidirectional bandwidth between any GPU pair, roughly 3 times the interconnect speed of PCIe Gen 4. The rest of the specs for the A100 Tensor Core GPUs remain the same.
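The MIG partitioning mentioned above is driven from the host via `nvidia-smi`. As a rough sketch (the subcommands are real `nvidia-smi` MIG commands, but the `1g.10gb` profile name and available slice counts depend on the GPU and driver version, and running this requires an actual A100 with admin rights), splitting one GPU into independent slices looks like:

```shell
# Enable MIG mode on GPU 0 (a GPU reset may be required afterwards)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this A100 exposes
nvidia-smi mig -lgip

# Create two GPU instances using the (assumed) 1g.10gb profile,
# with -C creating the default compute instance inside each
sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb -C

# The MIG devices now show up as separately schedulable GPUs
nvidia-smi -L
```

Each resulting MIG device has its own dedicated memory slice and compute units, which is what lets simultaneous workloads run without contending for the full GPU.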
The system itself houses an AMD EPYC Rome 7742 64-core CPU with full PCIe Gen 4 support, up to 512 GB of dedicated system memory, a 1.92 TB NVMe M.2 SSD for the OS, and up to 7.68 TB of NVMe U.2 SSD storage for the data cache. For connectivity, the system carries two 10 GbE LAN controllers and a single 1 GbE LAN port for remote management. Display output is provided through a discrete DGX Display Adapter card, an add-in card with its own active cooling solution that offers four DisplayPort outputs with up to 4K resolution support.
Talking about the cooling solution, the DGX Station 320G houses the A100 GPUs on the rear side of the chassis. All four GPUs and the CPU are served by a refrigerant cooling system that is whisper-quiet and maintenance-free, with the compressor located inside the DGX chassis. The whole system is powered by a 1500W PSU, and the cooler operates at a silent 37 dB.
As for performance, the DGX Station 320G delivers 2.5 petaflops of AI training power and 5 PetaOPS of INT8 inferencing horsepower. It is also the only workstation of its kind to support MIG (Multi-Instance GPU), letting users partition individual GPUs so that simultaneous workloads run faster and more efficiently.
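Those headline figures line up with NVIDIA's published per-GPU A100 peak rates (with structural sparsity enabled) simply multiplied across the four GPUs. A quick sanity-check sketch, assuming the standard A100 datasheet numbers:

```python
# Per-GPU A100 Tensor Core peak rates with sparsity (NVIDIA datasheet values)
A100_FP16_TFLOPS_SPARSE = 624   # FP16 Tensor Core, TFLOPS
A100_INT8_TOPS_SPARSE = 1248    # INT8 Tensor Core, TOPS
NUM_GPUS = 4                    # four A100s in the DGX Station 320G

# Aggregate system peaks
ai_training_pflops = NUM_GPUS * A100_FP16_TFLOPS_SPARSE / 1000
int8_petaops = NUM_GPUS * A100_INT8_TOPS_SPARSE / 1000

print(f"AI training: {ai_training_pflops:.1f} PFLOPS")   # ~2.5 PFLOPS
print(f"INT8 inference: {int8_petaops:.1f} PetaOPS")     # ~5.0 PetaOPS
```

So the "2.5 PFLOPs" and "5 PetaOPS" figures are the straightforward four-way sum of a single A100's sparse FP16 and INT8 peaks.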
Over the original DGX Station, the new version offers a 3.17x increase in training performance, a 4.35x increase in inference performance, and a 1.85x increase in HPC-oriented workloads. NVIDIA has also updated its DGX A100 system to feature the 80 GB A100 Tensor Core GPUs. The upgraded system delivers 3 times faster training performance than the standard 320 GB DGX A100, 25% faster inference performance, and twice the data analytics performance.
Advancing AI with DGX SuperPOD
DGX SuperPODs are AI supercomputers featuring 20 or more NVIDIA DGX A100 systems and NVIDIA InfiniBand HDR networking. Several organizations are among the latest to deploy DGX SuperPODs to power new AI solutions and services.
The NVIDIA DGX Station 320G will be available later this year for $149,000 or on subscription for $9,000 per month. Cloud-native, multi-tenant NVIDIA DGX SuperPODs will be available in Q2 through NVIDIA's global partners, which can provide pricing to qualified customers upon request. NVIDIA Base Command will also be available starting in Q2.