
Air-Cooled AI SuperCluster
Industry-Leading AI Performance with Advanced Air-Cooling Technology
Supermicro's SuperCluster accelerated by the NVIDIA Blackwell Platform empowers the next stage of AI, defined by new breakthroughs, including the evolution of scaling laws and the rise of reasoning models. The SuperCluster provides the core infrastructure elements necessary to scale the NVIDIA Blackwell Platform and deploy the pinnacle of AI training and inference performance. SuperCluster simplifies the complexities of AI infrastructure by providing a fully validated AI cluster with a plug-and-play deployment experience.
Supermicro's new air-cooled SuperCluster is built around the Supermicro NVIDIA HGX B200 8-GPU system. Its redesigned 10U chassis accommodates the thermals of leading-edge AI compute performance, and the platform is designed to tackle heavy AI workloads of all types, from training to fine-tuning to inference.
Why Air-Cooled AI SuperCluster?
The new 10U air-cooled SuperCluster features Supermicro NVIDIA HGX B200 8-GPU systems with a redesigned chassis to accommodate the thermals of its leading-edge AI compute performance. It is purpose-built to tackle heavy AI workloads of all types, from training to fine-tuning to inference.
The 10U air-cooled NVIDIA HGX B200 8-GPU node offers an upgraded mechanical design to improve airflow over key components, including the GPU heatsinks and high-speed network cards. Each node contains 8 NVIDIA Blackwell GPUs (180 GB HBM3e each) interconnected at 1.8 TB/s via NVLink, providing a total of 1.4 TB of GPU memory per system.
The SuperCluster creates a massive pool of 256 GPUs acting as one AI supercomputer. Its rack-scale design provides a 1:1 GPU-to-NIC network ratio (using NVIDIA ConnectX-7 NICs or BlueField-3 DPUs), enabling a non-blocking, high-performance fabric across the entire 9-rack cluster.
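As a quick sanity check on these figures, the short Python sketch below (our own illustration; the constant names are not Supermicro's) reproduces the per-node, per-rack, and cluster totals quoted on this page.

```python
# Minimal sketch (Python) of the capacity arithmetic above; variable names are
# illustrative, and all figures come from this page.
GPUS_PER_NODE = 8          # NVIDIA HGX B200 8-GPU system
HBM_PER_GPU_GB = 180       # HBM3e per B200 GPU
NICS_PER_NODE = 8          # 1:1 GPU-to-NIC ratio (ConnectX-7 or BlueField-3)
NODES_PER_RACK = 4         # compute nodes per rack
NODES_PER_CLUSTER = 32     # 32-node scalable unit

node_hbm_tb = GPUS_PER_NODE * HBM_PER_GPU_GB / 1000       # ~1.44 TB per node
rack_gpus = NODES_PER_RACK * GPUS_PER_NODE                # 32 GPUs per rack
rack_hbm_tb = rack_gpus * HBM_PER_GPU_GB / 1000           # 5.76 TB per rack
cluster_gpus = NODES_PER_CLUSTER * GPUS_PER_NODE          # 256 GPUs
cluster_nics = NODES_PER_CLUSTER * NICS_PER_NODE          # 256 fabric endpoints

print(f"Per node : {GPUS_PER_NODE} GPUs, {node_hbm_tb:.2f} TB HBM3e")
print(f"Per rack : {rack_gpus} GPUs, {rack_hbm_tb:.2f} TB HBM3e")
print(f"Cluster  : {cluster_gpus} GPUs, {cluster_nics} fabric endpoints")
```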

Rack Scale Design Close-up
Networking
- NVIDIA Quantum-2 400G InfiniBand switches or NVIDIA Spectrum-4 400GbE Ethernet switches dedicated for compute and storage
- Ethernet leaf switches for in-band management
- Out-of-band 1G/10G IPMI switch
- Non-blocking network topology
Compute
- 4× SYS-A22GA-NBRT / AS-A126GS-TNBR / SYS-A21GE-NBRT per rack
- 32× NVIDIA B200 GPUs per rack
- 5.76 TB of HBM3e memory per rack
- Flexible storage options with local or dedicated fabric supporting NVIDIA GPUDirect RDMA and RoCE
Node Configuration
| Overview | 10U air-cooled system with NVIDIA HGX B200 8-GPU |
|---|---|
| CPU | Dual Intel® Xeon® 6900 series or Dual AMD EPYC™ 9005/9004 (SYS-A22GA-NBRT / AS-A126GS-TNBR / SYS-A21GE-NBRT) |
| Memory | 24 DIMMs up to DDR5-6400 / 24 DIMMs up to DDR5-6000 / 32 DIMMs up to DDR5-5600 |
| GPU | 8× NVIDIA B200 (180 GB HBM3e each); 1.8 TB/s NVLink with NVSwitch |
| Networking | 8× ConnectX-7 NICs or BlueField-3 DPUs (up to 400 Gbps each) |
| Storage | 10× hot-swap 2.5″ NVMe bays; 2× M.2 NVMe slots |
| Power Supply | 6× 5250 W Titanium Level redundant PSUs |

32-Node Scalable Unit
| Overview | Fully integrated air-cooled 32-node cluster with 256 NVIDIA B200 GPUs |
|---|---|
| Compute Fabric (Leaf) | 8× Quantum-2 400G InfiniBand or 8× Spectrum-4 400GbE switches |
| Compute Fabric (Spine) | 4× Quantum-2 400G InfiniBand or 4× Spectrum-4 400GbE switches |
| In-band Mgmt Switch | 3× Spectrum SN4600 100GbE switches |
| Out-of-band Mgmt Switch | 2× SSE-G3748R-SMIS 48-port 1GbE ToR; 1× SSE-F3548SR 48-port 10GbE ToR |
| Rack | 9× 48U racks, 750 mm (W) × 1295 mm (D) |
| PDU | 34× 208 V / 60 A three-phase |
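The leaf and spine counts above are consistent with a non-blocking two-tier fabric. The Python sketch below walks through the port budget, assuming 64× 400G ports per leaf and spine switch (as on the Quantum-2 MQM9700 listed further down); the node and switch counts come from the table.

```python
# Hedged sketch: port budget for the non-blocking leaf/spine fabric above.
# Assumes 64x 400G ports per switch; switch and node counts are from the table.
NODES = 32
NICS_PER_NODE = 8               # 1:1 GPU-to-NIC ratio
PORTS_PER_SWITCH = 64           # 400G ports per leaf/spine switch (assumption)
LEAF_SWITCHES = 8
SPINE_SWITCHES = 4

endpoints = NODES * NICS_PER_NODE                     # 256 host-facing ports needed
down_per_leaf = endpoints // LEAF_SWITCHES            # 32 host ports per leaf
up_per_leaf = PORTS_PER_SWITCH - down_per_leaf        # 32 uplinks per leaf
total_uplinks = up_per_leaf * LEAF_SWITCHES           # 256 uplinks toward the spine
spine_ports = SPINE_SWITCHES * PORTS_PER_SWITCH       # 256 spine ports available

assert up_per_leaf >= down_per_leaf, "leaf layer would be oversubscribed"
assert spine_ports >= total_uplinks, "not enough spine ports"
print(f"{endpoints} endpoints, {down_per_leaf} down / {up_per_leaf} up per leaf, "
      f"{total_uplinks} uplinks vs {spine_ports} spine ports: non-blocking")
```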

Software & Services
Software: Supermicro's SuperCloud Composer software provides management tools for monitoring and optimizing air- or liquid-cooled infrastructure, delivering a complete solution from proof of concept to full-scale deployment. It allows administrators to manage all data center components – compute, storage, and networking – through one unified dashboard. The SuperCluster also supports NVIDIA AI Enterprise software to accelerate time-to-production for AI workloads, with NVIDIA NIM microservices enabling easy access to deploy the latest AI models and agents fully optimized for the new NVIDIA Blackwell platform.
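As an illustration of the NIM workflow (a sketch, not Supermicro or NVIDIA documentation), the snippet below shows how a client could query a NIM microservice through its OpenAI-compatible HTTP API once a model container is running on the cluster; the endpoint URL and model name are placeholders for your own deployment.

```python
# Hedged example: querying a deployed NVIDIA NIM microservice over its
# OpenAI-compatible HTTP API. The endpoint URL and model name below are
# placeholders; substitute the NIM container you actually deploy.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
MODEL = "meta/llama-3.1-8b-instruct"                    # placeholder model name

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarize the HGX B200 node layout."}],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```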
Services: Supermicro's on-site rack deployment service helps enterprises build a data center from the ground up, including the planning, design, power-up, validation, testing, installation, and configuration of racks, servers, switches, and other networking equipment to meet specific needs.
GPU SuperServer SYS-A22GA-NBRT
- Applications: AI / Deep Learning, Scientific Research, Conversational AI, Drug Discovery, High Performance Computing (HPC)
- Price: 332 154.86 €

GPU A+ Server AS-A126GS-TNBR
- Applications: High Performance Computing, AI/Deep Learning Training, Industrial Automation, Retail, Healthcare, Conversational AI, Business Intelligence & Analytics, Drug Discovery, Climate and Weather Modeling, Finance & Economics
- Price: 328 178.31 €

NVIDIA Quantum-2 MQM9700-NS2F
- Spine or top-of-rack leaf switch; 32× OSFP ports (64× NDR InfiniBand ports); MLNX-OS
- Part number: 920-9B210-00FN-0M0
- Price: 30 215.64 €
ServerSimply Air-Cooled AI SuperCluster Solutions
Step into high-performance AI computing with ServerSimply Air-Cooled AI SuperCluster Solutions, featuring Supermicro's 10U air-cooled NVIDIA HGX™ B200 8-GPU systems with a redesigned chassis for enhanced thermal headroom and airflow efficiency. Each system delivers eight 180 GB HBM3e GPUs interconnected at 1.8 TB/s via NVIDIA NVLink and a 1:1 GPU-to-NIC ratio, supporting both high-speed Ethernet and InfiniBand fabrics for ultra-low-latency, high-throughput AI workloads. Scalable rack integration across nine air-cooled racks per scalable unit enables seamless expansion to larger GPU clusters, offering businesses a cost-effective and serviceable solution for AI model training and inference.
