Air-Cooled AI SuperCluster

Industry Leading AI Performance with Advanced Air-Cooling Technology

Supermicro's SuperCluster accelerated by the NVIDIA Blackwell Platform empowers the next stage of AI, defined by new breakthroughs, including the evolution of scaling laws and the rise of reasoning models. The SuperCluster provides the core infrastructure elements necessary to scale the NVIDIA Blackwell Platform and deploy the pinnacle of AI training and inference performance. SuperCluster simplifies the complexities of AI infrastructure by providing a fully validated AI cluster with a plug-and-play deployment experience.

Supermicro's new air-cooled SuperCluster is built from the new Supermicro NVIDIA HGX B200 8-GPU systems. Featuring a redesigned 10U chassis that accommodates the thermals of their leading-edge AI compute performance, the cluster is designed to tackle heavy AI workloads of all types, from training to fine-tuning to inference.


Why Air-Cooled AI SuperCluster?

The 10U chassis has been redesigned around the thermal envelope of the NVIDIA HGX B200 8-GPU baseboard, keeping leading-edge AI compute fully air-cooled. It is purpose-built to tackle heavy AI workloads of all types, from training to fine-tuning to inference.

The 10U air-cooled NVIDIA HGX B200 8-GPU node offers an upgraded mechanical design to improve airflow over key components, including the GPU heatsinks and high-speed network cards. Each node contains 8 NVIDIA Blackwell GPUs (180 GB HBM3e each) interconnected at 1.8 TB/s via NVLink, providing a total of 1.4 TB of GPU memory per system.

The SuperCluster creates a massive pool of 256 GPUs acting as one AI supercomputer. Its rack-scale design provides a 1:1 GPU-to-NIC network ratio (using NVIDIA ConnectX-7 NICs or BlueField-3 DPUs), enabling a non-blocking, high-performance fabric across the entire 9-rack cluster.
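
As a back-of-envelope illustration, the sketch below derives the per-node, per-rack, and per-cluster totals from the figures quoted on this page (8 GPUs per node, 180 GB of HBM3e per GPU, 4 nodes per rack, 32 nodes per scalable unit, one 400 Gbps NIC per GPU); these are derived estimates, not vendor-measured values.

```python
# Back-of-envelope sizing for the air-cooled SuperCluster, using only the figures
# quoted on this page. All results are derived estimates, not measured values.
GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 180
NODES_PER_RACK = 4
NODES_PER_UNIT = 32
NIC_GBPS = 400  # one ConnectX-7 NIC or BlueField-3 DPU per GPU (1:1 ratio)

node_hbm_tb = GPUS_PER_NODE * HBM_PER_GPU_GB / 1000        # ~1.44 TB per system
rack_hbm_tb = node_hbm_tb * NODES_PER_RACK                 # ~5.76 TB per rack
unit_gpus = GPUS_PER_NODE * NODES_PER_UNIT                 # 256 GPUs per scalable unit
unit_hbm_tb = node_hbm_tb * NODES_PER_UNIT                 # ~46 TB of HBM3e per unit
node_fabric_tbps = GPUS_PER_NODE * NIC_GBPS / 1000         # 3.2 Tbps of fabric bandwidth per node

print(f"HBM3e per node: {node_hbm_tb:.2f} TB")
print(f"HBM3e per rack: {rack_hbm_tb:.2f} TB")
print(f"GPUs per scalable unit: {unit_gpus}")
print(f"HBM3e per scalable unit: {unit_hbm_tb:.2f} TB")
print(f"Compute-fabric bandwidth per node: {node_fabric_tbps:.1f} Tbps")
```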

Rack Scale Design Close-up

Networking

  • NVIDIA Quantum-2 400G InfiniBand switches or NVIDIA Spectrum-4 400GbE Ethernet switches dedicated for compute and storage
  • Ethernet leaf switches for in-band management
  • Out-of-band 1G/10G IPMI switch
  • Non-blocking network topology

Compute

  • 4× SYS-A22GA-NBRT / AS-A126GS-TNBR / SYS-A21GE-NBRT per rack
  • 32× NVIDIA B200 GPUs per rack
  • 5.76 TB of HBM3e memory per rack
  • Flexible storage options with local or dedicated fabric supporting NVIDIA GPUDirect RDMA and RoCE

Node Configuration

Overview: 10U air-cooled system with NVIDIA HGX B200 8-GPU
CPU: Dual Intel® Xeon® 6900 series or Dual AMD EPYC™ 9005/9004 (SYS-A22GA-NBRT / AS-A126GS-TNBR / SYS-A21GE-NBRT)
Memory: 24 DIMMs up to DDR5-6400 / 24 DIMMs up to DDR5-6000 / 32 DIMMs up to DDR5-5600
GPU: 8× NVIDIA B200 (180 GB HBM3e each); 1.8 TB/s NVLink with NVSwitch
Networking: 8× ConnectX-7 NICs or BlueField-3 DPUs (up to 400 Gbps)
Storage: 10× hot-swap 2.5″ NVMe bays; 2× M.2 NVMe slots
Power Supply: 6× 5250 W Titanium Level redundant PSUs

32-Node Scalable Unit

Overview: Fully integrated air-cooled 32-node cluster with 256 NVIDIA B200 GPUs
Compute Fabric (Leaf): 8× Quantum-2 400G InfiniBand or 8× Spectrum-4 400GbE switches
Compute Fabric (Spine): 4× Quantum-2 400G InfiniBand or 4× Spectrum-4 400GbE switches
In-band Mgmt Switch: 3× Spectrum SN4600 100GbE switches
Out-of-band Mgmt Switch: 2× SSE-G3748R-SMIS 48-port 1GbE ToR; 1× SSE-F3548SR 48-port 10GbE ToR
Rack: 9× 48U × 750 mm × 1295 mm
PDU: 34× 208 V 60 A 3-phase
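
As a quick sanity check of the leaf/spine counts above, the sketch below works through the port arithmetic behind the non-blocking compute fabric. It assumes 64 usable 400G ports per leaf and spine switch (typical for NVIDIA Quantum-2, but not stated on this page), so treat the result as an estimate.

```python
# Port arithmetic for the 32-node scalable unit's compute fabric.
# Assumption: 64 usable 400G ports per leaf/spine switch (not stated on this page).
NODES = 32
NICS_PER_NODE = 8            # 1:1 GPU-to-NIC ratio
PORTS_PER_SWITCH = 64        # assumed port count per Quantum-2 / Spectrum-4 switch
LEAF_SWITCHES = 8
SPINE_SWITCHES = 4

endpoint_ports = NODES * NICS_PER_NODE                      # 256 node-facing ports needed
downlinks_per_leaf = endpoint_ports // LEAF_SWITCHES        # 32 downlinks per leaf
uplinks_per_leaf = PORTS_PER_SWITCH - downlinks_per_leaf    # 32 uplinks per leaf (1:1, non-blocking)
ports_per_spine = LEAF_SWITCHES * uplinks_per_leaf // SPINE_SWITCHES  # 64 ports used per spine

print(f"Node-facing ports required: {endpoint_ports}")
print(f"Per leaf: {downlinks_per_leaf} downlinks / {uplinks_per_leaf} uplinks")
print(f"Ports used per spine switch: {ports_per_spine}")
```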

Software & Services

Software: Supermicro's SuperCloud Composer software provides management tools for monitoring and optimizing air- or liquid-cooled infrastructure, delivering a complete solution from proof of concept to full-scale deployment. It allows administrators to manage all data center components – compute, storage, and networking – through one unified dashboard. The SuperCluster also supports NVIDIA AI Enterprise software to accelerate time-to-production for AI workloads, with NVIDIA NIM microservices enabling easy access to deploy the latest AI models and agents fully optimized for the new NVIDIA Blackwell platform.
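
For illustration, the minimal sketch below queries an already-deployed NIM microservice through its OpenAI-compatible HTTP API. It assumes a NIM container is running on a cluster node and listening on port 8000; the endpoint address and model name are placeholders, and deployment of the container itself is outside the scope of this page.

```python
# Minimal sketch: querying a deployed NVIDIA NIM microservice via its
# OpenAI-compatible HTTP API. The endpoint and model name are placeholders.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

payload = {
    "model": "PLACEHOLDER-MODEL-NAME",  # replace with the model served by your NIM container
    "messages": [
        {"role": "user", "content": "Summarize the benefits of air-cooled GPU clusters."}
    ],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```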

Services: Supermicro's on-site rack deployment service helps enterprises build a data center from the ground up, covering the planning, design, installation, configuration, power-up, validation, and testing of racks, servers, switches, and other networking equipment to meet specific needs.

Products
  1. GPU SuperServer SYS-A22GA-NBRT

    AI / Deep Learning
    Scientific Research
    Conversational AI
    Drug Discovery
    High Performance Computing (HPC)

    332 154.86 €
  2. GPU A+ Server AS-A126GS-TNBR

    High Performance Computing
    AI/Deep Learning Training
    Industrial Automation
    Retail
    Healthcare
    Conversational AI
    Business Intelligence & Analytics
    Drug Discovery
    Climate and Weather Modeling
    Finance & Economics

    328 178.31 €
Supporting Products
  1. NVIDIA Spectrum MSN2100-CB2FC (920-9N100-00F7-0C0)

    Spine or Top-of-Rack switch
    16x QSFP28 100GbE ports
    Cumulus Linux

    11 839.05 €
  2. Mellanox Quantum MQM8700-HS2F (920-9B110-00FH-0MD)

    Spine or Top-of-Rack leaf switch
    40x QSFP56 HDR IB ports
    MLNX-OS

    17 366.24 €
  3. NVIDIA Spectrum-2 MSN3700-VS2FC

    Spine or super-spine switch
    32x QSFP56 200GbE ports
    Cumulus Linux

    23 538.09 €
  4. NVIDIA Spectrum-2 MSN3420-CB2FC

    Top-of-Rack switch
    12x QSFP28 100GbE ports
    48x SFP28 25GbE ports
    Cumulus Linux

    16 574.26 €
  5. NVIDIA Spectrum-3 MSN4700-WS2FC

    Spine or super-spine switch
    32x QSFP-DD 400GbE ports
    Cumulus Linux

    30 501.93 €
  6. NVIDIA Quantum-2 MQM9700-NS2F

    Spine or Top-of-Rack leaf switch
    32x OSFP ports
    64x NDR IB ports
    MLNX-OS
    920-9B210-00FN-0M0

    30 215.64 €
Contact us to learn more about our solutions.

ServerSimply Air-Cooled AI SuperCluster Solutions

Step into high-performance AI computing with ServerSimply Air-Cooled AI SuperCluster Solutions, featuring Supermicro's 10U air-cooled NVIDIA HGX™ B200 8-GPU systems with a redesigned chassis for greater thermal headroom and airflow efficiency. Each system delivers eight 180 GB HBM3e GPUs interconnected at 1.8 TB/s via NVIDIA NVLink and a 1:1 GPU-to-NIC ratio, supporting both high-speed Ethernet and InfiniBand fabrics for low-latency, high-throughput AI workloads. Scalable integration across nine air-cooled racks per scalable unit enables seamless expansion to larger GPU clusters, offering businesses a cost-effective and serviceable solution for AI model training and inference.
