Overview
Enterprise 10U rackmount server featuring 8x NVIDIA B200 GPUs on the HGX platform. Designed for next-generation AI training, with the Blackwell architecture delivering high FP4/FP8 throughput for large language model workloads.
Key Features
- 8x NVIDIA B200 GPUs on HGX baseboard
- NVLink 5.0 interconnect for 1.8TB/s GPU-to-GPU bandwidth
- Blackwell architecture with FP4/FP8 transformer engine
- Up to 180GB HBM3e memory per GPU (1.4TB total across the system)
- Enterprise-grade redundant power and cooling
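
A minimal sketch of how the GPU complex above can be surveyed from software, assuming a host with PyTorch and CUDA drivers installed; it enumerates visible devices, reports per-GPU HBM capacity, and queries peer access (the path direct GPU-to-GPU NVLink transfers rely on). The expected device count of 8 and the helper name are illustrative assumptions, not vendor documentation.

```python
# Sketch only: enumerate GPUs, report memory, and check peer (NVLink) access.
import torch

def survey_gpus() -> None:
    count = torch.cuda.device_count()
    print(f"Visible GPUs: {count}")  # expect 8 on a fully populated HGX baseboard

    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gib = props.total_memory / 1024**3
        print(f"  GPU {i}: {props.name}, {mem_gib:.0f} GiB")

    # Peer access must be available for direct GPU-to-GPU transfers.
    for i in range(count):
        peers = [j for j in range(count)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"  GPU {i} can access peers: {peers}")

if __name__ == "__main__":
    if torch.cuda.is_available():
        survey_gpus()
    else:
        print("No CUDA devices visible")
```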
Ideal For
Large enterprises and hyperscalers requiring maximum AI training performance for frontier model development.
Quick Specs
- 8x NVIDIA B200 SXM GPUs, 1.4TB total HBM3e GPU memory
- 32x DIMM slots, DDR5 up to 5600 MT/s (1DPC), 4800 MT/s (2DPC)
- 2x 10GbE RJ-45, 1x dedicated BMC/IPMI
- 6x 5250W Redundant (3+3) Power Supplies, Titanium Level Efficiency
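
As a rough, back-of-envelope illustration of what the 1.4TB aggregate HBM3e pool implies for model sizing, the sketch below estimates weight-only parameter capacity at BF16, FP8, and FP4 precisions. The usable-memory fraction is an assumption, not a vendor figure; real headroom depends on activations, optimizer state, KV cache, and framework overhead.

```python
# Back-of-envelope sketch: parameters that fit in aggregate HBM3e at each precision.
TOTAL_HBM_BYTES = 8 * 180e9  # 8 GPUs x ~180 GB HBM3e each (~1.44 TB total)

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "fp8": 1.0,
    "fp4": 0.5,
}

# Fraction of memory left for weights after activations, KV cache, and framework
# overhead -- an illustrative assumption that varies widely by workload.
USABLE_FRACTION = 0.7

for fmt, bytes_per_param in BYTES_PER_PARAM.items():
    params = TOTAL_HBM_BYTES * USABLE_FRACTION / bytes_per_param
    print(f"{fmt:>10}: ~{params / 1e9:.0f}B parameters (weights only)")
```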
$492,000.00
Prices may vary. Verify on vendor site.
Tags
llm-training, generative-ai, hpc
