Overview
High-performance 2U rackmount server with 8x NVIDIA H100 SXM5 GPUs interconnected via NVLink. Optimized for large-scale LLM training and inference workloads that demand maximum GPU memory bandwidth.
Key Features
- 8x NVIDIA H100 SXM5 GPUs with NVLink
- High-bandwidth HBM3/HBM3e memory for LLM workloads
- NVSwitch fabric for all-to-all GPU communication
- Dual-socket CPU platform
- Enterprise redundant power and advanced cooling
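To put the feature list in perspective, here is a rough capacity sketch for an 8-GPU node. The per-GPU figures (80 GB HBM3, ~3.35 TB/s) are assumptions taken from NVIDIA's published H100 SXM5 specifications, not from this listing, and the 16-bytes-per-parameter training footprint is a common rule of thumb for mixed-precision Adam, ignoring activations and framework overhead.

```python
# Back-of-envelope capacity check for an 8x H100 SXM5 node.
# Assumed figures (NVIDIA published specs, not from this listing):
#   80 GB HBM3 per GPU, ~3.35 TB/s memory bandwidth per GPU.

NUM_GPUS = 8
HBM_PER_GPU_GB = 80     # assumed H100 SXM5 capacity
BW_PER_GPU_TBS = 3.35   # assumed HBM3 bandwidth per GPU

def aggregate_memory_gb(num_gpus: int = NUM_GPUS) -> int:
    """Total HBM across the node."""
    return num_gpus * HBM_PER_GPU_GB

def max_trainable_params_b(bytes_per_param: int = 16) -> float:
    """Rough upper bound on trainable parameters (billions), using the
    common 16 bytes/param rule of thumb for mixed-precision Adam
    (fp16 weights + grads, fp32 master weights + optimizer moments),
    ignoring activation memory and overhead."""
    total_bytes = aggregate_memory_gb() * 1e9
    return total_bytes / bytes_per_param / 1e9

if __name__ == "__main__":
    print(f"Aggregate HBM: {aggregate_memory_gb()} GB")
    print(f"Aggregate bandwidth: ~{NUM_GPUS * BW_PER_GPU_TBS:.1f} TB/s")
    print(f"~{max_trainable_params_b():.0f}B trainable params (rule of thumb)")
```

Under these assumptions the node offers 640 GB of aggregate HBM, enough optimizer headroom for roughly a 40B-parameter dense model before activation memory and sharding strategy are considered.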
Ideal For
AI teams requiring enterprise-grade infrastructure for LLM training, fine-tuning, and high-throughput inference.
$289,000.00
Prices may vary. Verify on vendor site.
Tags
llm-trainingllm-inferencegenerative-ai
