H100 SXM5 96 GB
NVIDIA
96GB VRAM · 3360 GB/s bandwidth · 267.6 FP16 TFLOPS · 700W TDP
The H100 SXM5 96 GB has 96GB of VRAM, 3360 GB/s of memory bandwidth, and 267.6 TFLOPS of FP16 compute. At Q4 quantization it can comfortably run Gemma 3 4B (1209 tok/s), Qwen 2.5 7B (636 tok/s), and Llama 3.1 8B (602 tok/s). Models larger than ~163B parameters won't fit even at Q4. Electricity cost is approximately $76/month at the 700W TDP.
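The electricity figure follows from simple arithmetic on the TDP. A minimal sketch, assuming 24/7 operation at full TDP and a $0.15/kWh rate (the rate is an assumption, not stated on this page):

```python
# Rough monthly electricity cost for a GPU running at full TDP.
# Assumes continuous 24/7 load and $0.15/kWh (both assumptions).
def monthly_power_cost(tdp_watts: float, usd_per_kwh: float = 0.15,
                       hours: float = 24 * 30) -> float:
    kwh = tdp_watts / 1000 * hours  # 700 W for 720 h ≈ 504 kWh
    return kwh * usd_per_kwh

print(f"${monthly_power_cost(700):.0f}/month")  # → $76/month
```

Actual cost scales linearly with your local rate and average utilization, so an idle-heavy workload will come in well under this ceiling.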
What LLMs can you run?
| Model | Params | Q4 Weight | Fit | Decode |
|---|---|---|---|---|
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 1209 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 636 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 602 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | comfortable | 201 tok/s |
| Gemma 3 27B | 27.4B | 14 GB | comfortable | 176 tok/s |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | comfortable | 148 tok/s |
| Llama 3.3 70B | 70.6B | 35 GB | comfortable | 68 tok/s |
| Qwen 2.5 72B | 72.7B | 36 GB | comfortable | 66 tok/s |
| Llama 3.1 405B | 405B | 202 GB | won't fit | |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | |
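The fit and decode columns can be approximated from first principles: Q4 weights take roughly 0.5 bytes per parameter, and single-stream decode is memory-bound, so throughput is about bandwidth divided by weight size. A hedged sketch, where the ~72% bandwidth-utilization factor and ~15% VRAM reserve for KV cache/activations are assumptions chosen to roughly match this table (real throughput depends on runtime, batch size, and context length):

```python
# Back-of-envelope Q4 fit check and decode-speed estimate for the
# H100 SXM5 96 GB. MBU and OVERHEAD are assumed tuning constants.
VRAM_GB = 96
BW_GBPS = 3360
MBU = 0.72       # assumed memory-bandwidth utilization during decode
OVERHEAD = 0.15  # assumed VRAM fraction reserved for KV cache, etc.

def q4_weight_gb(params_b: float) -> float:
    return params_b * 0.5  # 4-bit weights ≈ 0.5 bytes per parameter

def fits(params_b: float) -> bool:
    return q4_weight_gb(params_b) <= VRAM_GB * (1 - OVERHEAD)

def decode_tok_s(params_b: float) -> float:
    # Each decoded token streams all weights once from HBM.
    return BW_GBPS * MBU / q4_weight_gb(params_b)

print(fits(8.0), round(decode_tok_s(8.0)))  # Llama 3.1 8B: fits, ~605 tok/s
print(fits(405))                            # Llama 3.1 405B: won't fit
```

With a 15% reserve, the usable budget is 81.6 GB, which at 0.5 bytes/param gives the ~163B-parameter ceiling quoted above.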
Similar GPUs
| GPU | VRAM | BW | TFLOPS | TDP |
|---|---|---|---|---|
| H100 PCIe 96 GB | 96GB | 3360 GB/s | 248.3 | 700W |
| RTX PRO 6000 Blackwell Server | 96GB | 1790 GB/s | 126.0 | 600W |
| M2 Max 96GB | 96GB | 400 GB/s | 27.2 | 60W |
| H100 SXM5 94 GB | 94GB | 3360 GB/s | 267.6 | 700W |
| H100 NVL 94 GB | 94GB | 3940 GB/s | 241.3 | 400W |