API provider data is live · Hardware & cloud pricing curated 2026-02-23

H100 SXM5 96 GB

NVIDIA

96GB VRAM · 3360 GB/s bandwidth · 267.6 FP16 TFLOPS · 700W TDP

The H100 SXM5 96 GB pairs 96GB of VRAM with 3360 GB/s of memory bandwidth and 267.6 FP16 TFLOPS of compute. At Q4 quantization it can comfortably run Gemma 3 4B (1209 tok/s), Qwen 2.5 7B (636 tok/s), and Llama 3.1 8B (602 tok/s), among others. Models larger than ~163B parameters won't fit even at Q4. Electricity cost is approximately $76/month at 700W TDP.
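To make the fit rule and power figure concrete, here is a minimal sketch of the arithmetic. The 0.85 usable-VRAM fraction and the $0.15/kWh electricity rate are assumptions back-solved from the ~163B cutoff and ~$76/month figures above, not constants published by vram.run:

```python
# Back-of-envelope Q4 fit check and power cost for a single H100 SXM5 96 GB.
VRAM_GB = 96
TDP_W = 700
Q4_BYTES_PER_PARAM = 0.5   # 4-bit weights ~ 0.5 bytes per parameter
USABLE_FRACTION = 0.85     # assumed headroom for KV cache and runtime overhead
ELEC_USD_PER_KWH = 0.15    # assumed rate; reproduces the ~$76/month figure

def q4_weight_gb(params_billion: float) -> float:
    """Approximate Q4 weight footprint in GB."""
    return params_billion * Q4_BYTES_PER_PARAM

def fits(params_billion: float) -> bool:
    """True if the Q4 weights fit within usable VRAM."""
    return q4_weight_gb(params_billion) <= VRAM_GB * USABLE_FRACTION

def monthly_power_cost_usd(hours: float = 720) -> float:
    """Electricity cost for a 30-day month of 24/7 operation at TDP."""
    return TDP_W / 1000 * hours * ELEC_USD_PER_KWH

print(f"~{VRAM_GB * USABLE_FRACTION / Q4_BYTES_PER_PARAM:.0f}B param ceiling")  # ~163B
print(fits(70.6), fits(405))            # True False (35 GB fits, 202 GB does not)
print(round(monthly_power_cost_usd()))  # 76
```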

What LLMs can you run?

| Model | Params | Q4 Weight | Fit | Decode |
|---|---|---|---|---|
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 1209 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 636 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 602 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | comfortable | 201 tok/s |
| Gemma 3 27B | 27.4B | 14 GB | comfortable | 176 tok/s |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | comfortable | 148 tok/s |
| Llama 3.3 70B | 70.6B | 35 GB | comfortable | 68 tok/s |
| Qwen 2.5 72B | 72.7B | 36 GB | comfortable | 66 tok/s |
| Llama 3.1 405B | 405B | 202 GB | won't fit | |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | |
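The decode numbers above follow a bandwidth-bound pattern: single-stream decode reads every weight once per token, so throughput is roughly efficiency × bandwidth / weight size. A minimal sketch, using a ~0.72 efficiency factor that is back-solved from this table (an assumption, not a figure vram.run publishes); it reproduces the table to within rounding:

```python
# Rough decode-throughput model: decode is memory-bandwidth bound, so
# tok/s ~ efficiency * bandwidth / weight_bytes. The 0.72 efficiency
# factor is fitted to this table, not a published constant.
BANDWIDTH_GBPS = 3360
EFFICIENCY = 0.72  # assumed; matches the table to within rounding

def decode_tok_s(params_billion: float) -> float:
    weight_gb = params_billion * 0.5  # Q4 ~ 0.5 bytes per parameter
    return EFFICIENCY * BANDWIDTH_GBPS / weight_gb

for name, params in [("Gemma 3 4B", 4.0), ("Llama 3.3 70B", 70.6)]:
    print(f"{name}: ~{decode_tok_s(params):.0f} tok/s")
# Gemma 3 4B: ~1210 tok/s   (table: 1209)
# Llama 3.3 70B: ~69 tok/s  (table: 68)
```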

Similar GPUs

| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
|---|---|---|---|---|
| H100 PCIe 96 GB | 96GB | 3360 GB/s | 248.3 | 700W |
| RTX PRO 6000 Blackwell Server | 96GB | 1790 GB/s | 126.0 | 600W |
| M2 Max 96GB | 96GB | 400 GB/s | 27.2 | 60W |
| H100 SXM5 94 GB | 94GB | 3360 GB/s | 267.6 | 700W |
| H100 NVL 94 GB | 94GB | 3940 GB/s | 241.3 | 400W |
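Under the same bandwidth-bound model, relative decode speed across these GPUs scales roughly with memory bandwidth. A purely hypothetical illustration, reusing the 0.72 efficiency fitted above (real per-GPU efficiency will differ, and these are not vram.run's curated figures):

```python
# Hypothetical: scale decode speed by bandwidth, holding efficiency at 0.72.
# Illustrative numbers only; per-GPU efficiency varies in practice.
gpus_bw = {"H100 SXM5 96 GB": 3360, "H100 NVL 94 GB": 3940, "M2 Max 96GB": 400}
weight_gb = 4.0  # Llama 3.1 8B at Q4

for name, bw in gpus_bw.items():
    print(f"{name}: ~{0.72 * bw / weight_gb:.0f} tok/s (Llama 3.1 8B)")
```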

