API provider data is live · Hardware & cloud pricing curated 2026-02-23

Radeon Instinct MI300

AMD

128GB VRAM · 6550 GB/s bandwidth · 383.0 FP16 TFLOPS · 600W TDP

The Radeon Instinct MI300 has 128GB of VRAM with 6550 GB/s memory bandwidth and 383.0 TFLOPS of FP16 compute. At Q4 quantization, it can comfortably run models such as Gemma 3 4B (2227 tok/s), Qwen 2.5 7B (1172 tok/s), and Llama 3.1 8B (1109 tok/s). Models larger than ~218B parameters won't fit even at Q4. Running at its 600W TDP around the clock costs approximately $65/month in electricity.
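The fit cutoff and the power figure follow from simple arithmetic. Here is a minimal sketch that reproduces them, assuming Q4 weights take roughly 0.5 bytes per parameter, that about 85% of VRAM is usable for weights (the rest goes to KV cache and activations), and an electricity rate of $0.15/kWh; these constants are back-solved from the page's numbers, not published by vram.run:

```python
# Back-of-envelope model-fit and running-cost estimates for a GPU.
# Assumptions (back-solved from this page, not published by vram.run):
#   - Q4 quantization ~ 0.5 bytes per parameter
#   - ~85% of VRAM usable for weights (rest: KV cache, activations)
#   - electricity at $0.15/kWh

BYTES_PER_PARAM_Q4 = 0.5      # GB of weights per billion parameters
USABLE_VRAM_FRACTION = 0.85
KWH_PRICE_USD = 0.15

def q4_weight_gb(params_b: float) -> float:
    """Approximate Q4 weight size in GB for params_b billion parameters."""
    return params_b * BYTES_PER_PARAM_Q4

def max_params_b(vram_gb: float) -> float:
    """Largest model (billions of params) whose Q4 weights fit in usable VRAM."""
    return vram_gb * USABLE_VRAM_FRACTION / BYTES_PER_PARAM_Q4

def monthly_power_cost_usd(tdp_w: float) -> float:
    """Electricity cost of running at full TDP 24/7 for a 30-day month."""
    kwh_per_month = tdp_w / 1000 * 24 * 30
    return kwh_per_month * KWH_PRICE_USD

print(q4_weight_gb(405))            # ~202 GB -> exceeds 128 GB, won't fit
print(max_params_b(128))            # ~218B, matching the page's cutoff
print(monthly_power_cost_usd(600))  # ~$64.80/month, matching "~$65"
```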

What LLMs can you run?

Model              | Params | Q4 Weight | Fit         | Decode
Gemma 3 4B         | 4.0B   | 2 GB      | comfortable | 2227 tok/s
Qwen 2.5 7B        | 7.6B   | 4 GB      | comfortable | 1172 tok/s
Llama 3.1 8B       | 8.0B   | 4 GB      | comfortable | 1109 tok/s
Mistral Small 24B  | 24.0B  | 12 GB     | comfortable | 371 tok/s
Gemma 3 27B        | 27.4B  | 14 GB     | comfortable | 325 tok/s
Qwen 2.5 Coder 32B | 32.5B  | 16 GB     | comfortable | 274 tok/s
Llama 3.3 70B      | 70.6B  | 35 GB     | comfortable | 126 tok/s
Qwen 2.5 72B       | 72.7B  | 36 GB     | comfortable | 122 tok/s
Llama 3.1 405B     | 405B   | 202 GB    | won't fit   | n/a
DeepSeek R1 671B   | 671B   | 336 GB    | won't fit   | n/a
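The Decode column is consistent with a memory-bandwidth-bound model of token generation: each generated token streams the full Q4 weights from VRAM, derated by a fixed efficiency factor. A sketch under that assumption, using an efficiency of ~0.68 back-solved from the table rather than any figure vram.run publishes:

```python
# Bandwidth-bound decode estimate: every generated token reads all model
# weights once, so tok/s ~= effective bandwidth / weight size.
# The 0.68 efficiency factor is an assumption back-solved from the table
# above, not a figure published by vram.run.

BANDWIDTH_GBS = 6550  # MI300 memory bandwidth from the spec line
EFFICIENCY = 0.68     # assumed fraction of peak bandwidth achieved

def decode_tok_s(params_b: float, bytes_per_param: float = 0.5) -> float:
    weight_gb = params_b * bytes_per_param           # Q4 weight size in GB
    return BANDWIDTH_GBS * EFFICIENCY / weight_gb    # tokens per second

for name, params_b in [("Gemma 3 4B", 4.0), ("Llama 3.3 70B", 70.6)]:
    print(f"{name}: ~{decode_tok_s(params_b):.0f} tok/s")
# Gemma 3 4B: ~2227 tok/s, Llama 3.3 70B: ~126 tok/s, matching the table
```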

Similar GPUs

GPU                    | VRAM  | BW        | FP16 TFLOPS | TDP
Radeon Instinct MI250  | 128GB | 3280 GB/s | 362.1       | 500W
Radeon Instinct MI250X | 128GB | 3280 GB/s | 383.0       | 500W
M1 Ultra 128GB         | 128GB | 800 GB/s  | 42.5        | 120W
M2 Ultra 128GB         | 128GB | 800 GB/s  | 54.4        | 120W
M3 Ultra 128GB         | 128GB | 800 GB/s  | 65.5        | 120W

