AMD Instinct MI325X
AMD
288GB VRAM · 10300 GB/s bandwidth · 653.7 FP16 TFLOPS · 1000W TDP
The AMD Instinct MI325X has 288GB of VRAM with 10300 GB/s memory bandwidth and 653.7 TFLOPS FP16 compute. At Q4 quantization, it can comfortably run Gemma 3 4B (3502 tok/s), Qwen 2.5 7B (1843 tok/s), and Llama 3.1 8B (1744 tok/s). Models larger than ~490B parameters won't fit even at Q4. Running continuously at its 1000W TDP, electricity costs approximately $108/month (at $0.15/kWh).
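The fit and cost figures above follow from simple arithmetic. A minimal sketch, assuming ~0.5 bytes per parameter for Q4 weights, ~10% runtime overhead for KV cache and activations, and a $0.15/kWh electricity rate (the overhead and rate are assumptions, not figures from this page):

```python
# Rough fit estimate at Q4 quantization. Assumptions (not from this page):
# ~0.5 bytes per parameter for Q4 weights, ~10% overhead for KV cache
# and activations.
VRAM_GB = 288
BYTES_PER_PARAM_Q4 = 0.5
OVERHEAD = 1.10

def q4_weight_gb(params_b: float) -> float:
    """Approximate Q4 weight size in GB for params_b billion parameters."""
    return params_b * BYTES_PER_PARAM_Q4

def fits(params_b: float) -> bool:
    """True if the Q4 weights plus overhead fit in VRAM."""
    return q4_weight_gb(params_b) * OVERHEAD <= VRAM_GB

print(fits(405))  # Llama 3.1 405B: ~202 GB of weights -> True
print(fits(671))  # DeepSeek R1 671B: ~336 GB of weights -> False

# Electricity sanity check: 1 kW * 24 h * 30 days = 720 kWh;
# at an assumed $0.15/kWh that is $108/month, matching the page.
print(1.0 * 24 * 30 * 0.15)  # -> 108.0
```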
What LLMs can you run?
| Model | Params | Q4 Weight | Fit | Decode |
|---|---|---|---|---|
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 3502 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 1843 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 1744 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | comfortable | 583 tok/s |
| Gemma 3 27B | 27.4B | 14 GB | comfortable | 511 tok/s |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | comfortable | 431 tok/s |
| Llama 3.3 70B | 70.6B | 35 GB | comfortable | 198 tok/s |
| Qwen 2.5 72B | 72.7B | 36 GB | comfortable | 192 tok/s |
| Llama 3.1 405B | 405B | 202 GB | comfortable | 34 tok/s |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | — |
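The decode numbers in the table are consistent with a simple bandwidth-bound model: each generated token streams every weight once, so tok/s ≈ effective bandwidth ÷ Q4 weight size. A sketch of that estimate (the ~0.68 utilization factor is fitted to this table, not an official figure):

```python
# Bandwidth-bound decode estimate: tok/s ~= effective bandwidth / weight size.
PEAK_BW_GBPS = 10300   # from the spec line above
UTILIZATION = 0.68     # assumption fitted to the table, not a measured value

def decode_tok_s(q4_weight_gb: float) -> float:
    """Estimated decode throughput for a model with the given Q4 weight size."""
    return PEAK_BW_GBPS * UTILIZATION / q4_weight_gb

print(round(decode_tok_s(4)))    # Llama 3.1 8B, 4 GB weights (table: 1744 tok/s)
print(round(decode_tok_s(202)))  # Llama 3.1 405B, 202 GB (table: 34 tok/s)
```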
Similar GPUs
| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
|---|---|---|---|---|
| AMD Instinct MI350X | 288GB | 8189 GB/s | 576.7 | 1000W |
| AMD Instinct MI355X | 288GB | 8189 GB/s | 629.1 | 1400W |
| AMD Instinct MI300A | 192GB | 10300 GB/s | 653.7 | 750W |
| AMD Instinct MI300X | 192GB | 10300 GB/s | 653.7 | 750W |
| AMD Instinct MI308X | 192GB | 10300 GB/s | 653.7 | 750W |
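One way to read the comparison table is to normalize memory bandwidth, the main decode bottleneck, by power draw. A quick sketch using the numbers from the tables above (the metric is illustrative, not an official ranking):

```python
# Memory bandwidth per watt, using figures from the spec and comparison tables.
gpus = {
    "MI325X": (10300, 1000),
    "MI350X": (8189, 1000),
    "MI355X": (8189, 1400),
    "MI300X": (10300, 750),
}
for name, (bw_gbps, tdp_w) in gpus.items():
    print(f"{name}: {bw_gbps / tdp_w:.2f} GB/s per watt")
```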