M1 8GB
Apple Silicon
8GB VRAM · 68 GB/s bandwidth · 5.2 FP16 TFLOPS · 15W TDP · $499 street price
The M1's 8GB of unified memory is shared between the CPU and GPU, so less than the full 8GB is actually available for model weights. No current reference LLM fits comfortably at Q4 quantization: small models (4B–8B) fit only tightly, and anything larger won't fit at all. Consider GPUs with 12GB+ VRAM for local inference.
What LLMs can you run?
| Model | Params | Q4 Weight | Fit | Decode |
|---|---|---|---|---|
| Gemma 3 4B | 4.0B | 2 GB | tight | 13 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | tight | 7 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | tight | 6 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | won't fit | |
| Gemma 3 27B | 27.4B | 14 GB | won't fit | |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | won't fit | |
| Llama 3.3 70B | 70.6B | 35 GB | won't fit | |
| Qwen 2.5 72B | 72.7B | 36 GB | won't fit | |
| Llama 3.1 405B | 405B | 202 GB | won't fit | |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | |
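The "Q4 Weight" and "Fit" columns above follow from simple arithmetic: Q4 quantization stores roughly 0.5 bytes per parameter, and a model fits only if its weights plus runtime overhead (KV cache, activations, OS share) stay under the available memory. A minimal sketch of that heuristic, assuming ~1.5 GB of overhead and a "tight" threshold at 60% of memory (the page's exact thresholds are not published, so these cutoffs are illustrative):

```python
# Rough sketch of the fit heuristic behind the table above.
# Assumptions: Q4 ~ 0.5 bytes/param; ~1.5 GB overhead for KV cache and runtime.

def q4_weight_gb(params_b: float) -> float:
    """Approximate Q4 weight size in GB: 0.5 bytes per parameter."""
    return params_b * 0.5

def fit_label(params_b: float, mem_gb: float = 8.0, overhead_gb: float = 1.5) -> str:
    """Classify whether a Q4 model fits in the given memory budget."""
    needed = q4_weight_gb(params_b) + overhead_gb
    if needed > mem_gb:
        return "won't fit"
    # Call it "tight" once the budget is mostly consumed (illustrative 60% cutoff).
    return "tight" if needed > 0.6 * mem_gb else "fits"

for name, params in [("Llama 3.1 8B", 8.0), ("Gemma 3 27B", 27.4)]:
    print(f"{name}: {q4_weight_gb(params):.0f} GB at Q4 -> {fit_label(params)}")
```

Running this reproduces the table's verdicts for the 8GB budget: Llama 3.1 8B needs ~4 GB of weights and lands in "tight" territory, while Gemma 3 27B's ~14 GB is far out of reach.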
Similar GPUs
| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
|---|---|---|---|---|
| M2 8GB | 8GB | 100 GB/s | 7.2 | 15W |
| M3 8GB | 8GB | 100 GB/s | 8.2 | 15W |
| T1000 8GB | 8GB | 160 GB/s | 5.0 | 50W |
| RTX A1000 | 8GB | 192 GB/s | 6.7 | 50W |
| Tesla P4 | 8GB | 192 GB/s | 0.1 | 75W |
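When comparing these GPUs for LLM decoding, memory bandwidth matters more than TFLOPS: generating each token reads essentially all of the weights once, so decode speed is roughly bandwidth divided by weight size, times an efficiency factor. A sketch, with the efficiency factor (~0.38) chosen to approximate this page's M1 numbers rather than taken from any published benchmark:

```python
# Bandwidth-bound decode estimate: each generated token streams all weights
# from memory once, so tok/s ~ efficiency * bandwidth / weight bytes.
# The 0.38 efficiency factor is an assumption fitted to the table above.

def decode_tok_s(bandwidth_gb_s: float, weight_gb: float,
                 efficiency: float = 0.38) -> float:
    """Estimate decode throughput (tokens/s) for a memory-bound model."""
    return efficiency * bandwidth_gb_s / weight_gb

# M1 (68 GB/s) vs. M2 (100 GB/s) on Llama 3.1 8B at Q4 (~4 GB of weights)
for name, bw in [("M1", 68), ("M2", 100)]:
    print(f"{name}: ~{decode_tok_s(bw, 4.0):.1f} tok/s")
```

This is why the M2 and M3 variants, with 100 GB/s of bandwidth, decode noticeably faster than the M1 despite identical memory capacity, and why the Tesla P4's 192 GB/s doesn't help: its near-zero FP16 throughput makes it compute-bound instead.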