Llama-3.2-3B-Instruct
by meta-llama
3.0B params · text-generation · 2.0k likes · 3.9M downloads
Llama-3.2-3B-Instruct is a 3.0B-parameter instruction-tuned model. At Q4 quantization it needs roughly 2GB of VRAM, so it runs comfortably on a GeForce RTX 4090 (437 tok/s), a GeForce RTX 5090 (656 tok/s), or an M4 Max 128GB (160 tok/s).
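The ~2GB figure follows from the weights alone (3.0B params × ~0.5 bytes at 4-bit ≈ 1.5GB) plus some overhead for the KV cache and activations. Below is a minimal sketch of loading the model in 4-bit with the Hugging Face transformers and bitsandbytes packages; NF4 quantization here is a stand-in for the Q4 formats the table refers to, and the exact VRAM use will vary with context length.

```python
# Minimal sketch: load Llama-3.2-3B-Instruct in 4-bit and generate a reply.
# Assumes transformers, bitsandbytes, and a CUDA GPU are available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-3B-Instruct"

# 4-bit (NF4) quantization config -- roughly the "Q4" footprint described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the ~2GB of quantized weights on the available GPU
)

messages = [{"role": "user", "content": "Explain KV caching in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```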
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24GB | 437 tok/s | comfortable |
| GeForce RTX 5090 | 32GB | 656 tok/s | comfortable |
| M4 Max 128GB | 128GB | 160 tok/s | comfortable |
| M4 Pro 48GB | 48GB | 80 tok/s | comfortable |
| M4 Pro 24GB | 24GB | 80 tok/s | comfortable |
| A100 PCIe 80 GB | 80GB | 801 tok/s | comfortable |
| H100 SXM5 80 GB | 80GB | 1612 tok/s | comfortable |
| GeForce RTX 3090 | 24GB | 386 tok/s | comfortable |
| Radeon RX 7900 XTX | 24GB | 320 tok/s | comfortable |
| GeForce RTX 4080 | 16GB | 310 tok/s | comfortable |
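A rough, hypothetical way to measure a decode tok/s figure of your own for comparison against the table, assuming the `model` and `tokenizer` from the sketch above are already loaded; absolute numbers depend heavily on the inference stack (llama.cpp, vLLM, transformers) and will not match the table exactly.

```python
# Time a fixed-length generation and report decode throughput in tok/s.
import time

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a short story about a robot."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

new_tokens = 256
start = time.perf_counter()
out = model.generate(prompt, max_new_tokens=new_tokens, min_new_tokens=new_tokens)
elapsed = time.perf_counter() - start

generated = out.shape[-1] - prompt.shape[-1]
print(f"decode speed: {generated / elapsed:.1f} tok/s")  # compare against the table
```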