Llama-3.2-1B
by meta-llama
1.2B params · text-generation · 2.3k likes · 1.9M downloads
Llama-3.2-1B is a 1.2B-parameter text-generation model. At Q4 quantization it requires about 1 GB of VRAM and runs comfortably on a wide range of hardware: GeForce RTX 4090 (1062 tok/s), GeForce RTX 5090 (1593 tok/s), and M4 Max 128GB (388 tok/s).
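The ~1 GB figure can be sanity-checked with a back-of-envelope calculation. The sketch below assumes ~4.5 effective bits per weight (typical of Q4_K_M-style quants, which store scales alongside the 4-bit weights) plus a small fixed allowance for KV cache and runtime buffers; both numbers are assumptions for illustration, not measurements from this card.

```python
def q4_vram_gb(params_billions: float,
               bits_per_weight: float = 4.5,
               overhead_gb: float = 0.3) -> float:
    """Rough VRAM estimate for a Q4-quantized model.

    bits_per_weight and overhead_gb are illustrative assumptions:
    effective Q4 bit-width varies by quant scheme, and overhead
    depends on context length and runtime.
    """
    weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# 1.2B params at ~4.5 bits/weight → ~0.68 GB of weights,
# landing close to the ~1 GB figure once overhead is included.
print(q4_vram_gb(1.2))
```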
GPU compatibility
| GPU | VRAM | Q4 decode (tok/s) | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24 GB | 1062 | comfortable |
| GeForce RTX 5090 | 32 GB | 1593 | comfortable |
| M4 Max 128GB | 128 GB | 388 | comfortable |
| M4 Pro 48GB | 48 GB | 194 | comfortable |
| M4 Pro 24GB | 24 GB | 194 | comfortable |
| A100 PCIe 80 GB | 80 GB | 1946 | comfortable |
| H100 SXM5 80 GB | 80 GB | 3915 | comfortable |
| GeForce RTX 3090 | 24 GB | 939 | comfortable |
| Radeon RX 7900 XTX | 24 GB | 776 | comfortable |
| GeForce RTX 4080 | 16 GB | 754 | comfortable |
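The decode numbers above are broadly consistent with a memory-bandwidth roofline: at batch size 1, each generated token reads the full weight tensor once, so tok/s is capped near bandwidth divided by model size. The sketch below uses an assumed ~0.68 GB Q4 weight size (1.2B params at ~4.5 bits/weight) and the RTX 4090's published ~1008 GB/s memory bandwidth; real throughput lands below the ceiling due to kernel overheads.

```python
MODEL_GB = 1.2 * 4.5 / 8  # assumed Q4 weight size, ≈ 0.675 GB

def decode_ceiling(bandwidth_gb_s: float, model_gb: float = MODEL_GB) -> float:
    """Upper bound on tokens/s for bandwidth-bound, batch-1 decode."""
    return bandwidth_gb_s / model_gb

# RTX 4090 (~1008 GB/s spec) → ceiling ≈ 1490 tok/s; the table's
# measured 1062 tok/s is roughly 70% of that, a plausible efficiency.
print(decode_ceiling(1008))
```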