# Llama-2-70b
by meta-llama
70B params · text-generation
Llama-2-70b is a 70B-parameter text-generation model. At Q4 quantization (about half a byte per weight), the weights alone require roughly 35 GB of VRAM; KV cache and runtime overhead add to that. It runs comfortably on the A100 PCIe 80 GB (34 tok/s), H100 SXM5 80 GB (69 tok/s), and H200 SXM 141 GB (100 tok/s).
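The 35 GB figure is just parameter count times bytes per weight. The helper below is a hypothetical back-of-the-envelope sketch (not part of any official tooling); it estimates weight memory only and ignores KV cache and activations.

```python
def estimate_weight_vram_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough VRAM needed for the model weights alone (ignores KV cache, activations)."""
    bytes_total = n_params * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# 70B parameters at Q4 (4 bits per weight) -> ~35 GB
print(estimate_weight_vram_gb(70e9, 4))  # 35.0
```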
## GPU compatibility
| GPU | VRAM | Q4 decode (tok/s) | Verdict |
|---|---|---|---|
| M4 Pro 48GB | 48 GB | 3 | tight |
| M4 Max 64GB | 64 GB | 6 | tight |
| A100 PCIe 80 GB | 80 GB | 34 | comfortable |
| H100 SXM5 80 GB | 80 GB | 69 | comfortable |
| M4 Max 128GB | 128 GB | 6 | tight |
| H200 SXM 141 GB | 141 GB | 100 | comfortable |
| M2 Ultra 192GB | 192 GB | 9 | tight |
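For the NVIDIA rows above, a minimal loading sketch with Hugging Face transformers and bitsandbytes follows. The repo id `meta-llama/Llama-2-70b-hf` and the exact quantization settings are assumptions, not guidance from this card; gated access from Meta is required, and the Apple-silicon rows would use a GGUF runtime such as llama.cpp instead, since bitsandbytes targets CUDA.

```python
# Minimal 4-bit loading sketch (assumes transformers + bitsandbytes installed,
# a CUDA GPU with enough VRAM per the table above, and approved gated access).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-hf"  # assumed transformers-format checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights, comparable to the Q4 column
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```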