Reflection-Llama-3.1-70B
by mattshumer
70B params · text-generation · 1.7k likes · 327 downloads
Reflection-Llama-3.1-70B is a 70B-parameter fine-tune of Llama 3.1 70B (the name's "70B" is the parameter count). At Q4 quantization (~4.5 bits per weight) the weights alone occupy about 40 GB, so plan for roughly 40–45 GB of VRAM once KV cache and runtime overhead are included. It runs comfortably on an H200 SXM 141 GB (49 tok/s).
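As a back-of-envelope check, the VRAM figure follows from parameter count × bits per weight. This is a minimal sketch, assuming ~4.5 bits/weight for a Q4-style quant (4-bit weights plus quantization scales) and a ~10% overhead factor for KV cache and activations; both factors are assumptions, not measured values.

```python
def vram_estimate_gb(params_b: float, bits_per_weight: float = 4.5,
                     overhead: float = 1.1) -> float:
    """Rough VRAM (GB) needed for params_b billion parameters.

    bits_per_weight ~4.5 approximates a Q4-style quant (assumption);
    overhead ~1.1 covers KV cache and activations (assumption).
    """
    weight_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb * overhead

# A 70B model at Q4 lands in the low-to-mid 40s of GB.
print(round(vram_estimate_gb(70), 1))
```

Context length matters: long contexts grow the KV cache well beyond the flat 10% assumed here.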
GPU compatibility
| GPU | VRAM | Q4 decode | Verdict |
|---|---|---|---|
| M4 Max 128GB | 128 GB | 3 tok/s | fits, but slow |
| H200 SXM 141 GB | 141 GB | 49 tok/s | comfortable |
| M2 Ultra 192GB | 192 GB | 4 tok/s | fits, but slow |
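The memory side of the table above reduces to a simple headroom check. This sketch copies the VRAM figures from the table; `REQUIRED_GB` is a rough Q4 footprint derived from the 70B parameter count in the model name (an assumption, not a measured footprint), and it ignores decode speed, which is what actually separates the Macs from the H200 here.

```python
# Rough Q4 footprint: ~70e9 params * 4.5 bits / 8, plus overhead (assumption).
REQUIRED_GB = 43

# VRAM per GPU, copied from the compatibility table.
gpus = {
    "M4 Max 128GB": 128,
    "H200 SXM 141 GB": 141,
    "M2 Ultra 192GB": 192,
}

# Headroom = VRAM minus the estimated model footprint.
headroom = {name: vram - REQUIRED_GB for name, vram in gpus.items()}
for name, gb in headroom.items():
    print(f"{name}: {'fits' if gb >= 0 else 'too small'} ({gb:+d} GB headroom)")
```

All three devices clear the memory bar by a wide margin; the Mac rows are rated down on decode throughput, not capacity.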