API provider data is live · Hardware & cloud pricing curated 2026-02-23

Reflection-Llama-3.1-70B

by mattshumer

141B params · text-generation · 1.7k likes · 327 downloads

Reflection-Llama-3.1-70B is a 141B-parameter text-generation model. At Q4 quantization its weights need about 71 GB of VRAM, so it fits and runs comfortably on an H200 SXM 141 GB at roughly 49 tok/s.
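As a rough sanity check on the 71 GB figure, the Q4 weight footprint can be estimated from the parameter count and bits per weight. The sketch below is a simplified, weights-only estimate; the function name and the 4-bits-per-parameter assumption are ours, real Q4 formats carry some metadata overhead, and the KV cache adds more on top.

```python
def estimate_vram_gb(params_billions: float, bits_per_param: float = 4.0) -> float:
    """Weights-only VRAM estimate in GB (ignores KV cache, activations, runtime overhead)."""
    bytes_per_param = bits_per_param / 8
    return params_billions * bytes_per_param  # billions of params x bytes each = GB

# Assumed ~4 bits/param for Q4: 141B x 0.5 bytes = ~70.5 GB, close to the 71 GB quoted above.
print(f"{estimate_vram_gb(141, 4.0):.1f} GB")
```

The small gap between 70.5 GB and the quoted 71 GB is consistent with per-block quantization metadata that plain bits-per-parameter math leaves out.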

GPU compatibility

| GPU | VRAM | Q4 Decode | Verdict |
| --- | --- | --- | --- |
| M4 Max 128GB | 128 GB | 3 tok/s | tight |
| H200 SXM 141 GB | 141 GB | 49 tok/s | comfortable |
| M2 Ultra 192GB | 192 GB | 4 tok/s | tight |
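The verdict column appears to weigh both VRAM headroom and decode speed: the 128 GB and 192 GB Apple parts hold the 71 GB footprint but decode slowly. Below is a minimal, hypothetical heuristic that reproduces the table's labels; the thresholds (`headroom_ratio`, `min_tok_s`) are illustrative assumptions, not vram.run's actual rules.

```python
def verdict(gpu_vram_gb: float, required_gb: float, decode_tok_s: float,
            headroom_ratio: float = 1.25, min_tok_s: float = 10.0) -> str:
    """Hypothetical rating: 'comfortable' needs spare VRAM beyond the Q4 footprint
    and a usable decode speed; otherwise 'tight', or 'insufficient' if it won't fit."""
    if gpu_vram_gb < required_gb:
        return "insufficient"
    if gpu_vram_gb >= required_gb * headroom_ratio and decode_tok_s >= min_tok_s:
        return "comfortable"
    return "tight"

# Reproduces the table above for the 71 GB Q4 footprint:
print(verdict(141, 71, 49))  # H200 SXM  -> comfortable
print(verdict(128, 71, 3))   # M4 Max    -> tight (slow decode)
print(verdict(192, 71, 4))   # M2 Ultra  -> tight (slow decode)
```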