vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

Llama-3.3-Swallow-70B-Instruct-v0.4

by tokyotech-llm

70.6B params · text-generation · 12 likes · 4.5k downloads

Llama-3.3-Swallow-70B-Instruct-v0.4 is a 70.6B-parameter model. At Q4 quantization its weights alone need about 35 GB of VRAM. It runs comfortably on an A100 PCIe 80 GB (34 tok/s), an H100 SXM5 80 GB (68 tok/s), or an H200 SXM 141 GB (99 tok/s).
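The 35 GB figure is just parameter count times bits per weight. A back-of-the-envelope sketch in Python, ignoring KV cache and runtime overhead:

```python
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    """Decimal GB needed for the weights alone: params x bits / 8."""
    return params_billion * bits_per_param / 8

# 70.6B params at Q4 (4 bits/param) -> ~35.3 GB, matching the figure above.
print(f"{weight_vram_gb(70.6, 4):.1f} GB")   # 35.3 GB
# For comparison, FP16 weights would need roughly four times that.
print(f"{weight_vram_gb(70.6, 16):.1f} GB")  # 141.2 GB
```

Real deployments add KV cache and activation memory on top of the weights, which is part of why 48 GB and 64 GB configurations only rate "tight" in the compatibility table below.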

Inference providers

Provider     $/1M in   $/1M out   Throughput
SambaNova    -         -          147 tok/s
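SambaNova exposes an OpenAI-compatible API, so a standard client works against it. A minimal sketch, assuming SambaNova's published base URL; the exact model ID on their platform is an assumption to verify against the provider's model list:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",  # SambaNova's OpenAI-compatible endpoint
    api_key="YOUR_SAMBANOVA_API_KEY",
)

resp = client.chat.completions.create(
    model="Llama-3.3-Swallow-70B-Instruct-v0.4",  # assumed ID; check the provider's catalog
    messages=[{"role": "user", "content": "自己紹介をしてください。"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```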

GPU compatibility

GPU                 VRAM     Q4 Decode   Verdict
M4 Max 128GB        128 GB   6 tok/s     tight
M4 Pro 48GB         48 GB    3 tok/s     tight
A100 PCIe 80 GB     80 GB    34 tok/s    comfortable
H100 SXM5 80 GB     80 GB    68 tok/s    comfortable
M4 Max 64GB         64 GB    6 tok/s     tight
H200 SXM 141 GB     141 GB   99 tok/s    comfortable
M2 Ultra 192GB      192 GB   9 tok/s     tight
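For the CUDA rows, one common way to reach the ~35 GB Q4 footprint is 4-bit NF4 loading via transformers + bitsandbytes. A sketch, assuming a GPU (or GPUs) with enough memory per the table; the Apple-silicon rows would go through llama.cpp or MLX instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4"

# NF4 4-bit quantization, computing in bfloat16.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",  # spreads layers across available GPUs if one card is too small
)

messages = [{"role": "user", "content": "東京の観光名所を教えてください。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```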