API provider data is live · Hardware & cloud pricing curated 2026-02-23

Jamba-v0.1

by ai21labs

51.6B params · text-generation · 1.2k likes · 1.2k downloads

Jamba-v0.1 is a 51.6B-parameter model. At Q4 quantization its weights take about 26 GB of VRAM. It runs comfortably on a GeForce RTX 5090 (38 tok/s), an A100 PCIe 80 GB (46 tok/s), or an H100 SXM5 80 GB (93 tok/s).
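The 26 GB figure is simple arithmetic: 51.6B weights at roughly 4 bits each come to about 25.8 GB, before KV cache and runtime overhead. A minimal sketch of that estimate (the helper name and the params × bits / 8 rule of thumb are illustrative, not vram.run's exact methodology):

```python
# Back-of-the-envelope VRAM estimate for quantized weights.
# Rule of thumb (assumption): weight bytes ~= params * bits_per_weight / 8,
# ignoring KV cache, activations, and framework overhead.

def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB at a given quantization width."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Jamba-v0.1: 51.6B params at Q4 (~4 bits/weight)
print(f"Q4 weights: ~{weight_vram_gb(51.6, 4):.1f} GB")  # ~25.8 GB, matching the ~26 GB above
```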

GPU compatibility

GPU                 VRAM     Q4 Decode    Verdict
GeForce RTX 5090    32 GB    38 tok/s     comfortable
M4 Max 128GB        128 GB   9 tok/s      tight
M4 Pro 48GB         48 GB    4 tok/s      tight
A100 PCIe 80 GB     80 GB    46 tok/s     comfortable
H100 SXM5 80 GB     80 GB    93 tok/s     comfortable
M4 Max 64GB         64 GB    9 tok/s      tight
H200 SXM 141 GB     141 GB   136 tok/s    comfortable
M2 Ultra 192GB      192 GB   13 tok/s     tight
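At this model size, single-stream decode is typically memory-bandwidth bound: each generated token streams the quantized weights through the memory system, so peak bandwidth divided by weight bytes gives a rough upper bound on tok/s. A sketch of that bound (the roofline model and the published peak-bandwidth specs below are assumptions added here, not vram.run data):

```python
# Roofline upper bound on decode speed: tok/s <= peak bandwidth / weight bytes.
# Bandwidth values are published peak specs (assumptions, not measured by vram.run).

WEIGHT_GB = 25.8  # Jamba-v0.1 at Q4: 51.6B params * 4 bits / 8

PEAK_BANDWIDTH_GBPS = {
    "GeForce RTX 5090": 1792,
    "A100 PCIe 80 GB": 1935,
    "H100 SXM5 80 GB": 3350,
    "H200 SXM 141 GB": 4800,
    "M2 Ultra 192GB": 800,
}

for gpu, bw in PEAK_BANDWIDTH_GBPS.items():
    print(f"{gpu:<18} <= {bw / WEIGHT_GB:4.0f} tok/s (roofline upper bound)")
```

The measured numbers in the table sit well below these bounds, which is expected once dequantization, attention and KV-cache reads, and scheduling overhead are accounted for.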