Jamba-v0.1
by ai21labs
51.6B params · text-generation · 1.2k likes · 1.2k downloads
Jamba-v0.1 is a 51.6B-parameter model. At Q4 quantization its weights require roughly 26GB of VRAM. It runs comfortably on the GeForce RTX 5090 (38 tok/s), A100 PCIe 80 GB (46 tok/s), and H100 SXM5 80 GB (93 tok/s).
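The ~26GB figure follows from a simple back-of-envelope calculation: Q4 quantization stores roughly half a byte per parameter. A minimal sketch (the 0.5 bytes/param factor is an approximation that ignores KV cache and activation overhead):

```python
# Back-of-envelope VRAM estimate for 4-bit (Q4) weights.
# Assumption: ~0.5 bytes per parameter, ignoring KV cache and activations.
PARAMS = 51.6e9  # parameter count from the model card above

def q4_weight_gb(params: float, bytes_per_param: float = 0.5) -> float:
    """Approximate weight memory in GB for a given parameter count."""
    return params * bytes_per_param / 1e9

print(round(q4_weight_gb(PARAMS), 1))  # ≈ 25.8 GB, matching the ~26GB figure
```

Actual usage will be somewhat higher once the KV cache and runtime buffers are counted, which is why the 32GB RTX 5090 is rated "comfortable" while smaller cards would not be.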
GPU compatibility
| GPU | VRAM | Q4 decode (tok/s) | Verdict |
|---|---|---|---|
| H200 SXM 141 GB | 141GB | 136 | comfortable |
| H100 SXM5 80 GB | 80GB | 93 | comfortable |
| A100 PCIe 80 GB | 80GB | 46 | comfortable |
| GeForce RTX 5090 | 32GB | 38 | comfortable |
| M2 Ultra 192GB | 192GB | 13 | tight |
| M4 Max 128GB | 128GB | 9 | tight |
| M4 Max 64GB | 64GB | 9 | tight |
| M4 Pro 48GB | 48GB | 4 | tight |
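The ordering in the table is roughly what memory bandwidth predicts: single-token decode is typically bandwidth-bound, since each generated token streams the weights from memory once. A hedged sketch of that ceiling (the bandwidth numbers are approximate public specs added here for illustration, not from the table above):

```python
# Rough tok/s ceiling for bandwidth-bound decode:
# one full read of the Q4 weights per generated token.
# Bandwidth values are approximate vendor specs (assumption, not measured).

WEIGHT_GB = 26.0  # Q4 weight size from the text above

BANDWIDTH_GBS = {
    "H100 SXM5 80 GB": 3350,   # ~3.35 TB/s HBM3
    "GeForce RTX 5090": 1792,  # ~1.8 TB/s GDDR7
    "M2 Ultra 192GB": 800,     # ~800 GB/s unified memory
}

def decode_ceiling(bandwidth_gbs: float, weight_gb: float = WEIGHT_GB) -> float:
    """Upper bound on decode speed: bandwidth / bytes read per token."""
    return bandwidth_gbs / weight_gb

for gpu, bw in BANDWIDTH_GBS.items():
    print(f"{gpu}: <= {decode_ceiling(bw):.0f} tok/s")
```

The measured figures (93, 38, and 13 tok/s respectively) sit below these ceilings, as expected once kernel launch overhead and attention/state computation are included.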