API provider data is live · Hardware & cloud pricing curated 2026-02-23

vicuna-13b-GPTQ-4bit-128g

by anon8231489123

13.0B params · text-generation · 664 likes · 852 downloads

vicuna-13b-GPTQ-4bit-128g is a 13.0B parameter GPTQ quantization of Vicuna-13B (4-bit weights, group size 128). At Q4 it requires about 7GB of VRAM and runs comfortably on a GeForce RTX 4090 (100 tok/s), GeForce RTX 5090 (151 tok/s), or M4 Max 128GB (36 tok/s).
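
As a rough sanity check on that figure: quantized weight memory is roughly params × bits ÷ 8. The sketch below is a minimal estimate, not vram.run's actual formula; in particular the flat 0.5GB overhead allowance (runtime context plus a small KV cache) is an assumption chosen to illustrate how the ~7GB total arises from 6.5GB of weights.

```python
def estimate_vram_gb(params_b: float, bits: int, overhead_gb: float = 0.5) -> float:
    """Rough VRAM estimate: quantized weight bytes plus a flat overhead
    allowance (runtime context + a small KV cache). The 0.5GB overhead
    is an assumed placeholder, not a measured value."""
    weight_gb = params_b * bits / 8  # billions of params x bytes/param ~= GB
    return weight_gb + overhead_gb

# 13.0B params at 4 bits: 6.5GB of weights, ~7GB total
print(round(estimate_vram_gb(13.0, 4), 1))  # 7.0
```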

GPU compatibility

| GPU | VRAM | Q4 decode | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24GB | 100 tok/s | comfortable |
| GeForce RTX 5090 | 32GB | 151 tok/s | comfortable |
| M4 Max 128GB | 128GB | 36 tok/s | comfortable |
| M4 Pro 48GB | 48GB | 18 tok/s | tight |
| M4 Pro 24GB | 24GB | 18 tok/s | tight |
| A100 PCIe 80 GB | 80GB | 184 tok/s | comfortable |
| H100 SXM5 80 GB | 80GB | 371 tok/s | comfortable |
| GeForce RTX 3090 | 24GB | 89 tok/s | comfortable |
| Radeon RX 7900 XTX | 24GB | 73 tok/s | comfortable |
| GeForce RTX 4080 | 16GB | 71 tok/s | comfortable |
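
The decode column tracks memory bandwidth: generating each token streams the full quantized weight set from VRAM, so tok/s is bounded above by bandwidth ÷ model size. The sketch below computes that ceiling for three of the GPUs; the bandwidth figures are assumed from public spec sheets, not taken from this page's data.

```python
MODEL_GB = 7.0  # Q4 footprint from the summary above

# Peak memory bandwidth in GB/s -- assumed from public spec sheets,
# not from vram.run's data.
BANDWIDTH_GBPS = {
    "GeForce RTX 4090": 1008,
    "GeForce RTX 5090": 1792,
    "M4 Max 128GB": 546,
}

for gpu, bw in BANDWIDTH_GBPS.items():
    # Every decoded token reads all weights once, so this is an upper bound.
    ceiling = bw / MODEL_GB
    print(f"{gpu}: <= {ceiling:.0f} tok/s theoretical decode ceiling")
```

Comparing against the table, measured decode lands at roughly 45–70% of these ceilings, which is the usual gap between peak bandwidth and real quantized-kernel throughput.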