vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

Qwen3-VL-8B-Instruct

by Qwen

8.8B params · image-text-to-text · 808 likes · 8.1M downloads

Qwen3-VL-8B-Instruct is an 8.8B-parameter model. At Q4 quantization it requires roughly 4 GB of VRAM. It runs comfortably on the GeForce RTX 4090 (149 tok/s), GeForce RTX 5090 (224 tok/s), and M4 Max 128GB (54 tok/s).
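To sanity-check the ~4 GB figure, here is a minimal back-of-the-envelope sketch. It counts weight memory only, assuming a flat 4 bits per weight; real Q4 formats carry some per-block metadata, and KV cache and activation buffers add to the total at runtime.

```python
# Weights-only VRAM estimate: parameter count x bits per weight.
# KV cache and runtime buffers are not included (assumption: 4.0 bits/weight flat).
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Return decimal GB needed to hold the quantized weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"{weight_vram_gb(8.8, 4):.1f} GB")  # ~4.4 GB, in line with the ~4 GB quoted above
```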

Inference providers

Provider     | $/1M in | $/1M out | Throughput
Novita       | n/a     | n/a      | 67 tok/s
Together AI  | n/a     | n/a      | 63 tok/s
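As a hedged illustration of how one of these hosted endpoints might be queried, the sketch below routes a chat request to the model through Hugging Face's Inference Providers client. The provider name, environment variable, and image URL are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch: querying Qwen3-VL-8B-Instruct via a hosted provider.
# Assumes the model is routable through huggingface_hub's InferenceClient
# and that an API token is available in HF_TOKEN (both are assumptions).
import os

from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita", api_key=os.environ["HF_TOKEN"])

response = client.chat_completion(
    model="Qwen/Qwen3-VL-8B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }],
    max_tokens=128,
)
print(response.choices[0].message.content)
```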

GPU compatibility

GPU                 | VRAM  | Q4 Decode | Verdict
GeForce RTX 4090    | 24GB  | 149 tok/s | comfortable
GeForce RTX 5090    | 32GB  | 224 tok/s | comfortable
M4 Max 128GB        | 128GB | 54 tok/s  | comfortable
M4 Pro 48GB         | 48GB  | 27 tok/s  | tight
M4 Pro 24GB         | 24GB  | 27 tok/s  | tight
A100 PCIe 80 GB     | 80GB  | 274 tok/s | comfortable
H100 SXM5 80 GB     | 80GB  | 551 tok/s | comfortable
GeForce RTX 3090    | 24GB  | 132 tok/s | comfortable
Radeon RX 7900 XTX  | 24GB  | 109 tok/s | comfortable
GeForce RTX 4080    | 16GB  | 106 tok/s | comfortable
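For programmatic use of this table, the sketch below filters cards by verdict and decode throughput. The rows are copied from the table above (a subset, for brevity); the 100 tok/s interactive-use threshold is an arbitrary assumption, not a vram.run recommendation.

```python
# Sketch: pick cards judged "comfortable" that also clear a decode-speed target.
from dataclasses import dataclass

@dataclass
class GpuRow:
    name: str
    vram_gb: int
    q4_decode_tps: int
    verdict: str

# Subset of the compatibility table above.
ROWS = [
    GpuRow("GeForce RTX 4090", 24, 149, "comfortable"),
    GpuRow("GeForce RTX 5090", 32, 224, "comfortable"),
    GpuRow("M4 Max 128GB", 128, 54, "comfortable"),
    GpuRow("M4 Pro 48GB", 48, 27, "tight"),
    GpuRow("GeForce RTX 4080", 16, 106, "comfortable"),
]

def interactive_picks(rows: list[GpuRow], min_tps: int = 100) -> list[str]:
    """Cards marked comfortable that also hit the (assumed) decode target."""
    return [r.name for r in rows if r.verdict == "comfortable" and r.q4_decode_tps >= min_tps]

print(interactive_picks(ROWS))  # ['GeForce RTX 4090', 'GeForce RTX 5090', 'GeForce RTX 4080']
```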