vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

QRWKV6-32B-Instruct-Preview-v0.1

by recursal

34.7B params · text-generation · 79 likes · 6 downloads

QRWKV6-32B-Instruct-Preview-v0.1 is a 34.7B-parameter model. At Q4 quantization it requires about 17 GB of VRAM, and it runs comfortably on the GeForce RTX 4090 (37 tok/s), GeForce RTX 5090 (56 tok/s), and A100 PCIe 80 GB (69 tok/s).
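
As a sanity check on that figure: Q4 stores weights at roughly 4 bits (0.5 bytes) per parameter, so 34.7B parameters come to about 17.35 GB of weights alone. A minimal sketch of the arithmetic (the flat 4.0 bits/param and the exclusion of KV cache are simplifying assumptions):

    # Weight-only VRAM estimate for a Q4-quantized model.
    # Assumes a flat 4.0 bits per parameter; real Q4 formats
    # (e.g. GGUF Q4_K_M) average slightly more per weight, and
    # the KV cache plus runtime buffers add several GB on top.
    def weight_vram_gb(params_billion: float, bits_per_param: float = 4.0) -> float:
        return params_billion * 1e9 * bits_per_param / 8 / 1e9  # decimal GB

    print(f"{weight_vram_gb(34.7):.1f} GB")  # -> 17.3 GB, consistent with the ~17 GB above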

Inference providers

Provider       $/1M in    $/1M out    Throughput
Featherless    -          -           -

GPU compatibility

GPU                   VRAM     Q4 Decode    Verdict
GeForce RTX 4090      24 GB    37 tok/s     comfortable
GeForce RTX 5090      32 GB    56 tok/s     comfortable
M4 Max 128GB          128 GB   13 tok/s     tight
M4 Pro 48GB           48 GB    6 tok/s      tight
M4 Pro 24GB           24 GB    6 tok/s      tight
A100 PCIe 80 GB       80 GB    69 tok/s     comfortable
H100 SXM5 80 GB       80 GB    139 tok/s    comfortable
GeForce RTX 3090      24 GB    33 tok/s     comfortable
Radeon RX 7900 XTX    24 GB    27 tok/s     tight
M4 Max 64GB           64 GB    13 tok/s     tight
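
The verdicts above weigh both memory fit and decode speed. As a rough illustration only (the 4 GB overhead allowance and the 30 tok/s cutoff are assumptions chosen to match this table, not vram.run's published rubric):

    # Illustrative fit/speed verdict; the overhead allowance and
    # speed cutoff below are assumptions, not vram.run's actual rubric.
    MODEL_Q4_GB = 17.0  # Q4 weight footprint from this page

    def verdict(vram_gb: float, q4_decode_tok_s: float) -> str:
        if vram_gb < MODEL_Q4_GB + 4.0:  # assumed KV-cache/runtime overhead
            return "won't fit"
        return "comfortable" if q4_decode_tok_s >= 30 else "tight"

    print(verdict(24, 37))  # GeForce RTX 4090   -> comfortable
    print(verdict(24, 27))  # Radeon RX 7900 XTX -> tight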