vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

led-large-book-summary

by pszemraj

460M params · summarization · 120 likes · 3.0k downloads

led-large-book-summary is a 460M-parameter model. At Q4 quantization it requires well under 1GB of VRAM (roughly 0.3GB for the weights), so it fits on every card listed below. It runs comfortably on the GeForce RTX 4090 (2855 tok/s), GeForce RTX 5090 (4281 tok/s), and M4 Max 128GB (1044 tok/s).
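
The VRAM figure follows directly from the parameter count and bits per weight. A minimal sketch of that arithmetic (the function name and 20% overhead factor are illustrative assumptions, not vram.run's exact sizing formula):

```python
# Rough VRAM estimate for model weights at a given quantization level.
# Illustrative approximation only, not vram.run's exact sizing formula.

def estimate_weight_vram_gb(params: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Weights take params * bits / 8 bytes; overhead (~20%, assumed) covers activations etc."""
    bytes_for_weights = params * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# led-large-book-summary: ~460M parameters at Q4 (4 bits per weight)
print(f"{estimate_weight_vram_gb(460e6, 4):.2f} GB")  # ~0.28 GB, i.e. well under 1 GB
```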

Inference providers

Provider | $/1M in | $/1M out | Throughput
HF Inference | — | — | —
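
Independent of any hosted provider, the model is small enough to run locally. A minimal sketch using the transformers summarization pipeline (assumes transformers and torch are installed; the input file name and generation settings are illustrative placeholders):

```python
# Minimal local inference sketch for pszemraj/led-large-book-summary.
# Assumes the transformers library is installed; generation settings are illustrative.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/led-large-book-summary",
    device_map="auto",  # place the model on a GPU if one is available
)

long_text = open("chapter.txt").read()  # placeholder input; LED accepts long documents
summary = summarizer(long_text, max_length=256, min_length=32, no_repeat_ngram_size=3)
print(summary[0]["summary_text"])
```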

GPU compatibility

GPU | VRAM | Q4 Decode | Verdict
GeForce RTX 4090 | 24GB | 2855 tok/s | comfortable
GeForce RTX 5090 | 32GB | 4281 tok/s | comfortable
M4 Max 128GB | 128GB | 1044 tok/s | comfortable
M4 Pro 48GB | 48GB | 522 tok/s | comfortable
M4 Pro 24GB | 24GB | 522 tok/s | comfortable
A100 PCIe 80 GB | 80GB | 5231 tok/s | comfortable
H100 SXM5 80 GB | 80GB | 10521 tok/s | comfortable
GeForce RTX 3090 | 24GB | 2524 tok/s | comfortable
Radeon RX 7900 XTX | 24GB | 2087 tok/s | comfortable
GeForce RTX 4080 | 16GB | 2026 tok/s | comfortable
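
The verdict column amounts to comparing the model's quantized footprint against each card's VRAM. A toy sketch of that comparison (the thresholds and the abbreviated GPU list are illustrative assumptions, not vram.run's actual rubric):

```python
# Toy fit check: compare an estimated Q4 footprint against listed GPU VRAM.
# Thresholds and the meaning of "comfortable" here are illustrative assumptions.

MODEL_Q4_GB = 0.3  # ~460M params at 4 bits per weight, plus some overhead

gpus = {
    "GeForce RTX 4090": 24,
    "GeForce RTX 4080": 16,
    "M4 Pro 24GB": 24,
}

for name, vram_gb in gpus.items():
    headroom = vram_gb - MODEL_Q4_GB
    verdict = "comfortable" if headroom > 0.5 * vram_gb else "tight" if headroom > 0 else "does not fit"
    print(f"{name}: {vram_gb} GB VRAM -> {verdict}")
```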