API provider data is live · Hardware & cloud pricing curated 2026-02-23

bge-small-en-v1.5

by BAAI

33M params · feature-extraction · 419 likes · 6.8M downloads

bge-small-en-v1.5 is a 33M-parameter model. At Q4 quantization its weights occupy well under 0.1 GB of VRAM (shown as 0 GB after rounding). It runs comfortably on a GeForce RTX 4090 (39,357 tok/s), GeForce RTX 5090 (59,021 tok/s), or M4 Max 128GB (14,402 tok/s).
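The "0 GB" figure is a rounding artifact: at 4 bits per weight, 33M parameters come to only tens of megabytes. A minimal sketch of the arithmetic, assuming weights dominate memory for a short-context encoder and a roughly 10% runtime overhead (both illustrative assumptions, not necessarily the exact formula this page uses):

```python
# Back-of-the-envelope VRAM estimate for a quantized embedding model.
# Assumes weights dominate (no large KV cache for a short-context encoder)
# and a ~10% overhead factor; both are illustrative assumptions.

def estimate_vram_gb(params: float, bits_per_weight: float, overhead: float = 1.10) -> float:
    """Approximate VRAM footprint in GB for the model weights."""
    weight_bytes = params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

if __name__ == "__main__":
    # bge-small-en-v1.5: ~33M parameters at Q4 (4 bits per weight)
    print(f"{estimate_vram_gb(33e6, 4):.3f} GB")  # ~0.018 GB, i.e. rounds to 0 GB
```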

Inference providers

Provider | $/1M in | $/1M out | Throughput
HF Inference
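Since bge-small-en-v1.5 is a feature-extraction model, the HF Inference provider is typically queried for embeddings. A minimal sketch using the `huggingface_hub` client; the model ID `BAAI/bge-small-en-v1.5`, the `HF_TOKEN` environment variable, and the choice of `feature_extraction` are assumptions based on the model card above, not details taken from this page:

```python
# Minimal embedding request against the HF Inference provider (sketch).
# Assumes an HF_TOKEN environment variable with API access; the model ID
# "BAAI/bge-small-en-v1.5" is assumed from the card above.
import os

from huggingface_hub import InferenceClient

client = InferenceClient(model="BAAI/bge-small-en-v1.5", token=os.environ.get("HF_TOKEN"))

# feature_extraction returns the embedding(s) for the input text as an array;
# bge-small-en-v1.5 produces 384-dimensional embeddings.
embedding = client.feature_extraction("VRAM requirements for small embedding models")
print(getattr(embedding, "shape", len(embedding)))
```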

GPU compatibility

GPU | VRAM | Q4 Decode | Verdict
GeForce RTX 4090 | 24 GB | 39,357 tok/s | comfortable
GeForce RTX 5090 | 32 GB | 59,021 tok/s | comfortable
M4 Max 128GB | 128 GB | 14,402 tok/s | comfortable
M4 Pro 48GB | 48 GB | 7,201 tok/s | comfortable
M4 Pro 24GB | 24 GB | 7,201 tok/s | comfortable
A100 PCIe 80 GB | 80 GB | 72,109 tok/s | comfortable
H100 SXM5 80 GB | 80 GB | 145,033 tok/s | comfortable
GeForce RTX 3090 | 24 GB | 34,798 tok/s | comfortable
Radeon RX 7900 XTX | 24 GB | 28,776 tok/s | comfortable
GeForce RTX 4080 | 16 GB | 27,932 tok/s | comfortable
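The verdict column reflects how much headroom a card's VRAM leaves over the model's footprint. A minimal sketch of such a check; the 25% headroom threshold and the labels are illustrative assumptions, not the exact rules this page applies:

```python
# Classify how a model fits on a GPU given its estimated VRAM need.
# Threshold and labels are illustrative assumptions, not this page's actual rules.

def fit_verdict(model_vram_gb: float, gpu_vram_gb: float) -> str:
    if model_vram_gb > gpu_vram_gb:
        return "won't fit"
    headroom = gpu_vram_gb - model_vram_gb
    return "comfortable" if headroom >= 0.25 * gpu_vram_gb else "tight"

gpus = {"GeForce RTX 4080": 16, "GeForce RTX 4090": 24, "M4 Max 128GB": 128}
for name, vram_gb in gpus.items():
    # A 33M-parameter Q4 model needs ~0.02 GB, so every listed card is comfortable.
    print(f"{name}: {fit_verdict(0.02, vram_gb)}")
```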