vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

RTX A4000H

NVIDIA

16GB VRAM · 448 GB/s bandwidth · 19.2 FP16 TFLOPS · 140W TDP

The RTX A4000H has 16GB of VRAM with 448 GB/s memory bandwidth and 19.2 TFLOPS of FP16 compute. At Q4 quantization it can comfortably run Gemma 3 4B (138 tok/s), Qwen 2.5 7B (73 tok/s), and Llama 3.1 8B (69 tok/s); Mistral Small 24B fits, but only tightly. Models larger than ~27B parameters won't fit even at Q4. Electricity cost is approximately $15/month at the 140W TDP, assuming continuous full load.
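The $15/month figure can be reproduced with simple arithmetic; the $0.15/kWh rate below is an assumed electricity price, not one stated on this page:

```python
# Rough monthly electricity cost for a GPU running 24/7 at full TDP.
# The $0.15/kWh rate is an assumed average price, not from the page.
tdp_watts = 140
hours_per_month = 24 * 30
kwh_per_month = tdp_watts * hours_per_month / 1000  # 100.8 kWh
cost = kwh_per_month * 0.15
print(f"{kwh_per_month:.1f} kWh -> ${cost:.2f}/month")  # 100.8 kWh -> $15.12/month
```

Real draw under inference load is usually below TDP, so this is an upper bound.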

What LLMs can you run?

Model                Params   Q4 Weight   Fit           Decode
Gemma 3 4B           4.0B     2 GB        comfortable   138 tok/s
Qwen 2.5 7B          7.6B     4 GB        comfortable   73 tok/s
Llama 3.1 8B         8.0B     4 GB        comfortable   69 tok/s
Mistral Small 24B    24.0B    12 GB       tight         23 tok/s
Gemma 3 27B          27.4B    14 GB       won't fit     n/a
Qwen 2.5 Coder 32B   32.5B    16 GB       won't fit     n/a
Llama 3.3 70B        70.6B    35 GB       won't fit     n/a
Qwen 2.5 72B         72.7B    36 GB       won't fit     n/a
Llama 3.1 405B       405B     202 GB      won't fit     n/a
DeepSeek R1 671B     671B     336 GB      won't fit     n/a
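The table's numbers follow two simple patterns: Q4 weights take roughly 0.5 bytes per parameter, and decode speed is memory-bandwidth bound (every generated token reads all weights once). The sketch below reproduces them under stated assumptions; the 3 GB headroom and the 0.62 effective-bandwidth factor are back-fitted to this table, not rules the page states:

```python
def q4_fit(params_b: float, vram_gb: float = 16.0) -> str:
    """Classify whether a Q4-quantized model fits in VRAM.
    Q4 weights ~= 0.5 bytes/param. The 3 GB headroom (KV cache,
    activations, runtime) and the comfortable/tight split are
    assumptions chosen to match the table above."""
    weight_gb = params_b * 0.5
    if weight_gb + 3.0 > vram_gb:
        return "won't fit"
    if weight_gb <= vram_gb / 2:
        return "comfortable"
    return "tight"

def decode_tok_s(params_b: float, bw_gbps: float = 448.0) -> int:
    """Bandwidth-bound decode estimate: tokens/s ~= effective bandwidth
    divided by weight bytes read per token. The 0.62 utilization factor
    is back-fitted to the page's figures, not stated by it."""
    weight_gb = params_b * 0.5
    return int(bw_gbps * 0.62 / weight_gb)

print(q4_fit(8.0), decode_tok_s(8.0))    # comfortable 69
print(q4_fit(24.0), decode_tok_s(24.0))  # tight 23
print(q4_fit(27.4))                      # won't fit
```

With these two constants the sketch matches every row of the table, which suggests the page itself uses a bandwidth-roofline model rather than measured throughput.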

Similar GPUs

GPU                         VRAM   Bandwidth   FP16 TFLOPS   TDP
Quadro RTX 5000             16GB   448 GB/s    22.3          230W
GeForce RTX 5060 Ti 16 GB   16GB   448 GB/s    23.7          180W
RTX A4000                   16GB   448 GB/s    19.2          140W
Radeon Instinct MI25        16GB   436 GB/s    24.6          300W
Arc A770                    16GB   512 GB/s    39.3          225W


MIT · v0.6.0