vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

RTX A4500

NVIDIA

20 GB VRAM · 640 GB/s bandwidth · 23.6 FP16 TFLOPS · 200 W TDP

The RTX A4500 pairs 20 GB of VRAM with 640 GB/s of memory bandwidth and 23.6 TFLOPS of FP16 compute. At Q4 quantization it comfortably runs Gemma 3 4B (198 tok/s), Qwen 2.5 7B (104 tok/s), and Llama 3.1 8B (98 tok/s). Models larger than ~34B parameters won't fit even at Q4. Running continuously at its 200 W TDP costs roughly $22/month in electricity.
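The electricity figure follows from TDP alone. A minimal sketch of the arithmetic, assuming a flat $0.15/kWh rate (the page does not state which rate it uses):

```python
# Estimate monthly electricity cost for a GPU running at full TDP.
# The $0.15/kWh rate is an assumption; actual rates vary by region.
def monthly_cost(tdp_watts: float, usd_per_kwh: float = 0.15, hours: float = 730) -> float:
    kwh = tdp_watts / 1000 * hours  # energy drawn over ~one month
    return kwh * usd_per_kwh

print(round(monthly_cost(200), 2))  # ~ $21.90 for the A4500's 200 W TDP
```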

What LLMs can you run?

Model                 Params   Q4 Weight   Fit           Decode
Gemma 3 4B            4.0B     2 GB        comfortable   198 tok/s
Qwen 2.5 7B           7.6B     4 GB        comfortable   104 tok/s
Llama 3.1 8B          8.0B     4 GB        comfortable   98 tok/s
Mistral Small 24B     24.0B    12 GB       comfortable   33 tok/s
Gemma 3 27B           27.4B    14 GB       tight         28 tok/s
Qwen 2.5 Coder 32B    32.5B    16 GB       tight         24 tok/s
Llama 3.3 70B         70.6B    35 GB       won't fit
Qwen 2.5 72B          72.7B    36 GB       won't fit
Llama 3.1 405B        405B     202 GB      won't fit
DeepSeek R1 671B      671B     336 GB      won't fit
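The table's sizing follows simple rules of thumb: Q4 weights take about 0.5 bytes per parameter, and decode speed is capped by memory bandwidth divided by weight size. A sketch of those rules (the fit thresholds here are illustrative assumptions, not necessarily the site's exact method):

```python
# Rough Q4 sizing behind the table above. Thresholds are illustrative:
# "tight" means little headroom left for KV cache and activations.
VRAM_GB, BW_GBPS = 20, 640  # RTX A4500 specs from this page

def q4_weight_gb(params_b: float) -> float:
    return params_b * 0.5  # ~4 bits (0.5 bytes) per weight at Q4

def fit(params_b: float) -> str:
    w = q4_weight_gb(params_b)
    if w > VRAM_GB * 0.85:  # assumed cutoff: no room for KV cache
        return "won't fit"
    return "tight" if w > VRAM_GB * 0.65 else "comfortable"

for name, p in [("Llama 3.1 8B", 8.0), ("Gemma 3 27B", 27.4), ("Llama 3.3 70B", 70.6)]:
    w = q4_weight_gb(p)
    ceiling = BW_GBPS / w  # bandwidth-bound tok/s upper bound per token
    print(f"{name}: {w:.1f} GB, {fit(p)}, <= {ceiling:.0f} tok/s")
```

The bandwidth ceiling overestimates real throughput (e.g. 160 tok/s for Llama 3.1 8B vs the measured 98 tok/s), since decode never reaches 100% bandwidth utilization.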

Similar GPUs

GPU                             VRAM    Bandwidth   FP16 TFLOPS   TDP
GeForce RTX 3080 Ti 20 GB       20 GB   760 GB/s    34.1          350 W
A10M                            20 GB   500 GB/s    23.4          150 W
Radeon RX 7900 XT               20 GB   800 GB/s    103.0         300 W
RTX 4000 Ada Generation         20 GB   360 GB/s    26.7          130 W
RTX 4000 SFF Ada Generation     20 GB   280 GB/s    19.2          70 W
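Since decode throughput is largely bandwidth-bound, bandwidth per watt is a rough efficiency proxy for comparing these cards. A small sketch using the figures from the table above:

```python
# Rank the GPUs above by memory bandwidth per watt, a rough proxy for
# decode throughput per unit power. (name, bandwidth GB/s, TDP W)
gpus = [
    ("RTX A4500", 640, 200),
    ("GeForce RTX 3080 Ti 20 GB", 760, 350),
    ("A10M", 500, 150),
    ("Radeon RX 7900 XT", 800, 300),
    ("RTX 4000 Ada Generation", 360, 130),
    ("RTX 4000 SFF Ada Generation", 280, 70),
]
for name, bw, tdp in sorted(gpus, key=lambda g: g[1] / g[2], reverse=True):
    print(f"{name}: {bw / tdp:.1f} GB/s per W")
```

By this metric the 70 W RTX 4000 SFF Ada leads despite having the lowest absolute bandwidth.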

