vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

A40 PCIe

NVIDIA

48GB VRAM · 695 GB/s bandwidth · 37.4 FP16 TFLOPS · 300W TDP

The A40 PCIe has 48GB of VRAM with 695 GB/s memory bandwidth and 37.4 TFLOPS of FP16 compute. At Q4 quantization it can comfortably run Gemma 3 4B (215 tok/s), Qwen 2.5 7B (113 tok/s), and Llama 3.1 8B (107 tok/s). Models larger than ~82B parameters won't fit on this card even at Q4. Electricity cost is approximately $32/month at its 300W TDP.
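The $32/month figure can be sanity-checked with simple arithmetic. The sketch below assumes 24/7 operation at full TDP and a $0.15/kWh electricity rate (both assumptions; that rate is what makes the page's figure come out to ~$32 for 300W).

```python
# Rough monthly electricity cost for a GPU running at full TDP.
# Assumes continuous 24/7 draw and $0.15/kWh (assumed rate).

def monthly_power_cost(tdp_watts: float, usd_per_kwh: float = 0.15,
                       hours: float = 24 * 30) -> float:
    kwh = tdp_watts / 1000 * hours   # 300W * 720h = 216 kWh
    return kwh * usd_per_kwh

print(round(monthly_power_cost(300)))  # → 32
```

At lower rates or partial utilization the real cost scales down proportionally.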

What LLMs can you run?

Model              | Params | Q4 Weight | Fit         | Decode
-------------------|--------|-----------|-------------|----------
Gemma 3 4B         | 4.0B   | 2 GB      | comfortable | 215 tok/s
Qwen 2.5 7B        | 7.6B   | 4 GB      | comfortable | 113 tok/s
Llama 3.1 8B       | 8.0B   | 4 GB      | comfortable | 107 tok/s
Mistral Small 24B  | 24.0B  | 12 GB     | comfortable | 35 tok/s
Gemma 3 27B        | 27.4B  | 14 GB     | comfortable | 31 tok/s
Qwen 2.5 Coder 32B | 32.5B  | 16 GB     | tight       | 26 tok/s
Llama 3.3 70B      | 70.6B  | 35 GB     | tight       | 12 tok/s
Qwen 2.5 72B       | 72.7B  | 36 GB     | tight       | 11 tok/s
Llama 3.1 405B     | 405B   | 202 GB    | won't fit   | —
DeepSeek R1 671B   | 671B   | 336 GB    | won't fit   | —
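The fit and decode columns above follow from back-of-envelope rules: Q4 weights take roughly 0.5 bytes per parameter, and decode is memory-bandwidth-bound, so tok/s ≈ bandwidth ÷ weight size × utilization. The sketch below is a rough reconstruction, not the site's actual model; the ~0.6 memory-bandwidth-utilization factor and the headroom thresholds are assumptions tuned to approximate the table.

```python
# Back-of-envelope Q4 sizing and decode-speed estimate for one GPU.
# MBU (memory-bandwidth utilization) ~0.6 is an assumed efficiency factor.

VRAM_GB = 48.0    # A40 PCIe VRAM
BW_GBPS = 695.0   # A40 PCIe memory bandwidth
MBU = 0.6         # assumed real-world bandwidth utilization

def q4_weight_gb(params_billion: float) -> float:
    return params_billion * 0.5  # 4-bit quant ≈ 0.5 bytes/param

def fit(params_billion: float) -> str:
    w = q4_weight_gb(params_billion)
    if w > VRAM_GB * 0.85:       # leave headroom for KV cache/runtime
        return "won't fit"
    return "comfortable" if w < VRAM_GB * 0.5 else "tight"

def decode_tok_s(params_billion: float) -> float:
    # Each generated token streams all weights through memory once.
    return BW_GBPS / q4_weight_gb(params_billion) * MBU

print(fit(70.6), round(decode_tok_s(70.6)))  # → tight 12
```

For Llama 3.3 70B this gives 35.3 GB of weights and ~12 tok/s, matching the table; the 48 GB × 0.85 ≈ 41 GB cutoff also recovers the "~82B won't fit" claim.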

Similar GPUs

GPU                     | VRAM | BW       | FP16 TFLOPS | TDP
------------------------|------|----------|-------------|-----
Quadro RTX 8000         | 48GB | 672 GB/s | 32.6        | 260W
Quadro RTX 8000 Passive | 48GB | 624 GB/s | 29.9        | 260W
RTX A6000               | 48GB | 768 GB/s | 38.7        | 300W
M4 Max 48GB             | 48GB | 546 GB/s | 36.9        | 75W
Radeon PRO W7800 48 GB  | 48GB | 864 GB/s | 90.5        | 281W


Install CLI · [email protected] · Raw data · MIT · API data: live · HW/Cloud data: curated 2026-02-23 · v0.6.0