API provider data is live · Hardware & cloud pricing curated 2026-02-23

GeForce RTX 5080

NVIDIA

16GB VRAM · 960 GB/s bandwidth · 56.3 FP16 TFLOPS · 360W TDP

The GeForce RTX 5080 has 16GB of VRAM with 960 GB/s memory bandwidth and 56.3 FP16 TFLOPS of compute. At Q4 quantization it comfortably runs Gemma 3 4B (264 tok/s), Qwen 2.5 7B (138 tok/s), Llama 3.1 8B (131 tok/s), and Mistral Small 24B (44 tok/s). Models of roughly 27B parameters and up won't fit even at Q4: their weights alone approach the full 16GB before KV cache and runtime overhead are accounted for. Electricity cost is approximately $39/month when running continuously at the full 360W TDP.
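The electricity figure checks out as a sustained-load estimate. A minimal worked example, assuming 24/7 operation at full TDP and a residential rate of about $0.15/kWh (the rate is an assumption; the page doesn't state one):

```python
# Monthly electricity cost at sustained full TDP.
# The $0.15/kWh rate is an assumed typical residential price,
# not stated on the page.
TDP_KW = 0.360           # RTX 5080 TDP in kilowatts
HOURS_PER_MONTH = 730    # ~24/7 operation
RATE_USD_PER_KWH = 0.15  # assumed

kwh = TDP_KW * HOURS_PER_MONTH                 # ~263 kWh
print(f"${kwh * RATE_USD_PER_KWH:.0f}/month")  # -> $39/month
```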

What LLMs can you run?

Model               Params  Q4 Weight  Fit          Decode
Gemma 3 4B          4.0B    2 GB       comfortable  264 tok/s
Qwen 2.5 7B         7.6B    4 GB       comfortable  138 tok/s
Llama 3.1 8B        8.0B    4 GB       comfortable  131 tok/s
Mistral Small 24B   24.0B   12 GB      comfortable  44 tok/s
Gemma 3 27B         27.4B   14 GB      won't fit    –
Qwen 2.5 Coder 32B  32.5B   16 GB      won't fit    –
Llama 3.3 70B       70.6B   35 GB      won't fit    –
Qwen 2.5 72B        72.7B   36 GB      won't fit    –
Llama 3.1 405B      405B    202 GB     won't fit    –
DeepSeek R1 671B    671B    336 GB     won't fit    –
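
The Fit and Decode columns follow a simple pattern: Q4 weights take roughly 0.5 bytes per parameter, a model only fits if its weights leave headroom for KV cache and runtime overhead, and decode speed is memory-bandwidth-bound. Below is a back-of-envelope sketch that reproduces the table; the ~55% bandwidth efficiency and ~20% headroom constants are fitted to the published numbers here, not vram.run's actual formula:

```python
# Rough VRAM-fit and decode-speed estimator for the table above.
# The efficiency and headroom constants are assumptions fitted to
# the published numbers, not vram.run's internal model.
VRAM_GB = 16.0          # GeForce RTX 5080
BANDWIDTH_GBPS = 960.0
EFFICIENCY = 0.55       # assumed usable fraction of peak bandwidth
HEADROOM = 0.8          # assumed share of VRAM available for weights

def q4_weight_gb(params_b: float) -> float:
    """Q4 quantization stores roughly 0.5 bytes per parameter."""
    return params_b * 0.5

def fits(params_b: float) -> bool:
    """Weights must leave room for KV cache and runtime overhead."""
    return q4_weight_gb(params_b) <= VRAM_GB * HEADROOM

def decode_tok_s(params_b: float) -> float:
    """Decode streams every weight once per token, so it is bounded
    by memory bandwidth rather than compute."""
    return EFFICIENCY * BANDWIDTH_GBPS / q4_weight_gb(params_b)

for name, params in [("Gemma 3 4B", 4.0), ("Llama 3.1 8B", 8.0),
                     ("Mistral Small 24B", 24.0), ("Gemma 3 27B", 27.4)]:
    if fits(params):
        print(f"{name}: fits, ~{decode_tok_s(params):.0f} tok/s")
    else:
        print(f"{name}: won't fit at Q4")
```

This bandwidth-bound view also explains why the 5080's 56.3 FP16 TFLOPS barely matter for single-stream decode: each generated token has to read all the weights from VRAM, so the 960 GB/s bandwidth is the binding constraint.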

Similar GPUs

GPU                     VRAM  Bandwidth  FP16 TFLOPS  TDP
Radeon Instinct MI50    16GB  1020 GB/s  26.8         300W
Radeon VII              16GB  1020 GB/s  26.9         295W
GeForce RTX 5070 Ti     16GB  896 GB/s   43.9         300W
GeForce RTX 4080 SUPER  16GB  736 GB/s   52.2         320W
Quadro GP100            16GB  732 GB/s   20.7         235W
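
Applying the same bandwidth-bound rule of thumb to the neighbors above shows how decode speed tracks memory bandwidth rather than TFLOPS. These are illustrative estimates under the assumptions of the earlier sketch, not measured numbers:

```python
# Illustrative decode estimates for Llama 3.1 8B (Q4, ~4 GB weights)
# on the GPUs above, reusing the assumed 55% bandwidth efficiency.
# Real throughput depends on kernels, drivers, and runtime support.
WEIGHT_GB = 4.0
EFFICIENCY = 0.55  # assumed, fitted to the RTX 5080 table

gpus = {
    "Radeon Instinct MI50":   1020,
    "Radeon VII":             1020,
    "GeForce RTX 5070 Ti":     896,
    "GeForce RTX 4080 SUPER":  736,
    "Quadro GP100":            732,
}

for name, bw_gbps in gpus.items():
    est = EFFICIENCY * bw_gbps / WEIGHT_GB
    print(f"{name:<24} ~{est:.0f} tok/s")
```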
