vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

GeForce RTX 3080 12 GB

NVIDIA

12GB VRAM · 912 GB/s bandwidth · 30.6 FP16 TFLOPS · 350W TDP

The GeForce RTX 3080 12 GB has 12GB of VRAM with 912 GB/s memory bandwidth and 30.6 TFLOPS of FP16 compute. At Q4 quantization, it can comfortably run Gemma 3 4B (282 tok/s), Qwen 2.5 7B (148 tok/s), and Llama 3.1 8B (140 tok/s). Models larger than ~20B parameters won't fit even at Q4. Electricity costs approximately $38/month if the card runs continuously at its 350W TDP (assuming ~$0.15/kWh).
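The electricity figure follows from TDP alone. A minimal sketch of that arithmetic, assuming 24/7 operation at full TDP and a $0.15/kWh rate (the rate is my assumption, not stated on the page):

```python
# Rough monthly electricity cost for a GPU running inference 24/7.
# $0.15/kWh and 730 hours/month are assumptions, not vram.run's inputs.
def monthly_power_cost(tdp_watts: float, usd_per_kwh: float = 0.15,
                       hours: float = 730.0) -> float:
    """Cost in USD for `hours` of sustained draw at `tdp_watts`."""
    kwh = tdp_watts / 1000.0 * hours
    return kwh * usd_per_kwh

print(round(monthly_power_cost(350)))  # -> 38 (USD/month at full 350W TDP)
```

Real inference workloads idle between requests, so actual draw (and cost) will usually be lower than this full-TDP ceiling.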

What LLMs can you run?

| Model | Params | Q4 Weight | Fit | Decode |
| --- | --- | --- | --- | --- |
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 282 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 148 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 140 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | won't fit | — |
| Gemma 3 27B | 27.4B | 14 GB | won't fit | — |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | won't fit | — |
| Llama 3.3 70B | 70.6B | 35 GB | won't fit | — |
| Qwen 2.5 72B | 72.7B | 36 GB | won't fit | — |
| Llama 3.1 405B | 405B | 202 GB | won't fit | — |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | — |
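The fit and decode columns follow a well-known pattern: Q4 weights take roughly 0.5 bytes per parameter, and single-stream decode is memory-bound, so throughput is capped at bandwidth divided by the bytes read per token. A sketch under those assumptions (the 1.5 GB overhead figure is my guess for KV cache and runtime state, not vram.run's exact method):

```python
# Sizing logic behind tables like the one above (assumptions, not
# vram.run's exact method): Q4 weights ~0.5 bytes/parameter, and
# memory-bound decode is capped at bandwidth / bytes-read-per-token.
VRAM_GB = 12.0
BANDWIDTH_GBS = 912.0

def q4_weight_gb(params_billion: float) -> float:
    return params_billion * 0.5  # ~4 bits per weight

def fits(params_billion: float, overhead_gb: float = 1.5) -> bool:
    # Leave headroom for KV cache, activations, and CUDA context.
    return q4_weight_gb(params_billion) + overhead_gb <= VRAM_GB

def decode_upper_bound(params_billion: float) -> float:
    # Each decoded token reads all weights once: tok/s <= BW / weight size.
    return BANDWIDTH_GBS / q4_weight_gb(params_billion)

print(fits(8.0), round(decode_upper_bound(8.0)))  # True 228
print(fits(24.0))                                 # False: 12 GB weights + overhead > 12 GB
```

Note the bound is a ceiling: Llama 3.1 8B's theoretical 228 tok/s is well above the 140 tok/s in the table, since real kernels don't reach full bandwidth and add per-token overhead.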

Similar GPUs

| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
| --- | --- | --- | --- | --- |
| GeForce RTX 3080 Ti | 12GB | 912 GB/s | 34.1 | 350W |
| RTX A5000-12Q | 12GB | 768 GB/s | 27.8 | 230W |
| GeForce RTX 5070 | 12GB | 672 GB/s | 30.9 | 250W |
| TITAN V | 12GB | 651 GB/s | 29.8 | 250W |
| Tesla P100 PCIe 12 GB | 12GB | 549 GB/s | 19.1 | 250W |
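Since decode is bandwidth-bound, bandwidth per watt is a rough proxy for inference efficiency across these cards. A quick ranking using the figures above (the proxy metric is my framing, not a vram.run metric):

```python
# Rank the similar GPUs by bandwidth per watt, a rough proxy for
# decode throughput per watt. (bandwidth GB/s, TDP W) from the tables.
gpus = {
    "GeForce RTX 3080 12 GB": (912, 350),
    "GeForce RTX 3080 Ti":    (912, 350),
    "RTX A5000-12Q":          (768, 230),
    "GeForce RTX 5070":       (672, 250),
    "TITAN V":                (651, 250),
    "Tesla P100 PCIe 12 GB":  (549, 250),
}
ranked = sorted(gpus.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (bw, tdp) in ranked:
    print(f"{name}: {bw / tdp:.2f} GB/s per W")
```

By this measure the lower-TDP RTX A5000-12Q comes out ahead (~3.3 GB/s per W vs. ~2.6 for the 3080), though raw decode speed still favors the 912 GB/s cards.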


Raw data · MIT · v0.6.0