vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

GeForce RTX 3080

NVIDIA

10GB VRAM · 760 GB/s bandwidth · 29.8 FP16 TFLOPS · 320W TDP

The GeForce RTX 3080 has 10GB of VRAM with 760 GB/s memory bandwidth and 29.8 TFLOPS of FP16 compute. At Q4 quantization it can comfortably run Gemma 3 4B (235 tok/s), Qwen 2.5 7B (124 tok/s), and Llama 3.1 8B (117 tok/s). Models larger than ~17B parameters won't fit even at Q4. Running continuously at its 320W TDP costs approximately $35/month in electricity.
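The sizing and cost figures above follow from simple arithmetic. A minimal sketch of that math (the site's exact formulas are not published; the ~20% runtime overhead and $0.15/kWh electricity rate are assumptions consistent with the numbers shown):

```python
# Rough sizing math behind the summary above. Assumptions (not from the
# source): Q4 ~= 0.5 bytes/parameter, ~20% overhead for KV cache and
# runtime buffers, electricity at $0.15/kWh.

def q4_weight_gb(params_b: float) -> float:
    """Approximate Q4 weight size in GB for params_b billion parameters."""
    return params_b * 0.5  # 4 bits ~= 0.5 bytes per parameter

def fits(params_b: float, vram_gb: float, overhead: float = 1.2) -> bool:
    """True if Q4 weights plus ~20% runtime overhead fit in VRAM."""
    return q4_weight_gb(params_b) * overhead <= vram_gb

def monthly_electricity_usd(tdp_w: float, usd_per_kwh: float = 0.15) -> float:
    """Cost of running at full TDP 24/7 for 30 days."""
    return tdp_w / 1000 * 24 * 30 * usd_per_kwh

print(q4_weight_gb(8.0))                    # 4.0 -> matches the 4 GB in the table
print(fits(8.0, 10), fits(24.0, 10))        # True False
print(round(monthly_electricity_usd(320)))  # 35
```

Under these assumptions a 17B model needs about 10.2 GB (17 × 0.5 × 1.2), which is just over the card's 10GB and matches the ~17B cutoff quoted above.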

What LLMs can you run?

| Model | Params | Q4 Weight | Fit | Decode |
| --- | --- | --- | --- | --- |
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 235 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 124 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 117 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | won't fit | |
| Gemma 3 27B | 27.4B | 14 GB | won't fit | |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | won't fit | |
| Llama 3.3 70B | 70.6B | 35 GB | won't fit | |
| Qwen 2.5 72B | 72.7B | 36 GB | won't fit | |
| Llama 3.1 405B | 405B | 202 GB | won't fit | |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | |
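The decode speeds in the table are bounded by memory bandwidth: generating each token requires streaming the full weight set from VRAM, so throughput is capped near bandwidth divided by weight size. A sketch of that ceiling (real throughput is lower because of KV-cache reads and imperfect bandwidth utilization):

```python
# Memory-bandwidth-bound decode ceiling: tok/s <= bandwidth / weight bytes.
# Numbers here are taken from the RTX 3080 page above.

def decode_ceiling_toks(bandwidth_gbs: float, weights_gb: float) -> float:
    """Upper bound on decode tokens/sec for a bandwidth-bound workload."""
    return bandwidth_gbs / weights_gb

# Llama 3.1 8B at Q4 (~4 GB weights) on 760 GB/s:
print(decode_ceiling_toks(760, 4))  # 190.0
```

The table's measured 117 tok/s for Llama 3.1 8B is roughly 62% of this 190 tok/s theoretical ceiling, which is typical for real decode workloads.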

Similar GPUs

| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
| --- | --- | --- | --- | --- |
| Arc B570 | 10GB | 380 GB/s | 23.0 | 150W |
| Radeon RX 6700 | 10GB | 320 GB/s | 22.6 | 175W |
| Radeon RX 6750 GRE 10 GB | 10GB | 320 GB/s | 22.6 | 170W |
| GeForce RTX 2080 Ti | 11GB | 616 GB/s | 26.9 | 250W |
| GeForce GTX 1080 Ti | 11GB | 484 GB/s | 0.2 | 250W |
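Since decode speed tracks memory bandwidth rather than TFLOPS, bandwidth per watt is a useful efficiency lens on these similar GPUs. A small sketch ranking the cards above by that metric (data copied from the table, with the RTX 3080 included for comparison):

```python
# Rank GPUs by memory bandwidth per watt of TDP, the figure that matters
# most for decode throughput per unit of power.

gpus = {
    "GeForce RTX 3080":         (760, 320),
    "Arc B570":                 (380, 150),
    "Radeon RX 6700":           (320, 175),
    "Radeon RX 6750 GRE 10 GB": (320, 170),
    "GeForce RTX 2080 Ti":      (616, 250),
    "GeForce GTX 1080 Ti":      (484, 250),
}

# Sort by GB/s per watt, highest first.
ranked = sorted(gpus.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (bw, tdp) in ranked:
    print(f"{name}: {bw / tdp:.2f} GB/s per W")
```

By this measure the Arc B570 (2.53 GB/s per W) edges out both the RTX 2080 Ti (2.46) and the RTX 3080 itself (2.38).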

