vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

GeForce RTX 4070 Ti SUPER AD102

NVIDIA

16GB VRAM · 672 GB/s bandwidth · 44.1 FP16 TFLOPS · 285W TDP

The GeForce RTX 4070 Ti SUPER AD102 has 16GB of VRAM with 672 GB/s memory bandwidth and 44.1 TFLOPS FP16 compute. At Q4 quantization, it can comfortably run Gemma 3 4B (218 tok/s), Qwen 2.5 7B (114 tok/s), Llama 3.1 8B (108 tok/s), and Mistral Small 24B (36 tok/s). Models larger than ~27B parameters won't fit even at Q4. Electricity cost is approximately $31/month when running continuously at the 285W TDP.
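The throughput and cost figures above can be approximated from first principles: Q4 weights take roughly 0.5 bytes per parameter, and single-stream decode is memory-bandwidth-bound, so tokens/s scales as bandwidth divided by weight size. A minimal sketch, assuming a ~65% effective-bandwidth factor and a $0.15/kWh electricity rate (both are assumptions inferred to match the page, not values it states):

```python
# Rough decode-speed and power-cost estimates for a bandwidth-bound GPU.
# Assumptions (not from the page): Q4 ~ 0.5 bytes/param, ~65% of peak
# memory bandwidth achieved in practice, $0.15/kWh electricity.

BANDWIDTH_GBPS = 672   # GB/s memory bandwidth (RTX 4070 Ti SUPER)
TDP_WATTS = 285
EFFICIENCY = 0.65      # assumed fraction of peak bandwidth achieved
KWH_RATE = 0.15        # assumed electricity price in $/kWh

def q4_weight_gb(params_billion: float) -> float:
    """Approximate Q4 model size: ~0.5 bytes per parameter."""
    return params_billion * 0.5

def decode_tok_per_s(params_billion: float) -> float:
    """Decode streams all weights once per token, so speed is
    roughly effective bandwidth / weight size."""
    return BANDWIDTH_GBPS / q4_weight_gb(params_billion) * EFFICIENCY

def monthly_power_cost() -> float:
    """Electricity cost for 24/7 operation at full TDP."""
    return TDP_WATTS / 1000 * 24 * 30 * KWH_RATE

print(round(decode_tok_per_s(4.0)))  # Gemma 3 4B estimate
print(round(monthly_power_cost()))   # monthly electricity estimate
```

With these assumptions the estimates land close to the page's numbers (218 tok/s for Gemma 3 4B, about $31/month), but the efficiency factor varies with the inference stack and batch size.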

What LLMs can you run?

Model                 Params   Q4 Weight   Fit           Decode
Gemma 3 4B            4.0B     2 GB        comfortable   218 tok/s
Qwen 2.5 7B           7.6B     4 GB        comfortable   114 tok/s
Llama 3.1 8B          8.0B     4 GB        comfortable   108 tok/s
Mistral Small 24B     24.0B    12 GB       comfortable   36 tok/s
Gemma 3 27B           27.4B    14 GB       won't fit
Qwen 2.5 Coder 32B    32.5B    16 GB       won't fit
Llama 3.3 70B         70.6B    35 GB       won't fit
Qwen 2.5 72B          72.7B    36 GB       won't fit
Llama 3.1 405B        405B     202 GB      won't fit
DeepSeek R1 671B      671B     336 GB      won't fit
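The fit column follows a simple headroom rule: weights alone filling the card is not enough, because the KV cache, activations, and runtime overhead also need VRAM. A sketch assuming ~25% of VRAM must stay free for non-weight data (an assumption inferred from the table, where 12 GB of weights fits in 16 GB but 14 GB does not):

```python
# Sketch of the table's fit rule. Assumption (inferred, not stated):
# weights must leave roughly 25% of VRAM free for KV cache,
# activations, and runtime overhead.

VRAM_GB = 16
HEADROOM = 0.25  # assumed fraction of VRAM reserved for non-weight data

def fits(q4_weight_gb: float, vram_gb: float = VRAM_GB) -> bool:
    """True when Q4 weights fit with headroom to spare."""
    return q4_weight_gb <= vram_gb * (1 - HEADROOM)

models = {  # Q4 weight sizes in GB, from the table above
    "Gemma 3 4B": 2, "Qwen 2.5 7B": 4, "Llama 3.1 8B": 4,
    "Mistral Small 24B": 12, "Gemma 3 27B": 14,
    "Qwen 2.5 Coder 32B": 16, "Llama 3.3 70B": 35,
}
for name, gb in models.items():
    verdict = "comfortable" if fits(gb) else "won't fit"
    print(f"{name}: {verdict}")
```

This reproduces the table's verdicts for a 16 GB card; the real cutoff also depends on context length, since a long context inflates the KV cache beyond the fixed headroom assumed here.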

Similar GPUs

GPU                          VRAM   BW         TFLOPS   TDP
GeForce RTX 4070 Ti SUPER    16GB   672 GB/s   44.1     285W
Radeon RX 9070               16GB   644 GB/s   72.2     220W
Radeon RX 9070 XT            16GB   644 GB/s   97.3     304W
GeForce RTX 4080             16GB   716 GB/s   48.7     320W
Radeon RX 7800 XT            16GB   624 GB/s   74.7     263W

