API provider data is live · Hardware & cloud pricing curated 2026-02-23

GeForce RTX 4060 Ti 16 GB

NVIDIA

16GB VRAM · 288 GB/s bandwidth · 22.1 FP16 TFLOPS · 165W TDP

The GeForce RTX 4060 Ti 16 GB has 16GB of VRAM with 288 GB/s memory bandwidth and 22.1 TFLOPS of FP16 compute. At Q4 quantization it can comfortably run Gemma 3 4B (93 tok/s), Qwen 2.5 7B (49 tok/s), and Llama 3.1 8B (46 tok/s). Models larger than ~27B parameters won't fit even at Q4. Running the card continuously at its 165W TDP costs roughly $18/month in electricity.
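
The $18/month figure follows from straightforward arithmetic. A minimal sketch, assuming the card draws its full 165 W TDP around the clock and an electricity rate of about $0.15/kWh (the rate and 730 hours/month are assumptions for illustration, not figures the page states):

```ts
// Rough monthly electricity cost for a GPU held at its TDP 24/7.
// usdPerKWh and hoursPerMonth are illustrative assumptions.
function monthlyPowerCostUSD(
  tdpWatts: number,
  usdPerKWh = 0.15,
  hoursPerMonth = 730,
): number {
  const kWhPerMonth = (tdpWatts / 1000) * hoursPerMonth; // 165 W -> ~120 kWh
  return kWhPerMonth * usdPerKWh;
}

console.log(monthlyPowerCostUSD(165).toFixed(2)); // "18.07" -> roughly $18/month
```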

What LLMs can you run?

Model | Params | Q4 Weight | Fit | Decode
Gemma 3 4B | 4.0B | 2 GB | comfortable | 93 tok/s
Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 49 tok/s
Llama 3.1 8B | 8.0B | 4 GB | comfortable | 46 tok/s
Mistral Small 24B | 24.0B | 12 GB | tight | 15 tok/s
Gemma 3 27B | 27.4B | 14 GB | won't fit | -
Qwen 2.5 Coder 32B | 32.5B | 16 GB | won't fit | -
Llama 3.3 70B | 70.6B | 35 GB | won't fit | -
Qwen 2.5 72B | 72.7B | 36 GB | won't fit | -
Llama 3.1 405B | 405B | 202 GB | won't fit | -
DeepSeek R1 671B | 671B | 336 GB | won't fit | -
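
The table values are consistent with a simple sizing model: Q4 weights take roughly half a byte per parameter, decode throughput is limited by how fast the weights can be streamed from VRAM, and the fit label reflects how much of the 16 GB the weights consume. A minimal sketch of that model; the ~65% bandwidth efficiency and the fit thresholds are assumptions inferred from the table, not figures the page publishes:

```ts
// Illustrative sizing heuristics; thresholds and efficiency are assumptions.
type Fit = "comfortable" | "tight" | "won't fit";

function q4WeightGB(paramsB: number): number {
  return paramsB * 0.5; // ~4 bits per weight plus format overhead
}

function fit(weightGB: number, vramGB: number): Fit {
  if (weightGB > vramGB * 0.85) return "won't fit"; // no room left for KV cache/activations
  if (weightGB > vramGB * 0.5) return "tight";
  return "comfortable";
}

function decodeTokS(weightGB: number, bandwidthGBs: number, efficiency = 0.65): number {
  // Each generated token streams the full weight set from VRAM once.
  return (bandwidthGBs * efficiency) / weightGB;
}

// RTX 4060 Ti 16 GB: 16 GB VRAM, 288 GB/s bandwidth
const w = q4WeightGB(8.0);                              // Llama 3.1 8B -> ~4 GB
console.log(fit(w, 16), decodeTokS(w, 288).toFixed(0)); // "comfortable", ~47 tok/s
```

With the same numbers, Mistral Small 24B (~12 GB of Q4 weights) lands at ~15 tok/s and "tight", matching the table above.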

Similar GPUs

GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP
Radeon RX 7600 XT | 16GB | 288 GB/s | 45.1 | 190W
Quadro P5000 | 16GB | 288 GB/s | 0.1 | 180W
RTX 2000 Ada Generation | 16GB | 256 GB/s | 12.0 | 70W
Radeon RX 9060 XT 16 GB | 16GB | 322 GB/s | 51.3 | 160W
Arc Pro B50 | 16GB | 224 GB/s | 21.3 | 70W

