GeForce RTX 4060 Ti 16 GB
NVIDIA
16GB VRAM · 288 GB/s bandwidth · 22.1 FP16 TFLOPS · 165W TDP
The GeForce RTX 4060 Ti 16 GB pairs 16 GB of VRAM with 288 GB/s of memory bandwidth and 22.1 TFLOPS of FP16 compute. At Q4 quantization it comfortably runs Gemma 3 4B (93 tok/s), Qwen 2.5 7B (49 tok/s), and Llama 3.1 8B (46 tok/s). Models of roughly 27B parameters and up won't fit even at Q4. Electricity costs roughly $18/month at the 165W TDP (assuming 24/7 full load at a typical ~$0.15/kWh).
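The fit and speed figures above follow from two rules of thumb, sketched below: Q4 weights take roughly 0.5 bytes per parameter, and decode is memory-bandwidth-bound, so the token-rate ceiling is bandwidth divided by weight size. The 3 GB overhead constant is an assumption (KV cache plus runtime buffers), not a figure from this page.

```python
# Back-of-envelope sizing for Q4 models on this card.
# Assumed constants: ~0.5 bytes/parameter at Q4, ~3 GB of
# headroom for KV cache and runtime buffers (not measured).
VRAM_GB = 16.0
BANDWIDTH_GBPS = 288.0
OVERHEAD_GB = 3.0  # assumed KV cache + activations headroom

def q4_weight_gb(params_billion: float) -> float:
    """Approximate Q4 weight footprint: ~0.5 bytes per parameter."""
    return params_billion * 0.5

def fits(params_billion: float) -> bool:
    """True if weights plus assumed overhead fit in VRAM."""
    return q4_weight_gb(params_billion) + OVERHEAD_GB <= VRAM_GB

def decode_ceiling_toks(params_billion: float) -> float:
    """Bandwidth-bound upper limit: each token streams all weights once."""
    return BANDWIDTH_GBPS / q4_weight_gb(params_billion)

for p in (8.0, 24.0, 27.4):
    print(f"{p}B: fits={fits(p)}, ceiling={decode_ceiling_toks(p):.0f} tok/s")
```

With these constants the sketch reproduces the table: 8B fits easily, 24B is tight (15 GB of 16 GB), and 27.4B does not fit. The ceiling for an 8B model comes out to 72 tok/s versus 46 tok/s measured, since real decode never reaches peak bandwidth.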
What LLMs can you run?
| Model | Params | Q4 Weight | Fit | Decode |
|---|---|---|---|---|
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 93 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 49 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 46 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | tight | 15 tok/s |
| Gemma 3 27B | 27.4B | 14 GB | won't fit | |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | won't fit | |
| Llama 3.3 70B | 70.6B | 35 GB | won't fit | |
| Qwen 2.5 72B | 72.7B | 36 GB | won't fit | |
| Llama 3.1 405B | 405B | 202 GB | won't fit | |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | |
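The measured decode rates in the table imply how much of the card's 288 GB/s this workload actually sustains: each decoded token streams the full Q4 weights, so utilization is roughly tok/s × weight size ÷ bandwidth. A quick check over the table's own numbers (no new measurements):

```python
# Implied bandwidth utilization from the table above:
# util ~= tok/s * q4_weight_gb / peak_bandwidth.
BANDWIDTH_GBPS = 288.0
rows = {  # model: (q4_weight_gb, tok_per_s), copied from the table
    "Gemma 3 4B": (2.0, 93.0),
    "Qwen 2.5 7B": (4.0, 49.0),
    "Llama 3.1 8B": (4.0, 46.0),
    "Mistral Small 24B": (12.0, 15.0),
}
for name, (gb, toks) in rows.items():
    util = toks * gb / BANDWIDTH_GBPS
    print(f"{name}: ~{util * 100:.0f}% of peak bandwidth")
```

Every row lands in the same 60-70% band, which is consistent with decode being bandwidth-bound rather than compute-bound on this card.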
Similar GPUs
| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
|---|---|---|---|---|
| Radeon RX 7600 XT | 16GB | 288 GB/s | 45.1 | 190W |
| Quadro P5000 | 16GB | 288 GB/s | 0.1 | 180W |
| RTX 2000 Ada Generation | 16GB | 256 GB/s | 12.0 | 70W |
| Radeon RX 9060 XT 16 GB | 16GB | 322 GB/s | 51.3 | 160W |
| Arc Pro B50 | 16GB | 224 GB/s | 21.3 | 70W |
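Since decode is bandwidth-bound, a first-order comparison of these cards for a given model is just bandwidth divided by Q4 weight size; the TFLOPS column matters more for prefill speed. A sketch for an 8B-class model (the ~4 GB weight figure is taken from the table above):

```python
# First-order decode ceilings for an 8B Q4 model (~4 GB weights)
# across the cards in the table; real throughput lands well below
# these ceilings (roughly 60-70% on the RTX 4060 Ti).
WEIGHT_GB = 4.0
gpus = {  # name: memory bandwidth in GB/s, from the table
    "RTX 4060 Ti 16 GB": 288,
    "Radeon RX 7600 XT": 288,
    "Quadro P5000": 288,
    "RTX 2000 Ada": 256,
    "Radeon RX 9060 XT 16 GB": 322,
    "Arc Pro B50": 224,
}
for name, bw in sorted(gpus.items(), key=lambda kv: -kv[1]):
    print(f"{name}: <= {bw / WEIGHT_GB:.0f} tok/s ceiling")
```

By this measure the RX 9060 XT 16 GB has the highest ceiling of the group, and the three 288 GB/s cards tie despite FP16 throughput ranging from 0.1 to 45.1 TFLOPS.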