
T1000 8 GB

NVIDIA

8GB VRAM · 160 GB/s bandwidth · 5.0 FP16 TFLOPS · 50W TDP

The T1000 8 GB pairs 8 GB of VRAM with 160 GB/s of memory bandwidth and 5.0 TFLOPS of FP16 compute. At Q4 quantization it comfortably runs Gemma 3 4B (46 tok/s), 7-8B models fit but are tight, and models larger than ~14B parameters won't fit even at Q4. Running the card continuously at its 50 W TDP costs roughly $5/month in electricity.
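
A rough sketch of where these numbers come from: the Q4 weight sizes in the table below are consistent with about 0.5 bytes per parameter, and the rest falls out of the card's 8 GB capacity and 50 W TDP. The overhead allowance, the comfortable/tight thresholds, and the electricity rate in the sketch are assumptions chosen to roughly reproduce the page, not values it publishes.

# Back-of-the-envelope fit check and running cost for the T1000 8 GB.
# Assumptions (not from this page): Q4 weights take ~0.5 bytes per parameter,
# ~1.5 GB is reserved for KV cache / runtime overhead, electricity costs
# $0.15/kWh, and the fit thresholds are tuned to match the table's labels.
VRAM_GB = 8
TDP_W = 50
BYTES_PER_PARAM_Q4 = 0.5
OVERHEAD_GB = 1.5
USD_PER_KWH = 0.15

def q4_weight_gb(params_b: float) -> float:
    """Approximate VRAM needed for Q4 weights, in GB (params_b = billions of parameters)."""
    return params_b * BYTES_PER_PARAM_Q4

def fit(params_b: float) -> str:
    """Classify how a model fits in VRAM once overhead is added."""
    need = q4_weight_gb(params_b) + OVERHEAD_GB
    if need <= VRAM_GB * 0.65:
        return "comfortable"
    if need <= VRAM_GB:
        return "tight"
    return "won't fit"

def monthly_electricity_usd() -> float:
    """Cost of running the card at full TDP 24/7 for a 30-day month."""
    return TDP_W / 1000 * 24 * 30 * USD_PER_KWH

print(fit(4.0), fit(8.0), fit(24.0))        # comfortable tight won't fit
print(f"${monthly_electricity_usd():.2f}")  # $5.40, i.e. roughly $5/month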

What LLMs can you run?

Model                Params   Q4 Weight   Fit           Decode
Gemma 3 4B           4.0B     2 GB        comfortable   46 tok/s
Qwen 2.5 7B          7.6B     4 GB        tight         24 tok/s
Llama 3.1 8B         8.0B     4 GB        tight         23 tok/s
Mistral Small 24B    24.0B    12 GB       won't fit
Gemma 3 27B          27.4B    14 GB       won't fit
Qwen 2.5 Coder 32B   32.5B    16 GB       won't fit
Llama 3.3 70B        70.6B    35 GB       won't fit
Qwen 2.5 72B         72.7B    36 GB       won't fit
Llama 3.1 405B       405B     202 GB      won't fit
DeepSeek R1 671B     671B     336 GB      won't fit
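
The decode column tracks memory bandwidth rather than TFLOPS: generating one token at batch size 1 means streaming the full weight set from VRAM, so throughput is roughly bandwidth divided by weight size. A minimal sketch, assuming an effective-bandwidth factor of about 0.6 (picked to match the table above, not a measured value):

# Bandwidth-bound decode estimate: tok/s ≈ (bandwidth × efficiency) / Q4 weight size.
# The 0.6 efficiency factor is an assumption fitted to this page's numbers.
BANDWIDTH_GB_S = 160   # T1000 8 GB memory bandwidth
EFFICIENCY = 0.6       # assumed fraction of peak bandwidth achieved during decode

def decode_tok_s(q4_weight_gb: float) -> float:
    """Estimated single-stream decode speed for a model with the given Q4 weight size."""
    return BANDWIDTH_GB_S * EFFICIENCY / q4_weight_gb

print(round(decode_tok_s(2)))  # Gemma 3 4B:   ~48 tok/s (table: 46)
print(round(decode_tok_s(4)))  # Llama 3.1 8B: ~24 tok/s (table: 23)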

Similar GPUs

GPU                   VRAM   Bandwidth   FP16 TFLOPS   TDP
RTX A1000             8 GB   192 GB/s    6.7           50 W
Tesla P4              8 GB   192 GB/s    0.1           75 W
M2 8GB                8 GB   100 GB/s    7.2           15 W
M3 8GB                8 GB   100 GB/s    8.2           15 W
Radeon Instinct MI6   8 GB   224 GB/s    5.7           150 W

