vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

M3 Pro 18GB

Apple Silicon

18GB VRAM · 150 GB/s bandwidth · 14.8 FP16 TFLOPS · 30W TDP · $1,299 street price

The M3 Pro 18GB has 18GB of VRAM with 150 GB/s memory bandwidth and 14.8 TFLOPS FP16 compute. At Q4 quantization, it can comfortably run Gemma 3 4B (32 tok/s). Models larger than ~31B parameters won't fit even at Q4. Electricity cost is approximately $3/month at 30W TDP.
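The numbers in this paragraph follow from simple arithmetic: Q4 quantization stores roughly half a byte per parameter, and a 30W device running around the clock draws about 21.6 kWh per month. A minimal sketch of both calculations (the $0.14/kWh electricity rate is an assumption, not stated on this page):

```python
# Back-of-envelope sizing behind the claims above.
# Assumption: Q4 quantization ~= 0.5 bytes per parameter (4-bit weights).
def q4_weight_gb(params_billion: float) -> float:
    return params_billion * 0.5  # 4 bits = 0.5 bytes per parameter

print(q4_weight_gb(4.0))   # 2.0 GB  (Gemma 3 4B, matches the table below)
print(q4_weight_gb(31.0))  # 15.5 GB (near the usable-VRAM ceiling of an 18GB part)

# Electricity: 30W running continuously, at an assumed $0.14/kWh.
kwh_per_month = 30 / 1000 * 24 * 30   # 21.6 kWh
print(round(kwh_per_month * 0.14, 2)) # ~$3 per month
```

The ~31B cutoff falls out of the same rule of thumb: 31B parameters at Q4 is about 15.5 GB of weights, which together with KV cache and OS overhead roughly exhausts an 18GB unified-memory machine.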

What LLMs can you run?

Model                Params   Q4 Weight   Fit           Decode
Gemma 3 4B           4.0B     2 GB        comfortable   32 tok/s
Qwen 2.5 7B          7.6B     4 GB        tight         16 tok/s
Llama 3.1 8B         8.0B     4 GB        tight         16 tok/s
Mistral Small 24B    24.0B    12 GB       tight         5 tok/s
Gemma 3 27B          27.4B    14 GB       tight         4 tok/s
Qwen 2.5 Coder 32B   32.5B    16 GB       won't fit     -
Llama 3.3 70B        70.6B    35 GB       won't fit     -
Qwen 2.5 72B         72.7B    36 GB       won't fit     -
Llama 3.1 405B       405B     202 GB      won't fit     -
DeepSeek R1 671B     671B     336 GB      won't fit     -
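The decode column tracks memory bandwidth: generating each token streams the full weight set from memory once, so tok/s is roughly bandwidth divided by weight size, discounted by an efficiency factor. A sketch assuming ~43% effective bandwidth (the factor is inferred from the table values above, not stated on the page):

```python
BANDWIDTH_GBS = 150.0  # M3 Pro memory bandwidth
EFFICIENCY = 0.43      # assumed fraction of peak bandwidth actually achieved

def decode_tok_s(q4_weight_gb: float) -> int:
    # Each decoded token reads every weight from memory once.
    return int(BANDWIDTH_GBS * EFFICIENCY / q4_weight_gb)

print(decode_tok_s(2))   # 32 tok/s (Gemma 3 4B)
print(decode_tok_s(4))   # 16 tok/s (Llama 3.1 8B)
print(decode_tok_s(14))  # 4 tok/s  (Gemma 3 27B)
```

Prompt processing is compute-bound and scales with the 14.8 FP16 TFLOPS figure instead, so these estimates apply only to token generation.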

Similar GPUs

GPU           VRAM   Bandwidth   FP16 TFLOPS   TDP
M4 16GB       16GB   120 GB/s    9.2           20W
M1 Pro 16GB   16GB   200 GB/s    10.6          30W
M2 16GB       16GB   100 GB/s    7.2           15W
M2 Pro 16GB   16GB   200 GB/s    13.6          30W
M3 16GB       16GB   100 GB/s    8.2           15W


Raw data · MIT · v0.6.0