vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

M4 16GB

Apple Silicon

16GB VRAM · 120 GB/s bandwidth · 9.2 FP16 TFLOPS · 20W TDP · $999 street price

The M4 16GB has 16GB of unified memory available as VRAM. At that capacity, none of the reference LLMs below fit comfortably at Q4 quantization: models up to roughly 24B parameters fit, but only tightly once the runtime and KV cache take their share, and 27B-class models and larger won't fit at all. For those, consider hardware with more VRAM.
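A rough way to reproduce the fit column, as a sketch rather than the site's actual calculator: Q4 stores about 4 bits (0.5 bytes) per parameter, which matches the Q4 Weight column in the table below. The headroom factor for KV cache, runtime, and the OS's share of unified memory is an assumption, chosen only so the 16GB boundary lands where the table puts it.

```python
# Minimal sketch (not the vram.run calculator): estimate Q4 weight size from
# parameter count and check it against a memory budget. 0.5 bytes/param matches
# the table's Q4 Weight column; the 1.25x headroom factor is an assumption.

def q4_weight_gb(params_b: float) -> float:
    """Q4 quantization stores roughly 4 bits (0.5 bytes) per parameter."""
    return params_b * 0.5

def fits_q4(params_b: float, memory_gb: float = 16.0, headroom: float = 1.25) -> bool:
    """True if the Q4 weights plus assumed headroom fit in the memory budget."""
    return q4_weight_gb(params_b) * headroom <= memory_gb

for name, params_b in [("Mistral Small 24B", 24.0), ("Gemma 3 27B", 27.4)]:
    verdict = "fits (tight)" if fits_q4(params_b) else "won't fit"
    print(f"{name}: ~{q4_weight_gb(params_b):.0f} GB at Q4 -> {verdict}")
```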

What LLMs can you run?

Model                Params   Q4 Weight   Fit         Decode
Gemma 3 4B           4.0B     2 GB        tight       26 tok/s
Qwen 2.5 7B          7.6B     4 GB        tight       13 tok/s
Llama 3.1 8B         8.0B     4 GB        tight       13 tok/s
Mistral Small 24B    24.0B    12 GB       tight       4 tok/s
Gemma 3 27B          27.4B    14 GB       won't fit
Qwen 2.5 Coder 32B   32.5B    16 GB       won't fit
Llama 3.3 70B        70.6B    35 GB       won't fit
Qwen 2.5 72B         72.7B    36 GB       won't fit
Llama 3.1 405B       405B     202 GB      won't fit
DeepSeek R1 671B     671B     336 GB      won't fit
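
The decode column tracks the memory-bandwidth roofline: generating one token requires streaming the full Q4 weight set from memory, so throughput is roughly bandwidth divided by weight size. A minimal sketch, assuming an efficiency factor of about 0.43 (vram.run's exact formula isn't published on this page; this factor simply reproduces the table's numbers at the M4's 120 GB/s):

```python
# Sketch of the decode estimate (assumption, not the site's published formula):
# single-stream decoding is memory-bandwidth bound, so tok/s is approximately
# bandwidth / weight size, scaled by an assumed efficiency factor.

def decode_tok_s(weight_gb: float, bandwidth_gb_s: float = 120.0,
                 efficiency: float = 0.43) -> float:
    return bandwidth_gb_s / weight_gb * efficiency

for name, weight_gb in [("Gemma 3 4B", 2.0), ("Llama 3.1 8B", 4.0),
                        ("Mistral Small 24B", 12.0)]:
    print(f"{name}: ~{decode_tok_s(weight_gb):.0f} tok/s")
# -> ~26, ~13, and ~4 tok/s, matching the table above
```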

Similar GPUs

GPU            VRAM   Bandwidth   FP16 TFLOPS   TDP
M2 16GB        16GB   100 GB/s    7.2           15W
M3 16GB        16GB   100 GB/s    8.2           15W
M1 16GB        16GB   68 GB/s     5.2           15W
M1 Pro 16GB    16GB   200 GB/s    10.6          30W
M2 Pro 16GB    16GB   200 GB/s    13.6          30W
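
Applying the same assumed bandwidth roofline to these chips illustrates why memory bandwidth, not TFLOPS, dominates single-stream decode speed. The figures below are illustrative estimates for an 8B-class model at Q4 (about 4 GB of weights), not measurements.

```python
# Same bandwidth-roofline assumption as above, applied to the Similar GPUs
# table for ~4 GB of Q4 weights. Estimates only; efficiency factor is assumed.

bandwidth_gb_s = {"M1 16GB": 68, "M2 16GB": 100, "M3 16GB": 100,
                  "M4 16GB": 120, "M1 Pro 16GB": 200, "M2 Pro 16GB": 200}

for gpu, bw in bandwidth_gb_s.items():
    print(f"{gpu}: ~{bw / 4.0 * 0.43:.0f} tok/s for an 8B model at Q4")
```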

