API provider data is live · Hardware & cloud pricing curated 2026-02-23

M4 32GB

Apple Silicon

32GB VRAM · 120 GB/s bandwidth · 9.2 FP16 TFLOPS · 20W TDP · $1,399 street price

The M4 32GB offers 32GB of VRAM. At Q4 quantization, reference models up to roughly 32B parameters fit, though none comfortably; 70B-class models and larger won't fit at all.
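A minimal sketch of that sizing rule, assuming the usual convention of ~0.5 bytes per parameter for Q4 (4-bit) weights; the 10% headroom reserved for KV cache and runtime overhead is an illustrative assumption, not vram.run's exact formula:

```python
def q4_weight_gb(params_billion: float) -> float:
    # Q4 quantization stores ~0.5 bytes per parameter.
    return params_billion * 0.5

def fits(params_billion: float, vram_gb: float, headroom: float = 0.10) -> bool:
    # Reserve a fraction of VRAM for KV cache, activations, and the OS.
    # The 10% figure is an illustrative assumption.
    return q4_weight_gb(params_billion) <= vram_gb * (1 - headroom)

for name, params in [("Llama 3.1 8B", 8.0),
                     ("Qwen 2.5 Coder 32B", 32.5),
                     ("Llama 3.3 70B", 70.6)]:
    print(name, "fits" if fits(params, 32) else "won't fit")
```

Running this against the 32GB budget reproduces the fit boundary in the table below: everything through 32B fits, 70B and up does not.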

What LLMs can you run?

Model               Params  Q4 Weight  Fit        Decode
Gemma 3 4B          4.0B    2 GB       tight      26 tok/s
Qwen 2.5 7B         7.6B    4 GB       tight      13 tok/s
Llama 3.1 8B        8.0B    4 GB       tight      13 tok/s
Mistral Small 24B   24.0B   12 GB      tight      4 tok/s
Gemma 3 27B         27.4B   14 GB      tight      3 tok/s
Qwen 2.5 Coder 32B  32.5B   16 GB      tight      3 tok/s
Llama 3.3 70B       70.6B   35 GB      won't fit  n/a
Qwen 2.5 72B        72.7B   36 GB      won't fit  n/a
Llama 3.1 405B      405B    202 GB     won't fit  n/a
DeepSeek R1 671B    671B    336 GB     won't fit  n/a
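The decode column tracks a memory-bandwidth-bound estimate: generating each token streams the full weight set from memory, so tok/s ≈ bandwidth / weight size, scaled by an efficiency factor. A sketch, where the ~0.43 efficiency factor is back-solved from this page's table and is an assumption, not a published constant:

```python
def decode_tok_s(bandwidth_gb_s: float, q4_weight_gb: float,
                 efficiency: float = 0.43) -> float:
    # Bandwidth-bound decode: every generated token reads all weights once.
    # The 0.43 efficiency factor is fitted to this page's numbers.
    return efficiency * bandwidth_gb_s / q4_weight_gb

# M4: 120 GB/s memory bandwidth.
print(round(decode_tok_s(120, 2)))   # ~26 tok/s for Gemma 3 4B (2 GB Q4)
print(round(decode_tok_s(120, 16)))  # ~3 tok/s for a 32B model (16 GB Q4)
```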

Similar GPUs

GPU              VRAM  Bandwidth  FP16 TFLOPS  TDP
M1 Max 32GB      32GB  200 GB/s   21.2         60W
M1 Pro 32GB      32GB  200 GB/s   10.6         30W
M2 Max 32GB      32GB  200 GB/s   27.2         60W
M2 Pro 32GB      32GB  200 GB/s   13.6         30W
Radeon PRO V620  32GB  512 GB/s   40.5         300W
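Bandwidth is the lever that matters most for decode: plugging the M1 Max's 200 GB/s into the estimator above suggests roughly 21 tok/s for an 8B model at Q4, versus 13 tok/s on the M4's 120 GB/s (same assumed efficiency factor).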

