vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

GLM-4.6

by zai-org

357B params · text-generation · 1.2k likes · 73.1k downloads

GLM-4.6 is a 357B-parameter model. At Q4 quantization its weights occupy roughly 178 GB, so it requires a GPU (or multi-GPU node) with at least 178 GB of total VRAM.
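A back-of-the-envelope check of that figure (a minimal sketch of the standard weights-only estimate, not necessarily the exact formula vram.run uses):

```python
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    """VRAM needed for model weights alone, in decimal gigabytes."""
    bytes_per_param = bits_per_param / 8
    return params_billion * bytes_per_param  # 1e9 params x bytes/param = GB

# GLM-4.6: 357B parameters at Q4 (4 bits per weight)
print(weight_vram_gb(357, 4))  # 178.5 -> matches the 178 GB figure above
# KV cache and activations add memory on top of this; weights are a floor.
```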

Inference providers

Provider     | $/1M in | $/1M out | Throughput
Novita       |         |          | 100 tok/s
Together AI  |         |          | 42 tok/s
Z.ai         |         |          | 87 tok/s
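As a rough reading of the throughput column (a sketch; real latency also depends on time-to-first-token and network, which aren't listed here, and the price cells above did not render in this capture):

```python
# Time to stream a 1,000-token completion at each provider's decode rate.
providers = {"Novita": 100, "Together AI": 42, "Z.ai": 87}  # tok/s, from table

for name, tok_per_s in providers.items():
    seconds = 1000 / tok_per_s
    print(f"{name}: {seconds:.1f}s for 1,000 output tokens")
# Novita: 10.0s · Z.ai: 11.5s · Together AI: 23.8s
```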
Install CLI: [email protected] · Raw data · MIT · v0.6.0