vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

GLM-4.7

by zai-org

358B params · text-generation · 1.9k likes · 89.9k downloads

GLM-4.7 is a 358B-parameter model. At Q4 quantization (roughly 0.5 bytes per parameter) the weights alone occupy about 179GB, so it requires a GPU setup with at least 179GB of VRAM.
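That figure follows directly from parameter count times bytes per parameter. A minimal sketch of the estimate, assuming weights-only memory (it ignores KV cache, activations, and framework overhead; the function name is illustrative):

```python
def estimate_weight_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Weights-only VRAM estimate in GB: parameter count times bytes per parameter.

    Ignores KV cache, activations, and framework overhead, so treat the
    result as a lower bound on what you actually need.
    """
    return params_billions * (bits_per_param / 8)

# GLM-4.7 at Q4: 358B params * 0.5 bytes/param = 179 GB
print(estimate_weight_vram_gb(358, 4))  # -> 179.0
```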

Inference providers

| Provider | $/1M in | $/1M out | Throughput |
|----------|---------|----------|------------|
| Novita   |         |          | 94 tok/s   |
| Z.ai     |         |          | 86 tok/s   |
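To put the throughput column in concrete terms, here is a quick sketch converting tokens/second into wall-clock time per response; the rates come from the table above, and the 1,000-token response length is an illustrative assumption:

```python
# Throughput figures from the provider table above (tokens/second)
throughput = {"Novita": 94, "Z.ai": 86}

response_tokens = 1_000  # illustrative assumption
for provider, tps in throughput.items():
    print(f"{provider}: ~{response_tokens / tps:.1f} s per {response_tokens}-token response")
# Novita: ~10.6 s, Z.ai: ~11.6 s
```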