GLM-5
by zai-org
754B params · text-generation · 1.8k likes · 234.1k downloads
GLM-5 is a 754B-parameter model. At Q4 quantization its weights occupy roughly 377 GB, so serving it requires at least 377 GB of VRAM.
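The 377 GB figure follows from bits-per-parameter arithmetic (4 bits ≈ 0.5 bytes per weight). A minimal sketch of that calculation, counting weights only and ignoring KV cache and activation overhead (an assumption for illustration):

```python
def weight_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate VRAM needed to hold the model weights alone, in GB."""
    bytes_per_param = bits_per_param / 8
    return params_billions * 1e9 * bytes_per_param / 1e9

# GLM-5: 754B parameters at common quantization levels
for label, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{label}: {weight_vram_gb(754, bits):.0f} GB")
# Q4 -> 754B x 0.5 bytes/param = 377 GB, matching the figure above
```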
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 39 tok/s |
| Together AI | | | 42 tok/s |
| Fireworks | | | 70 tok/s |
| Z.ai | | | 38 tok/s |
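A minimal sketch of calling the model through one of the listed providers via the `huggingface_hub` client; the provider identifier, model id, and token are assumptions, and per-token pricing is set by whichever provider you route to:

```python
from huggingface_hub import InferenceClient

# Assumes a Hugging Face token with inference-provider billing enabled.
client = InferenceClient(provider="fireworks-ai", api_key="hf_xxx")

completion = client.chat.completions.create(
    model="zai-org/GLM-5",  # model id as listed on this card
    messages=[{"role": "user", "content": "Summarize GLM-5 in one sentence."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```

At the throughputs above, 256 output tokens take roughly 4 to 7 seconds depending on the provider chosen.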