GLM-4.5
by zai-org
358B params · text-generation · 1.4k likes · 44.5k downloads
GLM-4.5 is a 358B parameter model. At Q4 quantization its weights occupy about 179 GB, so it requires a GPU setup with at least 179 GB of VRAM.
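The 179 GB figure follows from the parameter count times the bytes per weight at a given quantization level. A minimal sketch of that arithmetic (weights only; it ignores KV cache and activation memory, and the function name is illustrative, not part of any library):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-memory estimate in GB for a given quantization level."""
    bytes_per_weight = bits_per_weight / 8
    # params_billion * 1e9 params * bytes each, expressed directly in GB
    return params_billion * bytes_per_weight

print(weight_vram_gb(358, 4))   # Q4: 358B * 0.5 bytes -> 179.0 GB
print(weight_vram_gb(358, 16))  # FP16 would need 716.0 GB
```

Real deployments need additional headroom beyond this floor for the KV cache, which grows with context length and batch size.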
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 55 tok/s |
| Z.ai | | | 53 tok/s |