GLM-4.7
by zai-org
358B params · text-generation · 1.9k likes · 89.9k downloads
GLM-4.7 is a 358B-parameter model. At Q4 quantization its weights occupy roughly 179 GB, so it requires a GPU (or multi-GPU setup) with at least 179 GB of VRAM.
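The 179 GB figure follows from a standard rule of thumb: weight memory ≈ parameter count × bits per parameter ÷ 8. A minimal sketch (the function name is illustrative, and the estimate ignores KV-cache and activation overhead, so real-world requirements are somewhat higher):

```python
def weight_vram_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB for a quantized model.

    Counts weights only; KV cache and activations add further overhead.
    """
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# 358B parameters at Q4 (4 bits per weight)
print(weight_vram_gb(358, 4))  # 179.0
```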
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 94 tok/s |
| Z.ai | | | 86 tok/s |