GLM-4.6
by zai-org
357B params · text-generation · 1.2k likes · 73.1k downloads
GLM-4.6 is a 357B-parameter model. At Q4 quantization it requires at least 178GB of VRAM.
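The 178GB figure follows from the parameter count: at 4 bits per weight, 357B parameters occupy roughly 178.5GB (decimal). A minimal sketch of that arithmetic, noting that real deployments also need headroom for KV cache and activations beyond the weights themselves:

```python
def quantized_weight_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in decimal gigabytes.

    Ignores KV cache, activations, and runtime overhead,
    so actual VRAM needs are somewhat higher.
    """
    return n_params * bits_per_param / 8 / 1e9

# 357B params at Q4 (4 bits per weight)
print(round(quantized_weight_size_gb(357e9, 4), 1))  # ~178.5 GB, matching the card
```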
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 100 tok/s |
| Together AI | | | 42 tok/s |
| Z.ai | | | 87 tok/s |