GLM-4.6V-FP8
by zai-org
108B params · image-text-to-text · 31 likes · 6.0k downloads
GLM-4.6V-FP8 is a 108B-parameter image-text-to-text model. At Q4 quantization its weights require roughly 54 GB of VRAM, so it runs comfortably on an H100 SXM5 80 GB (44 tok/s) or an H200 SXM 141 GB (65 tok/s).
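The 54 GB figure follows directly from the parameter count: quantized weight memory is parameters times bits-per-weight divided by 8. A minimal sketch of that arithmetic (the helper name is illustrative, not part of any library):

```python
def quantized_weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a quantized model.

    params_b: parameter count in billions.
    bits_per_weight: effective bits per parameter (Q4 ~ 4, FP8 = 8).
    """
    # 1B params at 8 bits each = 1 GB, so scale by bits/8
    return params_b * bits_per_weight / 8

# 108B params at Q4 -> 54.0 GB, matching the figure above
print(quantized_weight_gb(108, 4))
# Unquantized FP8 weights would need roughly 108 GB
print(quantized_weight_gb(108, 8))
```

Note this covers weights only; KV cache and activations add to the total, which is why 64 GB and 80 GB devices are rated "tight" below.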
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Z.ai | | | 54 tok/s |
GPU compatibility
| GPU | VRAM | Q4 decode | Verdict |
|---|---|---|---|
| M2 Ultra 192GB | 192 GB | 6 tok/s | tight |
| H200 SXM 141 GB | 141 GB | 65 tok/s | comfortable |
| M4 Max 128GB | 128 GB | 4 tok/s | tight |
| H100 SXM5 80 GB | 80 GB | 44 tok/s | comfortable |
| A100 PCIe 80 GB | 80 GB | 22 tok/s | tight |
| M4 Max 64GB | 64 GB | 4 tok/s | tight |
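The decode numbers in the table are consistent with single-stream decoding being memory-bandwidth bound: each generated token must stream the full weight set from memory once, so tok/s is capped at bandwidth divided by weight bytes. A rough ceiling calculation, assuming approximate peak bandwidths from vendor specs (the figures below are assumptions, not from this card):

```python
def decode_ceiling_tok_s(bandwidth_gb_s: float, weight_gb: float) -> float:
    """Upper bound on single-stream decode speed for a memory-bound model:
    one full pass over the weights per generated token."""
    return bandwidth_gb_s / weight_gb

# Assumed peak memory bandwidths in GB/s; check vendor datasheets.
devices = {
    "H100 SXM5": 3350,
    "H200 SXM": 4800,
    "A100 PCIe 80GB": 1935,
    "M2 Ultra": 800,
    "M4 Max": 546,
}
for name, bw in devices.items():
    print(f"{name}: <= {decode_ceiling_tok_s(bw, 54):.0f} tok/s")
```

Measured throughput lands below this ceiling (the table's H100 and H200 figures are roughly 60-75% of it, Apple silicon lower still), since the bound ignores compute, KV-cache reads, and kernel overhead.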