Mixtral-8x22B-Instruct-v0.1
by mistralai
141B params · 746 likes · 25.1k downloads
Mixtral-8x22B-Instruct-v0.1 is a 141B-parameter sparse mixture-of-experts model. At Q4 quantization its weights take roughly 70 GB of VRAM, and it runs comfortably on an H200 SXM 141 GB (~50 tok/s decode).
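The 70 GB figure follows from the quantization bit width: Q4 stores roughly half a byte per parameter. A minimal sketch of that estimate (weights only; KV cache and activations would add more on top):

```python
def q4_weight_vram_gb(params_billions: float, bytes_per_param: float = 0.5) -> float:
    """Estimate weight memory at a given quantization.

    Q4 ~ 4 bits = 0.5 bytes per parameter (ignores quantization
    metadata, KV cache, and activation memory).
    """
    return params_billions * bytes_per_param


# 141B params at Q4 -> ~70.5 GB, matching the ~70 GB quoted above.
print(q4_weight_vram_gb(141))
```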
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Nscale | | | 19 tok/s |
GPU compatibility
| GPU | VRAM | Q4 decode (tok/s) | Verdict |
|---|---|---|---|
| M4 Max 128GB | 128 GB | 3 | tight |
| H200 SXM 141 GB | 141 GB | 50 | comfortable |
| M2 Ultra 192GB | 192 GB | 4 | tight |
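Note that the verdict tracks decode speed rather than raw VRAM headroom: the M2 Ultra has the most memory of the three yet is still "tight" at 4 tok/s. A hypothetical rubric consistent with the table (the threshold of 10 tok/s is an assumption, not a published rule):

```python
def fit_verdict(gpu_vram_gb: float, model_vram_gb: float, decode_tok_s: float) -> str:
    """Classify a GPU for a quantized model.

    Assumed rubric: the model must fit in VRAM at all; beyond that,
    "comfortable" requires interactive decode speed (>= 10 tok/s here,
    a hypothetical threshold chosen to match the table above).
    """
    if gpu_vram_gb < model_vram_gb:
        return "won't fit"
    return "comfortable" if decode_tok_s >= 10 else "tight"


# Reproducing the table rows for the ~70 GB Q4 model:
print(fit_verdict(128, 70, 3))   # M4 Max
print(fit_verdict(141, 70, 50))  # H200 SXM
print(fit_verdict(192, 70, 4))   # M2 Ultra
```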