Qwen3-Next-80B-A3B-Instruct
by Qwen
81.3B params · text-generation · 949 likes · 898.0k downloads
Qwen3-Next-80B-A3B-Instruct is an 81.3B-parameter model. At Q4 quantization it requires about 41 GB of VRAM for the weights. It runs comfortably on the H100 SXM5 80 GB (59 tok/s) and the H200 SXM 141 GB (86 tok/s).
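The 41 GB figure follows from a standard back-of-the-envelope rule: parameter count times bits per weight. A minimal sketch (weights only; real deployments add KV cache and runtime overhead on top):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GB needed just for the quantized model weights."""
    # 1B params at 8 bits ~ 1 GB; scale linearly with bit width.
    return params_billion * bits_per_weight / 8

print(round(weight_vram_gb(81.3, 4)))  # ~41 GB at Q4, matching the figure above
```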
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 101 tok/s |
| Hyperbolic | | | 175 tok/s |
| Together AI | | | 136 tok/s |
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| H200 SXM 141 GB | 141 GB | 86 tok/s | comfortable |
| H100 SXM5 80 GB | 80 GB | 59 tok/s | comfortable |
| A100 PCIe 80 GB | 80 GB | 29 tok/s | tight |
| M2 Ultra 192 GB | 192 GB | 8 tok/s | tight |
| M4 Max 128 GB | 128 GB | 5 tok/s | tight |
| M4 Max 64 GB | 64 GB | 5 tok/s | tight |
| M4 Pro 48 GB | 48 GB | 2 tok/s | tight |
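Note that the verdicts track decode speed rather than raw VRAM: an 80 GB A100 holds the Q4 weights just as an 80 GB H100 does, yet only the H100 is rated "comfortable". A minimal sketch of such a rule follows; the 50 tok/s threshold is an assumption read off this table, not a documented cutoff:

```python
REQUIRED_VRAM_GB = 41   # approximate Q4 weight footprint from above
COMFORTABLE_TOKS = 50   # assumed threshold inferred from the table

def verdict(vram_gb: float, decode_toks: float) -> str:
    """Hypothetical fit verdict: must hold the weights, then rate by speed."""
    if vram_gb < REQUIRED_VRAM_GB:
        return "won't fit"
    return "comfortable" if decode_toks >= COMFORTABLE_TOKS else "tight"

print(verdict(80, 59))  # H100 SXM5 -> comfortable
print(verdict(80, 29))  # A100 PCIe -> tight
```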