Qwen3.5-35B-A3B
by Qwen
36.0B params · image-text-to-text · 1.1k likes · 1.3M downloads
Qwen3.5-35B-A3B is a 36.0B-parameter model. At Q4 quantization it requires about 18GB of VRAM and runs comfortably on a GeForce RTX 4090 (36 tok/s), GeForce RTX 5090 (54 tok/s), or A100 PCIe 80 GB (66 tok/s).
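The 18GB figure matches a simple back-of-the-envelope estimate: at roughly 4 bits per weight, each billion parameters takes about 0.5GB, before KV cache and activation overhead. A minimal sketch (the function and its flat overhead term are illustrative, not an official sizing tool; real Q4 quant formats often use slightly more than 4 bits per weight):

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 0.0) -> float:
    """Rough weights-only VRAM estimate in GB, plus a flat allowance
    (overhead_gb) for KV cache and activations. Illustrative only."""
    weight_gb = params_b * bits_per_weight / 8  # 8 bits per byte; 1B params at 8 bits = 1 GB
    return weight_gb + overhead_gb

# 36B params at ~4 bits/weight -> ~18 GB of weights
print(estimate_vram_gb(36.0, 4.0))
```

Context length matters in practice: a long KV cache can add several GB on top of this, which is why a 24GB card is listed as comfortable but not roomy.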
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 114 tok/s |
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24GB | 36 tok/s | comfortable |
| GeForce RTX 5090 | 32GB | 54 tok/s | comfortable |
| M4 Max 128GB | 128GB | 13 tok/s | tight |
| M4 Pro 48GB | 48GB | 6 tok/s | tight |
| M4 Pro 24GB | 24GB | 6 tok/s | tight |
| A100 PCIe 80 GB | 80GB | 66 tok/s | comfortable |
| H100 SXM5 80 GB | 80GB | 134 tok/s | comfortable |
| GeForce RTX 3090 | 24GB | 32 tok/s | comfortable |
| Radeon RX 7900 XTX | 24GB | 26 tok/s | tight |
| M4 Max 64GB | 64GB | 13 tok/s | tight |
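A quick way to slice the table above, with the rows transcribed verbatim (the list and helper are illustrative, not part of any published API), is to filter by verdict and sort by decode speed:

```python
# (name, VRAM in GB, Q4 decode in tok/s, verdict) — copied from the table above
gpus = [
    ("GeForce RTX 4090", 24, 36, "comfortable"),
    ("GeForce RTX 5090", 32, 54, "comfortable"),
    ("M4 Max 128GB", 128, 13, "tight"),
    ("M4 Pro 48GB", 48, 6, "tight"),
    ("M4 Pro 24GB", 24, 6, "tight"),
    ("A100 PCIe 80 GB", 80, 66, "comfortable"),
    ("H100 SXM5 80 GB", 80, 134, "comfortable"),
    ("GeForce RTX 3090", 24, 32, "comfortable"),
    ("Radeon RX 7900 XTX", 24, 26, "tight"),
    ("M4 Max 64GB", 64, 13, "tight"),
]

# "Comfortable" GPUs, fastest first
comfortable = sorted(
    (g for g in gpus if g[3] == "comfortable"),
    key=lambda g: g[2],
    reverse=True,
)
for name, vram, tps, _ in comfortable:
    print(f"{name}: {tps} tok/s ({vram} GB)")
```

Note that the verdicts track decode speed rather than raw VRAM headroom: the 128GB M4 Max is "tight" at 13 tok/s while 24GB discrete cards are "comfortable".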