Qwen3-Coder-30B-A3B-Instruct-GGUF
by unsloth
30.0B params · text-generation · 529 likes · 178.8k downloads
Qwen3-Coder-30B-A3B-Instruct-GGUF is a 30.0B-parameter model. At Q4 quantization it requires about 15 GB of VRAM and runs comfortably on the GeForce RTX 4090 (43 tok/s), GeForce RTX 5090 (65 tok/s), and A100 PCIe 80 GB (80 tok/s).
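The 15 GB Q4 figure follows from a simple back-of-the-envelope estimate: file size ≈ parameters × bits per weight ÷ 8 bytes. A minimal sketch of that arithmetic (the 4.0 bits-per-weight value is an assumption for a plain Q4 quant; K-quants such as Q4_K_M average closer to 4.5 bits and come out slightly larger):

```python
def estimate_gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in decimal GB: params * bits_per_weight / 8 bytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 30.0B parameters at ~4.0 bits/weight (assumed for a plain Q4 quant)
print(round(estimate_gguf_size_gb(30.0, 4.0), 1))  # → 15.0
```

Actual VRAM use at runtime is somewhat higher than the file size, since the KV cache and activation buffers also need memory, which is why 24 GB cards still rate "comfortable" while smaller unified-memory configurations read "tight" in the table below.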
GPU compatibility
| GPU | VRAM | Q4 decode (tok/s) | Verdict |
|---|---|---|---|
| H100 SXM5 80 GB | 80 GB | 161 | comfortable |
| A100 PCIe 80 GB | 80 GB | 80 | comfortable |
| GeForce RTX 5090 | 32 GB | 65 | comfortable |
| GeForce RTX 4090 | 24 GB | 43 | comfortable |
| GeForce RTX 3090 | 24 GB | 38 | comfortable |
| Radeon RX 7900 XTX | 24 GB | 32 | comfortable |
| M4 Max 128GB | 128 GB | 16 | tight |
| M4 Max 64GB | 64 GB | 16 | tight |
| M4 Pro 48GB | 48 GB | 8 | tight |
| M4 Pro 24GB | 24 GB | 8 | tight |