gpt-oss-20b
by openai
21.5B params · text-generation · 4.4k likes · 7.4M downloads
gpt-oss-20b is a 21.5B-parameter model. At Q4 quantization it requires 11GB of VRAM and runs comfortably on a GeForce RTX 4090 (61 tok/s), GeForce RTX 5090 (91 tok/s), or A100 PCIe 80 GB (111 tok/s).
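The 11GB figure is consistent with a back-of-envelope weight-memory estimate. A minimal sketch, assuming Q4 stores roughly 4 bits (0.5 bytes) per parameter and ignoring KV cache and runtime overhead:

```python
def q4_vram_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Approximate weight memory in GB for a quantized model.

    Assumption: Q4 ~= 0.5 bytes/param; real loaders add KV-cache and
    framework overhead on top of this.
    """
    return params_billion * bytes_per_param

print(q4_vram_gb(21.5))  # 10.75 -> roughly the 11GB quoted above
```

The extra ~0.25GB in the quoted figure plausibly covers quantization scales and non-quantized layers, but that breakdown is not stated on the page.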
Inference providers
| Provider | Throughput |
|---|---|
| Groq | 558 tok/s |
| Novita | 97 tok/s |
| Nscale | 121 tok/s |
| Hyperbolic | 95 tok/s |
| Together AI | 76 tok/s |
| Fireworks | 143 tok/s |
| OVHcloud | 85 tok/s |
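Throughput translates directly into wall-clock latency for a streamed completion. A quick sketch using the numbers from the table above (the 1,000-token completion length is an illustrative assumption):

```python
# Throughput figures copied from the provider table above (tok/s).
providers = {
    "Groq": 558, "Novita": 97, "Nscale": 121, "Hyperbolic": 95,
    "Together AI": 76, "Fireworks": 143, "OVHcloud": 85,
}

def seconds_for(tokens: int, tok_per_s: float) -> float:
    """Time to decode `tokens` at a steady rate, ignoring time-to-first-token."""
    return tokens / tok_per_s

# Fastest first: seconds to stream a 1,000-token completion.
for name, tps in sorted(providers.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {seconds_for(1000, tps):6.1f}s per 1k tokens")
```

At these rates the spread is wide: roughly 1.8s per 1k tokens on Groq versus about 13s on Together AI.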
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24GB | 61 tok/s | comfortable |
| GeForce RTX 5090 | 32GB | 91 tok/s | comfortable |
| M4 Max 128GB | 128GB | 22 tok/s | tight |
| M4 Pro 48GB | 48GB | 11 tok/s | tight |
| M4 Pro 24GB | 24GB | 11 tok/s | tight |
| A100 PCIe 80 GB | 80GB | 111 tok/s | comfortable |
| H100 SXM5 80 GB | 80GB | 224 tok/s | comfortable |
| GeForce RTX 3090 | 24GB | 53 tok/s | comfortable |
| Radeon RX 7900 XTX | 24GB | 44 tok/s | comfortable |
| GeForce RTX 4080 | 16GB | 43 tok/s | comfortable |
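The Verdict column tracks decode speed rather than raw memory: every "comfortable" row decodes at 43 tok/s or faster, while every "tight" row (including the 128GB M4 Max) sits at 22 tok/s or below. A hypothetical reconstruction with an assumed 30 tok/s cutoff, which is not the page's documented rule:

```python
def verdict(decode_tok_s: float, threshold: float = 30.0) -> str:
    """Classify a GPU by Q4 decode speed; the threshold is an assumption."""
    return "comfortable" if decode_tok_s >= threshold else "tight"

# Spot-check against rows from the table above.
rows = [("GeForce RTX 4090", 61), ("M4 Max 128GB", 22), ("M4 Pro 24GB", 11)]
for gpu, tps in rows:
    print(gpu, verdict(tps))  # matches the table's verdicts
```

Any cutoff between 23 and 42 tok/s reproduces the table exactly, so the specific value of 30 is arbitrary.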