gpt-oss-120b
by openai
120B params · text-generation · 4.6k likes · 4.4M downloads
gpt-oss-120b is a 120B-parameter text-generation model. At Q4 quantization its weights require roughly 60 GB of VRAM. It runs comfortably on an H100 SXM5 80 GB (40 tok/s) or an H200 SXM 141 GB (58 tok/s).
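The 60 GB figure follows directly from the arithmetic of quantization: each parameter stored at 4 bits takes half a byte. A minimal sketch (the function name and the weights-only simplification are ours, not from the card; real usage adds KV cache and activation overhead on top):

```python
def quantized_weight_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Weights-only memory footprint of a quantized model.

    params_billions * (bits / 8) bytes per param gives GB directly,
    since 1e9 params * 1 byte = 1 GB (decimal). KV cache and
    activations are NOT included, so treat this as a lower bound.
    """
    return params_billions * bits_per_weight / 8


print(quantized_weight_size_gb(120, 4))  # → 60.0
```

At Q8 the same model would need about 120 GB for weights alone, which is why Q4 is the practical choice for single-GPU setups here.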
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Groq | | | 435 tok/s |
| Novita | | | 51 tok/s |
| Cerebras | | | 206 tok/s |
| SambaNova | | | 552 tok/s |
| Nscale | | | 95 tok/s |
| Hyperbolic | | | 90 tok/s |
| Together AI | | | 81 tok/s |
| Fireworks | | | 90 tok/s |
| Scaleway | | | 116 tok/s |
| OVHcloud | | | 131 tok/s |
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| M4 Max 128GB | 128 GB | 3 tok/s | tight |
| A100 PCIe 80 GB | 80 GB | 19 tok/s | tight |
| H100 SXM5 80 GB | 80 GB | 40 tok/s | comfortable |
| H200 SXM 141 GB | 141 GB | 58 tok/s | comfortable |
| M2 Ultra 192GB | 192 GB | 5 tok/s | tight |
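The verdicts above track decode speed rather than raw VRAM headroom (the M2 Ultra has ample memory but is still "tight" at 5 tok/s). A hedged sketch of that logic; the 20 tok/s cutoff and the function name are assumptions chosen to reproduce the table, not thresholds stated by the card:

```python
def gpu_verdict(vram_gb: float, required_gb: float, decode_tok_s: float,
                comfort_tok_s: float = 20.0) -> str:
    """Classify a GPU for a quantized model.

    A GPU must first fit the weights; beyond that, the verdict is
    driven by decode throughput. The 20 tok/s comfort threshold is
    a hypothetical cutoff that matches the table above.
    """
    if vram_gb < required_gb:
        return "won't fit"
    return "comfortable" if decode_tok_s >= comfort_tok_s else "tight"


# For the 60 GB Q4 model:
print(gpu_verdict(80, 60, 40))   # H100 SXM5 → "comfortable"
print(gpu_verdict(128, 60, 3))   # M4 Max → "tight"
```

Under this reading, the A100 at 19 tok/s sits just below the comfort line, which matches its "tight" verdict despite fitting the weights.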