Llama-4-Scout-17B-16E-Instruct
by meta-llama
109B params · image-text-to-text · 1.2k likes · 213.1k downloads
Llama-4-Scout-17B-16E-Instruct is a mixture-of-experts model with 109B total parameters (17B active, 16 experts). At Q4 quantization the weights require about 54 GB of VRAM. It runs comfortably on an H100 SXM5 80 GB (44 tok/s) or an H200 SXM 141 GB (64 tok/s).
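The 54 GB figure is just parameter-count arithmetic; a minimal sketch is below, assuming an idealized 4 bits per parameter and ignoring KV cache, activations, and quantization-scale overhead.

```python
# Back-of-the-envelope VRAM estimate for Q4 weights.
# Assumption: idealized 4 bits/param; real quant schemes add
# scale/zero-point overhead, and KV cache and activations
# need memory on top of this.

total_params = 109e9    # total parameter count
bits_per_param = 4.0    # Q4 quantization

weight_bytes = total_params * bits_per_param / 8
print(f"Q4 weights: {weight_bytes / 1e9:.1f} GB")  # ~54.5 GB
```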
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Groq | | | 338 tok/s |
| Novita | | | 79 tok/s |
| Nscale | | | 32 tok/s |
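Hosted providers typically expose an OpenAI-compatible endpoint; a minimal sketch against Groq is below. The base URL, model ID, and environment variable name are assumptions, so check the provider's documentation for the exact values.

```python
# Minimal sketch: querying a hosted endpoint through an
# OpenAI-compatible client. Base URL, model ID, and env var
# are assumptions; consult the provider's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",          # assumed endpoint
    api_key=os.environ["GROQ_API_KEY"],                 # assumed env var
)

resp = client.chat.completions.create(
    model="meta-llama/llama-4-scout-17b-16e-instruct",  # assumed model ID
    messages=[{"role": "user", "content": "Summarize MoE routing in one line."}],
)
print(resp.choices[0].message.content)
```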
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| M4 Max 128GB | 128 GB | 4 tok/s | tight |
| A100 PCIe 80 GB | 80 GB | 22 tok/s | tight |
| H100 SXM5 80 GB | 80 GB | 44 tok/s | comfortable |
| H200 SXM 141 GB | 141 GB | 64 tok/s | comfortable |
| M2 Ultra 192GB | 192 GB | 6 tok/s | tight |
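A sketch of the fit check behind these verdicts is below. The thresholds are assumptions inferred from the table (the weights must fit in memory; "comfortable" additionally needs interactive decode speed), not a published rubric.

```python
# Sketch of a verdict function matching the table above.
# Thresholds are assumptions inferred from the listed rows,
# not an official classification.

Q4_WEIGHTS_GB = 54.5   # from the parameter-count estimate above
COMFORT_TOK_S = 30.0   # assumed floor for "comfortable" decoding

def verdict(vram_gb: float, decode_tok_s: float) -> str:
    if vram_gb < Q4_WEIGHTS_GB:
        return "won't fit"
    if decode_tok_s >= COMFORT_TOK_S:
        return "comfortable"
    return "tight"

print(verdict(80, 44))   # comfortable (H100 SXM5 80 GB)
print(verdict(128, 4))   # tight (M4 Max 128GB)
```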