Meta-Llama-3-8B-Instruct
by meta-llama
8.0B params · text-generation · 4.4k likes · 1.4M downloads
Meta-Llama-3-8B-Instruct is an 8.0B-parameter model. At Q4 quantization it requires roughly 4GB of VRAM. It runs comfortably on the GeForce RTX 4090 (163 tok/s), GeForce RTX 5090 (245 tok/s), and M4 Max 128GB (59 tok/s).
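As a back-of-the-envelope check on the 4GB figure, the weight footprint at a given quantization level follows from parameter count × bits per weight. The sketch below is illustrative only: it ignores the KV cache, activations, and runtime overhead, and real Q4 formats typically use slightly more than 4 bits per weight.

```python
def estimate_weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate GB needed just for the quantized weights (no KV cache/overhead)."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# 8.0B params at ~4 bits/weight -> about 4 GB of weights
print(estimate_weight_vram_gb(8.0, 4.0))
```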
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | 88 tok/s | ||
| Featherless |
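Assuming you reach the model through a hosted inference provider via the `huggingface_hub` client, a minimal chat call might look like the sketch below. The provider slug and the `HF_TOKEN` environment variable are assumptions; check the provider's own documentation for authentication and pricing.

```python
import os
from huggingface_hub import InferenceClient

# Minimal sketch: route a chat request through an inference provider (here Novita).
client = InferenceClient(
    provider="novita",               # provider slug is an assumption; see the table above
    api_key=os.environ["HF_TOKEN"],  # assumed environment variable holding your token
)

response = client.chat_completion(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize Llama 3 in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```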
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24GB | 163 tok/s | comfortable |
| GeForce RTX 5090 | 32GB | 245 tok/s | comfortable |
| M4 Max 128GB | 128GB | 59 tok/s | comfortable |
| M4 Pro 48GB | 48GB | 29 tok/s | tight |
| M4 Pro 24GB | 24GB | 29 tok/s | tight |
| A100 PCIe 80 GB | 80GB | 299 tok/s | comfortable |
| H100 SXM5 80 GB | 80GB | 602 tok/s | comfortable |
| GeForce RTX 3090 | 24GB | 144 tok/s | comfortable |
| Radeon RX 7900 XTX | 24GB | 119 tok/s | comfortable |
| GeForce RTX 4080 | 16GB | 116 tok/s | comfortable |
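For local runs on the GPUs above, a Q4 GGUF build can be loaded with llama-cpp-python and fully offloaded to the GPU. This is a sketch under assumptions, not an official recipe: the GGUF repository and filename pattern below are placeholders for whichever community Q4 conversion you use.

```python
from llama_cpp import Llama

# Load an assumed Q4 GGUF conversion and offload every layer to the GPU.
llm = Llama.from_pretrained(
    repo_id="<gguf-repo>/Meta-Llama-3-8B-Instruct-GGUF",  # placeholder repo
    filename="*Q4_K_M.gguf",                              # assumed Q4 file pattern
    n_gpu_layers=-1,  # -1 = offload all layers to the GPU
    n_ctx=8192,       # Llama 3 supports an 8k context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```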