Llama-3.3-Swallow-70B-Instruct-v0.4
by tokyotech-llm
70.6B params · text-generation · 12 likes · 4.5k downloads
Llama-3.3-Swallow-70B-Instruct-v0.4 is a 70.6B-parameter instruction-tuned model. At Q4 quantization its weights require roughly 35 GB of VRAM. It runs comfortably on datacenter GPUs such as the A100 PCIe 80 GB (34 tok/s), H100 SXM5 80 GB (68 tok/s), and H200 SXM 141 GB (99 tok/s).
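The 35 GB figure follows directly from the parameter count: each weight stored at 4 bits costs half a byte. A minimal sketch of that arithmetic (weight storage only; KV cache and activations add real-world overhead on top):

```python
def quantized_vram_gb(n_params: float, bits_per_weight: int) -> float:
    """Rough weight-only VRAM estimate: parameter count times bits per
    weight, converted to gigabytes. Ignores KV cache and activation
    memory, so actual usage at inference time will be somewhat higher."""
    return n_params * bits_per_weight / 8 / 1e9

# 70.6B parameters at Q4 comes out to about 35 GB, matching the figure above.
print(round(quantized_vram_gb(70.6e9, 4)))  # → 35
```

At 8-bit quantization the same formula gives roughly 71 GB, which is why even 80 GB cards are limited to Q4-class quantization for this model.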
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| SambaNova | | | 147 tok/s |
GPU compatibility
| GPU | VRAM | Q4 Decode | Verdict |
|---|---|---|---|
| H200 SXM 141 GB | 141GB | 99 tok/s | comfortable |
| H100 SXM5 80 GB | 80GB | 68 tok/s | comfortable |
| A100 PCIe 80 GB | 80GB | 34 tok/s | comfortable |
| M2 Ultra 192GB | 192GB | 9 tok/s | tight |
| M4 Max 128GB | 128GB | 6 tok/s | tight |
| M4 Max 64GB | 64GB | 6 tok/s | tight |
| M4 Pro 48GB | 48GB | 3 tok/s | tight |
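The verdicts above appear to combine two checks: the Q4 weights must fit in memory, and "comfortable" additionally implies interactive decode speed. A hypothetical rule that reproduces the table (the 20 tok/s threshold is an assumption for illustration, not an official cutoff):

```python
def verdict(vram_gb: float, decode_tok_s: float, required_gb: float = 35.0) -> str:
    """Hypothetical fit/speed rule matching the table's verdicts.
    required_gb is the Q4 weight footprint; the 20 tok/s threshold
    separating 'comfortable' from 'tight' is an assumed value."""
    if vram_gb < required_gb:
        return "no fit"
    return "comfortable" if decode_tok_s >= 20 else "tight"

print(verdict(80, 34))   # A100 PCIe 80 GB → comfortable
print(verdict(128, 6))   # M4 Max 128GB → tight
```

Under this rule the Apple-silicon machines land in "tight" not because the model fails to fit in unified memory, but because their decode throughput stays in single digits.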