gpt4-x-alpaca-13b-native-4bit-128g
by anon8231489123
13.0B params · text-generation · 729 likes · 809 downloads
gpt4-x-alpaca-13b-native-4bit-128g is a 13.0B-parameter model. At Q4 quantization it requires roughly 7GB of VRAM, so it runs comfortably on a GeForce RTX 4090 (100 tok/s), a GeForce RTX 5090 (151 tok/s), or an M4 Max 128GB (36 tok/s).
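The ~7GB figure follows from the parameter count and quantization width: 13.0B weights at 4 bits each is about 6.5GB, and the card's 7GB estimate presumably also covers quantization metadata and runtime buffers. A minimal back-of-the-envelope sketch (the helper name and the weights-only simplification are assumptions, not part of the card):

```python
def quantized_weight_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Approximate weight memory in decimal GB at a given quantization width.

    Weights-only estimate -- real usage is somewhat higher due to
    KV cache, activations, and (for GPTQ 128g) group-scale metadata.
    """
    return params_billion * bits_per_weight / 8


print(quantized_weight_gb(13.0))  # 6.5 (GB of weights at Q4)
```

Against this baseline, any GPU in the table with 16GB or more leaves ample headroom for context; throughput differences come from memory bandwidth, not capacity.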
GPU compatibility
| GPU | VRAM | Q4 decode speed | Verdict |
|---|---|---|---|
| GeForce RTX 4090 | 24GB | 100 tok/s | comfortable |
| GeForce RTX 5090 | 32GB | 151 tok/s | comfortable |
| M4 Max 128GB | 128GB | 36 tok/s | comfortable |
| M4 Pro 48GB | 48GB | 18 tok/s | tight |
| M4 Pro 24GB | 24GB | 18 tok/s | tight |
| A100 PCIe 80GB | 80GB | 184 tok/s | comfortable |
| H100 SXM5 80GB | 80GB | 371 tok/s | comfortable |
| GeForce RTX 3090 | 24GB | 89 tok/s | comfortable |
| Radeon RX 7900 XTX | 24GB | 73 tok/s | comfortable |
| GeForce RTX 4080 | 16GB | 71 tok/s | comfortable |