DeepSeek-V3.2
by deepseek-ai
685B params · text-generation · 1.3k likes · 341.8k downloads
DeepSeek-V3.2 is a 685B-parameter text-generation model. At Q4 quantization its weights alone occupy roughly 343 GB, so serving it requires at least 343 GB of VRAM.
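The 343 GB figure follows from simple arithmetic: at Q4, each parameter takes 4 bits (half a byte), so 685B parameters need about 685 × 0.5 ≈ 343 GB for weights. A minimal sketch of that estimate (it ignores KV cache, activations, and runtime overhead, which add to the real requirement):

```python
def weights_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Rough VRAM needed for model weights alone, in GB."""
    bytes_per_param = bits_per_param / 8
    return params_billions * 1e9 * bytes_per_param / 1e9

# DeepSeek-V3.2 at Q4: 685B params * 0.5 bytes/param
print(weights_vram_gb(685, 4))  # 342.5 GB, i.e. ~343 GB
```

The same function gives ~1370 GB at FP16 (`weights_vram_gb(685, 16)`), which is why aggressive quantization matters for models of this size.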
Inference providers
| Provider | $/1M in | $/1M out | Throughput |
|---|---|---|---|
| Novita | | | 29 tok/s |
| Fireworks | | | 81 tok/s |