vram.run
API provider data is live · Hardware & cloud pricing curated 2026-02-23

Arc A580

Intel

8GB VRAM · 512 GB/s bandwidth · 24.6 FP16 TFLOPS · 175W TDP

The Arc A580 has 8GB of VRAM with 512 GB/s memory bandwidth and 24.6 TFLOPS of FP16 compute. At Q4 quantization, it can comfortably run Gemma 3 4B (102 tok/s), Qwen 2.5 7B (53 tok/s), and Llama 3.1 8B (51 tok/s). Models larger than ~14B parameters won't fit even at Q4. Electricity cost is approximately $19/month running continuously at its 175W TDP.
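The sizing rules above can be sketched in a few lines. This is a back-of-the-envelope sketch, not vram.run's exact method: it assumes Q4 weights take roughly 0.5 bytes per parameter (which matches the Q4 Weight column below), a ~15% overhead for KV cache and activations, and an assumed electricity rate of $0.15/kWh at 24/7 full TDP.

```python
VRAM_GB = 8
TDP_W = 175
USD_PER_KWH = 0.15  # assumed rate, not stated on the page


def q4_weight_gb(params_b: float) -> float:
    """Q4 quantization stores roughly 0.5 bytes per parameter."""
    return params_b * 0.5


def fits(params_b: float, vram_gb: float = VRAM_GB, overhead: float = 1.15) -> bool:
    """A model fits if Q4 weights plus ~15% KV-cache/activation overhead stay under VRAM."""
    return q4_weight_gb(params_b) * overhead <= vram_gb


def monthly_electricity_usd(watts: float = TDP_W) -> float:
    """Cost of running 24/7 for a 30-day month at the assumed rate."""
    kwh = watts / 1000 * 24 * 30
    return kwh * USD_PER_KWH


print(q4_weight_gb(8.0))                  # 4.0 GB, matches the table
print(fits(8.0), fits(24.0))              # True False
print(round(monthly_electricity_usd()))   # ~19 USD/month
```

Running it reproduces the page's headline numbers: Llama 3.1 8B fits at 4 GB, Mistral Small 24B does not, and the power bill lands near $19/month.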

What LLMs can you run?

| Model | Params | Q4 Weight | Fit | Decode |
| --- | --- | --- | --- | --- |
| Gemma 3 4B | 4.0B | 2 GB | comfortable | 102 tok/s |
| Qwen 2.5 7B | 7.6B | 4 GB | comfortable | 53 tok/s |
| Llama 3.1 8B | 8.0B | 4 GB | comfortable | 51 tok/s |
| Mistral Small 24B | 24.0B | 12 GB | won't fit | — |
| Gemma 3 27B | 27.4B | 14 GB | won't fit | — |
| Qwen 2.5 Coder 32B | 32.5B | 16 GB | won't fit | — |
| Llama 3.3 70B | 70.6B | 35 GB | won't fit | — |
| Qwen 2.5 72B | 72.7B | 36 GB | won't fit | — |
| Llama 3.1 405B | 405B | 202 GB | won't fit | — |
| DeepSeek R1 671B | 671B | 336 GB | won't fit | — |
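The decode column follows a memory-bandwidth roofline: each generated token reads every weight once, so throughput is bounded by bandwidth divided by weight size. Back-calculating from the table, the A580 appears to sustain roughly 40% of its peak 512 GB/s during decode; that efficiency factor is my inference from the numbers, not a published figure.

```python
BANDWIDTH_GBS = 512
EFFICIENCY = 0.40  # inferred from the table above, not a published spec


def decode_tok_s(q4_weight_gb: float) -> float:
    """Roofline estimate: effective bandwidth / bytes read per token."""
    return BANDWIDTH_GBS * EFFICIENCY / q4_weight_gb


print(round(decode_tok_s(2)))   # ~102, matches Gemma 3 4B
print(round(decode_tok_s(4)))   # ~51, matches Llama 3.1 8B
```

The same single efficiency factor reproduces all three "comfortable" rows, which is consistent with decode being bandwidth-bound rather than compute-bound at this scale.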

Similar GPUs

| GPU | VRAM | Bandwidth | FP16 TFLOPS | TDP |
| --- | --- | --- | --- | --- |
| Arc A750 | 8GB | 512 GB/s | 34.4 | 225W |
| Radeon Pro V520 | 8GB | 512 GB/s | 14.8 | 225W |
| GeForce RTX 2080 SUPER | 8GB | 495 GB/s | 22.3 | 250W |
| Radeon Pro W5700 | 8GB | 448 GB/s | 17.3 | 205W |
| GeForce RTX 2060 SUPER | 8GB | 448 GB/s | 14.4 | 175W |
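Because decode throughput scales with memory bandwidth, the similar-GPU table can be turned into rough Llama 3.1 8B Q4 estimates by scaling the A580's measured 51 tok/s at 512 GB/s. This is a sketch under that scaling assumption; real numbers also depend on architecture and software stack.

```python
# Reference point from the A580 model table above.
A580_TOK_S, A580_BW_GBS = 51, 512

# Bandwidth figures from the similar-GPU table.
gpus = {
    "Arc A750": 512,
    "Radeon Pro V520": 512,
    "GeForce RTX 2080 SUPER": 495,
    "Radeon Pro W5700": 448,
    "GeForce RTX 2060 SUPER": 448,
}

for name, bw_gbs in gpus.items():
    est = A580_TOK_S * bw_gbs / A580_BW_GBS
    print(f"{name}: ~{est:.0f} tok/s")
```

All five land within a few tok/s of the A580, which is expected: they share the 8GB capacity ceiling, so the same models fit and bandwidth is the only lever left.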

Compare with another GPU

Select another GPU to compare specs and model performance side by side.

Raw data · MIT · API data: live · HW/Cloud data: curated 2026-02-23 · v0.6.0