DeepSeek V3.2 — Local AI Model by DeepSeek

DeepSeek's early 2026 evolution of V3. The V3.2 model (671B total parameters, 37B active) focuses on improved long-context reasoning, tool use, and agentic tasks. It competes directly with GPT-4o-level performance on most benchmarks while remaining fully open source under the MIT license, and it beats Qwen3-Coder-Next and Kimi K2.5 on several enterprise agentic benchmarks.

Hardware Requirements

DeepSeek V3.2 671B: Min 80 GB VRAM · Q4_K_M · 128,000-token context · ollama run deepseek-v3.2:671b-q4

How to Run Locally

Install Ollama, then run: ollama run deepseek-v3.2:671b-q4

Minimum VRAM: 80 GB. For best results, use the Q4_K_M quantization.
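Once the model is running, Ollama serves it through a local REST API. The sketch below builds a request body for Ollama's /api/generate endpoint; the model tag is the one from this page, and the endpoint URL and num_ctx option follow Ollama's standard API (adjust both if your local setup differs).

```python
import json

# Ollama's default local endpoint for one-shot generation (assumption:
# a stock Ollama install listening on the default port).
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_TAG = "deepseek-v3.2:671b-q4"  # tag from this page


def build_request(prompt: str, num_ctx: int = 128_000) -> dict:
    """Assemble the JSON body Ollama expects for a non-streaming call."""
    return {
        "model": MODEL_TAG,
        "prompt": prompt,
        "stream": False,
        # Request the full 128K context window listed in the specs above;
        # lower this if you are short on VRAM.
        "options": {"num_ctx": num_ctx},
    }


payload = build_request("Summarize the MIT license in one sentence.")
print(json.dumps(payload, indent=2))
```

To actually send the request, POST this payload to OLLAMA_URL with any HTTP client (e.g. `requests.post(OLLAMA_URL, json=payload)`); the response JSON carries the completion in its "response" field.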