Phi-4 Mini — Local AI Model by Microsoft

Microsoft's ultra-efficient small model for mobile and edge deployment. Phi-4 Mini packs strong reasoning capability into just 3.8B parameters by training on high-quality synthetic data, the same approach behind the 14B Phi-4. It is designed for on-device AI on phones and laptops without a discrete GPU.

Hardware Requirements

Phi-4 Mini (3.8B)
Minimum VRAM: 2 GB
Quantization: Q4_K_M
Context window: 128,000 tokens
Run command: ollama run phi4-mini

How to Run Locally

Install Ollama, then run: ollama run phi4-mini. The command downloads the quantized model on first use and starts an interactive chat session.
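Once the model is pulled, Ollama also serves it over a local HTTP API (default port 11434), which is handy for scripting. A minimal sketch of a non-streaming request from Python, assuming the requests package is installed and the Ollama server is running:

    import requests

    # Query the local Ollama server; `ollama run phi4-mini` (or
    # `ollama pull phi4-mini`) must have downloaded the model first.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "phi4-mini",
            "prompt": "Summarize what a 128K context window allows.",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

Omitting "stream": False makes the server return newline-delimited JSON chunks as tokens are generated, which suits interactive use.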

Minimum VRAM: 2 GB. For best results, use the Q4_K_M quantization.
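The 2 GB figure follows from the size of the quantized weights. A back-of-the-envelope check (the ~4.5 bits per weight for Q4_K_M is an approximation, since the format mixes 4-bit and 6-bit blocks):

    # Rough weight-memory estimate for Phi-4 Mini at Q4_K_M.
    # Assumption: Q4_K_M averages ~4.5 bits per weight.
    params = 3.8e9
    bits_per_weight = 4.5
    weight_gb = params * bits_per_weight / 8 / 1e9
    print(f"Quantized weights: ~{weight_gb:.1f} GB")  # ~2.1 GB

The KV cache for long contexts adds memory on top of this; when the total exceeds available VRAM, Ollama offloads layers to system RAM at some cost in speed.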