OLMo 2 — Local AI Model by Allen AI

The world's most transparent LLM. OLMo 2 releases everything: weights, training code, training data, evaluation suite, and intermediate checkpoints. Perfect for researchers and compliance-sensitive deployments needing fully auditable AI. Competitive with Llama 3.1 on most benchmarks.

Hardware Requirements

OLMo 2 7B Instruct: Min 5 GB VRAM · Q4_K_M · 4,096 ctx · ollama run olmo2:7b
OLMo 2 13B Instruct: Min 9 GB VRAM · Q4_K_M · 4,096 ctx · ollama run olmo2:13b

How to Run Locally

Install Ollama, then run: ollama run olmo2:7b. The first run downloads the quantized model weights; subsequent runs start immediately.

Minimum VRAM: 5 GB for the 7B model (9 GB for 13B). Q4_K_M quantization is the recommended balance of output quality and memory use.
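Beyond the CLI, Ollama also exposes a local REST API (by default on port 11434), which is handy for scripting against the model. A minimal Python sketch, assuming the Ollama server is running and olmo2:7b has been pulled; the helper and function names here are illustrative, not part of Ollama itself:

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Minimal payload for Ollama's /api/generate endpoint.
    # stream=False returns one JSON object instead of a chunk stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Requires `ollama serve` running and the model already pulled.
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (needs a live server):
# print(generate("olmo2:7b", "Summarize what OLMo 2 releases openly."))
```

Because stream is set to False, the server replies with a single JSON body whose "response" field holds the full completion, which keeps the client code to a few lines.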