Gemma 2 Family — Local AI Model by Google

Built from the same research and technology as Google Gemini, Gemma 2 is trained with knowledge distillation, which lets its relatively small models deliver strong efficiency and output quality for their size.

Hardware Requirements

Gemma 2 9B IT: min. 8 GB VRAM · Q4_K_M quantization · 8,192-token context · ollama run gemma2

How to Run Locally

Install Ollama, then run: ollama run gemma2
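Once the model is pulled, Ollama also serves a local REST API (by default on port 11434, with a /api/generate endpoint for single-turn completions). A minimal sketch of calling it from Python, using only the standard library; the helper names here are illustrative, not part of Ollama itself:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single non-streaming generation."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    try:
        print(generate("gemma2", "Explain knowledge distillation in one sentence."))
    except OSError:
        print("Ollama server not reachable on localhost:11434")
```

Setting "stream": False returns the whole completion in one JSON object; with streaming enabled, the server instead emits one JSON line per token chunk.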

Minimum VRAM: 8 GB. For best results, use Q4_K_M quantization, which trades a little quality for a substantially smaller memory footprint.
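The 8 GB figure follows from a rough back-of-the-envelope calculation: Q4_K_M averages on the order of 4.85 bits per weight (an approximate figure for this mixed-precision format), so the quantized weights of a 9B-parameter model occupy only about 5.5 GB; the KV cache for an 8,192-token context plus runtime overhead accounts for the rest. A small sketch of that arithmetic:

```python
def estimate_weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough size of the quantized weights in decimal GB:
    billions of parameters x bits per weight / 8 bits per byte."""
    return params_billions * bits_per_weight / 8


# Assumed average of ~4.85 bits/weight for Q4_K_M (approximate, format-dependent)
weights = estimate_weight_gb(9.0, 4.85)
print(f"Quantized weights alone: ~{weights:.2f} GB")
```

The weights come out to roughly 5.5 GB, leaving 2 to 3 GB of an 8 GB card for the KV cache and scratch buffers, which is why 8 GB is a comfortable minimum rather than a hard floor.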