How to Run DeepSeek R1 Locally Using Ollama (macOS, Windows, and Ubuntu Guide)

Dec 31, 2024
3 min to read
AI
DeepSeek
OpenAI
ollama


Looking to harness the power of large language models without depending on the cloud? Meet DeepSeek R1, a high-performance, open-source reasoning model that rivals OpenAI's o1 on many benchmarks. In this guide, we'll show you how to run it locally and privately using Ollama on macOS, Windows, and Ubuntu.


🔍 What is DeepSeek R1?

DeepSeek R1 is a state-of-the-art open-source reasoning model, trained with large-scale reinforcement learning on top of the DeepSeek-V3 base model (itself pretrained on 14.8T tokens). It's optimized for reasoning, coding, math, and multi-language tasks.

Why it stands out:

  • 🧠 Comparable to or better than OpenAI's o1 models in reasoning and dialogue
  • 🔐 100% private and local — no data ever leaves your machine
  • ⚙️ Great for developers, researchers, and AI enthusiasts
  • 💬 Excels in code generation, math, and natural conversation

🧰 What is Ollama?

Ollama makes it dead simple to run powerful LLMs like DeepSeek R1 locally.

  • 🎯 One-line install
  • 💻 Works on macOS, Windows (via WSL), and Ubuntu
  • ⚡ Supports GPU acceleration
  • 📡 No cloud, no spying, no network latency

✅ System Requirements

  • macOS (Intel or Apple Silicon), Windows 11 (with WSL), or Ubuntu 20.04+
  • 10GB+ of free disk space (the default deepseek-r1 model is roughly 5GB; larger variants need much more)
  • 8GB RAM minimum (16GB recommended)
  • Optional: GPU for faster inference
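
Not sure whether your machine qualifies? A quick way to check free disk space and RAM from the terminal (commands differ by OS):

    # Ubuntu / WSL: free disk space and memory
    df -h ~ && free -h

    # macOS: free disk space and physical memory (in bytes)
    df -h ~ && sysctl -n hw.memsize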

🛠️ How to Install and Run DeepSeek R1

🍏 macOS

  1. Install Ollama:

    brew install ollama
    
  2. Start the Ollama server (keep this terminal open, or run it in the background with `brew services start ollama`):

    ollama serve
    
  3. In a new terminal, pull and run DeepSeek R1:

    ollama run deepseek-r1
    
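On any platform, the bare deepseek-r1 tag pulls the default distilled variant. If you have more RAM or a capable GPU, you can request a specific size; the tags below are examples from the Ollama model library (check the library page for the current list and exact sizes):

    ollama run deepseek-r1:1.5b   # smallest variant, fine for modest hardware
    ollama run deepseek-r1:14b    # mid-size variant, 16GB+ RAM recommended
    ollama run deepseek-r1:32b    # large variant, needs a strong GPU or lots of RAM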

🪟 Windows (WSL Required)

  1. Install WSL (run `wsl --install` from an administrator PowerShell, then reboot) and open the Ubuntu terminal.

  2. Inside the Ubuntu terminal:

    curl -fsSL https://ollama.com/install.sh | sh
    
  3. Start the server:

    ollama serve
    
  4. In a second terminal, pull and run DeepSeek R1:

    ollama run deepseek-r1
    
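Because `ollama serve` occupies the first terminal, step 4 runs in a second one. If you want to confirm the server is reachable before pulling the model, hit its root endpoint; a running server answers on port 11434:

    curl http://localhost:11434
    # expected output: "Ollama is running"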

🐧 Ubuntu (Linux)

  1. Install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
    
  2. Verify the install (the installer also sets up Ollama as a background service):

    ollama --version
    
  3. Pull and run DeepSeek R1:

    ollama run deepseek-r1
    
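On Ubuntu, the install script typically registers Ollama as a systemd service, so the server runs in the background automatically. If `ollama run` can't reach it, check the service (assuming a systemd-based setup):

    systemctl status ollama          # is the server running?
    sudo systemctl restart ollama    # restart it if needed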

💬 Interact With the Model

You can chat with DeepSeek R1 directly in your terminal via `ollama run`, or call it through Ollama's local REST API:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain general relativity like I am 12."
}'
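
By default, /api/generate streams the reply as newline-delimited JSON chunks. If you'd rather get one consolidated JSON response, the API accepts a stream flag:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Write a haiku about gravity.",
  "stream": false
}'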

🔒 100% Local, 100% Private

Running DeepSeek R1 via Ollama means:

  • 🧠 No data leaves your device
  • 💬 No internet required after install
  • 🧘 Full control over your AI stack

Perfect for sensitive work, dev environments, and peace of mind.
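
A quick way to confirm everything lives on your machine: after the initial download, list what's stored locally. No network connection is needed from that point on:

    ollama list    # shows locally stored models and their sizes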


🧠 Why Choose DeepSeek Over o1 Models?

While OpenAI's o1 models are impressive, DeepSeek R1 offers:

  • ⚖️ Equal or superior reasoning performance
  • 🔍 More transparency and control
  • 💸 Zero cost per query
  • 🚀 Offline, high-speed inference

🎯 Final Thoughts

DeepSeek R1 is a powerful, private, and production-ready model you can run entirely offline. With Ollama, setup is effortless — and you'll be chatting with a top-tier AI model in minutes.

No API keys. No subscriptions. No cloud. Just you and your model.

Try it today and see just how powerful local AI can be.