
Run DeepSeek R1 Locally and Experience the Future of Open-Source AI

Brad Malgas
Author

27 January 2025 · 3 min read

DeepSeek R1 is shaking up the AI world with its open-source, locally runnable LLM. Learn how to install and run it on your machine using Ollama and explore its capabilities firsthand.


Over the past week, the AI community has been buzzing about the newest large language model (LLM) on the block: DeepSeek R1. If you haven’t heard about it yet, here’s the gist: DeepSeek is a China-based AI development firm that built R1 at a fraction of the cost of its competitors (looking at you, OpenAI).

But why is everyone so excited about this new LLM? Is it smarter than GPT-4o? Is it cheaper to use? Can you really run it locally on a MacBook? If you’ve read the title, then you already know the answer to all of these: Yes, yes, and absolutely yes.

Why DeepSeek R1 Is a Big Deal

DeepSeek R1 is completely open-source (released under an MIT license), meaning you can use it for free. Great for us, but not so much for NVIDIA: its stock price took a hit after R1 dropped (no pun intended).

Another major reason for the hype is its training approach. While models like GPT-4o rely heavily on supervised fine-tuning (where inputs and desired outputs are explicitly taught), DeepSeek R1 leans on large-scale reinforcement learning, a trial-and-error technique closer to how humans learn. So if you were worried about AI coming for your job ...

How to Run DeepSeek R1 Locally with Ollama

If you’re like me and a bit paranoid about using AI models through a web UI, you’ll want to run DeepSeek R1 locally. The good news? It’s easy. You only need to install a few things.

Step 1: Install Ollama

Download and install it here: https://ollama.com/.

Well done, that's it for the prerequisites. All you need is Ollama.

Step 2: Run DeepSeek R1

Once Ollama is installed, open your terminal and run the following command:

```bash
ollama run deepseek-r1:1.5b
```

This will download and run the 1.5-billion-parameter version of DeepSeek R1. Don’t let the number scare you; this is actually the smallest version. The full model, in all its glory, has 671 billion parameters. Running that locally? Good luck. (Apparently, you can link 12 Mac Minis together to handle it.)

[Screenshot: installing DeepSeek R1 with Ollama]

Once the command runs, Ollama will begin downloading the model. It’s only 1.1GB, but if you have great fibre speeds like I do, that means about 8 hours and 27 minutes. (Jokes aside, my WiFi has been struggling lately.)
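Downloading the model this way also makes it available to Ollama’s local HTTP server, which listens on port 11434 by default. If you’d rather script the model than chat with it in a terminal, here’s a minimal Python sketch using only the standard library. The endpoint and payload shape follow Ollama’s `/api/generate` API; treat the details as assumptions if your Ollama version differs.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes `ollama serve` or `ollama run`
# is active on this machine).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "deepseek-r1:1.5b") -> dict:
    """Build the request body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON reply instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("hello world"))
```

The same payload works with any model tag you’ve pulled, so switching to a larger variant is just a matter of changing the `model` string.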

Step 3: Interact with DeepSeek R1

After installation, you can start chatting with the model directly from your terminal. Try typing:

```bash
hello world
```

[Screenshot: DeepSeek R1 responding to "hello world"]

It should respond with something like:
"Hello! How can I assist you today?"

You can now type any message as you would to ChatGPT or Gemini. Let’s start with something simple:

```bash
What is the meaning of life?
```

[Screenshot: DeepSeek R1 reasoning about the meaning of life]

DeepSeek R1’s "think" block showcases its reasoning before it provides an answer. (Full disclosure: I didn’t actually finish downloading the 1.5B model; this response is from the 7B version. Blame Afrihost, not me.)
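That visible reasoning arrives wrapped in `<think>...</think>` tags, so if you’re scripting against the model you’ll often want to separate the reasoning from the final answer. A minimal sketch, assuming the tag format R1 emits through Ollama at the time of writing:

```python
import re

def split_think(text: str) -> tuple[str, str]:
    """Split an R1 reply into (reasoning, answer) on the <think> block."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No think block: treat the whole reply as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # whatever follows the closing tag
    return reasoning, answer

# Example with a made-up reply in R1's output format:
reasoning, answer = split_think(
    "<think>The user greets me; respond politely.</think>"
    "Hello! How can I assist you today?"
)
```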

Final Thoughts

The possibilities with DeepSeek R1 are endless. The fact that it’s free and open-source means developers can use it, commercially included, with very few restrictions. So go ahead—download it, experiment, and build something cool!

And remember:

Made in China is a good flex sometimes.

Sources:

🔗 DeepSeek explained: Everything you need to know
🎥 DeepSeek-R1 Crash Course
