Introduction
While I primarily rely on cloud-based LLMs like GPT-4 and Claude 3.5 for my AI development projects, I find it valuable to run local models during testing and experimentation. This becomes particularly useful when I'm handling sensitive data or traveling in areas where internet connectivity might be unreliable.
For several months now, I've been exploring local language models through Ollama on my personal computer. If you're new to Ollama, it's a user-friendly, open-source platform that lets you run language models right on your own device. Think of it as your personal AI assistant that works offline: you won't need to connect to any external servers to tap into its capabilities.
The tech world is buzzing with excitement over DeepSeek R1, a new language model that's been turning heads across the internet. What makes it truly remarkable is how it stands toe-to-toe with high-end commercial language models, particularly matching the sophisticated reasoning abilities of OpenAI's costly o1. The best part? DeepSeek R1's open-source nature means it's freely available for everyone to use.
Let's explore how you can set up and run DeepSeek R1 on your MacBook for local AI development. I'll guide you through each step to help you harness this powerful tool right on your own machine.
System requirements
If you're planning to run DeepSeek R1 on your MacBook, you'll need to check if your device meets the right specs first. The good news is that DeepSeek R1 comes in six different models, each with its own parameter size and GPU memory needs, so you can choose what works best for your machine.
Let's explore the hardware and software specifications you'll need to get different versions of the DeepSeek R1 model running smoothly on your MacBook. Here's what your system needs to meet:
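Exact requirements depend on which variant you pick. As a rough guide, here are the six distilled R1 sizes available in the Ollama model library, with approximate download sizes and memory figures (these are rules of thumb based on the published model sizes, not official requirements):
- deepseek-r1:1.5b - 1.5B parameters, ~1.1GB download, runs on most modern Macs with 8GB of RAM
- deepseek-r1:7b - 7B parameters, ~4.7GB download, comfortable with 16GB of RAM
- deepseek-r1:8b - 8B parameters, ~4.9GB download, comfortable with 16GB of RAM
- deepseek-r1:14b - 14B parameters, ~9GB download, 32GB of RAM recommended
- deepseek-r1:32b - 32B parameters, ~20GB download, 32GB of RAM or more
- deepseek-r1:70b - 70B parameters, ~43GB download, 64GB of RAM or more
On the software side, Ollama requires macOS 11 Big Sur or later.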

I use a MacBook Pro equipped with the M1 Max processor and 32GB of RAM for my work.
Necessary software and tools
While downloading and installing Ollama directly is possible, it's much simpler to handle both installation and management through Homebrew.
If you're looking to manage software on your Mac with ease, Homebrew is your go-to solution. It's a powerful package manager that takes the headache out of installing and updating software on macOS. Want to learn more? Head over to https://brew.sh/ for all the details.
Installing Homebrew is easy. Simply run:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
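Once the script finishes, you can confirm Homebrew is working with:
brew --version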
Install Ollama by running this command in your terminal after installing Homebrew:
brew install ollama
To begin, launch Ollama by typing
ollama serve &
into your terminal.
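The trailing & keeps the server running in the background while you continue using the terminal. You can verify it's up with a quick request, which should reply with "Ollama is running":
curl http://localhost:11434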
Downloading and Installing the DeepSeek LLM
After installing and launching Ollama, you can get the DeepSeek LLM model up and running with a straightforward terminal command.
To start using DeepSeek R1 with Ollama, simply run:
ollama run deepseek-r1
The first time you execute this command, the system will download the model files, which might take a few minutes. By default, Ollama loads the deepseek-r1:7b version, but you can specify different model sizes using these commands:
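(These tags match the distilled R1 variants published in the Ollama model library.)
ollama run deepseek-r1:1.5b
ollama run deepseek-r1:8b
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b
ollama run deepseek-r1:70b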
Choose a model that's compatible with your MacBook's hardware power and memory capacity to ensure smooth operation.
Verifying Ollama and DeepSeek
Once you've confirmed that Ollama is up and running, you can put DeepSeek to the test by typing a simple prompt into the command line. It's a straightforward way to see the model in action.
Run:
ollama run deepseek-r1
Now you can start chatting with DeepSeek by typing in your questions or prompts. Feel free to explore different topics and see how the model responds to your queries.
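You can also pass a prompt directly on the command line for a quick one-shot test. Note that R1 models print their reasoning inside <think> tags before giving the final answer:
ollama run deepseek-r1 "Why is the sky blue?"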
To quit the session, just type:
/bye
Integrating DeepSeek with other tools
Since DeepSeek runs through Ollama, it works smoothly with any tool that connects to Ollama. You'll just need to point your tool at Ollama's local API on port 11434, and you're ready to go.
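For example, any tool that can make HTTP requests can query the model through Ollama's REST API; here via the /api/generate endpoint (the prompt is just a placeholder):
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain what a package manager does in one sentence.",
  "stream": false
}'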
Popular development environments and AI assistants like VS Code, Bolt AI, Olly AI, and Viinyx seamlessly connect with Ollama to enhance your workflow.
My Impression of DeepSeek
While DeepSeek delivers impressive performance in the cloud environment, I've noticed that running its 7B model on my 32GB M1 Max MacBook isn't quite meeting my performance expectations.
When I tested the smaller 1.5B model with a straightforward comparison between 9.11 and 9.10123, it couldn't deliver accurate results. In contrast to its larger 7B counterpart, this compact version stumbled on basic numerical comparisons. Based on this performance issue, I can't recommend using this smaller model for practical applications.
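If you'd like to run a similar sanity check yourself, a prompt along these lines works (my exact wording may have differed):
ollama run deepseek-r1:1.5b "Which number is larger: 9.11 or 9.10123?"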

Conclusion
If you're planning to run DeepSeek LLM on your MacBook, it's essential to check your device's technical specs first. This powerful language model can do amazing things for local development, but you'll want to pick a model size that runs smoothly on your machine.