Running DeepSeek locally with Docker-Compose is possible on a Mac, though a lighter-weight variant of the model is recommended. This guide walks you through running DeepSeek on localhost with a web UI.

These steps require an internet connection.
- Install Ollama
- Pick a model based on your hardware:
  - `ollama pull deepseek-r1:8b` (fast, lightweight)
  - `ollama pull deepseek-r1:14b` (balanced performance)
  - `ollama pull deepseek-r1:32b` (heavy processing)
  - `ollama pull deepseek-r1:70b` (max reasoning, slowest)
  - `ollama pull deepseek-coder:1.3b` (code completion assist)
- Test the model locally via the terminal: `ollama run deepseek-r1:8b`
- Install Docker
- Install Docker-Compose
- Create a Docker-Compose file as seen in this repo (a sketch follows this list). If you wish to use an internet connection, you can simply uncomment the `image` for the open-webui service and remove the `build`.
- Open the Docker app and run `docker-compose up --build`
- Visit http://localhost:3000 to see your chat.
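For orientation, the sketch below shows one possible shape for that Docker-Compose file: an `ollama` service plus an `open-webui` service, with the UI published on port 3000. The service names, volume name, image tags, and port mappings here are assumptions for illustration; the compose file in this repo is the authoritative version.

```yaml
# Illustrative sketch of a docker-compose.yml for Ollama + Open WebUI.
# Names, images, and ports are assumptions; defer to the file in this repo.
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"            # Ollama API
    volumes:
      - ollama:/root/.ollama     # keep pulled models between restarts

  open-webui:
    # image: ghcr.io/open-webui/open-webui:main   # uncomment to pull a prebuilt image (needs internet)
    build: .                                      # or build from the Dockerfile in this repo
    container_name: open-webui
    ports:
      - "3000:8080"                               # UI at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434       # reach Ollama over the compose network
    depends_on:
      - ollama

volumes:
  ollama:
```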
- Follow steps 1-2 in the Steps to run with a web interface; then you can also install the CodeGPT extension for VS Code.
- Navigate to the Local LLMs section. This is likely accessed from the initial model selection drop-down (pictured with Claude selected).
- From the available options, select 'Ollama' as the local LLM provider.
- You can now turn off the internet and, using local LLMs, continue to chat and analyze code.
- Follow steps 1-2 in the Steps to run with a web interface
- Install uv: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Create a uv env: `mkdir ~/< project root >/< your directory name > && uv venv --python 3.11`
- Install open-webui: `cd ~/< project root >/< your directory name > && uv pip install open-webui`
- Start open-webui: `DATA_DIR=~/.open-webui uv run open-webui serve` (see the note after this list)
- Visit localhost and start chatting!
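A hedged note on the start command: `open-webui serve` listens on port 8080 by default and looks for Ollama at its default address. If the UI cannot find your models, pointing it at Ollama explicitly is a reasonable first try. `OLLAMA_BASE_URL` is a standard Open WebUI setting, but exact variables can differ between versions, so treat this as a sketch rather than the one true invocation.

```bash
# Sketch: start Open WebUI explicitly pointed at the local Ollama server.
# Assumes Ollama is already running on its default port (11434).
cd ~/< project root >/< your directory name >

DATA_DIR=~/.open-webui \
OLLAMA_BASE_URL=http://localhost:11434 \
  uv run open-webui serve            # UI is served on http://localhost:8080 by default
```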
- Follow steps 1-6 in the steps to run with a web interface
- Next, follow steps 1-4 in the steps to run open-webui locally without internet
- Once this is done, create a Dockerfile in your chosen directory (where the open-webui deps live), like the one seen in this project, to mimic that setup and install all of open-webui's dependencies inside the Docker container (a sketch follows this list).
- Next, start the app: `docker-compose up --build`. If you do not wish to see logs: `docker-compose up --build -d`
- Visit localhost and start chatting!
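For orientation only, a Dockerfile for the open-webui service could look roughly like the sketch below. The base image, Python version, and install method here are assumptions (the real file in this project may instead copy your local uv environment or a wheel cache rather than pulling from PyPI at build time), so use the Dockerfile seen in this project as the reference.

```dockerfile
# Illustrative sketch only; defer to the Dockerfile in this project.
FROM python:3.11-slim

# Install Open WebUI and its dependencies into the image at build time,
# so nothing needs to be downloaded when the container runs offline.
RUN pip install --no-cache-dir open-webui

# Keep chats/settings under a predictable path inside the container.
ENV DATA_DIR=/app/backend/data

EXPOSE 8080
CMD ["open-webui", "serve"]
```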
- If models are not available to select, turn on your internet temporarily, go back to the terminal, and run `docker exec -it ollama bash`
- Download the model to your service using the `ollama pull` commands seen earlier in step 1 (a non-interactive alternative is sketched after this list).
- Verify the models are installed with `ollama list` while still in the CLI. If so, you can turn off the internet again and exit the CLI with `ctrl + d` or `exit`
- Restart your open-webui container with `docker-compose restart open-webui`
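As an alternative to opening an interactive shell, `docker exec` can run the pull and list commands directly against the container. This assumes the container is named `ollama`, matching the `docker exec -it ollama bash` command above; adjust the name if your compose file differs.

```bash
# Pull a model straight into the running Ollama container (internet needed for the pull itself).
docker exec ollama ollama pull deepseek-r1:8b

# Confirm the model is registered before going back offline.
docker exec ollama ollama list
```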
- Inspect the network: `docker network ls`, then `docker network inspect < network >` (a container-to-container check is sketched after this list)
- Inspect Ollama and models: `curl http://localhost:11434/api/tags` or `docker exec -it ollama ollama list`
- Restart the open-webui container: `docker-compose restart open-webui`
- Depending on your hardware, running `docker-compose down` then `docker-compose up -d` to restart built containers can take a moment. Check progress with `docker logs < service name >`
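If models still do not show up in the UI, the usual culprit is that the open-webui container cannot reach Ollama over the compose network. The one-liner below is a sketch of such a check: it assumes the containers are named `open-webui` and `ollama` (as used elsewhere in these steps) and that Python is available inside the open-webui container (it is a Python application), so adapt it to your setup.

```bash
# From inside the open-webui container, ask Ollama for its model list over the compose network.
# A JSON response listing your pulled models means DNS, networking, and the Ollama URL are fine.
docker exec open-webui python3 -c \
  "import urllib.request; print(urllib.request.urlopen('http://ollama:11434/api/tags').read().decode())"
```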