How to run DeepSeek with Ollama
Quick answer
You cannot run DeepSeek's cloud models directly with Ollama: Ollama serves local models on your machine, while deepseek-chat and deepseek-reasoner are cloud models reached through DeepSeek's OpenAI-compatible API. Use the openai Python SDK with base_url set to https://api.deepseek.com and your API key read from the environment.

Prerequisites
- Python 3.8+
- DeepSeek API key (set in environment variable DEEPSEEK_API_KEY)
- pip install "openai>=1.0"
Setup
Install the openai Python SDK (v1 or later) to access DeepSeek's API. Set your DeepSeek API key in the environment variable DEEPSEEK_API_KEY. Ollama runs local models and does not support remote API calls, so you use the OpenAI-compatible client with DeepSeek's base_url.
pip install "openai>=1.0"

Step by step
Use the openai SDK with base_url set to DeepSeek's API endpoint. This example sends a chat completion request to the deepseek-chat model.
import os
from openai import OpenAI

# Point the OpenAI-compatible client at DeepSeek's API
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello from DeepSeek via Ollama?"}],
)
print(response.choices[0].message.content)

Output
Hello from DeepSeek via Ollama? I'm here to assist you!
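The request above can be wrapped in a small reusable helper. This is a minimal sketch: ask_deepseek is a hypothetical name, and the function only assumes an OpenAI-compatible client like the one constructed earlier.

```python
def ask_deepseek(client, prompt, model="deepseek-chat"):
    """Send a single-turn chat request and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because the client is passed in as a parameter, the same helper works unchanged if you later point an OpenAI-compatible client at a different endpoint.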
Common variations
You can switch models to deepseek-reasoner for reasoning tasks or adjust max_tokens and temperature parameters. Ollama itself does not support remote API calls, so you cannot run DeepSeek models inside Ollama directly. Instead, run Ollama locally for local models and use the OpenAI-compatible client for DeepSeek cloud models.
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms."}],
    max_tokens=500,
    temperature=0.7,
)
print(response.choices[0].message.content)

Output
Quantum computing uses quantum bits that can be in multiple states simultaneously, enabling faster problem solving for certain tasks.
Troubleshooting
- If you get authentication errors, verify that the DEEPSEEK_API_KEY environment variable is set correctly.
- If requests time out, check your network connection and DeepSeek's service status.
- Remember Ollama runs local models only; it cannot proxy or run DeepSeek cloud models.
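A missing or empty DEEPSEEK_API_KEY is the usual cause of authentication errors, and it can be caught up front with a fail-fast guard. A minimal sketch using only the standard library; require_env is a hypothetical helper name:

```python
import os

def require_env(name="DEEPSEEK_API_KEY"):
    """Return the named environment variable, or fail with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before creating the client.")
    return value
```

Calling require_env() before constructing the client turns a cryptic 401 at request time into an immediate, self-explanatory error.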
Key takeaways
- Use the OpenAI-compatible openai SDK with base_url="https://api.deepseek.com" to access DeepSeek models.
- Ollama runs local models only and cannot run or proxy DeepSeek cloud models.
- Set your DeepSeek API key in os.environ["DEEPSEEK_API_KEY"] before running code.
- Switch between deepseek-chat and deepseek-reasoner for general chat or reasoning tasks.
- Troubleshoot authentication and network issues by verifying environment variables and connectivity.