Code beginner · 3 min read

How to build a chatbot with DeepSeek API

Direct answer
Use the openai SDK with base_url="https://api.deepseek.com" and call client.chat.completions.create() with model deepseek-chat to build a chatbot.

Setup

Install
bash
pip install openai
Env vars
DEEPSEEK_API_KEY
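To set the variable for the current shell session, for example (the key value below is a placeholder, not a real key):

```shell
# Export the DeepSeek API key for this shell session (placeholder value)
export DEEPSEEK_API_KEY="your-api-key-here"
```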
Imports
python
from openai import OpenAI
import os

Examples

In: Hello, who are you?
Out: I am an AI chatbot powered by DeepSeek. How can I assist you today?
In: Can you help me with Python code?
Out: Absolutely! What Python coding help do you need?
In: Tell me a joke.
Out: Why did the programmer quit his job? Because he didn't get arrays.

Integration steps

  1. Install the OpenAI Python SDK and set the DEEPSEEK_API_KEY environment variable.
  2. Initialize the OpenAI client with the DeepSeek API base URL.
  3. Build the chat messages list with user input.
  4. Call the chat.completions.create() method with model 'deepseek-chat'.
  5. Extract the chatbot's reply from response.choices[0].message.content.
  6. Display or use the chatbot response in your application.

Full code

python
from openai import OpenAI
import os

# Initialize DeepSeek client with API key and base URL
client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")

# Prepare chat messages
messages = [
    {"role": "user", "content": "Hello, who are you?"}
]

# Create chat completion
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)

# Extract and print chatbot reply
reply = response.choices[0].message.content
print("Chatbot:", reply)
output
Chatbot: I am an AI chatbot powered by DeepSeek. How can I assist you today?
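The full code above handles a single exchange. The chat endpoint is stateless, so a multi-turn chatbot must resend the growing message history on every call. A minimal sketch (the `chat_turn` helper is our own naming, not part of the API):

```python
def chat_turn(client, history, user_input):
    """Send one user turn; append both sides to history so context persists."""
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=history,  # full history, not just the latest message
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example REPL usage with the client from the full code above:
#   client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
#                   base_url="https://api.deepseek.com")
#   history = []
#   while (text := input("You: ")) != "quit":
#       print("Chatbot:", chat_turn(client, history, text))
```

Because `history` accumulates both user and assistant messages, the model sees the whole conversation on every turn.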

API trace

Request
json
{"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hello, who are you?"}]}
Response
json
{"choices": [{"message": {"content": "I am an AI chatbot powered by DeepSeek. How can I assist you today?"}}], "usage": {"total_tokens": 25}}
Extract: response.choices[0].message.content
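The same extraction can be checked against the raw JSON trace with nothing but the standard library (the payload below repeats the response shown above):

```python
import json

raw = '{"choices": [{"message": {"content": "I am an AI chatbot powered by DeepSeek. How can I assist you today?"}}], "usage": {"total_tokens": 25}}'
data = json.loads(raw)

# Dict path mirroring the SDK attribute path response.choices[0].message.content
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]
print(reply)        # → I am an AI chatbot powered by DeepSeek. How can I assist you today?
print(total_tokens) # → 25
```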

Variants

Streaming Chatbot Response

Use streaming to provide real-time partial responses for better user experience in chat interfaces.

python
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")

messages = [{"role": "user", "content": "Tell me a story."}]

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages,
    stream=True
)

for chunk in response:
    # delta.content is None for chunks without text (e.g. the role-only first delta)
    content = chunk.choices[0].delta.content
    if content:
        print(content, end='', flush=True)
print()
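When the streamed reply must also be stored (for example, to append it to the conversation history), accumulate the deltas while printing. The accumulation is plain Python and works on any iterable of chunks shaped like the SDK's:

```python
def collect_stream(chunks):
    """Print deltas as they arrive and return the assembled reply."""
    parts = []
    for chunk in chunks:
        content = chunk.choices[0].delta.content
        if content:  # role-only or empty deltas carry content=None
            print(content, end="", flush=True)
            parts.append(content)
    print()
    return "".join(parts)

# reply = collect_stream(client.chat.completions.create(..., stream=True))
```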
Async Chatbot Call

Use async calls when integrating the chatbot in asynchronous Python applications or web servers.

python
import asyncio
from openai import AsyncOpenAI
import os

async def main():
    # AsyncOpenAI provides the awaitable client; acreate() was removed in openai>=1.0
    client = AsyncOpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")
    messages = [{"role": "user", "content": "What is AI?"}]
    response = await client.chat.completions.create(
        model="deepseek-chat",
        messages=messages
    )
    print("Chatbot:", response.choices[0].message.content)

asyncio.run(main())
Use Alternative Model deepseek-reasoner

Use the 'deepseek-reasoner' model for more complex reasoning or explanation tasks.

python
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")

messages = [{"role": "user", "content": "Explain quantum computing simply."}]

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages
)

print("Chatbot:", response.choices[0].message.content)

Performance

Latency: ~700ms for a typical 100-token response with deepseek-chat
Cost: ~$0.0015 per 500 tokens exchanged
Rate limits: 600 requests per minute, 40,000 tokens per minute (default tier)
  • Keep user messages concise to reduce token usage.
  • Keep any system prompt short; it is resent with every request.
  • Batch multiple user inputs if possible to reduce calls.
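One concrete way to cap token usage in a long conversation is to trim old turns before each call, keeping the system prompt plus only the most recent messages. A minimal sketch (the helper name and window size are our own choices):

```python
def trim_history(messages, max_turns=6):
    """Keep any leading system message plus the last max_turns other messages."""
    system = [m for m in messages if m["role"] == "system"][:1]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "Be concise."}] + [
    {"role": "user", "content": f"question {i}"} for i in range(10)
]
trimmed = trim_history(history, max_turns=4)
print(len(trimmed))  # → 5 (system message + last 4 turns)
```

Pass `trim_history(history)` instead of `history` as `messages=` to bound per-call token usage; the trade-off is that the model loses context older than the window.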
Approach | Latency | Cost/call | Best for
Standard chat.completions.create() | ~700ms | ~$0.0015 | General chatbot use
Streaming chat.completions.create(stream=True) | ~700ms to first token, then incremental | ~$0.0015 | Real-time chat UI
Async AsyncOpenAI chat.completions.create() | ~700ms | ~$0.0015 | Async Python apps
deepseek-reasoner model | ~900ms | ~$0.002 | Complex reasoning tasks
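Using the rough rate of ~$0.0015 per 500 tokens quoted above (this article's estimate, not an official price), per-call cost can be estimated from each response's usage field:

```python
RATE_PER_500_TOKENS = 0.0015  # rough figure from the table above, not an official price

def estimate_cost(total_tokens, rate=RATE_PER_500_TOKENS):
    """Linear cost estimate from a response's usage.total_tokens."""
    return total_tokens / 500 * rate

# e.g. the API trace above reported usage.total_tokens == 25
print(f"${estimate_cost(25):.6f}")  # → $0.000075
```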

Quick tip

Always set the base_url to 'https://api.deepseek.com' when using the OpenAI SDK with DeepSeek API.

Common mistake

Forgetting to specify the 'base_url' parameter causes requests to go to OpenAI's API instead of DeepSeek's endpoint.

Verified 2026-04 · deepseek-chat, deepseek-reasoner