How-to · Beginner · 3 min read

How to build a chatbot with the OpenAI API

Quick answer
Use the OpenAI Python SDK to create chat completions with models like gpt-4o. Initialize the client with your API key from os.environ, then send user messages to client.chat.completions.create() and extract the response from response.choices[0].message.content.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key
  • pip install openai>=1.0

Setup

Install the official openai Python package and set your API key as an environment variable.

  • Install package: pip install openai
  • Set environment variable in your shell: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)
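Before making any API calls, it helps to confirm the environment variable is actually visible to Python. A minimal sketch (the `api_key_status` helper is hypothetical, shown only for illustration):

```python
import os

def api_key_status(env=None):
    """Return "set" or "missing" for the OPENAI_API_KEY variable."""
    env = os.environ if env is None else env
    return "set" if env.get("OPENAI_API_KEY") else "missing"

# Check against sample environments:
print(api_key_status({"OPENAI_API_KEY": "sk-demo"}))  # set
print(api_key_status({}))                             # missing
```

Call `api_key_status()` with no argument to check your real shell environment.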

Step by step

This example shows a complete Python script that sends a user message to the gpt-4o model and prints the chatbot's reply.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

messages = [
    {"role": "user", "content": "Hello, who won the World Series in 2023?"}
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages
)

print("Chatbot reply:", response.choices[0].message.content)
output
Chatbot reply: The Texas Rangers won the 2023 World Series.

Common variations

You can stream tokens as they arrive, use async calls for concurrency, or switch to models like gpt-4o-mini for faster, cheaper responses.

python
import os
import asyncio
from openai import AsyncOpenAI

# Async calls require the AsyncOpenAI client (the sync OpenAI client
# has no awaitable methods).
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def async_chat():
    messages = [{"role": "user", "content": "Tell me a joke."}]
    # With stream=True, the response arrives as a stream of chunks.
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        stream=True,
    )
    async for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        print(delta, end="", flush=True)

asyncio.run(async_chat())
output
Why did the scarecrow win an award? Because he was outstanding in his field!
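The Chat Completions API is stateless, so a multi-turn chatbot must resend the prior turns with each request. A minimal sketch of managing that history (`add_exchange` is a hypothetical helper, not part of the SDK):

```python
def add_exchange(history, user_text, assistant_text):
    """Append one user/assistant turn to a chat history list."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = [{"role": "system", "content": "You are a helpful assistant."}]
add_exchange(history, "Hi!", "Hello! How can I help?")
print(len(history))  # 3
```

On the next request, pass `history` plus the new user message as the `messages` argument, then append the model's reply before the turn after that.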

Troubleshooting

  • If you get an authentication error, verify your OPENAI_API_KEY environment variable is set correctly.
  • For rate limit errors, reduce request frequency or upgrade your plan.
  • If the model name is invalid, check for typos and use current model names like gpt-4o.
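Rate-limit errors are usually transient, so a simple retry with exponential backoff goes a long way. A sketch of such a wrapper (`with_retries` is a hypothetical helper; in practice you would pass `retry_on=(openai.RateLimitError,)` around your API call):

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01, retry_on=(Exception,)):
    """Call fn(), retrying with exponential backoff on the given exceptions."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts:
                raise  # out of attempts: let the caller handle it
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demo with a stand-in function that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_retries(flaky, retry_on=(RuntimeError,)))  # ok
```

Keep `max_attempts` small for interactive chatbots so users are not left waiting through long backoff delays.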

Key Takeaways

  • Use the official openai Python SDK v1+ with OpenAI client and environment API key.
  • Send user messages as a list of dicts with roles to client.chat.completions.create() using gpt-4o or similar models.
  • Streaming and async calls improve responsiveness and concurrency for chatbots.
  • Always handle API errors like authentication and rate limits gracefully.
  • Keep model names up to date as they may change over time.
Verified 2026-04 · gpt-4o, gpt-4o-mini