How to · beginner · 4 min read

How to list messages in a thread

Quick answer
To list messages in a thread with the OpenAI Python SDK, keep the conversation history as a list of message dictionaries and pass it to client.chat.completions.create(). The Chat Completions API is stateless, so this messages list is the thread: iterate over it to access or display every message in order.
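Stripped to its core (a sketch with placeholder contents; no API call is needed just to list messages), a thread is an ordered list of role/content dictionaries, and listing it is a loop:

```python
# A thread is an ordered list of role/content dictionaries.
thread = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello! How can I help?"},
]

# Listing the messages is just an iteration over that list.
for i, msg in enumerate(thread, start=1):
    print(f"{i}. {msg['role']}: {msg['content']}")
```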

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "openai>=1.0"

Setup

Install the official openai Python SDK version 1.0 or higher and set your API key as an environment variable.

  • Install SDK: pip install "openai>=1.0" (quote the version specifier so the shell doesn't treat > as a redirection)
  • Set environment variable in your shell: export OPENAI_API_KEY='your_api_key'
bash
pip install "openai>=1.0"
export OPENAI_API_KEY='your_api_key'

Step by step

Use the OpenAI client to send a list of messages representing the thread. Each message is a dictionary with role and content. The response includes the assistant's reply, and you can append it to the thread to keep the conversation state.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Initial thread messages
thread_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, who won the world series in 2023?"},
    {"role": "assistant", "content": "The Texas Rangers won the 2023 World Series."}
]

# Add a new user message to the thread
thread_messages.append({"role": "user", "content": "Who was the MVP?"})

# Create chat completion with full thread
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=thread_messages
)

# Extract assistant reply
assistant_reply = response.choices[0].message.content

# Append assistant reply to thread
thread_messages.append({"role": "assistant", "content": assistant_reply})

# List all messages in the thread
for i, msg in enumerate(thread_messages):
    print(f"Message {i+1} [{msg['role']}]: {msg['content']}")
output
Message 1 [system]: You are a helpful assistant.
Message 2 [user]: Hello, who won the world series in 2023?
Message 3 [assistant]: The Texas Rangers won the 2023 World Series.
Message 4 [user]: Who was the MVP?
Message 5 [assistant]: The MVP of the 2023 World Series was Corey Seager.

Common variations

You can run the same flow asynchronously with the SDK's AsyncOpenAI client, or swap in a different model such as gpt-4.1. For streaming responses, accumulate the partial chunks into a single assistant message before appending it to the list. You can also persist the thread to a file or database to restore conversation state between sessions.

python
import asyncio
import os
from openai import AsyncOpenAI

async def async_list_thread():
    # Use the async client; the sync client has no awaitable create method.
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

    thread_messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, who won the world series in 2023?"}
    ]

    response = await client.chat.completions.create(
        model="gpt-4.1",
        messages=thread_messages
    )

    assistant_reply = response.choices[0].message.content
    thread_messages.append({"role": "assistant", "content": assistant_reply})

    for i, msg in enumerate(thread_messages):
        print(f"Message {i+1} [{msg['role']}]: {msg['content']}")

asyncio.run(async_list_thread())
output
Message 1 [system]: You are a helpful assistant.
Message 2 [user]: Hello, who won the world series in 2023?
Message 3 [assistant]: The Texas Rangers won the 2023 World Series.
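To persist the thread between sessions, one minimal approach (a sketch; the thread.json filename and helper names are illustrative, not part of the SDK) is to round-trip the message list through a JSON file:

```python
import json
from pathlib import Path

def save_thread(messages, path="thread.json"):
    # Write the full message list so the conversation can resume later.
    Path(path).write_text(json.dumps(messages, indent=2))

def load_thread(path="thread.json"):
    # Restore the thread; start fresh if no file exists yet.
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []

thread = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, who won the world series in 2023?"},
]
save_thread(thread)
restored = load_thread()
```

Because the thread is plain JSON-serializable data, the same approach extends to a database column or key-value store.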

Troubleshooting

  • If you get an authentication error, verify your OPENAI_API_KEY environment variable is set correctly.
  • If the thread messages are too long, you may hit token limits; trim or summarize older messages.
  • Ensure each message dictionary has valid role values: system, user, or assistant.
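For the token-limit case above, one simple trimming strategy (a sketch; trim_thread and keep_last are illustrative helpers, not part of the SDK) keeps the system prompt plus the most recent messages:

```python
def trim_thread(messages, keep_last=4):
    # Keep the system prompt (if present) plus the last `keep_last` messages.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

# Build a long thread: 1 system prompt + 10 user/assistant pairs.
thread = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(10):
    thread.append({"role": "user", "content": f"question {i}"})
    thread.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_thread(thread, keep_last=4)
print(len(trimmed))  # 5: the system prompt plus the last 4 messages
```

More careful strategies summarize the dropped messages instead of discarding them, but the list-slicing shape stays the same.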

Key Takeaways

  • Maintain the entire conversation as a list of message dicts to represent a thread.
  • Pass the full message list to client.chat.completions.create() to continue the thread.
  • Append new user and assistant messages to keep thread state updated.
  • Use the AsyncOpenAI client for non-blocking requests while keeping the same thread structure.
  • Watch token limits and environment variable setup to avoid common errors.
Verified 2026-04 · gpt-4o-mini, gpt-4.1