How-to · Beginner · 3 min read

Responses API input types explained

Quick answer
The OpenAI Responses API accepts input either as a plain string prompt or as a list of messages with roles such as user, assistant, and system. It also supports tool (function) definitions via the tools parameter for function calling.
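Concretely, the two accepted shapes can be sketched offline, with no API call (`to_messages` below is a hypothetical helper for this guide, not part of the SDK):

```python
# Two equivalent ways to phrase the same request (data shapes only, no API call).
as_string = "Explain the Responses API input types."
as_messages = [{"role": "user", "content": "Explain the Responses API input types."}]

def to_messages(prompt):
    """Normalize a plain-string prompt into the message-list form."""
    if isinstance(prompt, str):
        return [{"role": "user", "content": prompt}]
    return prompt

print(to_messages(as_string) == as_messages)  # True
```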

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install openai>=1.0

Setup

Install the official OpenAI Python SDK and set your API key as an environment variable.

  • Run pip install openai to install the SDK.
  • Set your API key in your shell: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows).
bash
pip install openai
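Before running any example, it helps to fail fast when the key is missing (`require_api_key` is a hypothetical convenience for this guide, not part of the SDK):

```python
import os

def require_api_key(var="OPENAI_API_KEY"):
    """Return the API key from the environment, or raise a clear error early."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running the examples.")
    return key
```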

Step by step

Use the input parameter to send either a list of chat messages with roles or a plain string prompt. You can also include tools for function calling.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Example 1: input as a list of role-tagged messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the Responses API input types."}
]
response = client.responses.create(
    model="gpt-4o-mini",
    input=messages
)
print("Response with messages:", response.output_text)

# Example 2: input as a plain string (treated as a single user message)
prompt = "Explain the Responses API input types in simple terms."
response2 = client.responses.create(
    model="gpt-4o-mini",
    input=prompt
)
print("Response with string prompt:", response2.output_text)

# Example 3: tools parameter for function calling
# (Responses API tool definitions are flat: no nested "function" key.)
tools = [{
    "type": "function",
    "name": "get_current_time",
    "description": "Get the current time in ISO format",
    "parameters": {
        "type": "object",
        "properties": {},
        "required": []
    }
}]

response3 = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": "What time is it now?"}],
    tools=tools
)
# With tools, the model usually returns a function_call item rather than text.
for item in response3.output:
    if item.type == "function_call":
        print("Response with tools: model requested", item.name)
output
Response with messages: The OpenAI Responses API accepts inputs as a list of messages with roles such as user, assistant, and system.
Response with string prompt: The Responses API input types include messages, strings, and tools for function calls.
Response with tools: model requested get_current_time
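When the model opts to call a tool, you run the function locally and send the result back as a function_call_output item. A minimal offline sketch of that round trip (`build_tool_result` is a hypothetical helper; the `call` dict simulates the shape of a function_call item from the response output):

```python
import json
from datetime import datetime, timezone

def get_current_time():
    # Local implementation of the toy tool declared above.
    return datetime.now(timezone.utc).isoformat()

def build_tool_result(call):
    """Run the requested function locally and build the follow-up input item."""
    args = json.loads(call["arguments"] or "{}")
    result = get_current_time(**args)
    return {"type": "function_call_output", "call_id": call["call_id"], "output": result}

# Simulated function_call item, mirroring what appears in response3.output:
call = {"type": "function_call", "call_id": "call_123",
        "name": "get_current_time", "arguments": "{}"}
item = build_tool_result(call)
print(item["type"], item["call_id"])  # function_call_output call_123
```

In a live run you would send `item` back with `client.responses.create(model="gpt-4o-mini", previous_response_id=response3.id, input=[item])` so the model can turn the tool result into a final text answer.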

Common variations

You can use asynchronous calls, streaming responses, or different models. The message list is the standard input shape, and the same shapes work unchanged with tools; a plain string works anywhere a message list does.

python
import asyncio
import os
from openai import AsyncOpenAI

async def async_chat():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    stream = await client.responses.create(
        model="gpt-4o-mini",
        input=[{"role": "user", "content": "Explain async usage."}],
        stream=True
    )
    async for event in stream:
        # Text arrives incrementally as output_text delta events.
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
    print()

if __name__ == "__main__":
    asyncio.run(async_chat())
output
(the response text streams to the terminal incrementally as delta events arrive)

Troubleshooting

  • If you get an error about missing input, ensure you pass either a plain string or a list of message dicts with role and content keys.
  • Do not use deprecated parameters like functions= or function_call=; use tools= instead.
  • Check your API key environment variable if authentication fails.
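A small validator can catch malformed message lists before they ever reach the API (`validate_messages` is a hypothetical helper for this guide, not part of the SDK):

```python
VALID_ROLES = {"system", "developer", "user", "assistant"}

def validate_messages(items):
    """Raise a descriptive error if any message dict is malformed."""
    for i, item in enumerate(items):
        if not isinstance(item, dict):
            raise TypeError(f"item {i} is not a dict")
        if item.get("role") not in VALID_ROLES:
            raise ValueError(f"item {i} has invalid role {item.get('role')!r}")
        if "content" not in item:
            raise ValueError(f"item {i} is missing 'content'")
    return True
```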

Key Takeaways

  • Provide input to the Responses API as a plain string or as a list of messages with explicit roles.
  • Use the tools parameter to enable function calling and tool integrations.
  • Avoid deprecated parameters like functions and function_call in favor of tools.
  • A plain string input is treated as a single user message; wrap it in the input list yourself when you need to mix roles.
  • Async and streaming calls support the same input types with added flexibility for real-time output.
Verified 2026-04 · gpt-4o-mini