How-to · Intermediate · 4 min read

How to use reasoning models in production

Quick answer
Use specialized reasoning models such as deepseek-reasoner or claude-sonnet-4-5 in production by calling their APIs with clear prompts that guide step-by-step logical thinking. Implement robust error handling and prompt engineering to ensure consistent, explainable outputs suitable for complex decision-making tasks.

Prerequisites

  • Python 3.8+
  • API key for reasoning model provider (e.g., DeepSeek or Anthropic)
  • pip install openai>=1.0 or anthropic>=0.20

Setup

Install the required Python SDK and set your API key as an environment variable. For DeepSeek, use the OpenAI SDK pointed at DeepSeek's OpenAI-compatible endpoint; for Anthropic, use their official SDK.

bash
pip install openai anthropic
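
Both SDKs read the key from an environment variable at runtime. The variable names below match the code later in this guide; the key values are placeholders you replace with your own:

```shell
export DEEPSEEK_API_KEY="your-deepseek-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
```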

Step by step

This example shows how to call the deepseek-reasoner model for a reasoning task using the OpenAI-compatible SDK. The prompt instructs the model to think step-by-step before answering.

python
import os
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible API, so the OpenAI SDK works
# once it is pointed at DeepSeek's base URL.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

prompt = (
    "You are a reasoning assistant. Solve the problem step-by-step:\n"
    "If a train leaves city A at 60 mph and another leaves city B at 40 mph, when do they meet?"
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": prompt}]
)

print(response.choices[0].message.content)
output
They meet after X hours at Y miles from city A. [Detailed step-by-step reasoning here.]

Common variations

You can use claude-sonnet-4-5 with Anthropic's SDK for similar reasoning tasks. Async calls and streaming outputs are also supported for real-time applications.

python
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

system_prompt = "You are a helpful reasoning assistant."
user_prompt = (
    "Explain step-by-step how to solve: If a train leaves city A at 60 mph and another leaves city B at 40 mph, when do they meet?"
)

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=512,
    system=system_prompt,
    messages=[{"role": "user", "content": user_prompt}]
)

print(response.content[0].text)
output
Step 1: Calculate distance... Step 2: Calculate time... Final answer: They meet after ...
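
For the streaming mentioned above, Anthropic's SDK exposes `client.messages.stream`, a context manager whose `text_stream` iterator yields text deltas as they arrive. A sketch of a streaming call; the wrapper name `stream_answer` is our choice:

```python
def stream_answer(client, prompt, model="claude-sonnet-4-5", max_tokens=512):
    """Stream a reply chunk by chunk, printing as it arrives; return the full text."""
    parts = []
    with client.messages.stream(
        model=model,
        max_tokens=max_tokens,
        messages=[{"role": "user", "content": prompt}],
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)  # show partial output immediately
            parts.append(text)
    return "".join(parts)
```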

Troubleshooting

  • If the model output is vague or incomplete, refine your prompt to explicitly request step-by-step reasoning.
  • For timeout errors, reduce max_tokens or split complex queries into smaller parts.
  • Ensure your API key is valid and environment variables are correctly set.
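
The advice above can be wired directly into the call path. A minimal retry wrapper with exponential backoff for transient failures such as timeouts or rate limits; the function name `call_with_retries` and the default retry counts are our choices, not part of either SDK:

```python
import time

def call_with_retries(fn, max_attempts=3, base_delay=1.0):
    """Call fn(); on exception, wait base_delay * 2**attempt seconds and retry.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Usage: `response = call_with_retries(lambda: client.chat.completions.create(model="deepseek-reasoner", messages=[{"role": "user", "content": prompt}]))`.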

Key takeaways

  • Use reasoning-specific models like deepseek-reasoner or claude-sonnet-4-5 for complex logic tasks.
  • Craft prompts that explicitly ask for step-by-step reasoning to improve output quality.
  • Implement error handling and token limits to maintain production stability.
Verified 2026-04 · deepseek-reasoner, claude-sonnet-4-5