Code beginner · 3 min read

How to translate text with AI in Python

Direct answer
Use the OpenAI Python SDK to call chat.completions.create with a prompt instructing the model to translate text, specifying the source and target languages.

Setup

Install
bash
pip install openai
Env vars
OPENAI_API_KEY
Imports
python
import os
from openai import OpenAI

Examples

in: Translate 'Hello, how are you?' from English to Spanish.
out: Hola, ¿cómo estás?
in: Translate 'Good morning, have a nice day!' from English to French.
out: Bonjour, passez une bonne journée !
in: Translate 'Thank you for your help.' from English to Japanese.
out: ご助力ありがとうございます。

Integration steps

  1. Import the OpenAI SDK and initialize the client with the API key from os.environ.
  2. Construct a chat message instructing the model to translate the given text specifying source and target languages.
  3. Call the chat.completions.create method with a suitable model like gpt-4o and the messages array.
  4. Extract the translated text from response.choices[0].message.content.
  5. Print or return the translated output.

Full code

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def translate_text(text: str, source_lang: str, target_lang: str) -> str:
    prompt = (
        f"Translate the following text from {source_lang} to {target_lang} without extra explanation:\n" 
        f"{text}"
    )
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    source = "English"
    target = "Spanish"
    text_to_translate = "Hello, how are you?"
    translation = translate_text(text_to_translate, source, target)
    print(f"Original ({source}): {text_to_translate}")
    print(f"Translated ({target}): {translation}")
output
Original (English): Hello, how are you?
Translated (Spanish): Hola, ¿cómo estás?

API trace

Request
json
{"model": "gpt-4o", "messages": [{"role": "user", "content": "Translate the following text from English to Spanish without extra explanation:\nHello, how are you?"}]}
Response
json
{"choices": [{"message": {"content": "Hola, ¿cómo estás?"}}], "usage": {"prompt_tokens": 20, "completion_tokens": 7, "total_tokens": 27}}
Extract: response.choices[0].message.content
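The extract path can be checked against the raw JSON from the trace with plain dicts — a minimal sketch, no API call needed:

```python
import json

# Raw response body from the API trace above.
raw = (
    '{"choices": [{"message": {"content": "Hola, \u00bfc\u00f3mo est\u00e1s?"}}],'
    ' "usage": {"prompt_tokens": 20, "completion_tokens": 7, "total_tokens": 27}}'
)

data = json.loads(raw)

# Same path as response.choices[0].message.content, on the parsed dict.
translation = data["choices"][0]["message"]["content"]
print(translation)  # Hola, ¿cómo estás?

# Token usage is returned alongside the choices.
print(data["usage"]["total_tokens"])  # 27
```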

Variants

Streaming translation

Use streaming when translating long texts to provide incremental output and improve user experience.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def translate_stream(text: str, source_lang: str, target_lang: str):
    prompt = f"Translate the following text from {source_lang} to {target_lang} without extra explanation:\n{text}"
    messages = [{"role": "user", "content": prompt}]
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        stream=True
    )
    translation = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        print(delta, end="", flush=True)
        translation += delta
    print()
    return translation.strip()

if __name__ == "__main__":
    translate_stream("Good morning, have a nice day!", "English", "French")

Async translation

Use async calls when integrating translation in applications requiring concurrency or non-blocking behavior.

python
import os
import asyncio
from openai import AsyncOpenAI

# Awaiting requires the async client; the sync OpenAI client cannot be awaited.
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def translate_async(text: str, source_lang: str, target_lang: str) -> str:
    prompt = f"Translate the following text from {source_lang} to {target_lang} without extra explanation:\n{text}"
    messages = [{"role": "user", "content": prompt}]
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages
    )
    return response.choices[0].message.content.strip()

async def main():
    translation = await translate_async("Thank you for your help.", "English", "Japanese")
    print(f"Translated (Japanese): {translation}")

if __name__ == "__main__":
    asyncio.run(main())

Use Anthropic Claude for translation

Use Anthropic Claude models if you prefer Claude's style or have an Anthropic API key.

python
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

system_prompt = "You are a helpful assistant that translates text accurately."

def translate_with_claude(text: str, source_lang: str, target_lang: str) -> str:
    user_message = f"Translate the following text from {source_lang} to {target_lang}: {text}"
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": user_message}]
    )
    # response.content is a list of content blocks; take the first text block
    return response.content[0].text.strip()

if __name__ == "__main__":
    translation = translate_with_claude("Hello, how are you?", "English", "Spanish")
    print(f"Translated (Spanish): {translation}")

Performance

Latency: ~800ms for gpt-4o non-streaming translation calls
Cost: ~$0.002 per 500 tokens exchanged on gpt-4o
Rate limits: Tier 1: 500 requests per minute / 30,000 tokens per minute
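The per-call cost can be estimated from the usage block returned with every response. A rough sketch, assuming gpt-4o rates of $2.50 per million input tokens and $10.00 per million output tokens (verify against current pricing):

```python
# Assumed gpt-4o rates in USD per token; verify against current published pricing.
INPUT_RATE = 2.50 / 1_000_000
OUTPUT_RATE = 10.00 / 1_000_000

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Rough USD cost of one chat completion call."""
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE

# Usage figures from the API trace above: 20 prompt + 7 completion tokens.
cost = estimate_cost(20, 7)
print(f"${cost:.6f}")  # a tiny fraction of a cent per short translation
```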
  • Keep prompts concise to reduce token usage.
  • Avoid unnecessary system messages or verbose instructions.
  • Batch multiple sentences in one request to amortize overhead.
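The batching tip can be sketched as a numbered-list prompt: send several sentences in one request and split the reply back into per-sentence translations. The prompt wording and the `split_translations` helper are illustrative, not part of any SDK:

```python
def build_batch_prompt(sentences: list[str], source_lang: str, target_lang: str) -> str:
    """Number each sentence so the model can return a matching numbered list."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(sentences, 1))
    return (
        f"Translate the following sentences from {source_lang} to {target_lang}. "
        f"Reply with the same numbered list and no extra explanation:\n{numbered}"
    )

def split_translations(reply: str) -> list[str]:
    """Strip the 'N. ' prefixes from a numbered reply."""
    lines = [line.strip() for line in reply.splitlines() if line.strip()]
    return [line.split(". ", 1)[1] for line in lines]

prompt = build_batch_prompt(
    ["Hello, how are you?", "Thank you for your help."], "English", "Spanish"
)
# A reply in the requested format then parses back into a list:
reply = "1. Hola, ¿cómo estás?\n2. Gracias por tu ayuda."
print(split_translations(reply))  # ['Hola, ¿cómo estás?', 'Gracias por tu ayuda.']
```

One request for N sentences pays the fixed prompt overhead once instead of N times.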
Approach | Latency | Cost/call | Best for
Standard OpenAI chat completion | ~800ms | ~$0.002 per 500 tokens | Simple, accurate translations
Streaming OpenAI chat completion | ~800ms to first token, then incremental | ~$0.002 per 500 tokens | Long texts with incremental output
Anthropic Claude chat completion | ~900ms | Check Anthropic pricing | Alternative style and tone preferences

Quick tip

Always specify source and target languages explicitly in your prompt to improve translation accuracy.

Common mistake

Not specifying source and target languages clearly in the prompt, leading to inaccurate or incomplete translations.

Verified 2026-04 · gpt-4o, claude-3-5-sonnet-20241022