How-to · Beginner · 3 min read

How to use Together AI with LangChain

Quick answer
Use the openai Python SDK with base_url="https://api.together.xyz/v1" and your TOGETHER_API_KEY to call Together AI models. In LangChain, configure ChatOpenAI with the same base_url and API key to seamlessly integrate Together AI chat completions into your chains.

PREREQUISITES

  • Python 3.8+
  • Together AI API key (set TOGETHER_API_KEY environment variable)
  • pip install "openai>=1.0" langchain-openai (quote the version specifier so the shell does not treat >= as a redirect)

Setup

Install the required Python packages and set your environment variable for the Together AI API key.

  • Install packages: pip install openai langchain-openai
  • Set environment variable: export TOGETHER_API_KEY="your_api_key_here" (Linux/macOS) or set TOGETHER_API_KEY=your_api_key_here (Windows)
bash
pip install openai langchain-openai
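Before making any calls, it helps to fail fast when the key is missing rather than debugging a 401 later. A minimal sketch (the `require_together_key` helper is hypothetical, not part of either SDK):

```python
import os

def require_together_key() -> str:
    """Fetch TOGETHER_API_KEY from the environment, failing fast with a clear message."""
    key = os.environ.get("TOGETHER_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            'TOGETHER_API_KEY is not set. Run: export TOGETHER_API_KEY="your_api_key_here"'
        )
    return key
```

Calling this once at startup turns a confusing authentication failure into an actionable message.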

Step by step

This example shows how to call Together AI directly with the OpenAI SDK and how to configure LangChain's ChatOpenAI to use Together AI for chat completions.

python
import os
from openai import OpenAI
from langchain_openai import ChatOpenAI

# Initialize Together AI client using OpenAI SDK with base_url override
client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1"
)

# Direct Together AI chat completion call
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Hello from Together AI!"}]
)
print("Direct Together AI response:", response.choices[0].message.content)

# LangChain ChatOpenAI configured to use Together AI
chat = ChatOpenAI(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1"
)

# Run a chat completion via LangChain
result = chat.invoke([{"role": "user", "content": "Hello from LangChain with Together AI!"}])
print("LangChain Together AI response:", result.content)
output
Direct Together AI response: <model-generated reply; exact wording varies per run>
LangChain Together AI response: <model-generated reply; exact wording varies per run>

Common variations

For async calls, use the SDK's AsyncOpenAI client and await the request inside an async function. You can also switch models by changing the model parameter to any other chat model Together AI serves, such as a smaller Llama variant.

python
import asyncio
import os
from openai import AsyncOpenAI

async def async_together_chat():
    # AsyncOpenAI (not OpenAI) returns awaitable coroutines
    client = AsyncOpenAI(
        api_key=os.environ["TOGETHER_API_KEY"],
        base_url="https://api.together.xyz/v1"
    )
    response = await client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
        messages=[{"role": "user", "content": "Async call with Together AI"}]
    )
    print("Async Together AI response:", response.choices[0].message.content)

asyncio.run(async_together_chat())
output
Async Together AI response: <model-generated reply; exact wording varies per run>
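For the model-switching variation, one option is to factor the shared connection settings out of each call site so that changing models is a one-argument edit. A sketch under that assumption (`together_connection` is a hypothetical helper, not part of the OpenAI SDK or LangChain):

```python
import os

TOGETHER_BASE_URL = "https://api.together.xyz/v1"

def together_connection() -> dict:
    """Connection kwargs shared by OpenAI(...) and ChatOpenAI(...)."""
    return {
        "api_key": os.environ.get("TOGETHER_API_KEY", ""),
        "base_url": TOGETHER_BASE_URL,
    }

# Sketch usage: swap the model freely while the connection stays fixed.
# client = OpenAI(**together_connection())
# chat = ChatOpenAI(model="meta-llama/Llama-3.3-70B-Instruct-Turbo", **together_connection())
```

Keeping the base URL and key in one place also makes it easy to switch the whole script to a different OpenAI-compatible provider later.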

Troubleshooting

  • If you get authentication errors, verify your TOGETHER_API_KEY environment variable is set correctly.
  • If the API returns model not found, confirm the model name is valid and supported by Together AI.
  • For network errors, check your internet connection and that https://api.together.xyz/v1 is reachable.
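The checks above can be folded into a small triage helper that maps common error text to the likely fix. A heuristic sketch (`triage_together_error` is hypothetical; adapt the patterns to the actual exception messages your client raises):

```python
def triage_together_error(message: str) -> str:
    """Map a raw API error message to the most likely fix (heuristic sketch)."""
    msg = message.lower()
    if "401" in msg or "authentication" in msg or "invalid api key" in msg:
        return "Check that TOGETHER_API_KEY is set and matches your Together AI dashboard."
    if "model" in msg and ("not found" in msg or "404" in msg):
        return "Confirm the model name against Together AI's supported model list."
    if "connection" in msg or "timeout" in msg:
        return "Verify network access to https://api.together.xyz/v1."
    return "Inspect the full error body returned by the API."
```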

Key Takeaways

  • Use the OpenAI SDK with base_url="https://api.together.xyz/v1" to access Together AI.
  • Configure LangChain's ChatOpenAI with Together AI API key and base URL for seamless integration.
  • Together AI supports async calls and multiple model variants for flexible usage.
Verified 2026-04 · meta-llama/Llama-3.3-70B-Instruct-Turbo