How to use Qwen via Together AI
Quick answer

Use the `openai` Python SDK with `base_url="https://api.together.xyz/v1"` and your `TOGETHER_API_KEY` to call the `qwen-v1` model. Create chat completions with `client.chat.completions.create()`, passing a list of messages, to interact with Qwen via Together AI.

Prerequisites

- Python 3.8+
- A Together AI API key (set the `TOGETHER_API_KEY` environment variable)
- `pip install "openai>=1.0"`
Setup
Install the openai Python package and set your Together AI API key as an environment variable.
- Install the SDK:

  ```shell
  pip install openai
  ```

- Set the environment variable (Linux/macOS):

  ```shell
  export TOGETHER_API_KEY="your_api_key_here"
  ```

  On Windows (cmd):

  ```shell
  set TOGETHER_API_KEY=your_api_key_here
  ```

Step by step
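Before making any calls, you can confirm the key is actually visible to Python. A minimal sketch (the helper name `have_api_key` is ours, not part of any SDK):

```python
import os

def have_api_key(env) -> bool:
    # True when TOGETHER_API_KEY is present and non-empty.
    return bool(env.get("TOGETHER_API_KEY", "").strip())

if have_api_key(os.environ):
    print("TOGETHER_API_KEY is set.")
else:
    print("TOGETHER_API_KEY is missing; export it before running the examples.")
```

This catches the most common setup mistake (exporting the key in one shell and running the script in another) before it surfaces as an authentication error.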
Use the OpenAI-compatible SDK with Together AI's base URL and your API key to call the qwen-v1 model. Pass chat messages to generate completions.
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="qwen-v1",
    messages=[{"role": "user", "content": "Explain the benefits of using Qwen via Together AI."}],
)
print(response.choices[0].message.content)
```

Example output:

Qwen via Together AI offers seamless integration with an OpenAI-compatible API, enabling easy access to powerful language models with minimal setup and robust performance.
Common variations
You can use other Qwen model versions, if available, by changing the `model` parameter. For streaming responses, pass `stream=True` to `chat.completions.create()`. For async calls, use the SDK's `AsyncOpenAI` client, which exposes the same interface with awaitable methods.
```python
import asyncio
import os
from openai import AsyncOpenAI

async def async_chat():
    client = AsyncOpenAI(
        api_key=os.environ["TOGETHER_API_KEY"],
        base_url="https://api.together.xyz/v1",
    )
    stream = await client.chat.completions.create(
        model="qwen-v1",
        messages=[{"role": "user", "content": "What is Qwen?"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)

asyncio.run(async_chat())
```

Example output:

Qwen is a state-of-the-art large language model developed to provide advanced natural language understanding and generation capabilities...
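Streaming also works with the synchronous client: with `stream=True`, `create()` returns an iterable of chunks whose text lives in `chunk.choices[0].delta.content`. A minimal sketch — the helper `collect_stream` is ours, and the model name `qwen-v1` is taken from this guide and may need updating per Together AI's docs:

```python
import os

def collect_stream(chunks) -> str:
    # Assemble streamed delta fragments into the full reply text,
    # printing each fragment as it arrives. Each chunk mirrors the
    # OpenAI SDK shape: chunk.choices[0].delta.content (may be None).
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
            parts.append(delta)
    print()
    return "".join(parts)

def main():
    # Requires the openai package and a valid TOGETHER_API_KEY.
    from openai import OpenAI
    client = OpenAI(
        api_key=os.environ["TOGETHER_API_KEY"],
        base_url="https://api.together.xyz/v1",
    )
    stream = client.chat.completions.create(
        model="qwen-v1",
        messages=[{"role": "user", "content": "What is Qwen?"}],
        stream=True,
    )
    return collect_stream(stream)

if __name__ == "__main__" and os.environ.get("TOGETHER_API_KEY"):
    main()
```

Collecting the fragments into one string is handy when you want to both display tokens live and keep the full reply for logging.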
Troubleshooting
- If you get authentication errors, verify that your `TOGETHER_API_KEY` environment variable is set correctly.
- For HTTP 404 errors, confirm the `base_url` is exactly `https://api.together.xyz/v1`.
- If the model name `qwen-v1` is not found, check the Together AI documentation for updated model names.
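To track down a "model not found" error, you can query the endpoint for its model list, assuming it exposes the OpenAI-compatible `client.models.list()` route. A sketch — the helper `find_qwen_ids` is ours:

```python
import os

def find_qwen_ids(model_ids):
    # Keep only ids that look like Qwen models (case-insensitive match).
    return [m for m in model_ids if "qwen" in m.lower()]

def list_qwen_models():
    # Requires the openai package and a valid TOGETHER_API_KEY.
    from openai import OpenAI
    client = OpenAI(
        api_key=os.environ["TOGETHER_API_KEY"],
        base_url="https://api.together.xyz/v1",
    )
    return find_qwen_ids([m.id for m in client.models.list()])

if __name__ == "__main__" and os.environ.get("TOGETHER_API_KEY"):
    print(list_qwen_models())
```

Whatever ids this prints are the exact strings to pass as the `model` parameter.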
Key Takeaways
- Use the OpenAI-compatible `openai` SDK with Together AI's `base_url` to access Qwen models.
- Set your API key in the `TOGETHER_API_KEY` environment variable for secure authentication.
- Switch models or enable streaming by adjusting parameters in `chat.completions.create()`.
- Async streaming uses the `AsyncOpenAI` client inside an async event loop.
- Check the Together AI docs for the latest model names and endpoint updates.