How to use Tavily search API
Quick answer
Use the Tavily search API by sending an HTTP POST request to its search endpoint with your query and API key. In Python, you can also call it through an OpenAI-compatible client by setting the client's base URL to Tavily's API and passing the search query as a user message.
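The direct-HTTP route can be sketched with only the standard library. The endpoint path (/search), the payload field names (api_key, query, max_results), and the JSON response shape are assumptions to be verified against the current Tavily documentation:

```python
import json
import os
import urllib.request

# Assumed endpoint path; verify against the Tavily docs.
TAVILY_ENDPOINT = "https://api.tavily.com/search"

def build_request(query, api_key, max_results=5):
    # Tavily accepts the API key in the JSON body (field names assumed).
    body = json.dumps({
        "api_key": api_key,
        "query": query,
        "max_results": max_results,
    }).encode("utf-8")
    return urllib.request.Request(
        TAVILY_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def tavily_search(query, api_key=None):
    """POST the query and return the decoded JSON response."""
    key = api_key or os.environ["TAVILY_API_KEY"]
    with urllib.request.urlopen(build_request(query, key), timeout=30) as resp:
        return json.load(resp)
```

Separating request construction from the network call makes the payload easy to inspect and test without hitting the live API.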
PREREQUISITES
- Python 3.8+
- Tavily API key
- pip install openai>=1.0
- Basic knowledge of HTTP APIs
Setup
Install the openai Python package (v1+) to use the OpenAI-compatible client, and set your Tavily API key as an environment variable so it is never hardcoded. Because you are pointing an OpenAI client at a different service, you must specify the base_url when creating the client.
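Setting the variable in your shell looks like this (the key value is a placeholder; use the real key from your Tavily account):

```shell
# Export the key for the current session; add this line to ~/.bashrc
# or ~/.zshrc to make it permanent. The value below is a placeholder.
export TAVILY_API_KEY="tvly-your-key-here"
```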
pip install openai

Output:
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
Step by step
Use the OpenAI SDK with the base_url set to Tavily's API endpoint. Pass your search query as a user message to the chat.completions.create method. The response will contain the search results in the message content.
import os
from openai import OpenAI

# Initialize the client with Tavily's base URL and the API key
# read from the environment
client = OpenAI(
    api_key=os.environ["TAVILY_API_KEY"],
    base_url="https://api.tavily.com/v1",
)

# Define the search query
query = "Find recent AI research papers on natural language processing"

# Call the Tavily search API via chat completions
response = client.chat.completions.create(
    model="tavily-search-1",
    messages=[{"role": "user", "content": query}],
)

# Extract and print the search results
search_results = response.choices[0].message.content
print("Search results:\n", search_results)

Output:
Search results:
 1. "Attention Is All You Need" - Vaswani et al., 2017
 2. "BERT: Pre-training of Deep Bidirectional Transformers" - Devlin et al., 2019
 3. "GPT-4 Technical Report" - OpenAI, 2023
 ... (more results)
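The content comes back as a single text block. If you want individual entries, one illustrative post-processing step (assuming the numbered-list layout in the sample output above; other formats would need a different parser) is:

```python
import re

def split_results(content):
    # Split "1. ... 2. ... 3. ..." style text into separate entries.
    # Assumes the numbered-list layout shown in the sample output;
    # a different response format will need a different parser.
    parts = re.split(r"\s*\d+\.\s+", content.strip())
    return [p.strip() for p in parts if p.strip()]
```

For example, feeding it a two-item numbered list returns a two-element Python list, one entry per result.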
Common variations
You can make asynchronous calls with AsyncOpenAI and asyncio for non-blocking requests. Change the model parameter to use a different Tavily search version if one is available. Streaming responses are not typically supported for search APIs, but check the Tavily docs for updates.
import os
import asyncio
from openai import AsyncOpenAI  # the synchronous OpenAI client cannot be awaited

async def async_search():
    client = AsyncOpenAI(
        api_key=os.environ["TAVILY_API_KEY"],
        base_url="https://api.tavily.com/v1",
    )
    response = await client.chat.completions.create(
        model="tavily-search-1",
        messages=[{"role": "user", "content": "Latest AI breakthroughs in 2026"}],
    )
    print("Async search results:\n", response.choices[0].message.content)

asyncio.run(async_search())

Output:
Async search results:
 1. "Gemini 2.5 Pro Release" - Google, 2026
 2. "Mistral Large Model" - Mistral AI, 2026
 3. "Claude Sonnet 4-5" - Anthropic, 2026
 ... (more results)
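The main payoff of the async client is running several searches concurrently. The pattern below sketches it with asyncio.gather; search_one is a stand-in coroutine, and in real use its body would be the awaited chat.completions.create call shown above:

```python
import asyncio

async def search_one(query):
    # Stand-in for an awaited search call; in practice, replace the
    # sleep with the real async API request.
    await asyncio.sleep(0.01)
    return f"results for: {query}"

async def search_many(queries):
    # gather() awaits all coroutines concurrently rather than one by one
    return await asyncio.gather(*(search_one(q) for q in queries))

results = asyncio.run(search_many(["query A", "query B"]))
print(results)
```

With real network calls, the total wall-clock time approaches that of the slowest single request instead of the sum of all of them.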
Troubleshooting
- If you get authentication errors, verify your TAVILY_API_KEY environment variable is set correctly.
- If the API returns 404 or connection errors, confirm the base_url is correct and reachable.
- Check that the model parameter matches a valid Tavily search model name.
- For unexpected response formats, consult the Tavily API documentation for any updates or changes.
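For the direct-HTTP route, these failure modes map onto HTTP status codes. A small helper (illustrative; the code-to-cause mapping follows the checklist above plus standard HTTP semantics) can turn them into actionable hints:

```python
def explain_status(status):
    # Map common HTTP error codes from the search endpoint to the
    # likely cause, per the troubleshooting checklist above.
    hints = {
        401: "Authentication failed: check the TAVILY_API_KEY value.",
        403: "Forbidden: the key may lack access to this endpoint.",
        404: "Not found: confirm the base_url / endpoint path.",
        429: "Rate limited: slow down or retry with backoff.",
    }
    return hints.get(status, f"Unexpected status {status}: consult the Tavily docs.")
```

Calling explain_status(401), for example, points you straight at the API-key check instead of a generic stack trace.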
Key Takeaways
- Use the OpenAI SDK with base_url set to Tavily's API endpoint for compatibility.
- Pass your search query as a user message to chat.completions.create with the appropriate model.
- Set your Tavily API key via environment variables to avoid hardcoding.
- Async calls are supported for non-blocking search queries using asyncio.
- Verify model names and endpoint URLs to avoid common connection and authentication errors.