How to use SerpAPI with AI
Quick answer
Use the serpapi Python client (installed via the google-search-results package) to fetch real-time search results, then pass those results as context to an AI model such as gpt-4o via the OpenAI SDK. This grounds the model's answers in up-to-date web data.
Prerequisites
- Python 3.8+
- A SerpAPI API key
- An OpenAI API key
- pip install openai google-search-results
Setup
Install the required Python packages and set the SERPAPI_API_KEY and OPENAI_API_KEY environment variables. Reading keys from the environment keeps them out of your source code.
pip install openai google-search-results
Output:
Collecting openai
Collecting google-search-results
Successfully installed google_search_results-2.x openai-1.x
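On macOS or Linux with a POSIX shell (bash/zsh), the environment variables can be exported like this; replace the placeholder values with your actual keys (on Windows, use set or setx instead):

```shell
# Replace the placeholders with your real keys before running the examples
export SERPAPI_API_KEY="your-serpapi-key"
export OPENAI_API_KEY="your-openai-key"
```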
Step by step
This example fetches Google search results for a query with the GoogleSearch client, then sends the top result snippets as context to gpt-4o to generate an informed answer.
import os
from serpapi import GoogleSearch
from openai import OpenAI
# Initialize clients
serpapi_key = os.environ["SERPAPI_API_KEY"]
openai_key = os.environ["OPENAI_API_KEY"]
client = OpenAI(api_key=openai_key)
# Define search query
query = "latest AI breakthroughs 2026"
# Fetch search results from SerpAPI
search = GoogleSearch({"q": query, "api_key": serpapi_key})
results = search.get_dict()
# Extract snippet texts from organic results
snippets = []
for result in results.get("organic_results", [])[:3]:
    snippet = result.get("snippet")
    if snippet:
        snippets.append(snippet)
context = "\n".join(snippets)
# Prepare prompt with search context
messages = [
    {"role": "system", "content": "You are an AI assistant that uses up-to-date web search results to answer questions."},
    {"role": "user", "content": f"Based on these search results:\n{context}\n\nAnswer the question: {query}"}
]
# Generate AI completion
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages
)
print("AI answer:", response.choices[0].message.content)
Output:
AI answer: The latest AI breakthroughs in 2026 include advancements in multimodal models, improved reasoning capabilities, and more efficient training techniques that reduce energy consumption.
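Snippet-only context drops attribution. As a sketch, a hypothetical build_context helper (demonstrated here on a mocked dict shaped like SerpAPI's JSON response) can keep titles and links so the model can cite its sources:

```python
def build_context(results: dict, max_results: int = 3) -> str:
    """Format SerpAPI organic results as 'Title (link): snippet' lines."""
    lines = []
    for r in results.get("organic_results", [])[:max_results]:
        snippet = r.get("snippet")
        if snippet:  # skip results without a snippet
            lines.append(f"{r.get('title', 'Untitled')} ({r.get('link', '')}): {snippet}")
    return "\n".join(lines)

# Mocked response shaped like SerpAPI's JSON; entries without a snippet are skipped
mock = {"organic_results": [
    {"title": "AI in 2026", "link": "https://example.com/a", "snippet": "Multimodal models advance."},
    {"title": "No snippet here", "link": "https://example.com/b"},
]}
print(build_context(mock))
# → AI in 2026 (https://example.com/a): Multimodal models advance.
```

Passing the result of build_context as the context string in the prompt above gives the model enough detail to name its sources in the answer.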
Common variations
- Use async calls with asyncio and AsyncOpenAI, awaiting client.chat.completions.create.
- Switch the model to gpt-4o-mini for faster, cheaper responses.
- Expand the context by fetching more organic results from SerpAPI.
import asyncio
import os
from serpapi import GoogleSearch
from openai import AsyncOpenAI

async def main():
    serpapi_key = os.environ["SERPAPI_API_KEY"]
    openai_key = os.environ["OPENAI_API_KEY"]
    client = AsyncOpenAI(api_key=openai_key)
    # GoogleSearch is synchronous, so run it in a thread to avoid blocking the event loop
    search = GoogleSearch({"q": "AI trends 2026", "api_key": serpapi_key})
    results = await asyncio.to_thread(search.get_dict)
    snippets = [r.get("snippet") for r in results.get("organic_results", [])[:5] if r.get("snippet")]
    context = "\n".join(snippets)
    messages = [
        {"role": "system", "content": "You are an AI assistant using recent search data."},
        {"role": "user", "content": f"Search context:\n{context}\nQuestion: What are AI trends in 2026?"}
    ]
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    )
    print("Async AI answer:", response.choices[0].message.content)

asyncio.run(main())
Output:
Async AI answer: In 2026, AI trends focus on enhanced natural language understanding, integration of AI with IoT devices, and breakthroughs in energy-efficient model architectures.
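Fetching more organic results improves coverage but raises token cost. One way to balance the two, sketched here as a hypothetical truncate_context helper, is to cap the joined context at a character budget before sending it to the model:

```python
def truncate_context(snippets, max_chars=2000):
    """Join snippets with newlines, stopping before the budget is exceeded."""
    kept, total = [], 0
    for s in snippets:
        # +1 accounts for the newline separator added by join
        if total + len(s) + 1 > max_chars:
            break
        kept.append(s)
        total += len(s) + 1
    return "\n".join(kept)

# With a tight budget, only the snippets that fit are kept
print(truncate_context(["aaa", "bbb", "ccc"], max_chars=7))
# → aaa
```

A character cap is a rough proxy for tokens; for precise budgeting you could count tokens with a tokenizer instead.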
Troubleshooting
- If you get a 401 Unauthorized error, verify that your SERPAPI_API_KEY and OPENAI_API_KEY environment variables are set correctly.
- If search results are empty, check your SerpAPI query parameters and quota limits.
- For slow responses, consider reducing the number of search results or switching to a smaller AI model.
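A quick way to catch missing-key problems before they surface as a 401 is to fail fast at startup; the require_env helper below is a hypothetical sketch of that check:

```python
import os

def require_env(*names: str) -> None:
    """Raise a clear error listing any missing environment variables."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))

# Call this before creating the SerpAPI/OpenAI clients, e.g.:
# require_env("SERPAPI_API_KEY", "OPENAI_API_KEY")
```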
Key takeaways
- Use SerpAPI to fetch real-time search results and pass them as context to AI models for up-to-date answers.
- Always secure your API keys via environment variables and never hardcode them in code.
- Adjust the number of search results and AI model choice to balance cost, speed, and answer quality.