How to use Gemini API with LangChain
Quick answer
Use the gemini-1.5-pro or gemini-2.0-flash model with LangChain by installing langchain_openai, setting your OPENAI_API_KEY environment variable to your Gemini API key, and creating a ChatOpenAI instance that points at Gemini's OpenAI-compatible endpoint and specifies the Gemini model. Then pass this instance to LangChain chains or prompts for chat completions.
Prerequisites
- Python 3.8+
- Gemini API key (set as OPENAI_API_KEY in your environment)
- pip install "langchain_openai>=0.2.0"
Setup
Install the langchain_openai package to access Gemini models through Google's OpenAI-compatible endpoint. Ensure your OPENAI_API_KEY environment variable is set to your Google Gemini API key.
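For example, on macOS or Linux you can export the key before running your script (the value below is a placeholder for your actual Gemini API key):

```shell
# Export your Gemini API key so ChatOpenAI can pick it up as OPENAI_API_KEY
export OPENAI_API_KEY="your-gemini-api-key"
```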
Run this command to install:
pip install "langchain_openai>=0.2.0"
Step by step
This example shows how to create a LangChain ChatOpenAI client using the Gemini model and generate a chat completion.
from langchain_openai import ChatOpenAI

# Initialize the ChatOpenAI client against Gemini's OpenAI-compatible endpoint;
# OPENAI_API_KEY (your Gemini API key) is read from the environment automatically
chat = ChatOpenAI(
    model="gemini-1.5-pro",
    temperature=0.7,
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)
# Define a simple chat prompt
messages = [{"role": "user", "content": "Explain LangChain integration with Gemini API."}]
# Generate a completion
response = chat.invoke(messages)
print(response.content)
Output
LangChain integrates with Gemini API by using the ChatOpenAI wrapper, allowing you to call Gemini models seamlessly for chat-based AI tasks.
Common variations
You can switch models by changing model to gemini-2.0-flash for faster responses and lower latency. LangChain also supports async calls with await chat.ainvoke() inside async functions.
Example for async usage:
import asyncio
from langchain_openai import ChatOpenAI
async def main():
    chat = ChatOpenAI(
        model="gemini-2.0-flash",
        temperature=0.5,
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
    messages = [{"role": "user", "content": "What is LangChain?"}]
    # ainvoke returns an AIMessage, just like the synchronous invoke
    response = await chat.ainvoke(messages)
    print(response.content)

asyncio.run(main())
Output
LangChain is a framework that simplifies building applications with language models like Gemini by providing composable chains and prompt management.
Troubleshooting
- If you get authentication errors, verify that OPENAI_API_KEY is correctly set in your environment and contains a valid Gemini API key.
- For model-not-found errors, confirm you are using a valid Gemini model name such as gemini-1.5-pro or gemini-2.0-flash.
- For timeouts, switch to the faster gemini-2.0-flash model or raise the client's request timeout (ChatOpenAI accepts a timeout parameter).
Key Takeaways
- Use ChatOpenAI from langchain_openai, pointed at Gemini's OpenAI-compatible endpoint, to access Gemini models in LangChain.
- Set your Gemini API key as OPENAI_API_KEY in the environment for authentication.
- Switch between gemini-1.5-pro and gemini-2.0-flash for different speed and quality trade-offs.