How to integrate LangChain with FastAPI
Quick answer
Use LangChain to create a chat or LLM chain and expose it via a FastAPI endpoint. Instantiate a LangChain ChatOpenAI client with your OpenAI API key, then define an async FastAPI route that calls the chain and returns the AI response.
PREREQUISITES
- Python 3.8+
- OpenAI API key (free tier works)
- pip install fastapi uvicorn langchain_openai openai
Setup
Install the required packages and set your OpenAI API key as an environment variable.
pip install fastapi uvicorn langchain_openai openai
# Set your API key in your shell environment
export OPENAI_API_KEY="your-api-key-here"
Step by step
Create a FastAPI app that uses LangChain's ChatOpenAI to generate AI responses. Define a POST endpoint that accepts user input and returns the AI's reply.
import os
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
app = FastAPI()
# Initialize LangChain ChatOpenAI client
chat = ChatOpenAI(
    model_name="gpt-4o",
    openai_api_key=os.environ["OPENAI_API_KEY"]
)

class Query(BaseModel):
    prompt: str
@app.post("/chat")
async def chat_endpoint(query: Query):
    messages = [HumanMessage(content=query.prompt)]
    # agenerate expects a list of message lists (one list per prompt)
    response = await chat.agenerate([messages])
    return {"response": response.generations[0][0].text}
# To run:
# uvicorn filename:app --reload
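Once the server is running, you can exercise the endpoint from another terminal. The command below is a minimal example, assuming uvicorn is serving on its default address of 127.0.0.1:8000.
curl -X POST http://127.0.0.1:8000/chat -H "Content-Type: application/json" -d '{"prompt": "Hello"}'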
output
{
"response": "Hello! How can I assist you today?"
}
Common variations
- Use synchronous calls with chat.generate() instead of agenerate(), as in the example below.
- Switch to other models like gpt-4.1, or Anthropic's claude-3-5-sonnet-20241022, by changing the client (see the sketch after the synchronous example).
- Add streaming responses with FastAPI's StreamingResponse for real-time output (a streaming sketch also follows below).
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Synchronous example
chat_sync = ChatOpenAI(
    model_name="gpt-4.1",
    openai_api_key=os.environ["OPENAI_API_KEY"]
)
# generate expects a list of message lists, one per prompt
response = chat_sync.generate([[HumanMessage(content="Hello")]])
print(response.generations[0][0].text)
output
Hello! How can I help you today?
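To switch to Anthropic's model, only the client changes; the route code stays the same. A minimal sketch, assuming langchain_anthropic is installed (pip install langchain_anthropic) and ANTHROPIC_API_KEY is set in your environment:

from langchain_anthropic import ChatAnthropic

# Drop-in replacement for the ChatOpenAI client used above
chat = ChatAnthropic(model="claude-3-5-sonnet-20241022")

For streaming, FastAPI's StreamingResponse can relay tokens as the model produces them. A minimal sketch, reusing the app, chat, and Query objects from the step-by-step example; the /chat/stream route name here is just an illustration:

from fastapi.responses import StreamingResponse
from langchain_core.messages import HumanMessage

@app.post("/chat/stream")
async def chat_stream(query: Query):
    async def token_generator():
        # astream yields message chunks as they arrive from the model
        async for chunk in chat.astream([HumanMessage(content=query.prompt)]):
            yield chunk.content
    # Relay each chunk to the client as plain text
    return StreamingResponse(token_generator(), media_type="text/plain")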
Troubleshooting
- If you get authentication errors, verify your OPENAI_API_KEY environment variable is set correctly (a quick check is sketched below this list).
- For import errors, ensure you installed langchain_openai and fastapi with compatible versions.
- If the server doesn't start, check you are running uvicorn filename:app --reload from the correct directory.
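One quick way to confirm the key is visible to the Python process, assuming the OPENAI_API_KEY variable name used throughout this guide:

import os

# Prints True if the key is set in this environment, False otherwise
print("OPENAI_API_KEY set:", bool(os.environ.get("OPENAI_API_KEY")))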
Key Takeaways
- Use LangChain's ChatOpenAI client with FastAPI to build AI chat endpoints.
- Always load API keys from environment variables for security and portability.
- Async endpoints with agenerate() improve FastAPI performance for AI calls.
- Switch models easily by changing the model_name parameter in LangChain clients.
- Test your FastAPI app locally with uvicorn before deployment.