How to create a chain with prompt and LLM in LangChain
Quick answer
Use ChatPromptTemplate to define your prompt and ChatOpenAI as the LLM in LangChain. Then create an LLMChain by passing the prompt and LLM instances, and call run() with input variables to generate completions.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install langchain_openai>=0.2 openai>=1.0
Setup
Install the required packages and set your OpenAI API key as an environment variable.
- Install LangChain OpenAI integration and OpenAI SDK:
pip install langchain_openai openai
Step by step
This example creates a prompt template with a variable, initializes the GPT-4o model, builds an LLMChain, and runs it with input to get a response.
import os
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
# Set your OpenAI API key in environment variable before running
# export OPENAI_API_KEY='your_api_key'
# Define a prompt template with a variable
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}.")
# Initialize the LLM with GPT-4o
llm = ChatOpenAI(model_name="gpt-4o", temperature=0.7, openai_api_key=os.environ["OPENAI_API_KEY"])
# Create the chain with prompt and LLM
chain = LLMChain(llm=llm, prompt=prompt)
# Run the chain with input variable
result = chain.run({"topic": "computers"})
print(result)
Output
Why did the computer go to therapy? Because it had too many bytes of emotional baggage!
Common variations
You can use different models like gpt-4o-mini or claude-3-5-sonnet-20241022 with their respective LangChain integrations. Async calls and streaming outputs are also supported by LangChain's LLM classes.
from langchain_openai import ChatOpenAI
# Using a smaller model
llm_mini = ChatOpenAI(model_name="gpt-4o-mini", temperature=0.5, openai_api_key=os.environ["OPENAI_API_KEY"])
# Async example (Python 3.8+)
import asyncio
async def async_run():
    # ainvoke is the async counterpart of invoke on chat models
    result = await llm_mini.ainvoke("Say hello")
    print(result.content)
asyncio.run(async_run())
Output
Hello! How can I assist you today?
Troubleshooting
- If you get an authentication error, verify your OPENAI_API_KEY environment variable is set correctly.
- If the chain fails to run, ensure your prompt variables match the input dictionary keys.
- For rate limit errors, add retries with exponential backoff or switch to a smaller model; temperature has no effect on rate limits.
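One way to catch the variable-mismatch problem before running the chain is to compare the template's placeholders against your input keys. A plain-Python sketch using only the standard library (not a LangChain API):

```python
from string import Formatter

def template_vars(template: str) -> set:
    # Collect every {placeholder} name that str.format-style parsing finds
    return {name for _, name, _, _ in Formatter().parse(template) if name}

template = "Tell me a joke about {topic}."
inputs = {"topic": "computers"}

missing = template_vars(template) - inputs.keys()
print(missing)  # set() -> every placeholder has a matching input key
```

If `missing` is non-empty, the chain would fail at run time with a missing-variable error, so checking up front gives a clearer message.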
Key Takeaways
- Use ChatPromptTemplate to create reusable prompt templates with variables.
- Instantiate ChatOpenAI with your API key and desired model to create the LLM.
- Combine the prompt and LLM in an LLMChain to run completions with input variables.
- LangChain supports async calls and multiple models for flexible integration.
- Always verify environment variables and input keys to avoid common errors.