How to build a simple LLM chain in LangChain
Quick answer
Use the `ChatOpenAI` class from `langchain_openai` to instantiate an LLM, then create an `LLMChain` with a prompt template to build a simple chain. Call the chain with input variables to get the generated output.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install langchain_openai>=0.2 openai>=1.0
Setup
Install the required packages and set your OpenAI API key as an environment variable.
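In a POSIX shell, exporting the key might look like the following (the key value is a placeholder, not a real key):

```shell
# Make the key available to Python via os.environ (placeholder value)
export OPENAI_API_KEY="sk-your-key-here"
```

Add the line to your shell profile if you want it set in every session.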
- Install LangChain OpenAI integration and OpenAI SDK:
```shell
pip install langchain_openai openai
```
Step by step
This example creates a simple LLM chain that takes a user's name and returns a greeting using ChatOpenAI and LLMChain.
```python
import os
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

# Initialize the LLM with your OpenAI API key
llm = ChatOpenAI(model="gpt-4o", temperature=0, api_key=os.environ["OPENAI_API_KEY"])

# Define the prompt template with an input variable
prompt = ChatPromptTemplate.from_messages([
    HumanMessagePromptTemplate.from_template("Say hello to {name}!")
])

# Create the LLM chain
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain with input
result = chain.run({"name": "Alice"})
print(result)
```
Output:
```
Hello, Alice!
```
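Under the hood, a chain like this does little more than fill the template with your inputs and pass the formatted text to the model. Here is a stdlib-only sketch of that idea with a stubbed model (no API call; `make_chain` and `stub_model` are invented for illustration, not part of LangChain):

```python
# Minimal sketch of a prompt-template chain. The "model" is a stub
# standing in for a real LLM call, so this runs without an API key.
def make_chain(template, model):
    def run(inputs):
        prompt_text = template.format(**inputs)  # substitute input variables
        return model(prompt_text)                # forward the prompt to the model
    return run

def stub_model(prompt_text):
    # Fake LLM: extract the name from the prompt and greet it.
    prefix = "Say hello to "
    name = prompt_text[len(prefix):].rstrip("!")
    return "Hello, {}!".format(name)

chain = make_chain("Say hello to {name}!", stub_model)
print(chain({"name": "Alice"}))  # prints: Hello, Alice!
```

Swapping `stub_model` for a function that calls a real API gives you the same structure LangChain wraps in `LLMChain`.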
Common variations
You can customize the chain by:
- Using a different model such as `gpt-4o-mini` for faster, cheaper responses.
- Running chains asynchronously with `await chain.arun()`.
- Streaming output by enabling `streaming=True` in `ChatOpenAI`.
```python
import asyncio

async def async_chain():
    llm_async = ChatOpenAI(model="gpt-4o-mini", temperature=0, streaming=True, api_key=os.environ["OPENAI_API_KEY"])
    chain_async = LLMChain(llm=llm_async, prompt=prompt)
    response = await chain_async.arun({"name": "Bob"})
    print(response)

asyncio.run(async_chain())
```
Output:
```
Hello, Bob!
```
Troubleshooting
- If you see authentication errors, verify your `OPENAI_API_KEY` environment variable is set correctly.
- If the chain returns unexpected output, check your prompt template syntax and input variables.
- For rate limit errors, consider lowering request frequency or switching to a smaller model.
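A common way to handle rate limits is exponential backoff: retry the call with increasing delays. A stdlib-only sketch follows; the `RuntimeError` stand-in and the `call_with_backoff`/`flaky` names are illustrative (a real OpenAI client raises its own rate-limit exception type):

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on a rate-limit error.

    fn is assumed to raise RuntimeError when rate-limited (illustrative;
    substitute the client's real rate-limit exception in practice).
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise
            # Wait base_delay * 2^attempt, plus a little jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Demo: a call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # prints: ok
```

Wrapping `chain.run` in a helper like this keeps retry logic out of your application code.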
Key takeaways
- Use `ChatOpenAI` and `LLMChain` to build simple chains easily.
- Define prompts with `ChatPromptTemplate` and input variables for flexibility.
- Leverage async and streaming features for advanced use cases.
- Always set your API key securely via environment variables.
- Test prompt templates carefully to ensure expected outputs.