How-to · Intermediate · 3 min read

How to chain multiple steps in LangChain LCEL

Quick answer
Use LangChain's LCEL (LangChain Expression Language) to build multi-step chains by composing Runnables (prompts, models, and output parsers) with the | pipe operator. Each step's output flows into the next step's input, so a complex workflow reads as a single pipeline.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "langchain-openai>=0.2.0" (quoted so the shell does not treat >= as a redirect)

Setup

Install the required LangChain packages and set your OpenAI API key as an environment variable.

  • Run pip install langchain-openai to install the LangChain OpenAI integration; it pulls in langchain-core, which provides the LCEL primitives.
  • Set your API key in your shell: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows).
bash
pip install "langchain-openai>=0.2.0"

Step by step

This example chains two steps: first generating a blog post topic, then generating a blog intro based on that topic. The first sub-chain's string output is mapped into the second prompt's input with a small lambda, so the | operator carries data straight through the pipeline.

python
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize the OpenAI chat model
llm = ChatOpenAI(model="gpt-4o", temperature=0.7, api_key=os.environ["OPENAI_API_KEY"])

# Step 1: generate a topic
topic_prompt = ChatPromptTemplate.from_template(
    "Generate a blog post topic about AI in healthcare."
)

# Step 2: write an intro for the topic produced by step 1
intro_prompt = ChatPromptTemplate.from_template(
    "Write a compelling introduction for a blog post titled: {topic}"
)

# Compose the pipeline with the | operator; the lambda maps
# step 1's string output into step 2's prompt variables
chain = (
    topic_prompt
    | llm
    | StrOutputParser()
    | (lambda topic: {"topic": topic})
    | intro_prompt
    | llm
    | StrOutputParser()
)

# Run the chain
result = chain.invoke({})
print("Blog intro:\n", result)
output
Blog intro:
 In recent years, artificial intelligence has revolutionized healthcare by enhancing diagnostics, personalizing treatment, and improving patient outcomes. This blog explores how AI is transforming medicine and what the future holds.
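Conceptually, the pipe operator is plain function composition: each step transforms the previous step's output. Here is a minimal pure-Python sketch of the same data flow, where the two functions are hypothetical stand-ins for the LLM calls, so it runs without an API key:

```python
def make_topic(subject: str) -> str:
    # Stand-in for the first LLM call
    return f"How AI Is Transforming {subject}"

def make_intro(topic: str) -> str:
    # Stand-in for the second LLM call
    return f"Introduction for: {topic}"

# Step 1's output feeds directly into step 2, which is what | expresses in LCEL
def chain(subject: str) -> str:
    return make_intro(make_topic(subject))

print(chain("Healthcare"))
# Introduction for: How AI Is Transforming Healthcare
```

Any callable dropped into an LCEL pipeline is wrapped the same way, which is why the lambda in the example above can sit between two prompts.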

Common variations

You can chain more than two steps by piping in additional prompt/model pairs, and fan out to steps that share an input with RunnableParallel. Every LCEL chain exposes async variants (ainvoke, astream) out of the box. To use a different model, change the model parameter in ChatOpenAI; token-by-token streaming is available by iterating chain.astream().

python
import os
import asyncio
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

async def async_chain():
    llm = ChatOpenAI(model="gpt-4o", temperature=0.7, api_key=os.environ["OPENAI_API_KEY"])
    parse = StrOutputParser()

    # Step 1: generate a title
    title = ChatPromptTemplate.from_template("Generate a creative story title.") | llm | parse

    # Steps 2 and 3 both consume the title, so they can run in parallel
    summary = ChatPromptTemplate.from_template("Write a summary for the story titled: {title}") | llm | parse
    hashtags = ChatPromptTemplate.from_template("Suggest three hashtags for the story titled: {title}") | llm | parse

    chain = (
        title
        | (lambda t: {"title": t})
        | RunnableParallel(summary=summary, hashtags=hashtags)
    )

    result = await chain.ainvoke({})
    print(result["summary"] + "\nHashtags: " + result["hashtags"])

asyncio.run(async_chain())
output
Once upon a time in a distant galaxy, a young explorer discovers the secrets of the stars.
Hashtags: #SpaceAdventure #GalacticJourney #YoungExplorer
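To stream instead, replace ainvoke with astream and iterate the chunks as they arrive. The consumption loop looks like this sketch, which uses a hypothetical async generator in place of a real chain so it runs without an API key:

```python
import asyncio

async def fake_astream(text: str):
    # Stand-in for chain.astream(...): yields output chunk by chunk
    for word in text.split():
        yield word + " "

async def main() -> str:
    pieces = []
    async for chunk in fake_astream("AI is transforming healthcare"):
        pieces.append(chunk)  # a real app would print(chunk, end="", flush=True)
    return "".join(pieces).strip()

print(asyncio.run(main()))
# AI is transforming healthcare
```

With a real chain the loop body is identical; only the source of the chunks changes.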

Troubleshooting

  • If you see KeyError: 'OPENAI_API_KEY', ensure your environment variable is set correctly.
  • If a prompt raises an error about a missing input variable, make sure the dict produced by the previous step supplies every placeholder the template references.
  • For slow responses, reduce max_tokens or switch to a smaller model such as gpt-4o-mini; temperature controls randomness, not speed.
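A small guard at the top of your script turns the KeyError into an actionable message (require_api_key is a hypothetical helper, not part of LangChain):

```python
import os

def require_api_key(env=os.environ) -> str:
    # Fail fast with a clear message instead of a KeyError deep inside the client
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running the chain.")
    return key
```

Call it once before building the chain and pass the returned key to ChatOpenAI.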

Key Takeaways

  • Use the | pipe operator to define multi-step chains: each Runnable's output becomes the next Runnable's input.
  • Compose prompts, ChatOpenAI, and output parsers into a single pipeline, mapping between steps with a small lambda where the shapes differ.
  • Async (ainvoke) and streaming (astream) support enable more interactive and efficient multi-step workflows in LCEL.
  • Always validate your prompt input variables and environment variables to avoid runtime errors.
Verified 2026-04 · gpt-4o