How to use RunnableParallel in LangChain
RunnableParallel in LangChain runs multiple Runnable components concurrently on the same input and combines their outputs. Instantiate RunnableParallel with a mapping of names to Runnable objects, then call it with input data to get a dict of results, one per runnable, all executed in parallel.
Prerequisites
- Python 3.8+
- pip install langchain>=0.2
- OpenAI API key (free tier works)
- pip install openai>=1.0
Setup
Install LangChain, the OpenAI integration package (langchain-openai), and the OpenAI Python SDK, then set your OpenAI API key as an environment variable.
- Install packages:
pip install langchain langchain-openai openai
- Set the environment variable in your shell:
export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)
Step by step
This example shows how to create two separate OpenAI chat runnables and run them in parallel using RunnableParallel. The outputs are collected as a list.
import os
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableParallel
# Initialize two separate chat models
chat1 = ChatOpenAI(model="gpt-4o", temperature=0, api_key=os.environ["OPENAI_API_KEY"])
chat2 = ChatOpenAI(model="gpt-4o-mini", temperature=0.5, api_key=os.environ["OPENAI_API_KEY"])
# Create RunnableParallel with the two chat runnables
parallel = RunnableParallel({"gpt4o": chat1, "gpt4o_mini": chat2})
# Input prompt
prompt = "Explain the benefits of using RunnableParallel in LangChain."
# Run both models in parallel
results = parallel.invoke(prompt)
# Print results from each model (a dict keyed by the branch names above)
for name, res in results.items():
    print(f"Result from {name}:\n{res.content}\n")

Result from gpt4o:
RunnableParallel allows concurrent execution of multiple runnables, improving efficiency and throughput.

Result from gpt4o_mini:
Using RunnableParallel in LangChain enables parallel AI calls, reducing latency and combining outputs effectively.
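Conceptually, the fan-out above works like submitting each named branch to a thread pool against the same input and gathering the outputs by name. The sketch below is a plain-Python stand-in, not LangChain's actual implementation; the helper `run_parallel` and the `steps` dict are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch only -- NOT LangChain's implementation.
# Each named step runs on its own thread against the same input,
# and the outputs are collected into a dict keyed by step name,
# mirroring the dict that RunnableParallel returns.
def run_parallel(steps, value):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in steps.items()}
        return {name: f.result() for name, f in futures.items()}

steps = {
    "upper": lambda s: s.upper(),
    "length": lambda s: len(s),
}
print(run_parallel(steps, "langchain"))  # {'upper': 'LANGCHAIN', 'length': 9}
```

Because every branch receives the same input value, the branches stay independent and can finish in any order, while the returned dict keeps the names stable.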
Common variations
You can use RunnableParallel with different types of runnables, including custom ones or chains. It supports async usage with await parallel.ainvoke(input). You can also combine more than two runnables, or mix models from different providers such as OpenAI and Anthropic.
import asyncio

async def async_example():
    results = await parallel.ainvoke(prompt)
    for name, res in results.items():
        print(f"Async result from {name}:\n{res.content}\n")

asyncio.run(async_example())

Async result from gpt4o:
RunnableParallel allows concurrent execution of multiple runnables, improving efficiency and throughput.

Async result from gpt4o_mini:
Using RunnableParallel in LangChain enables parallel AI calls, reducing latency and combining outputs effectively.
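The async path can be pictured the same way using asyncio.gather. Again, this is a minimal conceptual sketch; the helper `run_parallel_async` and the toy coroutines are invented for illustration and are not how RunnableParallel's ainvoke is implemented:

```python
import asyncio

# Conceptual sketch: await every branch concurrently, then zip the
# results back to their names, mirroring RunnableParallel's dict output.
async def run_parallel_async(steps, value):
    names = list(steps)
    results = await asyncio.gather(*(steps[name](value) for name in names))
    return dict(zip(names, results))

async def shout(s):
    return s.upper()

async def count(s):
    return len(s)

out = asyncio.run(run_parallel_async({"shout": shout, "count": count}, "hi"))
print(out)  # {'shout': 'HI', 'count': 2}
```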
Troubleshooting
If you get errors like AttributeError or TypeError, ensure every component passed to RunnableParallel implements the invoke method (that is, it is a Runnable), and verify your API keys are set correctly in the environment. For async errors such as "coroutine was never awaited", make sure you call ainvoke with await inside a coroutine and start that coroutine with asyncio.run() (or from an existing event loop).
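One way to catch the AttributeError case early is a small pre-flight check before building the parallel. The class and dict names below are purely illustrative:

```python
# Hypothetical pre-flight check: verify each component has a callable
# invoke method before wiring it into RunnableParallel.
class GoodRunnable:
    def invoke(self, value):
        return value

components = {"good": GoodRunnable(), "bad": object()}  # 'bad' lacks invoke
missing = [name for name, comp in components.items()
           if not callable(getattr(comp, "invoke", None))]
print(missing)  # ['bad']
```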
Key Takeaways
- Use RunnableParallel to run multiple LangChain runnables concurrently and get their combined results as a dict.
- You can make both synchronous (invoke) and asynchronous (ainvoke) calls with RunnableParallel.
- Ensure all runnables implement the invoke method and API keys are set in environment variables.
- Mix different models or chains inside RunnableParallel for flexible parallel AI workflows.