How-to · Intermediate · 3 min read

How to migrate LangChain AgentExecutor to LangGraph

Quick answer
To migrate from LangChain's AgentExecutor to LangGraph, replace the executor's run loop with a LangGraph graph: either use the prebuilt langgraph.prebuilt.create_react_agent as a near drop-in replacement, or model the workflow explicitly as nodes and edges on a StateGraph. The graph-based model makes agent workflows more modular, inspectable, and maintainable.
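As a mental model for what "nodes and edges" buys you, here is a toy sketch in plain Python — this is deliberately not the LangGraph API, just an illustration of graph-style execution where each step is a function over shared state and its return value names the next node:

```python
# Toy illustration of graph-style execution (NOT the LangGraph API):
# nodes are functions over a shared state dict; the returned name is the edge.

def llm_node(state):
    # Stand-in for a model call that plans a tool use
    state["plan"] = f"look up: {state['question']}"
    return "search"  # edge: go to the search node next

def search_node(state):
    state["answer"] = f"Search results for: {state['plan']}"
    return None  # no outgoing edge: stop

NODES = {"llm": llm_node, "search": search_node}

def run_graph(start, state):
    current = start
    while current is not None:
        current = NODES[current](state)
    return state

result = run_graph("llm", {"question": "What is LangGraph?"})
print(result["answer"])  # → Search results for: look up: What is LangGraph?
```

Because each node is an ordinary function and routing is explicit, individual steps can be tested and rearranged without touching an opaque executor loop — which is the core of the migration below.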

PREREQUISITES

  • Python 3.9+ (LangGraph requires 3.9 or newer)
  • OpenAI API key (free tier works)
  • pip install langchain langgraph langchain-openai

Setup

Install the required packages and set your environment variables for API keys.

bash
pip install -U langchain langgraph langchain-openai
export OPENAI_API_KEY="your-key-here"

Step by step

This example shows how to migrate a simple tool-using agent built with AgentExecutor and an OpenAI chat model to LangGraph's graph-based execution.

python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", api_key=os.environ["OPENAI_API_KEY"])

# A simple tool shared by both versions
@tool
def search(query: str) -> str:
    """Useful for searching the web."""
    # Dummy search tool implementation
    return f"Search results for: {query}"

tools = [search]

# --- Original LangChain AgentExecutor setup ---
from langchain.agents import AgentExecutor, create_tool_calling_agent

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

# Run original agent
print("Original AgentExecutor output:")
print(agent_executor.invoke({"input": "What is LangGraph?"})["output"])

# --- Migration to LangGraph ---
from langgraph.prebuilt import create_react_agent

# create_react_agent returns a compiled graph that loops between the
# model and its tools, replacing AgentExecutor's run loop
langgraph_agent = create_react_agent(model, tools)

# Run the graph; state is a list of messages instead of a single input string
print("\nMigrated LangGraph output:")
result = langgraph_agent.invoke({"messages": [("human", "What is LangGraph?")]})
print(result["messages"][-1].content)
output
Original AgentExecutor output:
<model answer drawing on "Search results for: What is LangGraph?">

Migrated LangGraph output:
<model answer drawing on "Search results for: What is LangGraph?">

Common variations

LangGraph graphs support async execution out of the box: call ainvoke (or astream) on the compiled graph instead of writing custom async nodes. To switch models, change the model parameter of ChatOpenAI. LangGraph also supports complex topologies beyond linear flows — branches, cycles, and conditional edges — via StateGraph.

python
import asyncio

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(model="gpt-4o")
langgraph_agent = create_react_agent(model, [])  # no tools for this demo

async def run_async_graph():
    # ainvoke is the async counterpart of invoke on a compiled graph
    result = await langgraph_agent.ainvoke(
        {"messages": [("human", "Explain LangGraph.")]}
    )
    print(result["messages"][-1].content)

asyncio.run(run_async_graph())
output
<OpenAI async model response text>
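Conditional routing — choosing the next node from the current state — is the main thing AgentExecutor's fixed loop cannot express directly. Here is a plain-Python sketch of the idea; it is illustrative only, and in the real API this role is played by StateGraph's conditional edges:

```python
# Toy sketch of conditional routing between nodes (illustrative only;
# LangGraph expresses this with conditional edges on a StateGraph).

def model_node(state):
    # Pretend the model asks for a tool whenever it has no answer yet
    state["needs_tool"] = "answer" not in state
    return state

def tool_node(state):
    state["answer"] = f"Search results for: {state['question']}"
    return state

def route(state):
    # Conditional edge: pick the next node based on the current state
    return "tool" if state["needs_tool"] else "end"

def run(state):
    state = model_node(state)
    while route(state) != "end":
        state = tool_node(state)
        state = model_node(state)
    return state

result = run({"question": "What is LangGraph?"})
print(result["answer"])  # → Search results for: What is LangGraph?
```

The loop terminates only when the routing function says so, which is exactly how a ReAct-style agent alternates between model calls and tool calls until the model stops requesting tools.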

Troubleshooting

  • If you see authentication errors, verify that OPENAI_API_KEY is set correctly in your environment.
  • If imports fail, confirm that langgraph and langchain-openai are installed (pip install -U langgraph langchain-openai).
  • For custom graphs, add every node with add_node and connect it with add_edge before calling compile(); a graph with no entry point will not compile.
  • Check model names for typos; use current models such as gpt-4o.
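For the first bullet, it can help to fail fast at startup with a clear message instead of a deep stack trace from inside the client. A minimal sketch (the helper name is hypothetical, not part of any library):

```python
import os

def require_api_key(name="OPENAI_API_KEY"):
    """Raise a clear error early if the named key is missing or empty."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it before running the agent."
        )
    return value
```

Call it once at the top of your script, before constructing the model.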

Key Takeaways

  • Replace AgentExecutor orchestration with a LangGraph graph; langgraph.prebuilt.create_react_agent is a near drop-in starting point.
  • Define custom workflows as nodes and edges on a StateGraph, then compile() and invoke them.
  • Async execution (ainvoke/astream) and complex topologies (branches, cycles, conditional edges) are supported for advanced use cases.
Verified 2026-04 · gpt-4o