How-to · Intermediate · 4 min read

How to use E2B with LangChain

Quick answer
Use the e2b_code_interpreter package to create a Sandbox instance and run code securely. Integrate it with LangChain by wrapping sandbox calls inside LangChain custom tools or chains to execute code safely within your workflows.

PREREQUISITES

  • Python 3.8+
  • pip install "langchain>=0.2.0" (quote the specifier so the shell does not treat > as a redirect)
  • pip install e2b-code-interpreter
  • OpenAI API key (for LangChain LLM usage)
  • E2B API key (set as environment variable E2B_API_KEY)

Setup

Install the required packages langchain and e2b-code-interpreter. Set your environment variables for OPENAI_API_KEY and E2B_API_KEY before running the code.

bash
pip install langchain e2b-code-interpreter
output
Collecting langchain
Collecting e2b-code-interpreter
Successfully installed langchain-0.2.0 e2b-code-interpreter-1.0.0
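Both keys must be exported before the script runs. For example (the values below are placeholders; substitute your real keys):

```bash
# Replace the placeholder values with your actual keys
export OPENAI_API_KEY="sk-placeholder"
export E2B_API_KEY="e2b-placeholder"
```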

Step by step

This example creates an e2b_code_interpreter.Sandbox instance and integrates it with LangChain by defining a custom tool function that runs Python code securely in the sandbox. An LLMChain with an OpenAI chat model then generates code for a task, and the sandbox tool executes it.

python
import os
from e2b_code_interpreter import Sandbox
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Initialize the E2B sandbox with the API key from the environment
sandbox = Sandbox(api_key=os.environ["E2B_API_KEY"])

# Define a function to run code in the sandbox
# This will be used as a LangChain tool

def run_code_tool(code: str) -> str:
    execution = sandbox.run_code(code)
    # `text` holds the last expression's value; fall back to captured stdout
    # for code that only prints
    return execution.text or "".join(execution.logs.stdout)

# Initialize the LangChain chat model (gpt-4o-mini is a chat model)
llm = ChatOpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])

# Define a prompt template that asks the LLM to generate Python code
prompt = PromptTemplate(
    input_variables=["task"],
    template="""
You are a Python coding assistant. Generate Python code to perform the following task:
{task}
Return only runnable Python code, with no explanation or markdown fences.
"""
)

# Create an LLMChain with the prompt
chain = LLMChain(llm=llm, prompt=prompt)

# Example task
task_description = "Calculate the factorial of 5 and print the result."

# Generate Python code from the LLM
generated_code = chain.invoke({"task": task_description})["text"]
print("Generated code:\n", generated_code)

# Run the generated code securely in the E2B sandbox
output = run_code_tool(generated_code)
print("Sandbox output:\n", output)

# Terminate the sandbox when done to free resources
sandbox.kill()
output
Generated code:

factorial = 1
for i in range(1, 6):
    factorial *= i
print(f"Factorial of 5 is {factorial}")

Sandbox output:
Factorial of 5 is 120
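In practice, chat models often wrap generated code in markdown fences even when asked not to, and running the fenced text verbatim in the sandbox fails. A small stdlib-only helper (hypothetical name strip_fences, not part of either library) can clean the code first:

```python
import re

def strip_fences(text: str) -> str:
    """Remove a surrounding markdown code fence, if present."""
    match = re.search(r"```(?:python)?\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()
```

Call it on the LLM output before handing it to the sandbox, e.g. `run_code_tool(strip_fences(generated_code))`.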

Common variations

  • Use the async API (AsyncSandbox with await sandbox.run_code(...)) for asynchronous execution.
  • Integrate the sandbox as a LangChain Tool or Agent for more complex workflows.
  • Use different LLM models by changing model_name in OpenAI initialization.
python
import asyncio

from e2b_code_interpreter import AsyncSandbox

async def async_run():
    async_sandbox = await AsyncSandbox.create()
    execution = await async_sandbox.run_code(generated_code)
    print("Async sandbox output:\n", execution.text or "".join(execution.logs.stdout))
    await async_sandbox.kill()

asyncio.run(async_run())
output
Async sandbox output:
Factorial of 5 is 120

Troubleshooting

  • If you see AuthenticationError, verify your E2B_API_KEY environment variable is set correctly.
  • If code execution hangs or times out, check your network connection and sandbox usage limits.
  • Always terminate the sandbox (sandbox.kill()) to release resources after use.

Key Takeaways

  • Use e2b_code_interpreter.Sandbox to run Python code securely within LangChain workflows.
  • Wrap sandbox calls as LangChain tools or functions for seamless integration.
  • Always set E2B_API_KEY and OPENAI_API_KEY in your environment variables.
  • Terminate the sandbox after use to free resources and avoid leaks.
  • Async execution is supported for scalable and responsive applications.
Verified 2026-04 · gpt-4o-mini