How to · Intermediate · 3 min read

How to create custom LangChain tools

Quick answer
Create custom LangChain tools by subclassing BaseTool (or wrapping a plain function with Tool) and implementing the _run method for synchronous logic and, optionally, _arun for async. Register the tool with an agent to enable AI-driven function calling within LangChain workflows.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install langchain_openai langchain_community

Setup

Install the required packages and set your OpenAI API key as an environment variable.

  • Install LangChain and OpenAI SDK:
```bash
pip install langchain_openai langchain_community openai
```

Output:

```
Collecting langchain_openai
Collecting langchain_community
Collecting openai
Successfully installed langchain_openai langchain_community openai
```
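The setup above assumes your OpenAI API key is already exported. A minimal way to set and verify it (the key value below is a placeholder, not a real key):

```shell
# Replace the placeholder with your real key from the OpenAI dashboard
export OPENAI_API_KEY="sk-your-key-here"

# Confirm the variable is visible to Python
python -c "import os; print('set' if os.environ.get('OPENAI_API_KEY') else 'missing')"
```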

Step by step

Define a custom tool by subclassing BaseTool from langchain.tools and implementing the _run method for synchronous execution. Because BaseTool is a Pydantic model, the name and description fields must carry type annotations. Then create an agent and pass your tool to it.

```python
import os
from langchain.tools import BaseTool
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

# Custom tool example. BaseTool is a Pydantic model, so the
# name and description fields need type annotations.
class MultiplyByTwoTool(BaseTool):
    name: str = "multiply_by_two"
    description: str = "Multiplies a given integer by two."

    def _run(self, query: str) -> str:
        try:
            num = int(query.strip())
            return str(num * 2)
        except ValueError:
            return "Input must be an integer."

    async def _arun(self, query: str) -> str:
        # Async version; the logic is CPU-trivial, so delegating
        # to the sync implementation is fine here
        return self._run(query)

# Instantiate the tool
multiply_tool = MultiplyByTwoTool()

# Initialize the LLM client (ChatOpenAI also reads OPENAI_API_KEY
# from the environment automatically)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0, api_key=os.environ["OPENAI_API_KEY"])

# Create an agent with the custom tool
agent = initialize_agent(
    tools=[multiply_tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Run the agent with a prompt that triggers the tool
response = agent.run("Multiply 15 by two using the tool.")
print("Agent response:", response)
```

Output:

```
> Entering new AgentExecutor chain...
Thought: I need to use the multiply_by_two tool to multiply 15 by two.
Action: multiply_by_two
Action Input: 15
Observation: 30
Thought: I have the result, now to return it.
Final Answer: 30
Agent response: 30
```

Common variations

You can create async tools by implementing _arun for asynchronous calls. Use different models by changing the model parameter in ChatOpenAI. For streaming responses, integrate LangChain's streaming callbacks or use the OpenAI SDK directly.

```python
import asyncio

async def async_example():
    # arun is the async counterpart of run; it dispatches to _arun
    response = await agent.arun("Multiply 42 by two asynchronously.")
    print("Async agent response:", response)

asyncio.run(async_example())
```

Output:

```
Async agent response: 84
```

Troubleshooting

  • If the agent does not call your tool, ensure the tool's name and description clearly describe its purpose.
  • Check that your OpenAI API key is set correctly in os.environ["OPENAI_API_KEY"].
  • For async errors, verify you are calling agent.arun() inside an async context.

Key Takeaways

  • Subclass BaseTool and implement _run or _arun to create custom LangChain tools.
  • Register your tools with an agent to enable AI-driven function calling in LangChain workflows.
  • Use clear tool names and descriptions to help the agent decide when to invoke your tool.
  • Switch models or add streaming by configuring the ChatOpenAI client accordingly.
  • Always set your API key securely via environment variables to avoid credential leaks.
Verified 2026-04 · gpt-4o-mini