How to · Intermediate · 4 min read

How to use LangChain with function calling

Quick answer
Use LangChain's OpenAI chat model with the tools parameter to enable function calling. Define your functions as tools with a JSON schema, pass them to the chat completion call, and handle tool_calls in the response to invoke your functions programmatically.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install langchain_openai "openai>=1.0"

Setup

Install the required packages and set your OpenAI API key as an environment variable.

  • Install LangChain OpenAI bindings and OpenAI SDK:
bash
pip install langchain_openai "openai>=1.0"
output
Collecting langchain_openai
Collecting openai
Successfully installed langchain_openai openai

Step by step

Define your function as a tool with a JSON schema, then create a LangChain OpenAI client and call chat.completions.create with the tools parameter. Detect tool_calls in the response to invoke the function and send the result back to the model.

python
import os
import json
from openai import OpenAI

# Define a simple tool for function calling
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g. San Francisco, CA"}
            },
            "required": ["location"]
        }
    }
}]

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# User prompt asking for weather
messages = [{"role": "user", "content": "What's the weather in New York?"}]

# First call to the model with tools enabled
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    tools=tools
)

choice = response.choices[0]

if choice.finish_reason == "tool_calls":
    tool_call = choice.message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)
    location = args.get("location")

    # Simulate function execution
    weather_result = f"The current weather in {location} is sunny, 75°F."

    # Send the tool result back to the model; with openai>=1.0 the reply
    # uses role "tool" and must reference the matching tool_call_id
    followup_messages = messages + [choice.message.to_dict()] + [
        {"role": "tool", "tool_call_id": tool_call.id, "content": weather_result}
    ]

    final_response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=followup_messages
    )

    print("Assistant:", final_response.choices[0].message.content)
else:
    print("Assistant:", choice.message.content)
output
Assistant: The current weather in New York is sunny, 75°F.
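With several tools registered, a small dispatch table keeps the handling generic instead of hard-coding one function name. A minimal sketch, assuming a local Python implementation per declared tool name; the SimpleNamespace objects stand in for the SDK's tool_call objects so the snippet runs without an API call:

```python
import json
from types import SimpleNamespace

# Local implementations, keyed by the names declared in the tools schema
def get_current_weather(location: str) -> str:
    return f"The current weather in {location} is sunny, 75°F."

TOOL_REGISTRY = {"get_current_weather": get_current_weather}

def dispatch_tool_calls(tool_calls):
    """Run each requested tool and build role-"tool" reply messages."""
    replies = []
    for tc in tool_calls:
        fn = TOOL_REGISTRY[tc.function.name]
        args = json.loads(tc.function.arguments)
        replies.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": fn(**args),
        })
    return replies

# Stand-in for choice.message.tool_calls from a real response
fake_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(
        name="get_current_weather",
        arguments='{"location": "New York, NY"}',
    ),
)
print(dispatch_tool_calls([fake_call])[0]["content"])
```

In the real flow you would append the returned messages after the assistant turn, e.g. `messages + [choice.message.to_dict()] + dispatch_tool_calls(choice.message.tool_calls)`, before the follow-up completion call.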

Common variations

You can make the calls asynchronous with AsyncOpenAI, switch to other models such as gpt-4o, or stream responses. You can also register multiple tools for more complex workflows.

python
import asyncio
import os
import json
from openai import AsyncOpenAI

async def async_function_calling():
    tools = [{
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get the current time",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    }]

    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

    messages = [{"role": "user", "content": "What time is it?"}]

    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools
    )

    choice = response.choices[0]

    if choice.finish_reason == "tool_calls":
        tool_call = choice.message.tool_calls[0]
        # Simulate time function
        time_result = "It is 3:00 PM UTC."

        followup_messages = messages + [choice.message.to_dict()] + [
            {"role": "tool", "tool_call_id": tool_call.id, "content": time_result}
        ]

        final_response = await client.chat.completions.create(
            model="gpt-4o",
            messages=followup_messages
        )

        print("Assistant:", final_response.choices[0].message.content)
    else:
        print("Assistant:", choice.message.content)

asyncio.run(async_function_calling())
output
Assistant: It is 3:00 PM UTC.

Troubleshooting

  • If you get no tool_calls in the response, ensure your tools parameter is correctly formatted and the model supports function calling.
  • Check your API key and environment variables if authentication errors occur.
  • Use finish_reason == "tool_calls" to detect when to invoke your functions.
  • With openai>=1.0, send tool results back with role "tool" and the matching tool_call_id; the older role "function" belongs to the deprecated functions API.
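Malformed arguments are another common failure: the model can occasionally emit argument strings that are not valid JSON, so wrap the parse in a try/except rather than calling json.loads unguarded. A minimal sketch, with a SimpleNamespace standing in for the SDK's tool_call object:

```python
import json
from types import SimpleNamespace

def safe_parse_args(tool_call):
    """Parse tool-call arguments, returning None on invalid JSON."""
    try:
        return json.loads(tool_call.function.arguments)
    except json.JSONDecodeError:
        return None

good = SimpleNamespace(function=SimpleNamespace(arguments='{"location": "NY"}'))
bad = SimpleNamespace(function=SimpleNamespace(arguments='{"location": '))

print(safe_parse_args(good))  # {'location': 'NY'}
print(safe_parse_args(bad))   # None
```

On a None result you can re-prompt the model or return an error message in the role-"tool" reply so it can retry.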

Key Takeaways

  • Use the tools parameter in LangChain/OpenAI chat calls to enable function calling.
  • Detect tool_calls in the response to trigger your function execution and feed results back to the model.
  • You can implement both synchronous and asynchronous function calling workflows; use AsyncOpenAI for the async client.
  • Always define your functions with JSON schema inside the tools list for proper integration.
  • Check finish_reason to handle function calls correctly and avoid missing tool invocation.
Verified 2026-04 · gpt-4o-mini, gpt-4o