How-to · Beginner · 3 min read

How to trace LangChain calls with LangSmith

Quick answer
To trace LangChain calls with LangSmith, set the environment variables LANGCHAIN_TRACING_V2 to true, LANGCHAIN_API_KEY to your LangSmith API key, and LANGCHAIN_PROJECT to your project name. This enables automatic tracing of all LangChain calls without code changes.

Prerequisites

  • Python 3.8+
  • LangChain v0.2+ installed
  • LangSmith API key
  • langsmith package (pip install langsmith)

Setup

Install the langsmith package and set environment variables to enable tracing.

  • Install the LangSmith SDK:
bash
pip install langsmith
  • Set environment variables in your shell or .env file:
bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=your_langsmith_api_key
export LANGCHAIN_PROJECT=my-project
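If you prefer to configure tracing from Python rather than the shell, the same variables can be set with os.environ, as long as this runs before any LangChain objects are created (a minimal sketch; replace the placeholder key with your own):

```python
import os

# Equivalent to the shell exports above; set these before creating any
# LangChain objects so the tracer picks them up.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your_langsmith_api_key"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "my-project"

print("Project:", os.environ["LANGCHAIN_PROJECT"])
```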

Step by step

Here is a complete example showing how to enable LangChain tracing with LangSmith and run a simple LangChain prompt.

python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Environment variables must be set before running this script:
# LANGCHAIN_TRACING_V2=true
# LANGCHAIN_API_KEY=your_langsmith_api_key
# LANGCHAIN_PROJECT=my-project

# Initialize the chat model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Create a simple prompt template
prompt = ChatPromptTemplate.from_template("Say hello to {name}!")

# Compose the chain with the LCEL pipe operator
# (LLMChain is deprecated as of LangChain v0.2)
chain = prompt | llm

# Run the chain; the call is traced automatically
result = chain.invoke({"name": "LangSmith"})
print("Output:", result.content)

# All calls appear in the LangSmith dashboard under LANGCHAIN_PROJECT
output
Output: Hello, LangSmith!

Common variations

You can also use LangSmith tracing with asynchronous LangChain calls or different LLM models. The tracing works automatically once environment variables are set.

  • Async example: use await chain.ainvoke(...) in an async function.
  • Use other LLMs like ChatAnthropic or ChatOpenAI with the same tracing setup.
  • Trace non-LangChain code by using the LangSmith Client and @traceable decorator.
python
import asyncio
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

async def main():
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    prompt = ChatPromptTemplate.from_template("Say hello to {name}!")
    chain = prompt | llm
    # ainvoke is the async counterpart of invoke; traced the same way
    result = await chain.ainvoke({"name": "Async LangSmith"})
    print("Async output:", result.content)

asyncio.run(main())
output
Async output: Hello, Async LangSmith!

Troubleshooting

  • If you do not see traces in the LangSmith dashboard, verify that LANGCHAIN_TRACING_V2 is set to true and your API key is correct.
  • Ensure you are using LangChain v0.2 or newer; the examples here use the v0.2 package layout (langchain_openai, langchain_core) and older versions may require different configuration.
  • Restart your Python environment after setting environment variables.
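A quick sanity check for the first two points is to confirm the tracing variables are actually visible to your Python process (a hypothetical helper, not part of the LangSmith SDK):

```python
import os

# Report any tracing-related variables that are unset or empty.
required = ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    print("Missing or empty:", ", ".join(missing))
else:
    print("Tracing environment variables are set.")
```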

Key Takeaways

  • Set LANGCHAIN_TRACING_V2=true and LANGCHAIN_API_KEY to enable automatic LangChain tracing with LangSmith.
  • No code changes are needed; tracing works by environment configuration and LangChain v0.2+ integration.
  • Use langsmith.Client and @traceable decorator for manual tracing outside LangChain.
  • Async and different LLM models are supported with the same tracing setup.
  • Verify environment variables and LangChain version if traces do not appear in LangSmith.
Verified 2026-04 · gpt-4o-mini