How to debug LangChain with LangSmith
Quick answer
Enable automatic tracing of LangChain calls by setting the environment variables LANGCHAIN_TRACING_V2, LANGCHAIN_API_KEY, and LANGCHAIN_PROJECT. For manual tracing, use the langsmith.Client and the @traceable decorator to capture detailed execution data and debug effectively.
Error type
config_error
Quick fix
Set LANGCHAIN_TRACING_V2 to "true" and provide the LANGCHAIN_API_KEY and LANGCHAIN_PROJECT environment variables to enable LangSmith tracing.
Why this happens
Debugging LangChain workflows without proper tracing leads to limited visibility into chain execution, making it hard to identify where errors or unexpected behavior occur. Missing or incorrect LangSmith configuration causes no trace data to be captured, resulting in no logs or insights in the LangSmith dashboard.
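One way to catch this misconfiguration early is a small startup check before any chains run. This is a minimal sketch; missing_langsmith_vars is a hypothetical helper name, not part of any library:

```python
import os

def missing_langsmith_vars() -> list[str]:
    # Return the names of required LangSmith env vars that are unset or empty.
    required = ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]
    return [k for k in required if not os.environ.get(k)]

missing = missing_langsmith_vars()
if missing:
    # Fail loudly (or log a warning) instead of silently running untraced.
    print("LangSmith tracing disabled; missing:", ", ".join(missing))
```

Running this at application startup turns "no traces in the dashboard" from a silent failure into an explicit message.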
Typical broken code example:
import os
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
# Missing LangSmith environment variables
os.environ.pop("LANGCHAIN_TRACING_V2", None)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.invoke({"name": "Alice"})
print(result["text"])  # chain.invoke returns a dict; "text" holds the model reply
Output:
Hello Alice
The fix
Enable LangSmith tracing by setting the required environment variables before running your LangChain app. This activates automatic tracing and sends detailed execution data to LangSmith for debugging.
Example fixed code with environment setup:
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = os.environ.get("LANGSMITH_API_KEY", "")
os.environ["LANGCHAIN_PROJECT"] = "my-langchain-project"
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.invoke({"name": "Alice"})
print(result["text"])  # chain.invoke returns a dict; "text" holds the model reply
Output:
Hello Alice
Preventing it in production
To ensure robust debugging in production, implement these best practices:
- Always set LANGCHAIN_TRACING_V2=true and provide valid LANGCHAIN_API_KEY and LANGCHAIN_PROJECT environment variables in your deployment environment.
- Use the @traceable decorator from langsmith for manual tracing of custom functions or chains.
- Implement retry logic and error handling in your chains to capture failures with trace context.
- Regularly monitor the LangSmith dashboard for anomalies and performance metrics.
Key Takeaways
- Set LANGCHAIN_TRACING_V2=true and provide the LANGCHAIN_API_KEY and LANGCHAIN_PROJECT env vars to enable automatic LangChain tracing.
- Use the @traceable decorator from langsmith to manually trace custom functions for detailed debugging.
- Monitor the LangSmith dashboard regularly to identify and fix issues early in your LangChain workflows.