How to trace LLM calls with LangSmith
Quick answer
Use the LangSmith Python SDK to trace LLM calls by setting the environment variables LANGCHAIN_TRACING_V2, LANGCHAIN_API_KEY, and LANGCHAIN_PROJECT. Then initialize a LangSmith client and decorate your LLM call functions with @traceable to capture detailed traces automatically.

Prerequisites
- Python 3.8+
- OpenAI API key
- LangSmith API key
- pip install langsmith "openai>=1.0"
Setup
Install the langsmith package and set the environment variables below. The variables enable automatic tracing of LangChain runs; plain OpenAI calls are traced by decorating them with @traceable, as shown in the next section.
- Install the LangSmith SDK:

```shell
pip install langsmith
```

- Set environment variables in your shell or .env file:

```shell
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your_langsmith_api_key>"
export LANGCHAIN_PROJECT="my-llm-tracing-project"
```

Step by step
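Before running anything, a quick sanity check can confirm the variables are actually visible to Python (stdlib only; the variable names come from the setup above):

```python
import os

# The three variables exported in the setup step
REQUIRED_VARS = ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]

def missing_tracing_vars(environ=os.environ) -> list:
    """Return the names of required LangSmith variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

if __name__ == "__main__":
    missing = missing_tracing_vars()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All tracing variables are set.")
```

Running this before your traced script catches the most common failure mode: the variables were exported in one shell session but not the one running the script.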
This example shows how to initialize the LangSmith client and trace an OpenAI LLM call by decorating the function with @traceable. The trace data will be sent to LangSmith automatically.
```python
import os

from langsmith import Client, traceable
from openai import OpenAI

# Initialize the LangSmith client (optional here: @traceable reads the
# LANGCHAIN_* environment variables automatically)
langsmith_client = Client(api_key=os.environ["LANGCHAIN_API_KEY"])

# Initialize the OpenAI client
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@traceable()
def generate_text(prompt: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    output = generate_text("Explain RAG in simple terms.")
    print("LLM output:", output)
```

Output:

```
LLM output: Retrieval-Augmented Generation (RAG) is a technique that combines information retrieval with language generation to produce more accurate and relevant responses.
```
Common variations
You can trace asynchronous LLM calls by applying @traceable() to async functions; use OpenAI's AsyncOpenAI client so the request can be awaited. LangSmith also supports tracing other LLM providers by initializing their clients similarly. For LangChain users, setting the environment variables enables automatic tracing without code changes.
```python
import asyncio
import os

from langsmith import traceable
from openai import AsyncOpenAI

# Use the async OpenAI client so chat completion calls can be awaited
openai_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

@traceable()
async def async_generate_text(prompt: str) -> str:
    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def main():
    output = await async_generate_text("What is LangSmith?")
    print("Async LLM output:", output)

if __name__ == "__main__":
    asyncio.run(main())
```

Output:

```
Async LLM output: LangSmith is a platform for tracing and managing AI model calls and workflows.
```
Troubleshooting
- If you do not see traces in the LangSmith dashboard, verify that LANGCHAIN_TRACING_V2 is set to "true" and the API key is correct.
- Ensure your environment variables are loaded before running your script.
- Check network connectivity to LangSmith endpoints.
- For manual tracing, ensure functions are decorated with @traceable().
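For the connectivity item above, a stdlib-only reachability probe can rule out network issues quickly. The host shown is the default public LangSmith API endpoint; adjust it for self-hosted deployments:

```python
import socket

def endpoint_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (makes a real network connection):
#   endpoint_reachable("api.smith.langchain.com")
```

A True result only confirms TCP reachability; an invalid API key or a proxy that strips headers can still block trace uploads.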
Key Takeaways
- Set the environment variables LANGCHAIN_TRACING_V2, LANGCHAIN_API_KEY, and LANGCHAIN_PROJECT to enable LangSmith tracing.
- Use the @traceable() decorator on your LLM call functions to capture detailed traces automatically.
- LangSmith integrates seamlessly with OpenAI and LangChain for easy observability of LLM usage.
- Async and sync LLM calls can both be traced with LangSmith using the same decorator.
- Verify environment variables and network connectivity if traces do not appear in the LangSmith dashboard.