How-to · Beginner · 3 min read

How to use LangSmith for LLM monitoring

Quick answer
Use LangSmith by installing its Python SDK, setting your API key and enabling tracing via environment variables, and decorating your LLM-calling functions with `@traceable`. Each call's inputs, outputs, latency, and metadata are then logged automatically, and you can inspect detailed traces and analytics in the LangSmith dashboard.

PREREQUISITES

  • Python 3.8+
  • LangSmith API key
  • pip install langsmith
  • Basic familiarity with LLM usage in Python

Setup

Install the langsmith Python package, then set your LangSmith API key and enable tracing as environment variables so the SDK can authenticate and send traces.

bash
pip install langsmith
export LANGSMITH_API_KEY="<your-api-key>"
export LANGSMITH_TRACING="true"

Step by step

This example decorates a plain Python function with LangSmith's `@traceable` decorator. With the environment variables above set, every call to the function is logged as a run, inputs, outputs, and latency included, with no further wiring.

python
from langsmith import traceable

# With LANGSMITH_TRACING=true and LANGSMITH_API_KEY set in the
# environment, @traceable logs each call as a run in LangSmith.

@traceable
def call_llm(prompt: str) -> str:
    # Simulate an LLM response; replace with a real provider call
    return f"Response to: {prompt}"

response = call_llm("Explain LangSmith for LLM monitoring")
print("Logged LLM interaction to LangSmith")
output
Logged LLM interaction to LangSmith
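To build intuition for what a tracing decorator does under the hood, here is a toy, stdlib-only sketch (not the LangSmith implementation) that records each call's name, inputs, output, and latency into an in-memory run log instead of sending them to a backend:

```python
import functools
import time

RUNS = []  # toy in-memory stand-in for the LangSmith backend

def toy_traceable(fn):
    """Record name, inputs, output, and latency for each call, like a tracer would."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        RUNS.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@toy_traceable
def call_llm(prompt: str) -> str:
    return f"Response to: {prompt}"

call_llm("Explain LangSmith for LLM monitoring")
print(RUNS[0]["name"])  # call_llm
```

The real SDK does essentially this, plus batching the run records and shipping them to the LangSmith API in the background.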

Common variations

LangSmith also works with asynchronous LLM calls (`@traceable` supports `async def` functions), with any LLM provider, and with frameworks like LangChain, which emit traces automatically once the same environment variables are set.

python
import asyncio
from langsmith import traceable

@traceable  # works on async functions as well
async def async_call_llm(prompt: str) -> str:
    await asyncio.sleep(0.1)  # Simulate an async LLM call
    return f"Async response to: {prompt}"

async def main():
    response = await async_call_llm("Async LangSmith example")
    print("Logged async LLM interaction to LangSmith")

asyncio.run(main())
output
Logged async LLM interaction to LangSmith
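When calls nest, as when a chain invokes an LLM, LangSmith groups the runs into a trace tree with parent and child runs. A minimal stdlib sketch of that parent/child bookkeeping (our own toy functions, not SDK API), using contextvars so the current parent is tracked correctly even across async code:

```python
import contextvars
import uuid

_parent = contextvars.ContextVar("parent_run", default=None)
RUNS = {}  # run_id -> run record

def start_run(name):
    """Open a run under the current parent and make it the new parent."""
    run_id = str(uuid.uuid4())
    parent_id = _parent.get()
    RUNS[run_id] = {"name": name, "parent": parent_id, "children": []}
    if parent_id is not None:
        RUNS[parent_id]["children"].append(run_id)
    token = _parent.set(run_id)
    return run_id, token

def end_run(token):
    """Restore the previous parent when the run finishes."""
    _parent.reset(token)

# A "chain" run containing one child "llm_call" run
chain_id, t1 = start_run("chain")
llm_id, t2 = start_run("llm_call")
end_run(t2)
end_run(t1)
print(RUNS[chain_id]["children"] == [llm_id])  # True
```

This is why nested `@traceable` functions show up as a single expandable trace in the dashboard rather than as unrelated runs.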

Troubleshooting

  • If you see authentication errors, verify that LANGSMITH_API_KEY is set correctly in the environment where your code actually runs.
  • If traces do not appear in the dashboard, confirm LANGSMITH_TRACING is set to "true" and check your network connectivity; traces are sent in the background, so very short-lived scripts may exit before they are flushed.
  • For missing or incomplete traces, confirm the function you expect to see is actually decorated with @traceable and is being called.
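The first two checks can be automated with a small stdlib helper (the function name here is ours, not part of the SDK):

```python
import os

def check_langsmith_env() -> list:
    """Return a list of setup problems for the two key environment variables."""
    problems = []
    if not os.environ.get("LANGSMITH_API_KEY"):
        problems.append("LANGSMITH_API_KEY is not set")
    if os.environ.get("LANGSMITH_TRACING", "").lower() != "true":
        problems.append("LANGSMITH_TRACING is not 'true'; traces will not be sent")
    return problems

print(check_langsmith_env())  # [] when both variables are set correctly
```

Running this at startup surfaces configuration mistakes before any LLM call is made.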

Key Takeaways

  • Set LANGSMITH_API_KEY and LANGSMITH_TRACING as environment variables so the SDK can authenticate and send traces without hard-coding secrets.
  • Decorate LLM-calling functions with @traceable to capture inputs, outputs, latency, and metadata automatically, for both sync and async code.
  • Use the LangSmith dashboard to visualize and analyze traced interactions for performance monitoring and debugging.
Verified 2026-04