How-to · beginner · 3 min read

How to use Langfuse with OpenAI

Quick answer
Use the Langfuse Python SDK to trace your OpenAI API calls automatically: set your OpenAI and Langfuse API keys, swap in Langfuse's drop-in OpenAI client (langfuse.openai), and decorate your functions with @observe() to group each chat completion into a trace with detailed telemetry.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key
  • Langfuse public and secret keys
  • pip install openai>=1.0 langfuse

Setup

Install the required packages and set environment variables for your OpenAI and Langfuse API keys.

  • Install packages: pip install openai langfuse
  • Set environment variables: OPENAI_API_KEY, LANGFUSE_PUBLIC_KEY, and LANGFUSE_SECRET_KEY
bash
pip install openai langfuse
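The keys can then be exported in your shell before running the script (the values below are placeholders; substitute your own):

```shell
# Placeholder values -- replace with your actual keys
export OPENAI_API_KEY="sk-your-openai-key"
export LANGFUSE_PUBLIC_KEY="pk-lf-your-public-key"
export LANGFUSE_SECRET_KEY="sk-lf-your-secret-key"
# Optional: defaults to the Langfuse Cloud endpoint
export LANGFUSE_HOST="https://cloud.langfuse.com"
```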

Step by step

This example shows how to initialize the Langfuse client and use the @observe() decorator to trace an OpenAI chat completion call.

python
import os
from langfuse import Langfuse
from langfuse.decorators import langfuse_context, observe
from langfuse.openai import OpenAI  # drop-in replacement that auto-logs each call

# Initialize Langfuse client with public and secret keys.
# The @observe() decorator and the wrapped OpenAI client read
# LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST from the
# environment themselves; this explicit client is for manual operations.
langfuse = Langfuse(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host="https://cloud.langfuse.com"
)

# Initialize the OpenAI client (wrapped by Langfuse for automatic tracing)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()
def generate_response(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    answer = generate_response("Explain Langfuse integration with OpenAI.")
    print("AI response:", answer)
    langfuse_context.flush()  # send buffered events before the script exits
output
AI response: Langfuse enables automatic tracing and observability of your OpenAI API calls by wrapping your functions with decorators that capture telemetry data.

Common variations

You can use @observe() with async functions (pair it with the AsyncOpenAI client so the call can be awaited), point the SDK at a self-hosted Langfuse instance via the host setting, or trace other OpenAI models by changing the model parameter.

Example async usage:

python
import asyncio
import os

from langfuse.decorators import langfuse_context, observe
from langfuse.openai import AsyncOpenAI  # async drop-in replacement

# A synchronous client cannot be awaited, so use AsyncOpenAI here
async_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()
async def generate_response_async(prompt: str) -> str:
    response = await async_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

async def main():
    answer = await generate_response_async("Async Langfuse tracing example.")
    print("Async AI response:", answer)
    langfuse_context.flush()  # send buffered events before exit

if __name__ == "__main__":
    asyncio.run(main())
output
Async AI response: This example demonstrates how to use Langfuse to trace asynchronous OpenAI calls.

Troubleshooting

  • If you see no traces in Langfuse dashboard, verify your LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables are set correctly.
  • Ensure your OpenAI API key is valid and has access to the specified model.
  • Check network connectivity to https://cloud.langfuse.com.
  • @observe() can decorate any function, not only those that make OpenAI calls; apply it wherever you want a span in the trace. For the OpenAI call itself to be captured automatically, use the langfuse.openai drop-in client.
  • If a short-lived script exits before traces appear, flush the SDK's background queue (e.g. langfuse_context.flush()) before exit, since events are sent asynchronously.
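For the first two points, a quick stdlib-only sanity check can confirm the required variables are present before starting the app (variable names taken from the setup above):

```python
import os

REQUIRED_VARS = ["OPENAI_API_KEY", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"]

def missing_env_vars(env=os.environ):
    # Return the names of required variables that are unset or empty
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```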

Key Takeaways

  • Initialize Langfuse with both public and secret keys for full tracing capabilities.
  • Use the @observe() decorator together with the langfuse.openai drop-in client to automatically capture telemetry on OpenAI API calls.
  • Langfuse supports both synchronous and asynchronous OpenAI client usage.
  • Always set API keys via environment variables to keep credentials secure.
Verified 2026-04 · gpt-4o, gpt-4o-mini