How to use Langfuse prompt templates
Quick answer

Use Langfuse prompt templates by defining reusable prompt strings with placeholders, then rendering them with variables before sending the result to your LLM client. This enables consistent prompt formatting and easy tracking with Langfuse decorators or manual logging.

Prerequisites

- Python 3.8+
- pip install langfuse
- An OpenAI API key or a compatible LLM API key
- Basic familiarity with Python string formatting
Setup
Install the langfuse Python package and set your API keys as environment variables. Import the necessary classes to create prompt templates and initialize the Langfuse client.
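The keys can be exported as environment variables before running the script (placeholder values shown; substitute the actual keys from your Langfuse project settings and OpenAI dashboard):

```shell
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export OPENAI_API_KEY="sk-..."
```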
pip install langfuse

Step by step
Create a prompt template using Python's str.format-style placeholders. Decorate the function that renders the template and sends the prompt to your LLM client with @observe() so Langfuse traces each call. This example uses OpenAI's gpt-4o-mini model.
```python
import os

from langfuse import Langfuse
from langfuse.decorators import observe
from openai import OpenAI

# Initialize the Langfuse client
langfuse = Langfuse(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host="https://cloud.langfuse.com",
)

# Initialize the OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Define a prompt template with placeholders
PROMPT_TEMPLATE = "Write a short summary about {topic} in {language}."

@observe()
def generate_summary(topic: str, language: str) -> str:
    # Render the prompt template
    prompt = PROMPT_TEMPLATE.format(topic=topic, language=language)
    # Call the LLM
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Return the generated text
    return response.choices[0].message.content

if __name__ == "__main__":
    summary = generate_summary("Langfuse", "English")
    print(summary)
```

Output

Langfuse is a powerful tool for tracking and managing AI prompts, enabling developers to standardize prompt usage and improve observability.
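The rendering step itself is plain Python string formatting, independent of the LLM call. A quick sketch of how a template like PROMPT_TEMPLATE behaves with different variables, and what happens when one is missing:

```python
PROMPT_TEMPLATE = "Write a short summary about {topic} in {language}."

# The same template renders differently for each set of variables
prompt_en = PROMPT_TEMPLATE.format(topic="Langfuse", language="English")
prompt_de = PROMPT_TEMPLATE.format(topic="Langfuse", language="German")

print(prompt_en)  # Write a short summary about Langfuse in English.
print(prompt_de)  # Write a short summary about Langfuse in German.

# A missing variable raises KeyError, which matters for troubleshooting
try:
    PROMPT_TEMPLATE.format(topic="Langfuse")
except KeyError as exc:
    print(f"Missing variable: {exc}")
```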
Common variations

- Use f-strings or other templating libraries such as jinja2 for more complex prompt templates.
- Integrate with other LLM providers by replacing the OpenAI client with any compatible client.
- Use async functions with @observe() for asynchronous prompt calls.
- Manually create traces and generations with the low-level Langfuse SDK (e.g. langfuse.trace()) if you prefer not to use decorators.
```python
import asyncio
import os

from langfuse.decorators import observe
from openai import AsyncOpenAI

# An async client is required when awaiting completion calls
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()
async def generate_async_summary(topic: str, language: str) -> str:
    prompt = f"Write a short summary about {topic} in {language}."
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def main():
    summary = await generate_async_summary("Langfuse", "English")
    print(summary)

if __name__ == "__main__":
    asyncio.run(main())
```

Output

Langfuse is a powerful tool for tracking and managing AI prompts, enabling developers to standardize prompt usage and improve observability.
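If the prompt text itself contains literal braces (common when embedding JSON examples in a prompt), str.format placeholders become awkward. Python's stdlib string.Template is a lightweight alternative; a minimal sketch:

```python
from string import Template

# $-style placeholders avoid clashing with literal { } in the prompt text
PROMPT_TEMPLATE = Template("Write a short summary about $topic in $language.")

prompt = PROMPT_TEMPLATE.substitute(topic="Langfuse", language="English")
print(prompt)  # Write a short summary about Langfuse in English.

# safe_substitute leaves unknown placeholders intact instead of raising
partial = PROMPT_TEMPLATE.safe_substitute(topic="Langfuse")
print(partial)  # Write a short summary about Langfuse in $language.
```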
Troubleshooting

- If prompts are not tracked in the Langfuse dashboard, verify that your LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables are set correctly.
- Ensure the @observe() decorator wraps the function that renders and sends the prompt.
- For template rendering errors, check that every placeholder in the prompt template has a corresponding variable passed in.
- If you see API errors from your LLM client, confirm that your API key and model name are correct and that you have sufficient quota.
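The placeholder check above can be automated: string.Formatter from the stdlib can list a template's field names, so you can verify every variable is supplied before calling format. A sketch, where extract_placeholders is a hypothetical helper, not part of Langfuse:

```python
from string import Formatter

def extract_placeholders(template):
    # parse() yields (literal_text, field_name, format_spec, conversion) tuples
    return {field for _, field, _, _ in Formatter().parse(template) if field}

template = "Write a short summary about {topic} in {language}."
required = extract_placeholders(template)
provided = {"topic": "Langfuse"}

missing = required - provided.keys()
if missing:
    print(f"Missing template variables: {sorted(missing)}")
```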
Key Takeaways

- Define prompt templates as reusable strings with placeholders for consistent prompt formatting.
- Use the @observe() decorator from langfuse to automatically track prompt usage and responses.
- Render templates with variables before sending them to your LLM client for clean, maintainable code.