How to trace LlamaIndex with LlamaTrace
Quick answer
Use LlamaTrace to wrap your LlamaIndex calls for detailed tracing of query execution and data flow. Initialize LlamaTrace with your API key, then pass it as a tracer to LlamaIndex components to capture and inspect traces programmatically.

Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install llama-index llamatrace openai
Setup
Install the required packages and set your environment variables for API keys.
pip install llama-index llamatrace openai

Step by step
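The two environment variables referenced in the code comments below can be exported like this (placeholder values shown; substitute your real keys):

```shell
# Export the API keys the example code reads (placeholder values)
export OPENAI_API_KEY='your_openai_api_key'
export LLAMATRACE_API_KEY='your_llamatrace_api_key'
```

Run these in the same shell session you use to launch the script, or add them to your shell profile so they persist.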
This example shows how to initialize LlamaTrace and use it to trace a simple LlamaIndex query.
import os
from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex
from llamatrace import LlamaTrace
# Set environment variables before running:
# export OPENAI_API_KEY='your_openai_api_key'
# export LLAMATRACE_API_KEY='your_llamatrace_api_key'
# Initialize LlamaTrace tracer
tracer = LlamaTrace(api_key=os.environ["LLAMATRACE_API_KEY"])
# Load documents
documents = SimpleDirectoryReader("./data").load_data()
# Create LlamaIndex index with tracer
index = GPTVectorStoreIndex(documents, tracer=tracer)
# Query the index
query = "What is LlamaTrace?"
response = index.query(query)
print("Response:", response.response)
# Access trace data
trace_data = tracer.get_trace()
print("Trace data:", trace_data)

Output
Response: LlamaTrace is a tool to trace and debug LlamaIndex calls.
Trace data: {...detailed trace JSON...}

Common variations
- Use async calls with asyncio and LlamaTrace for asynchronous tracing.
- Switch models by passing different model names to GPTVectorStoreIndex or other LlamaIndex components.
- Integrate LlamaTrace with other LlamaIndex index types like GPTListIndex or GPTTreeIndex.
import asyncio
import os

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llamatrace import LlamaTrace

async def async_query():
    tracer = LlamaTrace(api_key=os.environ["LLAMATRACE_API_KEY"])
    # aload_data is the async counterpart of load_data
    documents = await SimpleDirectoryReader("./data").aload_data()
    index = GPTVectorStoreIndex(documents, tracer=tracer)
    response = await index.aquery("Explain LlamaTrace.")
    print("Async response:", response.response)

asyncio.run(async_query())

Output
Async response: LlamaTrace helps trace LlamaIndex calls asynchronously.
Troubleshooting
- If you see AuthenticationError, verify your LLAMATRACE_API_KEY environment variable is set correctly.
- If no trace data appears, ensure you pass the tracer parameter when creating your LlamaIndex index.
- For network issues, check your internet connection and API endpoint accessibility.
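A quick way to rule out the missing-key case is a fail-fast check before constructing the tracer. This is a minimal stdlib sketch (the helper name require_env is ours, not part of any library):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running.")
    return value

# In the examples above you would call require_env("LLAMATRACE_API_KEY")
# and require_env("OPENAI_API_KEY") at startup. Demonstrate both outcomes
# here with harmless demo names instead of real keys:
os.environ["DEMO_KEY"] = "abc123"
print(require_env("DEMO_KEY"))  # prints abc123
try:
    require_env("LLAMATRACE_DEMO_MISSING")
except RuntimeError as err:
    print(err)  # prints the "is not set" message
```

Failing at startup with a named variable is easier to diagnose than an AuthenticationError raised mid-query.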
Key Takeaways
- Initialize LlamaTrace with your API key to enable tracing for LlamaIndex calls.
- Pass the LlamaTrace instance as the tracer parameter when creating LlamaIndex indexes.
- Use tracer.get_trace() to retrieve detailed trace information for debugging.
- LlamaTrace supports both synchronous and asynchronous LlamaIndex usage.
- Verify environment variables and tracer integration if trace data is missing.
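On the get_trace() takeaway: if the returned trace is JSON-serializable (an assumption; the exact return type is not specified here), pretty-printing it makes debugging much easier than reading a one-line repr. The dict below is an illustrative stand-in, not real LlamaTrace output:

```python
import json

# Illustrative stand-in for what tracer.get_trace() might return
trace_data = {
    "query": "What is LlamaTrace?",
    "spans": [{"name": "retrieve", "duration_ms": 12}],
}

# indent=2 renders the nested structure one field per line
print(json.dumps(trace_data, indent=2))
```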