How-to · Beginner · 3 min read

How to add logging to LiteLLM

Quick answer
To add logging to LiteLLM, configure Python's built-in logging module to capture relevant events such as prompts, model responses, and errors. Wrap your LiteLLM calls in logging statements, or use a dedicated logger, to record inference details for debugging and monitoring.

PREREQUISITES

  • Python 3.8+
  • pip install litellm
  • Basic knowledge of Python's logging module

Set up logging in Python

Use Python's standard logging module to configure log formatting, level, and output destination. This setup enables capturing debug information from your LiteLLM usage.

python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[logging.StreamHandler()]
)
logger = logging.getLogger(__name__)

logger.info("Logging is configured.")
output
2026-04-27 12:00:00,000 - INFO - Logging is configured.

Step by step: Add logging to LiteLLM calls

Wrap your LiteLLM completion calls in logging statements to record inputs, outputs, and exceptions. This example shows a simple synchronous call with logging; swap the model string for whichever provider model you use.

python
import logging
from litellm import completion

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Input prompt
prompt = "Explain the benefits of logging in AI applications."

try:
    logger.debug(f"Sending prompt to LiteLLM: {prompt}")
    # Use any model string LiteLLM supports; replace as needed
    response = completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content
    logger.debug(f"Received response from LiteLLM: {output}")
    print("Model output:", output)
except Exception as e:
    logger.error(f"Error during LiteLLM inference: {e}")
output
2026-04-27 12:00:01,000 - DEBUG - Sending prompt to LiteLLM: Explain the benefits of logging in AI applications.
2026-04-27 12:00:02,500 - DEBUG - Received response from LiteLLM: Logging helps track model behavior, debug issues, and monitor performance.
Model output: Logging helps track model behavior, debug issues, and monitor performance.
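Separately from the standard-library setup, LiteLLM ships its own debug logging. One documented way to enable it is the LITELLM_LOG environment variable, set before litellm is imported; check your installed version's docs, since the exact mechanism has changed across releases (older versions used litellm.set_verbose = True).

```python
import os

# Must be set before `import litellm` so the library reads it at load time.
# Hedged: consult your LiteLLM version's docs for the current mechanism.
os.environ["LITELLM_LOG"] = "DEBUG"
```

This turns on LiteLLM's internal request/response tracing, which complements the application-level logging shown above.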

Common variations

  • Async calls: Use asyncio and add logging inside async functions.
  • Custom log files: Configure logging.FileHandler to save logs to a file.
  • Different models: Pass any model string LiteLLM supports for your provider.
python
import asyncio
import logging
from litellm import acompletion

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[logging.FileHandler("litellm.log"), logging.StreamHandler()]
)
logger = logging.getLogger(__name__)

async def main():
    prompt = "What is the future of AI?"
    try:
        logger.debug(f"Sending async prompt: {prompt}")
        response = await acompletion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        output = response.choices[0].message.content
        logger.debug(f"Async response: {output}")
        print("Async model output:", output)
    except Exception as e:
        logger.error(f"Async error: {e}")

asyncio.run(main())
output
2026-04-27 12:00:03,000 - DEBUG - Sending async prompt: What is the future of AI?
2026-04-27 12:00:04,200 - DEBUG - Async response: AI will continue to transform industries and improve lives.
Async model output: AI will continue to transform industries and improve lives.

Troubleshooting logging issues

  • If logs do not appear, note that logging.basicConfig is a no-op once the root logger already has handlers; call it before any logging occurs, or pass force=True (Python 3.8+) to reconfigure.
  • Check your log level; use DEBUG to capture detailed info.
  • For file logging, verify write permissions and correct file paths.

Key Takeaways

  • Use Python's built-in logging module to capture LiteLLM inputs, outputs, and errors.
  • Wrap LiteLLM calls with logging.debug and logging.error for detailed traceability.
  • Configure logging handlers to output logs to console or files as needed.
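As a minimal sketch of the last point, handlers can also be attached to a named logger rather than configured globally; the file path below is purely illustrative.

```python
import logging
import os
import tempfile

# Illustrative path; use a location appropriate for your application
log_path = os.path.join(tempfile.gettempdir(), "litellm_demo.log")

logger = logging.getLogger("litellm_demo")
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

# Attach one file handler and one console handler, sharing the same format
for handler in (logging.FileHandler(log_path), logging.StreamHandler()):
    handler.setFormatter(formatter)
    logger.addHandler(handler)

logger.info("This line goes to both the console and %s", log_path)
```

Per-logger handlers like this keep your LiteLLM trace separate from the rest of your application's log output.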
Verified 2026-04