How-to · Beginner · 3 min read

How to configure LiteLLM proxy with config.yaml

Quick answer
To configure the LiteLLM proxy, create a config.yaml file whose model_list maps the model names clients will request to the underlying provider models and credentials. Then start the proxy with litellm --config config.yaml to serve those models through an OpenAI-compatible API.

Prerequisites

  • Python 3.8+
  • LiteLLM with its proxy dependencies installed (pip install 'litellm[proxy]')
  • Basic YAML syntax knowledge

Setup

Install LiteLLM with its proxy dependencies if not already installed. Then prepare a config.yaml file that defines which models the proxy serves.

bash
pip install 'litellm[proxy]'

Step by step

Create a config.yaml file with a model_list entry for each model the proxy should expose. Host, port, and worker count are passed on the command line when the server starts, not in the file.

yaml
model_list:
  - model_name: gpt-4o            # the name clients send in requests
    litellm_params:
      model: openai/gpt-4o        # provider/model route
      api_key: os.environ/OPENAI_API_KEY   # read from the environment

Run LiteLLM proxy

Start the proxy from the command line, pointing it at the config file. Host and port default to 0.0.0.0:4000 and can be overridden with flags.

bash
litellm --config config.yaml --host 127.0.0.1 --port 4000

The proxy then listens on the configured host and port, serving every model listed in model_list. Startup logging ends with a line similar to:

output
INFO:     Uvicorn running on http://127.0.0.1:4000 (Press CTRL+C to quit)
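
Once the proxy is up, any OpenAI-compatible client can call it. A minimal sketch that builds such a request with the standard library, assuming the proxy listens on 127.0.0.1:4000 and serves a model named gpt-4o (both placeholders; the actual POST requires the server to be running, so it is shown as a comment):

```python
import json

# Assumed endpoint for a locally running proxy (adjust host/port to your setup).
PROXY_URL = "http://127.0.0.1:4000/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body for the proxy."""
    return {
        "model": model,  # must match a model_name from config.yaml
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("gpt-4o", "Say hello")
print(json.dumps(body))
# To send it once the proxy is running, POST the body to PROXY_URL, e.g.:
#   curl http://127.0.0.1:4000/chat/completions \
#     -H "Content-Type: application/json" -d "$(python this_script.py)"
```

Because the proxy speaks the OpenAI API, existing OpenAI SDK clients also work by pointing their base URL at the proxy.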

Common variations

  • Pass --host and --port on the command line to expose the proxy on a different network interface or port.
  • Adjust the --num_workers flag to control concurrency.
  • Add more entries to model_list to serve several models, including local backends, behind one endpoint.
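
Combining these, a single config can route two public names to different backends. A sketch, assuming an OpenAI key in the environment and a local Ollama server on its default port; the model names are placeholders:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
```

Both models are then reachable through the same proxy endpoint by name.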

Troubleshooting

  • If the proxy fails to start, verify the config.yaml syntax and that any environment variables it references (such as API keys) are set.
  • Check for port conflicts if the proxy cannot bind to the specified port.
  • Run with the --debug flag (or --detailed_debug) for verbose logs.
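
The port-conflict check above can be scripted with the standard library before starting the server. A small sketch, using the host and port from the run example (adjust to your own flags):

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds, i.e. the port is taken.
        return s.connect_ex((host, port)) == 0

# If this prints True before you start the proxy, pick another --port.
print(port_in_use("127.0.0.1", 4000))
```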

Key Takeaways

  • Use a config.yaml file to centralize the proxy's model_list for easy management.
  • Map each public model_name to its provider model and credentials under litellm_params.
  • Start the proxy with litellm --config config.yaml; set host and port via CLI flags.
  • Adjust concurrency and logging via --num_workers and --debug on the command line.
  • Check YAML syntax, environment variables, and port availability if the proxy fails to start.
Verified 2026-04