How-to · Intermediate · 4 min read

How to use MIPRO optimizer

Quick answer
MIPRO (shipped in current DSPy as MIPROv2) is a prompt optimizer, not a gradient-based training optimizer: it jointly tunes the instructions and few-shot demonstrations of a DSPy program. Import it from dspy.teleprompt, construct it with a metric, and call .compile() on your program with a training set to get an optimized version of that program.

PREREQUISITES

  • Python 3.9+
  • pip install -U dspy
  • OpenAI API key (optimization makes many LM calls, so budget accordingly)
  • Basic familiarity with DSPy signatures and modules

Setup

Install a recent version of DSPy (MIPROv2 has shipped since the 2.4 releases; the package on PyPI is now named dspy). Ensure your environment has Python 3.9 or newer and set your OpenAI API key as an environment variable.

  • Install DSPy: pip install -U dspy
  • Set API key: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)
bash
pip install -U dspy
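Before making any calls, it can help to confirm the key is actually visible to Python; a minimal stdlib-only check (OPENAI_API_KEY is the standard variable DSPy's OpenAI backend reads):

python
import os

def has_api_key(var: str = "OPENAI_API_KEY") -> bool:
    """True if the environment variable is set and non-empty."""
    return bool(os.environ.get(var, "").strip())

print("API key configured:", has_api_key())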

Step by step

Here is a complete example: define a signature and a Predict program, write a metric, build a training set of dspy.Example objects, then compile the program with MIPROv2. Note that MIPROv2 holds out a validation split from the trainset and bootstraps demonstrations, so provide at least a few dozen examples in practice; compilation also makes real LM calls, so auto="light" keeps the budget small.

python
import os
import dspy
from dspy.teleprompt import MIPROv2

# Configure the language model (LiteLLM-style model string for OpenAI GPT-4o-mini)
lm = dspy.LM("openai/gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
dspy.configure(lm=lm)

# Define a simple signature for question answering
class QA(dspy.Signature):
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

program = dspy.Predict(QA)

# A metric: any function of (example, prediction) returning a score
def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

# A small training set; use a few dozen examples for real runs
trainset = [
    dspy.Example(question="What does MIPRO optimize?",
                 answer="Prompts").with_inputs("question"),
    # add more examples here
]

# Compile: MIPROv2 proposes candidate instructions and few-shot demos,
# then searches over them, scoring candidates with your metric
optimizer = MIPROv2(metric=exact_match, auto="light")
optimized = optimizer.compile(program, trainset=trainset)

# Use the optimized program exactly like the original
result = optimized(question="What is the MIPRO optimizer?")
print("Answer:", result.answer)
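The metric handed to MIPROv2 is an ordinary Python function, so it is easy to unit-test in isolation before spending tokens on compilation. A sketch of a slightly more forgiving normalized match, exercised here with stand-in objects in place of real DSPy examples and predictions (the .answer attribute mirrors the QA signature above):

python
from types import SimpleNamespace

def normalized_match(example, pred, trace=None):
    """Case- and whitespace-insensitive exact match on the answer field."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(example.answer) == norm(pred.answer)

# Exercise the metric with stand-ins; no LM calls needed
gold = SimpleNamespace(answer="Paris")
good = SimpleNamespace(answer="  paris ")
bad = SimpleNamespace(answer="Lyon")
print(normalized_match(gold, good))  # True
print(normalized_match(gold, bad))   # False

Testing the metric this way catches scoring bugs early, since a broken metric silently misdirects the whole optimization run.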

Common variations

You can customize MIPROv2 through its constructor: the auto budget ("light", "medium", "heavy") controls how much searching it does, and demo caps bound the few-shot examples it may attach. DSPy modules also support asynchronous calls and any LiteLLM-compatible model.

  • Increase the optimization budget with MIPROv2(metric=exact_match, auto="medium") or auto="heavy".
  • Cap demonstrations with max_bootstrapped_demos=4, max_labeled_demos=4 in the constructor.
  • For async usage, use await qa.acall(question="...") inside an async function.
  • Swap models by changing dspy.LM("openai/gpt-4o") or another LiteLLM model string.
python
import asyncio
import os
import dspy

async def async_example():
    lm = dspy.LM("openai/gpt-4o", api_key=os.environ["OPENAI_API_KEY"])
    dspy.configure(lm=lm)

    class QA(dspy.Signature):
        question: str = dspy.InputField()
        answer: str = dspy.OutputField()

    # A program compiled with MIPROv2 can be awaited the same way
    qa = dspy.Predict(QA)
    result = await qa.acall(question="Explain the MIPRO optimizer.")
    print("Async answer:", result.answer)

asyncio.run(async_example())
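The same acall pattern extends to batches of questions via asyncio.gather. Sketched below with a hypothetical stub coroutine standing in for qa.acall, so the concurrency shape is visible without making LM calls; swap in the real module for actual predictions:

python
import asyncio

async def fake_acall(question: str) -> str:
    """Stand-in for qa.acall: simulate an I/O-bound prediction."""
    await asyncio.sleep(0.01)
    return f"answer to: {question}"

async def batch_predict(questions):
    # Launch all predictions concurrently and await them together;
    # gather preserves input order in its results
    return await asyncio.gather(*(fake_acall(q) for q in questions))

answers = asyncio.run(batch_predict(["What is MIPRO?", "What is DSPy?"]))
print(answers)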

Troubleshooting

If you encounter an ImportError for MIPROv2, upgrade DSPy (pip install -U dspy) and import it from dspy.teleprompt. If compilation fails complaining about too few examples, enlarge your trainset: MIPROv2 bootstraps demonstrations and holds out a validation split, so very small datasets can break it. If results are noisy, check that your metric returns consistent scores for the same inputs.

Also, verify your OPENAI_API_KEY environment variable is set correctly to avoid authentication errors.
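If the import keeps failing, check which distribution is actually installed; older releases were published on PyPI as dspy-ai, newer ones as dspy. A small stdlib helper (the two candidate names are the known PyPI distributions; adjust if you installed from source):

python
import importlib.metadata as md

def installed_dspy(candidates=("dspy", "dspy-ai")):
    """Return (distribution, version) for the first installed candidate, else (None, None)."""
    for name in candidates:
        try:
            return name, md.version(name)
        except md.PackageNotFoundError:
            continue
    return None, None

print(installed_dspy())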

Key Takeaways

  • MIPRO (MIPROv2 in current DSPy) optimizes prompts: instructions and few-shot demonstrations, not model weights.
  • Import MIPROv2 from dspy.teleprompt and call compile() with a metric and a trainset.
  • Control the search with the auto budget and demo caps (max_bootstrapped_demos, max_labeled_demos).
  • Use acall() in DSPy for non-blocking model predictions.
  • Keep DSPy up to date and set your OpenAI API key in environment variables to avoid authentication issues.
Verified 2026-04 · openai/gpt-4o-mini, openai/gpt-4o