Concept · beginner · 3 min read

What are wandb sweeps?

Quick answer
wandb sweeps are a Weights & Biases feature that automates hyperparameter optimization by orchestrating multiple training runs, each with a different parameter combination. Because every run's metrics are tracked centrally, sweeps let you systematically explore model configurations and identify the best-performing setup efficiently.

How it works

wandb sweeps automate the process of hyperparameter tuning by defining a sweep configuration that specifies which parameters to vary and their ranges or distributions. The system then launches multiple training runs, each with a different set of hyperparameters sampled from the defined search space. It tracks metrics and logs results centrally, allowing you to analyze and identify the best hyperparameter combination. Think of it as running many experiments in parallel or sequentially, guided by strategies like grid search, random search, or Bayesian optimization.
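For instance, a sweep configuration can sample a continuous range rather than a fixed list of values. The sketch below shows one plausible Bayesian-optimization setup; the parameter names, ranges, and metric name are illustrative, not prescribed. This is the kind of dict you would pass to `wandb.sweep`.

```python
# Illustrative sweep configuration (names and ranges are assumptions).
sweep_config = {
    "method": "bayes",  # search strategy: "grid", "random", or "bayes"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        # continuous range, sampled log-uniformly between min and max
        "learning_rate": {
            "distribution": "log_uniform_values",
            "min": 1e-4,
            "max": 1e-1,
        },
        # discrete set of candidate values
        "batch_size": {"values": [16, 32, 64]},
    },
}
```

With `"method": "bayes"`, each new run's hyperparameters are chosen based on the metrics of runs completed so far, rather than sampled independently.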

Concrete example

Here is a minimal, runnable example of setting up a wandb sweep to tune learning rate and batch size. The training step is simulated with a dummy accuracy metric so the sweep mechanics stand out; in a real project you would replace it with your actual training loop.

import wandb
import random

def train():
    # Initialize a new wandb run
    wandb.init()
    config = wandb.config

    # Simulate training with hyperparameters
    lr = config.learning_rate
    batch_size = config.batch_size

    # Dummy metric: higher is better; rewards small learning rates and larger batches
    accuracy = (
        0.8
        + random.uniform(-0.05, 0.05)          # noise
        + (0.1 if lr < 0.01 else 0.0)          # bonus for a small learning rate
        + (0.05 if batch_size >= 64 else 0.0)  # bonus for a larger batch size
    )

    # Log the metric
    wandb.log({"accuracy": accuracy})

# Sweep configuration
sweep_config = {
    'method': 'random',  # random search
    'metric': {'name': 'accuracy', 'goal': 'maximize'},
    'parameters': {
        'learning_rate': {'values': [0.001, 0.005, 0.01, 0.02]},
        'batch_size': {'values': [32, 64, 128]}
    }
}

sweep_id = wandb.sweep(sweep_config, project="my-project")

# Start sweep agent to run 10 trials
wandb.agent(sweep_id, function=train, count=10)

When to use it

Use wandb sweeps when you need to optimize hyperparameters such as learning rate, batch size, or model architecture choices to improve model performance. They are ideal for systematic experimentation and tracking in machine learning projects. Skip sweeps for trivial or one-off experiments where manual tuning suffices, or when computational resources are too limited to run multiple trials.

Key terms

Sweep: A collection of hyperparameter configurations to explore.
Agent: A process that runs training jobs with different hyperparameters in a sweep.
Metric: A performance measure tracked to evaluate each run.
Search method: The strategy used to select hyperparameter combinations (e.g., random, grid, Bayesian).

Key Takeaways

  • wandb sweeps automate hyperparameter tuning by running multiple experiments with varied parameters.
  • Sweeps support different search methods, such as random and grid search, to efficiently explore the parameter space.
  • Use sweeps to systematically improve model performance and track results centrally in Weights & Biases.
  • Sweeps require defining a configuration file specifying parameters, metrics, and search strategy.
  • Running a sweep involves launching agents that execute training runs with sampled hyperparameters.
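The configuration can also live in a standalone YAML file, which you would launch from the command line with `wandb sweep` followed by `wandb agent` and the printed sweep ID. A sketch of such a file, mirroring the Python example above (the filename and `program` entry are assumptions):

```yaml
# sweep.yaml -- illustrative sweep configuration file
program: train.py
method: random
metric:
  name: accuracy
  goal: maximize
parameters:
  learning_rate:
    values: [0.001, 0.005, 0.01, 0.02]
  batch_size:
    values: [32, 64, 128]
```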
Verified 2026-04