How-to · Beginner · 3 min read

How to batch generate images with Stable Diffusion

Quick answer
Use StableDiffusionPipeline from the diffusers library, or the Stability AI API, to batch generate images. Automate this with a Python script that loops over prompts and calls the model once per prompt, or pass a list of prompts in a single call where batch input is supported.

Prerequisites

  • Python 3.8+
  • pip install diffusers transformers torch
  • Access to Stability AI API key (optional for cloud API usage)

Setup

Install the required Python packages and set up your environment. For local generation, install diffusers, transformers, and torch. For cloud API usage, obtain your Stability AI API key and set it as an environment variable.

bash
pip install diffusers transformers torch
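If you plan to use the cloud API, export your key before running the scripts. The variable name STABILITY_API_KEY matches the API example later in this guide; the key value shown here is a placeholder.

```shell
# Make the Stability AI key available to the scripts below.
# Replace the placeholder with your actual key.
export STABILITY_API_KEY="sk-..."
```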

Step by step

This example shows how to batch generate images locally using StableDiffusionPipeline by iterating over a list of prompts and saving each output image.

python
import os
from diffusers import StableDiffusionPipeline
import torch

# Load model and move to GPU if available.
# float16 is only reliable on GPU; fall back to float32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32
)
pipe = pipe.to(device)

prompts = [
    "A futuristic cityscape at sunset",
    "A fantasy forest with glowing plants",
    "A portrait of a cyberpunk character"
]

output_dir = "batch_outputs"
os.makedirs(output_dir, exist_ok=True)

for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]
    filename = os.path.join(output_dir, f"image_{i+1}.png")
    image.save(filename)
    print(f"Saved: {filename}")
output
Saved: batch_outputs/image_1.png
Saved: batch_outputs/image_2.png
Saved: batch_outputs/image_3.png
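The loop above makes one pipeline call per prompt. StableDiffusionPipeline also accepts a list of prompts, so several can be processed per call; splitting the full list into fixed-size chunks keeps GPU memory bounded. A minimal sketch of that idea (the chunked helper is ours, not part of diffusers, and the chunk size of 2 is an arbitrary example):

```python
def chunked(seq, size):
    """Yield successive slices of at most `size` items from seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

prompts = [
    "A futuristic cityscape at sunset",
    "A fantasy forest with glowing plants",
    "A portrait of a cyberpunk character",
]

# Each pipeline call then processes one chunk as a true batch,
# e.g. images = pipe(batch).images inside this loop.
for batch in chunked(prompts, 2):
    print(f"Batching {len(batch)} prompt(s): {batch}")
```

Smaller chunks use less GPU memory per call; larger chunks amortize per-call overhead. Tune the size to your card.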

Common variations

For cloud API batch generation, use the Stability AI REST API by sending multiple requests programmatically. You can also make the calls concurrently to speed up batch processing. Adjust model versions, or use a different pipeline such as StableDiffusionXLPipeline for higher quality.

python
import os
import requests
import base64

API_KEY = os.environ["STABILITY_API_KEY"]
# v1 text-to-image endpoint, matching the "text_prompts" request and
# "artifacts" response format used below
endpoint = "https://api.stability.ai/v1/generation/stable-diffusion-v1-6/text-to-image"

prompts = [
    "A futuristic cityscape at sunset",
    "A fantasy forest with glowing plants",
    "A portrait of a cyberpunk character"
]

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "Accept": "application/json"
}

for i, prompt in enumerate(prompts):
    payload = {
        "text_prompts": [{"text": prompt}],
        "cfg_scale": 7,
        "clip_guidance_preset": "FAST_BLUE",
        "height": 512,
        "width": 512,
        "samples": 1,
        "steps": 30
    }
    response = requests.post(endpoint, headers=headers, json=payload)
    if response.status_code == 200:
        data = response.json()
        image_base64 = data["artifacts"][0]["base64"]
        with open(f"api_image_{i+1}.png", "wb") as f:
            f.write(base64.b64decode(image_base64))
        print(f"Saved: api_image_{i+1}.png")
    else:
        print(f"Error: {response.status_code} - {response.text}")
output
Saved: api_image_1.png
Saved: api_image_2.png
Saved: api_image_3.png
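The sequential loop above waits for each HTTP response before sending the next request. A thread pool overlaps those waits. This sketch uses a stand-in generate_one function in place of the real requests.post call from the example above; the function name and the worker count of 4 are illustrative, and you should keep the count within your API rate limits.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt):
    """Stand-in for the requests.post call shown above;
    replace the body with the real API request and file save."""
    return f"image bytes for: {prompt}"

prompts = [
    "A futuristic cityscape at sunset",
    "A fantasy forest with glowing plants",
    "A portrait of a cyberpunk character",
]

# executor.map preserves input order, so results line up with prompts.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(generate_one, prompts))

for prompt, result in zip(prompts, results):
    print(f"{prompt!r} -> {result!r}")
```

Because the work is network-bound, threads give a real speedup here despite Python's GIL.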

Troubleshooting

  • If you get CUDA out-of-memory errors, reduce the batch size or image resolution, or call pipe.enable_attention_slicing() to trade some speed for lower memory use.
  • For API errors, verify your API key and check rate limits.
  • If images are not saving, ensure the output directory exists and you have write permissions.
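For transient API errors such as rate limiting, retrying with exponential backoff usually resolves the failure. The helper below is a generic sketch of our own (with_retries is not part of the requests library); wrap each API call in it.

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on failure, wait base_delay * 2**attempt seconds and retry.
    Re-raises the last exception once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Example: a call that fails twice before succeeding.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated 429 rate limit")
    return "ok"

print(with_retries(flaky, attempts=5, base_delay=0.0))  # prints "ok" after 2 retries
```

In the API script, you would pass a lambda that performs the requests.post call and raises on a non-200 status.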

Key takeaways

  • Use StableDiffusionPipeline for local batch image generation by looping over prompts.
  • Leverage the Stability AI API for scalable cloud batch generation, sending requests concurrently where possible.
  • Manage GPU memory by adjusting batch size and image resolution to avoid out-of-memory errors.
Verified 2026-04 · runwayml/stable-diffusion-v1-5, StableDiffusionPipeline