How-to · Beginner · 3 min read

How to run Stable Diffusion on CPU

Quick answer
Run Stable Diffusion on CPU by using the diffusers Python library and keeping everything off the GPU. Install PyTorch from the CPU-only wheels, load the model with the diffusers pipeline, and move it to the CPU with pipe.to("cpu") to generate images without a GPU.

PREREQUISITES

  • Python 3.8+
  • pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  • pip install diffusers transformers scipy
  • Basic knowledge of Python scripting

Setup

Install the necessary Python packages with CPU-only support. Use the official PyTorch CPU wheels and the diffusers library for Stable Diffusion inference.

bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install diffusers transformers scipy

Step by step

Use the diffusers library to load the Stable Diffusion pipeline and generate an image from a prompt. Call pipe.to("cpu") after loading to ensure the model runs on the CPU.

python
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion model with CPU device
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32
)
pipe = pipe.to("cpu")

# Generate an image from a prompt
prompt = "A fantasy landscape, vivid colors"
image = pipe(prompt, num_inference_steps=50).images[0]

# Save the image
image.save("output_cpu.png")
print("Image saved as output_cpu.png")
output
Image saved as output_cpu.png

Common variations

  • Most CPU kernels handle float16 poorly, so stick with torch_dtype=torch.float32; on recent CPUs you can try torch.bfloat16 instead.
  • For faster CPU inference, reduce num_inference_steps at the cost of image quality.
  • Use alternative Stable Diffusion models by changing the model name in from_pretrained.
  • Async execution will not speed up CPU inference, but running generation in a background thread or task can keep an application responsive while an image is produced.

Troubleshooting

  • If you get CUDA errors, ensure PyTorch is installed with CPU-only wheels and no GPU drivers are interfering.
  • Low performance is expected on CPU; consider using a smaller model or fewer inference steps.
  • If you see memory errors, reduce batch size or image resolution.
  • Verify your Python environment is clean and dependencies are compatible.
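The checks above can be scripted in a few lines. This sketch confirms the installed PyTorch build is CPU-only and caps the number of threads used for inference; the thread count of 4 is just an example value, not a recommendation.

```python
import torch

# On a CPU-only wheel this prints False, and the version string
# typically ends in "+cpu"
print(torch.cuda.is_available())
print(torch.__version__)

# Cap the number of threads PyTorch uses for inference
# (4 here is an arbitrary example; tune it to your core count)
torch.set_num_threads(4)
print(torch.get_num_threads())
```

If `torch.cuda.is_available()` prints True, a GPU-enabled build is installed; reinstall with the CPU-only index URL from the Setup section.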

Key Takeaways

  • Install PyTorch with CPU-only wheels to avoid GPU dependencies.
  • Use diffusers with pipe.to("cpu") to run Stable Diffusion on CPU.
  • Expect slower inference on CPU; reduce steps or model size for better speed.
  • Troubleshoot CUDA errors by reinstalling CPU-only PyTorch and checking environment.
  • Save generated images locally with image.save() for verification.
Verified 2026-04 · runwayml/stable-diffusion-v1-5