How-to · Beginner · 3 min read

How to use Hugging Face Diffusers

Quick answer
Use the diffusers Python library to load Stable Diffusion models and generate images by passing text prompts. Install with pip install diffusers transformers torch, then create a pipeline with StableDiffusionPipeline.from_pretrained() and call it with your prompt to get generated images.

PREREQUISITES

  • Python 3.8+
  • pip install diffusers transformers torch
  • Hugging Face account with access token (for some models)

Setup

Install the required libraries diffusers, transformers, and torch using pip. Optionally, set your Hugging Face access token as an environment variable if you want to access gated models.

bash
pip install diffusers transformers torch
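If you plan to use gated models, one option is to export your access token (created in your Hugging Face account settings) before launching Python; the huggingface_hub library reads the HF_TOKEN environment variable automatically. The value below is a placeholder, not a real token:

```shell
export HF_TOKEN=your_token_here
```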

Step by step

Load the Stable Diffusion pipeline from Hugging Face, then generate an image by passing a text prompt. The pipeline returns a PIL image object you can save or display.

python
from diffusers import StableDiffusionPipeline
import torch
import os

# Optionally set your Hugging Face token (huggingface_hub reads HF_TOKEN)
# os.environ["HF_TOKEN"] = "your_token_here"

model_id = "runwayml/stable-diffusion-v1-5"

# Use float16 on GPU to halve memory; fall back to float32 on CPU,
# where half precision is poorly supported
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=dtype
)
pipe = pipe.to(device)

prompt = "A beautiful sunset over mountains"
image = pipe(prompt).images[0]

image.save("output.png")
print("Image saved as output.png")
output
Image saved as output.png

Common variations

  • Pass num_inference_steps (default 50) to trade quality for speed, e.g. pipe(prompt, num_inference_steps=25) for faster generation.
  • Run on CPU by setting pipe.to("cpu") if no GPU is available.
  • Use different Stable Diffusion models by changing model_id, e.g., stabilityai/stable-diffusion-xl-base-1.0 for SDXL.
  • The pipeline call is blocking; for asynchronous usage, run it in a thread executor (e.g. loop.run_in_executor) from your async code rather than calling it directly in a coroutine.
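Since the pipeline call blocks until the image is generated, async code should push it onto a worker thread. A minimal sketch using run_in_executor, with a stand-in function in place of the real pipe(prompt) call (which would require the model to be loaded):

```python
import asyncio

def generate_blocking(prompt):
    # Stand-in for pipe(prompt).images[0] -- the real call is compute-bound
    return f"image for: {prompt}"

async def generate(prompt):
    loop = asyncio.get_running_loop()
    # Run the blocking pipeline call in the default thread pool so the
    # event loop stays responsive
    return await loop.run_in_executor(None, generate_blocking, prompt)

result = asyncio.run(generate("A beautiful sunset over mountains"))
print(result)
```

Swapping generate_blocking for the real pipeline call keeps the same structure; only the function passed to run_in_executor changes.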

Troubleshooting

  • If you get a CUDA out of memory error, reduce the image resolution (height/width), call pipe.enable_attention_slicing(), or switch to CPU.
  • If model download fails, verify your Hugging Face token and internet connection.
  • Generation time scales with num_inference_steps: decrease it for faster results, increase it for more detail.

Key Takeaways

  • Install diffusers, transformers, and torch to use Hugging Face Diffusers.
  • Load Stable Diffusion models with StableDiffusionPipeline.from_pretrained() and generate images by calling the pipeline with a prompt.
  • Adjust num_inference_steps and device placement for quality and performance trade-offs.
  • Use a Hugging Face access token for private or gated models.
  • Handle common errors by checking GPU memory and token authentication.
Verified 2026-04 · runwayml/stable-diffusion-v1-5, stabilityai/stable-diffusion-xl-base-1.0