How to · beginner to intermediate · 3 min read

How to use SDXL with Diffusers

Quick answer
Use the diffusers library's StableDiffusionXLPipeline to load the stabilityai/stable-diffusion-xl-base-1.0 model. Install diffusers and torch, then call the pipeline with your prompt to generate images.

PREREQUISITES

  • Python 3.8+
  • pip install "diffusers>=0.19.0" torch torchvision (quote the version specifier so the shell does not treat > as redirection)
  • Hugging Face account with access to SDXL model (token)
  • Set environment variable HF_TOKEN with your Hugging Face access token

Setup

Install the required Python packages and authenticate with Hugging Face to access the SDXL model.

bash
pip install "diffusers>=0.19.0" torch torchvision
export HF_TOKEN="your_huggingface_token"
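Before loading the model, it can help to confirm the token is actually visible to Python. A minimal stdlib sketch (the helper name get_hf_token is ours, not part of diffusers):

```python
import os

def get_hf_token(env_var: str = "HF_TOKEN") -> str:
    """Read the Hugging Face token from the environment, failing loudly if unset."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(
            f"{env_var} is not set; export your Hugging Face access token first"
        )
    return token
```

Failing early with a clear message beats a cryptic authentication error halfway through the model download.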

Step by step

Load the SDXL base pipeline from stabilityai/stable-diffusion-xl-base-1.0 and generate an image from a text prompt.

python
import os

import torch
from diffusers import StableDiffusionXLPipeline

# Load Hugging Face token from environment
hf_token = os.environ.get("HF_TOKEN")

# Load the SDXL base pipeline in half precision
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",  # download the fp16 weights to halve the transfer size
    token=hf_token,  # use_auth_token is deprecated in recent diffusers
)
pipe = pipe.to("cuda")  # float16 weights need a CUDA device

# Generate an image
prompt = "A futuristic cityscape at sunset, highly detailed"
image = pipe(prompt).images[0]

# Save the image
image.save("sdxl_output.png")
print("Image saved as sdxl_output.png")
output
Image saved as sdxl_output.png

Common variations

  • Use pipe(prompt, num_inference_steps=50, guidance_scale=7.5) to control quality and creativity.
  • Run inference on CPU with pipe.to("cpu") (load with torch_dtype=torch.float32, since float16 is poorly supported on CPU), but expect much slower generation.
  • Use pipe(prompt, num_images_per_prompt=3) to generate multiple images per prompt.
  • For asynchronous usage, wrap the blocking pipeline call with asyncio.to_thread inside an async function so the event loop stays responsive.
python
import asyncio
import os

import torch
from diffusers import StableDiffusionXLPipeline

async def generate_async(prompt):
    hf_token = os.environ.get("HF_TOKEN")
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
        token=hf_token,
    )
    pipe = pipe.to("cuda")
    # The pipeline call is synchronous; run it in a worker thread
    # (asyncio.to_thread requires Python 3.9+) so it doesn't block the loop.
    result = await asyncio.to_thread(pipe, prompt)
    image = result.images[0]
    image.save("sdxl_async_output.png")
    print("Async image saved as sdxl_async_output.png")

# asyncio.run(generate_async("A serene forest with magical creatures"))
output
Async image saved as sdxl_async_output.png
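If you use num_images_per_prompt, each image needs its own filename. A small stdlib helper (the name make_output_paths is hypothetical, not a diffusers API) keeps the save loop tidy:

```python
from pathlib import Path

def make_output_paths(stem: str, count: int, suffix: str = ".png") -> list[Path]:
    """Return numbered output paths, e.g. sdxl_output_0.png, sdxl_output_1.png, ..."""
    return [Path(f"{stem}_{i}{suffix}") for i in range(count)]

# With images = pipe(prompt, num_images_per_prompt=3).images:
# for img, path in zip(images, make_output_paths("sdxl_output", 3)):
#     img.save(path)
```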

Troubleshooting

  • If you get authentication errors, verify your Hugging Face token is set correctly in HF_TOKEN.
  • For CUDA out-of-memory errors, call pipe.enable_model_cpu_offload() (requires the accelerate package) instead of pipe.to("cuda"), generate at a lower resolution, or fall back to CPU; note that reducing num_inference_steps affects speed, not memory.
  • If diffusers version is incompatible, upgrade with pip install --upgrade diffusers.
  • Ensure your GPU drivers and CUDA toolkit are up to date for best performance.
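Before digging into CUDA errors, it is worth confirming a driver is installed at all. A rough stdlib heuristic (it only checks that the nvidia-smi binary is on PATH, which assumes a standard NVIDIA driver install):

```python
import shutil

def nvidia_driver_on_path() -> bool:
    """Heuristic check: NVIDIA driver installs normally ship nvidia-smi on PATH."""
    return shutil.which("nvidia-smi") is not None

print("nvidia-smi found:", nvidia_driver_on_path())
```

For a definitive answer inside Python, torch.cuda.is_available() is the canonical check once torch is installed.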

Key Takeaways

  • Use the official diffusers pipeline to run SDXL models easily with Python.
  • Set your Hugging Face token in HF_TOKEN to authenticate model downloads.
  • Run on GPU with torch_dtype=torch.float16 for faster inference.
  • Adjust num_inference_steps and guidance_scale to balance speed and image quality.
  • Troubleshoot common errors by verifying tokens, updating packages, and managing GPU memory.
Verified 2026-04 · stabilityai/stable-diffusion-xl-base-1.0