How to install Stable Diffusion
Quick answer
To install Stable Diffusion, set up a Python environment with Python 3.8+, then install the diffusers and transformers libraries via pip. Use the StableDiffusionPipeline from diffusers to load the model and generate images.
Prerequisites
- Python 3.8+
- pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 (for CUDA GPU)
- pip install diffusers transformers scipy ftfy accelerate
- Hugging Face account with access token (for model download)
Setup
Install Python 3.8 or higher. For GPU acceleration, install PyTorch with CUDA support. Then install the diffusers library and its dependencies. You also need a Hugging Face account to access the Stable Diffusion model weights.
python3 -m venv sd-env
source sd-env/bin/activate
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
pip install diffusers transformers scipy ftfy accelerate
huggingface-cli login
Step by step
Use the following Python script to load the Stable Diffusion pipeline and generate an image from a prompt. Set the HUGGINGFACE_TOKEN environment variable to your Hugging Face access token before running it.
import os

import torch
from diffusers import StableDiffusionPipeline

# Read the Hugging Face token from the environment (set it beforehand,
# e.g. `export HUGGINGFACE_TOKEN=...`, or log in with `huggingface-cli login`)
hf_token = os.environ.get("HUGGINGFACE_TOKEN")

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    use_auth_token=hf_token,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "A fantasy landscape, vivid colors"
image = pipe(prompt).images[0]
image.save("output.png")
print("Image saved as output.png")
Output
Image saved as output.png
Common variations
- For CPU-only machines, remove torch_dtype=torch.float16 and .to("cuda").
- Use pipe.enable_attention_slicing() to reduce VRAM usage on GPUs with limited memory.
- Try other Stable Diffusion versions, such as stabilityai/stable-diffusion-2, by changing model_id.
- For asynchronous usage, note that pipe(prompt) is a blocking call; run it in a worker thread (for example with asyncio.to_thread) from your async functions rather than calling it directly on the event loop.
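The asynchronous variation can be sketched as follows. To keep the example self-contained and runnable, a hypothetical generate() function stands in for the blocking pipe(prompt) call; in a real script you would call the pipeline there instead. Note that asyncio.to_thread requires Python 3.9+.

```python
import asyncio

# Hypothetical stand-in for the blocking pipe(prompt) call; the real
# StableDiffusionPipeline call is synchronous and GPU/CPU-bound.
def generate(prompt: str) -> str:
    return f"image for: {prompt}"

async def generate_async(prompt: str) -> str:
    # Run the blocking call in a worker thread so the event loop stays free.
    return await asyncio.to_thread(generate, prompt)

async def main() -> None:
    # Other coroutines can make progress while generation runs in the thread.
    results = await asyncio.gather(
        generate_async("a castle at dusk"),
        generate_async("a neon city street"),
    )
    print(results)

asyncio.run(main())
```

Offloading to a thread matters because calling the pipeline directly inside a coroutine would block the entire event loop for the duration of the generation.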
Troubleshooting
- If you get authentication errors, ensure your Hugging Face token is valid and set in the HUGGINGFACE_TOKEN environment variable.
- On CUDA errors, verify that your GPU driver and CUDA toolkit versions match the installed PyTorch build.
- If out of memory errors occur, enable attention slicing or reduce image resolution.
- For slow performance on CPU, consider using a smaller model or running on GPU.
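For the authentication case, it can help to fail fast with a clear message instead of passing a missing token to the pipeline and getting an opaque error later. A minimal sketch, using the HUGGINGFACE_TOKEN variable name from the script above (get_hf_token is a hypothetical helper, not part of diffusers):

```python
import os
import sys

def get_hf_token() -> str:
    # Hypothetical helper: abort with a clear message if the token is unset,
    # rather than letting from_pretrained() fail with an auth error.
    token = os.environ.get("HUGGINGFACE_TOKEN")
    if not token:
        sys.exit(
            "HUGGINGFACE_TOKEN is not set; export it or run "
            "`huggingface-cli login` before running this script."
        )
    return token
```

You would then pass get_hf_token() to StableDiffusionPipeline.from_pretrained() in place of reading the environment variable directly.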
Key Takeaways
- Use Python 3.8+ and install PyTorch with CUDA for best performance.
- Install the diffusers library and authenticate with Hugging Face to access Stable Diffusion models.
- Run the StableDiffusionPipeline to generate images with simple Python code.
- Enable attention slicing to reduce GPU memory usage if needed.
- Check environment variables and GPU setup to troubleshoot common errors.