How to run Stable Diffusion locally
Quick answer
Run Stable Diffusion locally by installing the
diffusers Python library and its dependencies, then loading a pre-trained model such as runwayml/stable-diffusion-v1-5 with StableDiffusionPipeline. Use a GPU-enabled environment for best performance and generate images by passing text prompts to the pipeline.
Prerequisites
- Python 3.8+
- pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 (for CUDA GPUs)
- pip install diffusers transformers scipy ftfy
- A CUDA-compatible GPU (recommended) or CPU fallback
Setup
Install the required Python packages and ensure you have a CUDA-enabled GPU for optimal performance. Use the official PyTorch installation command for your system and then install the diffusers library along with dependencies.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
pip install diffusers transformers scipy ftfy
Step by step
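Before loading any models, it can save time to confirm that the PyTorch install actually sees your GPU. A quick one-liner from the shell:

```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If this prints False, the pipeline will still run on CPU, but generation will be much slower; reinstall PyTorch with the CUDA index URL that matches your driver.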
Use the following Python code to load the Stable Diffusion model and generate an image from a text prompt. This example uses the runwayml/stable-diffusion-v1-5 model and outputs the generated image to a file.
import torch
from diffusers import StableDiffusionPipeline
# Load the pipeline with the model
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
torch_dtype=torch.float16
)
pipe = pipe.to("cuda") # Use GPU
# Generate an image from a prompt
prompt = "A fantasy landscape with mountains and a river"
image = pipe(prompt).images[0]
# Save the image
image.save("output.png")
print("Image saved as output.png")
Output
Image saved as output.png
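The pipeline call also accepts generation parameters that control speed, adherence to the prompt, and reproducibility. A sketch building on the example above (the parameter values here are illustrative, not recommendations):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fix the random seed so the same prompt reproduces the same image
generator = torch.Generator("cuda").manual_seed(42)

image = pipe(
    "A fantasy landscape with mountains and a river",
    negative_prompt="blurry, low quality",  # concepts to steer away from
    num_inference_steps=30,  # fewer steps = faster, more = finer detail
    guidance_scale=7.5,      # how strongly to follow the prompt
    generator=generator,
).images[0]
image.save("output_tuned.png")
```

Fixing the generator seed is the usual way to iterate on a prompt: change one parameter at a time and compare results against the same seed.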
Common variations
- For CPU-only systems, remove torch_dtype=torch.float16 and pipe.to("cuda"), but expect much slower generation.
- Use other models such as stabilityai/stable-diffusion-xl-base-1.0 by loading them with DiffusionPipeline.from_pretrained, which picks the matching pipeline class automatically.
- Switch to torch_dtype=torch.float32 if you encounter precision issues with half precision.
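The CPU-only variation amounts to dropping the two GPU-specific lines from the example above; a minimal sketch:

```python
from diffusers import StableDiffusionPipeline

# CPU-only variant: default float32 precision, no .to("cuda")
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

image = pipe("A fantasy landscape with mountains and a river").images[0]
image.save("output_cpu.png")
```

Expect minutes rather than seconds per image on CPU.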
Troubleshooting
- If you get CUDA out-of-memory errors, reduce the image resolution or the number of images per call, or use a smaller model.
- Ensure your GPU drivers and CUDA toolkit are up to date.
- If diffusers fails to load the model, verify your internet connection or download the model manually from Hugging Face.
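For out-of-memory errors, diffusers also provides memory-saving switches that trade some speed for a lower peak VRAM footprint. A sketch:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compute attention in slices instead of all at once: slower, less VRAM
pipe.enable_attention_slicing()

# Alternative (requires the accelerate package): keep weights in system RAM
# and move each sub-model to the GPU only while it runs. If you use this,
# skip the .to("cuda") call above.
# pipe.enable_model_cpu_offload()

image = pipe("A fantasy landscape with mountains and a river").images[0]
```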
Key Takeaways
- Use the diffusers library with a CUDA-enabled GPU for efficient local Stable Diffusion inference.
- Install PyTorch with the correct CUDA version before installing diffusers and its dependencies.
- Adjust the model and precision settings to match your hardware for best results.