How to share LoRA adapter on Hugging Face Hub
Quick answer
To share a LoRA adapter on the Hugging Face Hub, save your adapter as a PEFT-compatible directory and use the huggingface_hub Python SDK's upload_folder or push_to_hub methods to upload it to a new or existing repository. Authenticate with your Hugging Face token and include the required metadata files, such as adapter_config.json, so others can load the adapter.
Prerequisites
- Python 3.8+
- pip install huggingface_hub peft transformers
- Hugging Face account with an access token
Setup
Install the required libraries and set your Hugging Face API token as an environment variable for authentication.
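The token setup described above can be done either with an environment variable or the CLI login helper. A minimal sketch (the token value shown is a placeholder, not a real token):

```shell
# Export a placeholder token for the current shell session (replace with your own)
export HF_TOKEN="hf_your_token_here"

# Alternatively, log in interactively and let huggingface_hub cache the token
huggingface-cli login
```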
pip install huggingface_hub peft transformers
Step by step
Save your trained LoRA adapter locally and upload it to Hugging Face Hub using the huggingface_hub SDK. This example assumes you have a LoRA adapter saved in ./lora_adapter.
import os
from huggingface_hub import HfApi

# Read your Hugging Face token from the HF_TOKEN environment variable
hf_token = os.environ["HF_TOKEN"]

# Define repo details
repo_id = "your-username/your-lora-adapter"
adapter_local_path = "./lora_adapter"

# Initialize the API client
api = HfApi(token=hf_token)

# Create the repo if it doesn't already exist (exist_ok avoids an error on reruns)
api.create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)

# Upload the adapter directory in a single commit
api.upload_folder(
    folder_path=adapter_local_path,
    repo_id=repo_id,
    repo_type="model",
    commit_message="Add LoRA adapter files",
)

print(f"LoRA adapter uploaded to https://huggingface.co/{repo_id}")
output
LoRA adapter uploaded to https://huggingface.co/your-username/your-lora-adapter
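After uploading, it helps to include a README.md model card with YAML front matter so the Hub can link the adapter to its base model. A minimal sketch of generating one locally (the base model name and repo id are the placeholders used above); the resulting file could then be uploaded with api.upload_file(path_or_fileobj=..., path_in_repo="README.md", repo_id=repo_id):

```python
# Build a minimal model card with Hub YAML front matter for a PEFT adapter.
# The base_model and repo_id values are illustrative placeholders.
def make_model_card(base_model: str, repo_id: str) -> str:
    front_matter = "\n".join([
        "---",
        f"base_model: {base_model}",
        "library_name: peft",
        "tags:",
        "- lora",
        "---",
    ])
    body = f"# {repo_id}\n\nLoRA adapter for {base_model}."
    return front_matter + "\n" + body

card = make_model_card("base-model", "your-username/your-lora-adapter")
print(card.splitlines()[1])  # prints "base_model: base-model"
```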
Common variations
You can also push the adapter directly from a PeftModel instance after training, using the peft library's push_to_hub method.
import os
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the trained adapter from its local directory
base_model = AutoModelForCausalLM.from_pretrained("base-model")
model = PeftModel.from_pretrained(base_model, "./lora_adapter")

# Push the adapter weights and config to the Hub
model.push_to_hub("your-username/your-lora-adapter", token=os.environ["HF_TOKEN"])
print("LoRA adapter pushed to Hugging Face Hub")
output
LoRA adapter pushed to Hugging Face Hub
Troubleshooting
- If you get authentication errors, verify your Hugging Face token is set correctly in HF_TOKEN.
- Ensure your adapter directory contains adapter_config.json and the adapter weights (adapter_model.safetensors or adapter_model.bin).
- If the repo already exists, pass exist_ok=True to create_repo or handle the exception before uploading.
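The file checks above can be automated with a small helper before uploading. A sketch, assuming the PEFT default filenames (adapter_config.json plus adapter_model.safetensors or adapter_model.bin):

```python
import pathlib
import tempfile

# Return a list of problems found in an adapter directory; empty means it looks uploadable.
def check_adapter_dir(path: str) -> list[str]:
    p = pathlib.Path(path)
    if not p.is_dir():
        return [f"{path} is not a directory"]
    problems = []
    if not (p / "adapter_config.json").is_file():
        problems.append("missing adapter_config.json")
    weight_names = ("adapter_model.safetensors", "adapter_model.bin")
    if not any((p / name).is_file() for name in weight_names):
        problems.append("missing adapter weights (adapter_model.safetensors or adapter_model.bin)")
    return problems

# Example with an empty temporary directory: both files are reported missing
with tempfile.TemporaryDirectory() as tmp:
    print(check_adapter_dir(tmp))  # prints two "missing ..." entries
```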
Key Takeaways
- Use the Hugging Face Hub SDK to upload LoRA adapter files as a model repository.
- Authenticate with your Hugging Face token stored in environment variables.
- Include all necessary adapter files, such as adapter_config.json, for compatibility.
- You can push adapters directly from peft model instances using push_to_hub.
- Handle repo existence and authentication errors gracefully during upload.