How to install Whisper locally
Quick answer

To install Whisper locally, run `pip install openai-whisper` to get the official package. Then load a model with `whisper.load_model()` and transcribe audio files directly on your machine, without API calls.

Prerequisites

- Python 3.8+
- `pip install openai-whisper`
- ffmpeg installed and on your system PATH
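The prerequisites above can be verified before installing anything. Here is a minimal preflight sketch using only the standard library; the function name is my own, not part of Whisper:

```python
import shutil
import sys

def check_prerequisites():
    """Return a list of missing prerequisites (empty list means ready)."""
    missing = []
    if sys.version_info < (3, 8):
        missing.append("Python 3.8+")
    if shutil.which("ffmpeg") is None:
        missing.append("ffmpeg on PATH")
    return missing

print(check_prerequisites())
```

If the list is non-empty, install the missing pieces before continuing with the setup below.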
Setup
Install the openai-whisper package via pip, and make sure ffmpeg is installed on your system for audio processing.

```
pip install openai-whisper
```

Step by step
Use the following Python code to load the Whisper model and transcribe an audio file locally.
```python
import whisper

# Load the base Whisper model
model = whisper.load_model("base")

# Transcribe an audio file
result = model.transcribe("audio.mp3")
print(result["text"])
```

Output:

```
This is the transcribed text from the audio file.
```
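Besides the full text, the result from `transcribe()` also includes per-segment timestamps under `result["segments"]`, where each segment carries `start`, `end`, and `text` keys. A small helper (a sketch; the function name is my own) can render these as a timestamped transcript:

```python
def format_segments(result):
    """Render Whisper's segment list as timestamped lines."""
    lines = []
    for seg in result.get("segments", []):
        lines.append(f"[{seg['start']:6.1f}s -> {seg['end']:6.1f}s] {seg['text'].strip()}")
    return "\n".join(lines)

# Example with a hand-written dict shaped like Whisper's result:
fake_result = {
    "segments": [
        {"start": 0.0, "end": 2.5, "text": " Hello there."},
        {"start": 2.5, "end": 5.0, "text": " This is a test."},
    ]
}
print(format_segments(fake_result))
```

With a real `result` from `model.transcribe(...)`, the same helper produces a timestamped version of the transcript.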
Common variations
- Use different model sizes such as `tiny`, `small`, `medium`, or `large` by changing `load_model("model_name")`.
- For faster transcription on CPU, use a smaller model.
- Use `faster-whisper` for optimized performance on CPUs.
```python
import whisper

# Load a smaller model for faster CPU transcription
model = whisper.load_model("small")
result = model.transcribe("audio.mp3")
print(result["text"])
```

Output:

```
Transcribed text from audio using the small model.
```
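When choosing between model sizes, it helps to know roughly how large each one is. The approximate parameter counts below are taken from the openai-whisper README; the selection helper itself is a simple heuristic sketch of my own, not part of the Whisper API:

```python
# Approximate parameter counts per model size, in millions
# (figures from the openai-whisper README).
MODEL_PARAMS_M = {
    "tiny": 39,
    "base": 74,
    "small": 244,
    "medium": 769,
    "large": 1550,
}

def largest_model_under(max_params_m):
    """Pick the largest model whose parameter count fits the budget."""
    candidates = [(p, name) for name, p in MODEL_PARAMS_M.items() if p <= max_params_m]
    return max(candidates)[1] if candidates else None

print(largest_model_under(300))  # "small" is the largest model under ~300M parameters
```

Larger models are more accurate but slower and more memory-hungry, so on CPU-only machines `tiny` or `base` is usually the practical choice.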
Troubleshooting
- If you get an error about ffmpeg, install it and ensure it is on your system PATH.
- On Windows, download ffmpeg from the official site and add it to PATH.
- If transcription is slow, try a smaller model, or run on a GPU if available.
Key Takeaways
- Install Whisper locally with `pip install openai-whisper` and ffmpeg.
- Load models such as `base` or `small` to transcribe without API calls.
- Use smaller models for faster CPU performance, or a GPU for the best speed.
- Ensure ffmpeg is properly installed to avoid runtime errors.