How-to · Beginner · 3 min read

How to install Whisper locally

Quick answer
To install Whisper locally, run pip install openai-whisper to get the official package, then load a model with whisper.load_model() and transcribe audio files directly on your machine, with no API calls.

PREREQUISITES

  • Python 3.8+
  • pip install openai-whisper
  • ffmpeg installed and in system PATH

Setup

Install the openai-whisper package via pip and ensure ffmpeg is installed on your system for audio processing.

bash
pip install openai-whisper
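Whisper shells out to ffmpeg to decode audio, so install it with your system's package manager. Typical commands are sketched below; exact package names and managers vary by platform, so treat these as starting points:

```shell
# macOS (Homebrew)
brew install ffmpeg

# Debian/Ubuntu
sudo apt update && sudo apt install ffmpeg

# Windows (Chocolatey; winget also carries ffmpeg packages)
choco install ffmpeg
```

After installing, run ffmpeg -version in a new terminal to confirm it is on your PATH.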

Step by step

Use the following Python code to load the Whisper model and transcribe an audio file locally.

python
import whisper

# Load the base Whisper model
model = whisper.load_model("base")

# Transcribe an audio file
result = model.transcribe("audio.mp3")

print(result["text"])
output
This is the transcribed text from the audio file.
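Beyond the plain text, the result dict also includes a segments list with start and end timestamps per phrase. As a sketch assuming that documented result shape (dicts with start, end, and text keys), a small stdlib-only helper can turn segments into SRT subtitles:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render Whisper-style segments (dicts with start/end/text) as SRT."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Example with the segment shape Whisper returns:
demo = [{"start": 0.0, "end": 3.5, "text": " Hello there."}]
print(segments_to_srt(demo))
```

In practice you would pass result["segments"] from model.transcribe() instead of the demo list and write the string to a .srt file.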

Common variations

  • Choose a model size (tiny, base, small, medium, or large) by passing its name to load_model(); larger models are more accurate but slower and use more memory.
  • For faster transcription on CPU, use a smaller model such as tiny or small.
  • For further CPU speedups, consider the separate faster-whisper package, which runs the same models on the optimized CTranslate2 backend.
python
import whisper

# Load a smaller model for faster CPU transcription
model = whisper.load_model("small")

result = model.transcribe("audio.mp3")
print(result["text"])
output
Transcribed text from audio using the small model.

Troubleshooting

  • If you get an error about ffmpeg, install it and ensure it's in your system PATH.
  • On Windows, download ffmpeg from the official site and add its bin folder to your PATH.
  • If transcription is slow, try a smaller model or run on GPU if available.
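The checks above can be scripted. A minimal sketch using only the standard library (the GPU check is guarded, since CUDA support comes via PyTorch, which may not be installed):

```python
import shutil

def check_ffmpeg() -> bool:
    """Return True if ffmpeg is on PATH (Whisper needs it to decode audio)."""
    return shutil.which("ffmpeg") is not None

def check_gpu() -> bool:
    """Return True if PyTorch reports a usable CUDA device, False otherwise."""
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False

if __name__ == "__main__":
    print("ffmpeg found:", check_ffmpeg())
    print("CUDA GPU available:", check_gpu())
```

If check_ffmpeg() prints False, fix your PATH before retrying transcription; if check_gpu() prints False, Whisper will still work, just on CPU.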

Key Takeaways

  • Install Whisper locally with pip install openai-whisper and ffmpeg.
  • Load models like base or small for transcription without API calls.
  • Use smaller models for faster CPU performance or GPU for best speed.
  • Ensure ffmpeg is properly installed to avoid runtime errors.
Verified 2026-04 · whisper-1, openai-whisper