How-to · Beginner · 4 min read

How to create a Dockerfile for an ML model

Quick answer
To create a Dockerfile for an ML model, start with a base image like python:3.8-slim, copy your model code and dependencies, install required packages via pip, and define the command to run your model server or script. This containerizes your ML model for consistent deployment across environments.

Prerequisites

  • Python 3.8+
  • Docker installed on your machine
  • Basic knowledge of Dockerfile syntax
  • ML model code and requirements.txt file

Setup

Ensure you have Docker installed and running on your system. Prepare your ML model code directory with a requirements.txt listing all Python dependencies. This setup allows the Docker build to install necessary packages.

bash
docker --version
output
Docker version 24.0.2, build abcdefg
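For reference, a requirements.txt for a FastAPI-served scikit-learn model might look like the following. The packages and pinned versions are illustrative, not prescribed by this guide; pinning exact versions is what makes the Docker build reproducible.

```text
fastapi==0.110.0
uvicorn==0.29.0
scikit-learn==1.3.2
numpy==1.24.4
```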

Step by step

Create a Dockerfile in your ML project root. Use a lightweight Python base image, copy your code and dependencies, install packages, and specify the command to run your model server or script.

dockerfile
FROM python:3.8-slim

# Set working directory
WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . ./

# Expose port if running a server (e.g., FastAPI, Flask)
EXPOSE 8000

# Command to run your model server or script
CMD ["python", "app.py"]
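The CMD above assumes an app.py exists in your project root. As a minimal, framework-free sketch of what such an entry script could contain, here is a stdlib-only HTTP server with a /predict endpoint; the predict function is a placeholder standing in for a real loaded model.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Placeholder model: swap in a real loaded model (e.g. an unpickled pipeline)."""
    return {"score": sum(features) / len(features)}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404, "unknown endpoint")
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def run_server(port=8000):
    # Bind to 0.0.0.0 so the server is reachable through the published port,
    # not just from inside the container.
    HTTPServer(("0.0.0.0", port), PredictHandler).serve_forever()
```

In the real app.py the file would end with a call to run_server() so that CMD ["python", "app.py"] starts serving; it is left out here so the sketch can be imported without blocking.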

Common variations

  • Use a different Python version or base image, such as python:3.10-slim, or python:3.8-alpine for a smaller footprint (note that Alpine's musl libc means many ML packages lack prebuilt wheels and must compile from source, which can slow builds considerably).
  • For async model servers, change CMD to launch an ASGI server, e.g. uvicorn app:app --host 0.0.0.0 --port 8000.
  • Use multi-stage builds to reduce final image size by separating build dependencies from runtime.
dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . ./
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
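The multi-stage build mentioned above could look like the following sketch: a builder stage installs dependencies into a virtual environment, and the runtime stage copies only that environment plus the application code. The /opt/venv path and the uvicorn entry point are assumptions, not requirements.

```dockerfile
# Stage 1: install dependencies into an isolated virtual environment
FROM python:3.10-slim AS builder
WORKDIR /app
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: copy only the virtualenv and app code into a clean runtime image
FROM python:3.10-slim
WORKDIR /app
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . ./
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build tools and pip caches stay behind in the builder stage, so the final image carries only the runtime environment.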

Troubleshooting

  • If you see ModuleNotFoundError, verify all dependencies are listed in requirements.txt.
  • If the container fails to start, check the CMD command syntax and ensure your entry script exists.
  • For permission errors, make sure the copied files are readable by the container's user, or fix permissions during the build with a RUN chmod step (execute permission is only needed if the file is invoked directly rather than via python).
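When chasing these failures, a few Docker commands usually surface the error quickly (ml-model is a placeholder image name):

```shell
# Rebuild and run in the foreground so startup errors print to the terminal
docker build -t ml-model .
docker run --rm -p 8000:8000 ml-model

# Inspect the logs of an already-running or exited container
docker ps -a
docker logs <container-id>

# Open a shell inside the image to check that files and packages are present
docker run --rm -it --entrypoint /bin/sh ml-model
```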

Key takeaways

  • Use a minimal Python base image and install dependencies via requirements.txt for reproducibility.
  • Copy your ML model code and set the working directory properly in the Dockerfile.
  • Define a clear command to run your model server or script inside the container.
  • Adjust the Dockerfile for async servers or multi-stage builds to optimize image size.
  • Troubleshoot by verifying dependencies, file permissions, and command correctness.
Verified 2026-04