How to · Beginner · 3 min read

AI for inventory management

Quick answer
Use AI models like gpt-4o or specialized forecasting algorithms to analyze sales data and predict inventory needs. Integrate LLMs with real-time data pipelines to automate stock replenishment and optimize warehouse management.

Prerequisites

  • Python 3.8+
  • An OpenAI API key
  • pip install "openai>=1.0"
  • Basic knowledge of pandas and REST APIs

Setup

Install the openai Python SDK and set your API key as an environment variable for secure access.

bash
pip install "openai>=1.0"
output
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
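The SDK reads the key from the OPENAI_API_KEY environment variable. One way to set it for the current shell session (the value below is a placeholder, not a real key):

```shell
# Set the key for the current shell session; replace the placeholder value
export OPENAI_API_KEY="your-api-key-here"
```

For a persistent setup, add the line to your shell profile or use a secrets manager rather than hard-coding the key in source files.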

Step by step

This example demonstrates how to use gpt-4o to generate inventory restocking recommendations based on recent sales data.

python
import os
from openai import OpenAI
import pandas as pd

# Initialize OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Sample sales data
sales_data = pd.DataFrame({
    "product_id": [101, 102, 103],
    "units_sold_last_week": [50, 20, 75],
    "current_stock": [30, 15, 60]
})

# Prepare prompt for LLM
prompt = f"Given the following sales data:\n{sales_data.to_dict(orient='records')}\nSuggest restocking quantities for each product to avoid stockouts while minimizing excess inventory."

# Call the chat completion API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}]
)

print("Restocking recommendations:")
print(response.choices[0].message.content)
output
Restocking recommendations:
Product 101: Order 40 units
Product 102: Order 25 units
Product 103: Order 50 units
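The model's free-text numbers are worth sanity-checking against a deterministic baseline. A minimal sketch using the same sample data, with a simple weeks-of-cover heuristic (the two-week target is an illustrative assumption, not something the model uses):

```python
import pandas as pd

# Same sample data as the LLM example above
sales = pd.DataFrame({
    "product_id": [101, 102, 103],
    "units_sold_last_week": [50, 20, 75],
    "current_stock": [30, 15, 60],
})

# Heuristic: hold enough stock to cover two weeks of last week's sales rate
WEEKS_OF_COVER = 2
sales["reorder_qty"] = (
    sales["units_sold_last_week"] * WEEKS_OF_COVER - sales["current_stock"]
).clip(lower=0)

print(sales[["product_id", "reorder_qty"]])
# reorder_qty → 70, 25, 90 for products 101, 102, 103
```

If the LLM's recommendations diverge wildly from a baseline like this, that is a signal to tighten the prompt or add explicit constraints.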

Common variations

You can use asynchronous calls for higher throughput, or switch to a cheaper model such as gpt-4o-mini; claude-3-5-sonnet-20241022 is another option, but it requires Anthropic's separate SDK rather than the OpenAI client. Streaming responses enable real-time UI updates.

python
import asyncio
import os
from openai import AsyncOpenAI  # the async client is required for await / async for

async def async_inventory_recommendation():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    prompt = "Suggest inventory restocking for product IDs 101, 102, 103 based on recent sales."

    # With stream=True the call returns an async iterator of chunks
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices:
            print(chunk.choices[0].delta.content or "", end="", flush=True)

asyncio.run(async_inventory_recommendation())
output
Order 40 units for product 101.
Order 25 units for product 102.
Order 50 units for product 103.

Troubleshooting

  • If you receive authentication errors, verify your OPENAI_API_KEY environment variable is set correctly.
  • For rate limit errors, implement exponential backoff retries.
  • If the model output is vague, refine your prompt with more context or use system messages to guide behavior.
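The exponential-backoff pattern from the second bullet can be sketched with only the standard library. This wrapper is generic (it catches any exception); in production you would catch the SDK's specific rate-limit error instead:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff and jitter.

    Generic sketch: `call` is any function that may raise a transient
    error, e.g. lambda: client.chat.completions.create(...).
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Wait 1x, 2x, 4x, ... the base delay, with random jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Jitter spreads retries out so that many clients hitting the same rate limit do not all retry in lockstep.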

Key takeaways

  • Use LLMs to generate actionable inventory restocking recommendations from sales data.
  • Integrate AI with real-time data for dynamic inventory optimization and cost reduction.
  • Leverage streaming and async API calls for responsive and scalable inventory management solutions.
Verified 2026-04 · gpt-4o, gpt-4o-mini, claude-3-5-sonnet-20241022