How-to · Intermediate · 3 min read

How to build an ecommerce chatbot with AI

Quick answer
Build an ecommerce chatbot by integrating a large language model such as gpt-4o with your product database using retrieval-augmented generation (RAG). Use the OpenAI API for chat completions and a vector search library such as FAISS to enable product recommendations and customer support.

Prerequisites

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install openai>=1.0
  • pip install langchain>=0.2.0
  • pip install langchain-openai langchain-community
  • pip install faiss-cpu

Setup

Install required Python packages and set your OPENAI_API_KEY environment variable.

  • Use pip install openai langchain langchain-openai langchain-community faiss-cpu to install dependencies (LangChain 0.2+ splits the OpenAI and FAISS integrations into separate packages).
  • Export your API key in your shell: export OPENAI_API_KEY='your_api_key'.
bash
pip install openai langchain langchain-openai langchain-community faiss-cpu
output
Collecting openai
Collecting langchain
Collecting langchain-openai
Collecting langchain-community
Collecting faiss-cpu
Successfully installed openai langchain langchain-openai langchain-community faiss-cpu
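Before running any of the examples, it helps to fail fast if the key is missing rather than hitting an authentication error mid-run. A minimal sketch — the check_api_key helper is our own, not part of any SDK:

```python
import os

def check_api_key(env=os.environ):
    """Return the OpenAI API key from the environment, or raise a clear error."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Run: export OPENAI_API_KEY='your_api_key'"
        )
    return key

# Demo with a fake environment dict instead of the real one
print(check_api_key({"OPENAI_API_KEY": "sk-test"}))  # prints sk-test
```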

Step by step

This example shows a simple ecommerce chatbot that answers user queries by searching a product catalog using vector embeddings and then generating responses with gpt-4o.

python
import os
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# LangChain reads OPENAI_API_KEY from the environment
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running"

# Sample product catalog
products = [
    {"id": "p1", "name": "Wireless Mouse", "description": "Ergonomic wireless mouse with USB receiver."},
    {"id": "p2", "name": "Mechanical Keyboard", "description": "RGB backlit mechanical keyboard with blue switches."},
    {"id": "p3", "name": "Noise Cancelling Headphones", "description": "Over-ear headphones with active noise cancellation."}
]

# Prepare documents for the vector store
texts = [p["name"] + ": " + p["description"] for p in products]

# Create embeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Build FAISS vector store
vectorstore = FAISS.from_texts(texts, embeddings)

# Create a retriever that returns the 2 closest products
retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 2})

# Set up the LLM for answer generation
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Build the RetrievalQA chain
qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)

# Example user query
query = "Do you have wireless accessories for laptops?"

# Get the answer (invoke replaces the deprecated run method)
result = qa.invoke({"query": query})
print("User query:", query)
print("Chatbot answer:", result["result"])
output
User query: Do you have wireless accessories for laptops?
Chatbot answer: Yes, we offer a Wireless Mouse which is an ergonomic wireless mouse with a USB receiver, perfect for laptop use.
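Under the hood, the retriever embeds the query and returns the catalog entries whose embedding vectors are closest to it. The toy sketch below illustrates the idea with made-up 3-dimensional vectors and plain cosine similarity; real embeddings such as text-embedding-3-small have far more dimensions, and FAISS exists to make this search fast at scale:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings for the three catalog entries
catalog = {
    "Wireless Mouse": [0.9, 0.1, 0.2],
    "Mechanical Keyboard": [0.2, 0.8, 0.1],
    "Noise Cancelling Headphones": [0.1, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    """Return the k catalog names most similar to the query vector."""
    ranked = sorted(
        catalog,
        key=lambda name: cosine_similarity(query_vec, catalog[name]),
        reverse=True,
    )
    return ranked[:k]

# A query embedding close to the mouse vector ranks the mouse first
print(top_k([0.85, 0.15, 0.1]))  # ['Wireless Mouse', 'Mechanical Keyboard']
```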

Common variations

You can enhance your ecommerce chatbot by:

  • Using async calls with the OpenAI SDK for better performance.
  • Streaming responses for real-time user experience.
  • Switching to other models, such as claude-3-5-haiku-20241022 or gemini-2.0-flash (via their own SDKs or LangChain integrations), for different response styles.
  • Integrating with databases or APIs for live inventory and order tracking.
python
import asyncio
import os
from openai import AsyncOpenAI

async def async_chat():
    # Async streaming requires the async client (AsyncOpenAI, not OpenAI)
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    stream = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What wireless accessories do you have?"}],
        stream=True
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)

asyncio.run(async_chat())
output
Yes, we have wireless accessories including an ergonomic wireless mouse with USB receiver and Bluetooth headphones.
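The live-inventory variation above is usually wired up with function calling: you describe a tool to the model, and when the model asks to call it you run your own lookup and feed the result back. The sketch below shows only the local side — the get_stock helper, the INVENTORY dict, and the tool schema are illustrative, and the actual round trip through client.chat.completions.create(..., tools=tools) is omitted:

```python
import json

# Hypothetical in-memory inventory; a real shop would query a database or API
INVENTORY = {"p1": 12, "p2": 0, "p3": 5}

def get_stock(product_id):
    """Return units in stock for a product id, or None if unknown."""
    return INVENTORY.get(product_id)

# Tool schema in the shape the OpenAI chat completions API expects
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock",
        "description": "Look up how many units of a product are in stock.",
        "parameters": {
            "type": "object",
            "properties": {"product_id": {"type": "string"}},
            "required": ["product_id"],
        },
    },
}]

def handle_tool_call(name, arguments_json):
    """Dispatch a tool call requested by the model to the local function."""
    args = json.loads(arguments_json)
    if name == "get_stock":
        return json.dumps({"stock": get_stock(args["product_id"])})
    raise ValueError(f"Unknown tool: {name}")

# Simulate the model asking for the stock of the mechanical keyboard
print(handle_tool_call("get_stock", '{"product_id": "p2"}'))  # {"stock": 0}
```

The string returned by handle_tool_call would be sent back to the model as a tool message so it can phrase the answer ("the Mechanical Keyboard is currently out of stock") in natural language.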

Troubleshooting

  • If you get authentication errors, verify your OPENAI_API_KEY is set correctly in your environment.
  • If vector search returns irrelevant results, increase k in search_kwargs or improve your product descriptions.
  • For slow responses, consider using smaller models like gpt-4o-mini or enable streaming.
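For intermittent API failures (rate limits, timeouts), a retry with exponential backoff usually resolves things; note the OpenAI client also has built-in retries via its max_retries option. The with_retries wrapper below is our own sketch, not part of any SDK, demonstrated here with a locally simulated flaky call:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo: a function that fails twice, then succeeds on the third call
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # ok
```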

Key takeaways

  • Use vector embeddings and FAISS to enable product search in your chatbot.
  • Integrate OpenAI's gpt-4o for natural language response generation.
  • Leverage LangChain for chaining retrieval and generation easily.
  • Use streaming and async calls to improve user experience and performance.
  • Ensure environment variables and API keys are correctly configured to avoid errors.
Verified 2026-04 · gpt-4o, gpt-4o-mini, claude-3-5-haiku-20241022, gemini-2.0-flash, text-embedding-3-small