How-to · Beginner · 3 min read

How to use FAISS with LangChain

Quick answer
Use FAISS from langchain_community.vectorstores to create a vector store with embeddings generated by OpenAIEmbeddings. Load documents, embed them, and index with FAISS for efficient similarity search within LangChain pipelines.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "openai>=1.0" langchain_openai langchain_community faiss-cpu (quote the version specifier so the shell does not treat >= as a redirect)

Setup

Install the required packages and set your OpenAI API key in the environment variables.

  • Install packages: openai, langchain_openai, langchain_community, and faiss-cpu.
  • Set environment variable OPENAI_API_KEY with your OpenAI API key.
bash
pip install openai langchain_openai langchain_community faiss-cpu
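
The embeddings client picks the key up from the environment, so export it in the same shell before running any of the Python below (the key value is a placeholder; substitute your own):

```shell
# Make the key visible to the Python process (replace with your actual key)
export OPENAI_API_KEY="sk-your-key-here"
```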

Step by step

This example demonstrates loading text documents, embedding them with OpenAI embeddings, indexing with FAISS, and running a similarity search. The sample output below assumes example.txt contains a few sentences about LangChain; your results will reflect whatever that file holds.

python
import os
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import TextLoader

# Load documents from a text file
loader = TextLoader("example.txt")
docs = loader.load()

# Initialize OpenAI embeddings (OPENAI_API_KEY is read from the
# environment by default; passing it explicitly also works)
embeddings = OpenAIEmbeddings(api_key=os.environ["OPENAI_API_KEY"])

# Create FAISS vector store from documents
vectorstore = FAISS.from_documents(docs, embeddings)

# Query the vector store
query = "What is LangChain?"
results = vectorstore.similarity_search(query, k=3)

for i, doc in enumerate(results, 1):
    print(f"Result {i}: {doc.page_content}")
output
Result 1: LangChain is a framework for building applications with LLMs.
Result 2: LangChain supports vector stores like FAISS for semantic search.
Result 3: You can embed documents and query them efficiently using FAISS.

Common variations

  • Use different embedding models by swapping OpenAIEmbeddings with other LangChain-compatible embeddings.
  • Use FAISS.load_local() and FAISS.save_local() to persist and reload the index.
  • Integrate with LangChain chains or agents for advanced workflows.
python
from langchain_community.vectorstores import FAISS

# Save FAISS index locally
vectorstore.save_local("faiss_index")

# Load FAISS index later; recent langchain_community versions require
# explicitly opting in to pickle deserialization of the saved index
loaded_vectorstore = FAISS.load_local(
    "faiss_index", embeddings, allow_dangerous_deserialization=True
)

Troubleshooting

  • If you get import errors for faiss, ensure you installed faiss-cpu or the appropriate FAISS package for your platform.
  • If embeddings fail, verify your OPENAI_API_KEY is set correctly in the environment.
  • For large document sets, monitor memory usage as FAISS loads indexes in memory.

Key Takeaways

  • Use FAISS from langchain_community.vectorstores to build fast vector search indexes.
  • Generate embeddings with OpenAIEmbeddings for seamless integration with LangChain.
  • Persist FAISS indexes locally with save_local and load_local for reuse.
  • Ensure environment variables and dependencies are correctly set to avoid runtime errors.
Verified 2026-04 · gpt-4o, OpenAIEmbeddings