
How to add vector search to Semantic Kernel

Quick answer
Add vector search to Semantic Kernel by pairing an OpenAI embedding service with a vector store such as FAISS. Use langchain_community.vectorstores.FAISS alongside Semantic Kernel to perform semantic similarity search over your documents or data.

Prerequisites

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install semantic-kernel openai langchain-openai langchain-community faiss-cpu

Setup

Install the required Python packages for Semantic Kernel, OpenAI embeddings, and FAISS vector store integration.

bash
pip install semantic-kernel openai langchain-openai langchain-community faiss-cpu

Step by step

This example creates a Semantic Kernel instance, registers an OpenAI embedding service, builds a FAISS vector store over a few sample documents, and performs a vector similarity search.

python
import os
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Initialize Semantic Kernel and register an OpenAI embedding service
kernel = Kernel()
openai_api_key = os.environ["OPENAI_API_KEY"]
kernel.add_service(
    OpenAITextEmbedding(
        service_id="openai_embeddings",
        ai_model_id="text-embedding-3-small",
        api_key=openai_api_key,
    )
)

# LangChain embeddings client; FAISS calls it to vectorize texts
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    api_key=openai_api_key,
)

# Sample documents to index
documents = [
    {"id": "1", "text": "Semantic Kernel enables AI orchestration."},
    {"id": "2", "text": "Vector search allows semantic similarity retrieval."},
    {"id": "3", "text": "OpenAI provides powerful embedding models."},
]

# Build the FAISS index; from_texts embeds every text for you
texts = [doc["text"] for doc in documents]
index = FAISS.from_texts(texts, embeddings)

# Embed the query, then find the nearest stored vectors
query = "How to perform semantic search with AI?"
query_vector = embeddings.embed_query(query)

# Search for the top 2 most similar documents
results = index.similarity_search_by_vector(query_vector, k=2)

print("Top 2 similar documents:")
for res in results:
    print(f"- {res.page_content}")
output
Top 2 similar documents:
- Semantic Kernel enables AI orchestration.
- Vector search allows semantic similarity retrieval.
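Under the hood, similarity_search_by_vector ranks the stored vectors by distance to the query vector. A minimal pure-Python sketch of the same idea using cosine similarity (toy 3-dimensional vectors stand in for real embeddings; the helper names are illustrative, not part of either library):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, texts, k=2):
    """Return the k texts whose vectors are most similar to the query."""
    scored = sorted(
        zip(texts, (cosine_similarity(query_vec, v) for v in doc_vecs)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [text for text, _ in scored[:k]]

docs = ["orchestration", "similarity retrieval", "embedding models"]
vecs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
query_vec = [0.9, 0.1, 0.0]
print(top_k(query_vec, vecs, docs, k=2))  # → ['orchestration', 'embedding models']
```

FAISS does the same ranking over millions of vectors with approximate nearest-neighbor indexes instead of a brute-force loop.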

Common variations

  • Swap in text-embedding-3-large or another embedding model by changing ai_model_id; chat models such as gpt-4o do not produce embeddings.
  • Use async calls where your client supports them; Semantic Kernel's embedding services expose async methods such as generate_embeddings.
  • Replace FAISS with other vector stores like Chroma or Weaviate for cloud-based vector search.
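The model swap in the first bullet is a one-argument change on the LangChain client; a sketch, assuming the same setup as the step-by-step example:

```python
from langchain_openai import OpenAIEmbeddings

# text-embedding-3-large trades higher cost for better retrieval quality
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
```

Rebuild the FAISS index after changing models: vectors from different models live in different spaces and cannot be compared.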

Troubleshooting

  • If you get authentication errors, verify your OPENAI_API_KEY environment variable is set correctly.
  • If FAISS installation fails, ensure you have the correct platform wheel or install faiss-cpu for CPU-only support.
  • For embedding errors, confirm the model name text-embedding-3-small is supported and your API key has access.
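The first check can be scripted before you make any API calls; a small helper (the function name and messages are my own, not part of either library):

```python
import os
from typing import Mapping

def check_openai_key(env: Mapping[str, str]) -> str:
    """Return a short status message describing the OPENAI_API_KEY variable."""
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        return "OPENAI_API_KEY is not set"
    if not key.startswith("sk-"):
        return "OPENAI_API_KEY is set but does not look like an OpenAI key"
    return "OPENAI_API_KEY looks OK"

print(check_openai_key(os.environ))
```

Running this before the main script turns a vague authentication error into an actionable message.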

Key takeaways

  • Use OpenAIEmbeddings with Semantic Kernel to generate vector representations.
  • Integrate FAISS from langchain_community.vectorstores for efficient local vector search.
  • Ensure environment variables and dependencies are correctly configured for smooth embedding and search operations.
Verified 2026-04 · text-embedding-3-small, gpt-4o