How-to · Beginner · 3 min read

How to use Pinecone with LangChain

Quick answer
Use the Pinecone SDK to create and manage a vector index, then connect it with LangChain's PineconeVectorStore wrapper for seamless semantic search. Initialize Pinecone with your API key, create an index, and use LangChain's document loaders and embeddings to store and query vectors.

PREREQUISITES

  • Python 3.8+
  • Pinecone API key
  • OpenAI API key (for embeddings)
  • pip install pinecone langchain langchain-openai langchain-pinecone langchain-community

Setup

Install the required packages and set environment variables for Pinecone and OpenAI API keys.

bash
pip install pinecone langchain langchain-openai langchain-pinecone langchain-community
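
Both API keys must be visible to the Python process. A minimal shell sketch with placeholder values (substitute your real keys):

```shell
# Placeholder values; replace with your actual keys
export PINECONE_API_KEY="your-pinecone-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```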

Step by step

This example shows how to initialize Pinecone, create an index, embed documents with OpenAI embeddings, and query using LangChain.

python
import os
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_community.document_loaders import TextLoader
from pinecone import Pinecone, ServerlessSpec

# Initialize Pinecone client
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

# Create the index if it doesn't exist yet
index_name = "langchain-demo"
if index_name not in pc.list_indexes().names():
    pc.create_index(
        name=index_name,
        dimension=1536,  # 1536 matches OpenAI's default text-embedding-ada-002
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )

# Load documents
loader = TextLoader("example.txt")
docs = loader.load()

# Initialize OpenAI embeddings (reads OPENAI_API_KEY from the environment)
embeddings = OpenAIEmbeddings()

# Create a Pinecone-backed vectorstore and upsert the documents
vectorstore = PineconeVectorStore.from_documents(docs, embeddings, index_name=index_name)

# Query example
query = "What is LangChain?"
results = vectorstore.similarity_search(query, k=3)
for i, doc in enumerate(results, 1):
    print(f"Result {i}: {doc.page_content}")

output
Result 1: LangChain is a framework for building applications with LLMs.
Result 2: LangChain supports vector databases like Pinecone.
Result 3: You can use LangChain to perform semantic search with Pinecone.
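
For files longer than a few paragraphs, you would normally split the text into overlapping chunks before embedding (LangChain ships text splitters for this). The sketch below is a simplified, dependency-free stand-in that illustrates the idea:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so context at
    chunk boundaries is not lost when each chunk is embedded separately."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # advance by the non-overlapping part
    return chunks

# 500 characters with a 150-character step yields chunks starting at 0, 150, 300, 450
chunks = chunk_text("x" * 500)
```

In practice you would chunk each loaded document, then pass the chunks to the vectorstore instead of the raw documents.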

Common variations

  • Use Chroma or FAISS through LangChain's other vectorstore wrappers if you want a local alternative to a managed index.
  • Use a different embedding model, such as text-embedding-3-small or text-embedding-3-large, and set the index dimension to match.
  • Use LangChain's async methods (for example asimilarity_search) to run queries concurrently.
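
The async variation can be sketched without a live index. Here mock_asimilarity_search is a hypothetical stand-in for the vectorstore's real asimilarity_search call, used only to show the concurrency pattern:

```python
import asyncio

async def mock_asimilarity_search(query: str, k: int = 3) -> list[str]:
    # Stand-in for vectorstore.asimilarity_search(query, k=k)
    await asyncio.sleep(0)  # yields control, as a real network call would
    return [f"{query} match {i}" for i in range(1, k + 1)]

async def batch_search(queries: list[str]) -> list[list[str]]:
    # gather() runs all searches concurrently instead of one after another
    return await asyncio.gather(*(mock_asimilarity_search(q) for q in queries))

results = asyncio.run(batch_search(["What is LangChain?", "What is Pinecone?"]))
```

With a real vectorstore the awaited calls overlap their network latency, which is where the speedup comes from.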

Troubleshooting

  • If you see an "Index not found" error, make sure the index has been created (and has finished initializing) before you connect to it.
  • Check environment variables PINECONE_API_KEY and OPENAI_API_KEY are set correctly.
  • Verify the embedding dimension matches the model used.
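
The dimension mismatch in the last bullet can be caught before any upsert. A small sketch using the known output sizes of OpenAI's embedding models (the table is hard-coded here for illustration):

```python
# Default output dimensions of common OpenAI embedding models
MODEL_DIMENSIONS = {
    "text-embedding-ada-002": 1536,
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}

def dimensions_match(model: str, index_dimension: int) -> bool:
    """Return True if the model's vector size matches the Pinecone index dimension."""
    expected = MODEL_DIMENSIONS.get(model)
    if expected is None:
        raise ValueError(f"Unknown embedding model: {model}")
    return expected == index_dimension

# A 1536-dimension index accepts ada-002 vectors but not 3-large vectors
```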

Key Takeaways

  • Initialize the Pinecone client with your API key before creating or connecting to an index.
  • Use LangChain's PineconeVectorStore wrapper to integrate Pinecone for semantic search.
  • Ensure embedding dimensions match between your embeddings model and Pinecone index.
  • Set environment variables securely and verify them to avoid connection errors.
Verified 2026-04 · gpt-4o, gemini-2.5-pro, claude-3-5-sonnet-20241022