How-to · Beginner · 3 min read

How to use Pinecone with LangChain

Quick answer
Use langchain_community.vectorstores.Pinecone to connect Pinecone with LangChain: initialize the Pinecone client with your API key and environment, create (or connect to) an index, then wrap it in a LangChain vector store backed by your chosen embeddings. This enables semantic search and retrieval over documents stored in Pinecone.

PREREQUISITES

  • Python 3.8+
  • Pinecone API key (free tier available)
  • OpenAI API key (for embeddings)
  • pip install langchain_openai langchain_community "pinecone-client<3" openai

Setup

Install the required packages and set environment variables for your Pinecone and OpenAI API keys. This guide uses the v2 Pinecone client (pinecone.init was removed in pinecone-client 3.x), so the install pins the client below version 3.

bash
pip install langchain_openai langchain_community "pinecone-client<3" openai

Step by step

This example shows how to initialize Pinecone, create embeddings with OpenAI, and use LangChain's Pinecone vector store for semantic search.

python
import os
import pinecone
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Pinecone

# Set your environment variables before running
# export PINECONE_API_KEY="your-pinecone-api-key"
# export PINECONE_ENVIRONMENT="your-pinecone-environment"
# export OPENAI_API_KEY="your-openai-api-key"

# Initialize Pinecone client
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"]
)

index_name = "langchain-demo"

# Create Pinecone index if it doesn't exist
if index_name not in pinecone.list_indexes():
    pinecone.create_index(index_name, dimension=1536)  # 1536 matches OpenAIEmbeddings' default model

# Connect to the index
index = pinecone.Index(index_name)

# Initialize OpenAI embeddings
embeddings = OpenAIEmbeddings(api_key=os.environ["OPENAI_API_KEY"])

# Create LangChain Pinecone vector store (pass the Embeddings object;
# the store calls embed_documents / embed_query as needed)
vectorstore = Pinecone(index, embeddings, text_key="text")

# Add documents to the vector store
texts = ["LangChain makes working with LLMs easier.", "Pinecone is a vector database for similarity search."]
ids = ["doc1", "doc2"]
vectorstore.add_texts(texts=texts, ids=ids)

# Query the vector store
query = "What helps with LLM workflows?"
results = vectorstore.similarity_search(query, k=1)

print("Top result:", results[0].page_content)

output
Top result: LangChain makes working with LLMs easier.
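
To build intuition for what similarity_search does, here is a minimal in-memory sketch of vector similarity search using cosine similarity. The tiny 3-dimensional vectors and the query vector are made up for illustration; Pinecone performs the same kind of nearest-neighbor ranking at scale over real embedding vectors.

```python
import math

# Toy corpus: in Pinecone these vectors would come from OpenAIEmbeddings
# (1536 dimensions); here we use tiny hand-made 3-d vectors for illustration.
docs = {
    "doc1": ([1.0, 0.2, 0.0], "LangChain makes working with LLMs easier."),
    "doc2": ([0.0, 0.9, 0.8], "Pinecone is a vector database for similarity search."),
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def similarity_search(query_vec, k=1):
    # Rank every stored vector by similarity to the query, keep the top k.
    ranked = sorted(docs.values(), key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# A query vector pointing in roughly doc1's direction retrieves doc1 first.
query_vec = [0.9, 0.1, 0.0]
print(similarity_search(query_vec, k=1))
```

In production the only change is scale: Pinecone indexes millions of vectors and uses approximate nearest-neighbor search instead of a full sorted scan.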

Common variations

  • Use different embedding models by swapping OpenAIEmbeddings with other LangChain embeddings.
  • Use LangChain's async vector store methods (such as asimilarity_search) for asynchronous workflows.
  • Change k in similarity_search to retrieve more results.
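
Any embeddings class can be swapped in as long as it exposes the two methods LangChain vector stores rely on: embed_documents and embed_query. As a rough sketch of that interface, here is a toy class (a made-up hashing-based stand-in, not a real model) that satisfies it:

```python
import hashlib

class ToyEmbeddings:
    """A stand-in embeddings class (not a real model) showing the
    two-method interface LangChain vector stores expect."""

    def __init__(self, dim: int = 8):
        self.dim = dim

    def _embed(self, text: str) -> list[float]:
        # Deterministic pseudo-embedding derived from a hash; real classes
        # (OpenAIEmbeddings, HuggingFaceEmbeddings, ...) call a model here.
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        return [b / 255 for b in digest[: self.dim]]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> list[float]:
        return self._embed(text)

emb = ToyEmbeddings()
vectors = emb.embed_documents(["hello", "world"])
print(len(vectors), len(vectors[0]))  # prints "2 8"
```

When you swap models, remember that the Pinecone index dimension must match the new model's output size, so you may need to recreate the index.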

Troubleshooting

  • If you get an "index does not exist" error, make sure you created the Pinecone index and that its dimension matches your embedding model.
  • Check your API keys and environment variables if authentication fails.
  • Verify your Pinecone environment matches the region of your index.

Key Takeaways

  • Initialize Pinecone with your API key and environment before using it with LangChain.
  • Use OpenAIEmbeddings or other LangChain embeddings to generate vectors for Pinecone.
  • Create the Pinecone index with the correct embedding dimension (e.g., 1536 for OpenAI).
  • Add texts with unique IDs to the Pinecone vector store for semantic search.
  • Use similarity_search on the LangChain Pinecone vector store to retrieve relevant documents.
Verified 2026-04 · gpt-4o, OpenAIEmbeddings