What is langchain-core vs langchain-community vs langchain
langchain-core contains the essential, stable building blocks of LangChain: base abstractions such as prompt templates, messages, output parsers, and the Runnable interface. langchain-community hosts third-party and community-contributed integrations, many of them experimental. The original langchain package bundles higher-level chains and agents on top of both; its monolithic, all-in-one usage is being phased out in favor of the modular packages.

VERDICT

Use langchain-core for stable, production-ready LangChain features; use langchain-community to access experimental or community-built integrations.

| Tool | Key strength | Stability | API access | Best for |
|---|---|---|---|---|
| langchain-core | Stable core LLM building blocks | High (production-ready) | Yes | Production apps needing core features |
| langchain-community | Experimental and community integrations | Medium (experimental) | Yes | Trying new or niche integrations |
| langchain | Combined core + community (legacy bundling) | Low (legacy) | Yes | Legacy projects transitioning to modular packages |
Key differences
langchain-core provides the foundational, stable components for building LLM applications: prompt templates, messages, output parsers, and the Runnable interface used to compose chains. langchain-community offers community-driven, third-party integrations such as new vector stores, document loaders, and connectors that are not yet part of the stable core. The original langchain package bundles higher-level chains and agents on top of both; its monolithic usage is discouraged in favor of the modular packages, which enable faster iteration.
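The split follows a familiar design: stable interfaces live in one package, pluggable implementations in another. A minimal plain-Python sketch of that pattern (illustrative only, not actual LangChain code):

```python
from abc import ABC, abstractmethod

# "core" role: a small, stable interface that applications depend on
class DocumentLoader(ABC):
    @abstractmethod
    def load(self) -> list[str]:
        """Return the loaded documents as strings."""

# "community" role: a concrete integration implementing the core interface
class InMemoryLoader(DocumentLoader):
    def __init__(self, pages: list[str]):
        self.pages = pages

    def load(self) -> list[str]:
        return list(self.pages)

docs = InMemoryLoader(["page one", "page two"]).load()
print(len(docs))  # 2
```

Because application code targets the interface, an integration can be swapped or upgraded without touching the stable core, which is the dependency direction the package split enforces.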
Side-by-side example
Using langchain-core to create a simple prompt template and chain. Note that LLMChain does not live in langchain-core (and is deprecated); the idiomatic approach is to compose the prompt and model with the LangChain Expression Language (LCEL) pipe operator:

```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Hello, {name}!")
llm = ChatOpenAI(model="gpt-4o", api_key=os.environ["OPENAI_API_KEY"])

# LCEL: pipe the prompt into the model to form a runnable chain
chain = prompt | llm

result = chain.invoke({"name": "Alice"})
print(result.content)  # the model's reply to "Hello, Alice!"
```
Community integrations example
Using langchain-community to load documents with a community-maintained loader (requires the pypdf package to be installed):

```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")
docs = loader.load()  # one Document per page
print(f"Loaded {len(docs)} pages")
```
When to use each
Use langchain-core when building stable, production-grade LLM applications that need only core LangChain abstractions. Use langchain-community to experiment with new integrations or to access community-contributed modules that are not yet part of the stable core. Avoid relying on the legacy, monolithic langchain package for new projects; prefer the modular packages.
| Package | Use case | Stability | Example |
|---|---|---|---|
| langchain-core | Stable core features | High | Prompt templates, chains, memory |
| langchain-community | Experimental integrations | Medium | New vectorstores, loaders |
| langchain | Legacy combined package | Low | Older projects, transitioning |
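In practice the choice shows up at install time: pull in only the packages the project needs (package names as published on PyPI; langchain-openai is the separate provider package used in the example above):

```shell
pip install -U langchain-core         # stable core abstractions
pip install -U langchain-community    # community integrations (e.g. document loaders)
pip install -U langchain-openai       # provider package for ChatOpenAI
```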
Pricing and access
All three packages are open-source and free to use. API access depends on the underlying LLM providers like OpenAI or Anthropic.
| Option | Free | Paid | API access |
|---|---|---|---|
| langchain-core | Yes | No | Depends on LLM provider |
| langchain-community | Yes | No | Depends on LLM provider |
| langchain | Yes | No | Depends on LLM provider |
Key Takeaways
- Use langchain-core for stable, production-ready LangChain features.
- Use langchain-community to access experimental and community-built integrations.
- Avoid the legacy, monolithic langchain package for new projects; prefer the modular packages.
- All three packages are open-source and free; API usage costs depend on your LLM provider.
- Modular separation enables faster updates and clearer dependency management.