
LangChain pydantic v1 vs v2 compatibility error fix

Quick answer
LangChain v0.2+ upgraded to pydantic v2, causing compatibility errors if your code uses pydantic v1 syntax or imports. Fix this by updating your imports to use langchain_core and the new provider packages, and by adjusting model definitions to pydantic v2 style, such as replacing the inner class Config with a model_config attribute.
⚡ QUICK FIX
Update your LangChain imports to langchain_core and refactor your pydantic models to use model_config instead of Config for compatibility with pydantic v2.

Why this happens

LangChain v0.2+ migrated from pydantic v1 to pydantic v2, which introduced breaking changes in model configuration and validation. Legacy code using from pydantic import BaseModel with class Config inside models or relying on old LangChain import paths (from langchain.llms import OpenAI) will raise errors like TypeError or AttributeError.

Typical error output includes:

TypeError: __init__() got an unexpected keyword argument 'arbitrary_types_allowed'

or

AttributeError: module 'langchain.llms' has no attribute 'OpenAI'

This happens because pydantic v2 replaced Config with model_config and LangChain reorganized its package structure.

python
from langchain.llms import OpenAI  # removed in LangChain v0.2+
from pydantic import BaseModel

class MyModel(BaseModel):
    name: str

    class Config:  # pydantic v1 style, deprecated in v2
        arbitrary_types_allowed = True

model = MyModel(name='test')
output
AttributeError: module 'langchain.llms' has no attribute 'OpenAI'
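Before refactoring, it helps to confirm which major versions are actually installed, since the error only occurs when pydantic v1 code meets a pydantic v2 environment (or vice versa). A stdlib-only check, as a sketch:

```python
from importlib.metadata import version, PackageNotFoundError

def major_version(package: str):
    """Return the installed major version of a package, or None if absent."""
    try:
        return int(version(package).split(".")[0])
    except PackageNotFoundError:
        return None

# pydantic 2 must be paired with LangChain v0.2+;
# a pydantic major of 1 here explains the Config/model_config errors.
for pkg in ("pydantic", "langchain"):
    print(pkg, "->", major_version(pkg))
```

If this prints pydantic -> 1 alongside a LangChain v0.2+ install, the environment itself is the problem, not your code.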

The fix

Update your imports to the new LangChain v0.2+ structure and refactor pydantic models to use model_config. Replace:

  • from langchain.llms import OpenAI with from langchain_openai import OpenAI (or ChatOpenAI for chat models)
  • Inside pydantic models, class Config with a model_config = { ... } attribute

This aligns with pydantic v2 expectations and LangChain's new modular design.

python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class MyModel(BaseModel):
    name: str

    model_config = {
        "arbitrary_types_allowed": True
    }

model = MyModel(name='test')

llm = ChatOpenAI(model="gpt-4o", temperature=0)
print(llm.model_name)
output
gpt-4o
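A plain dict works for model_config, but pydantic v2 also accepts a typed ConfigDict, which gives you editor autocompletion and lets type checkers catch misspelled settings. An equivalent sketch of the same model:

```python
from pydantic import BaseModel, ConfigDict

class MyModel(BaseModel):
    # ConfigDict carries the same settings as the dict literal,
    # but is a typed structure that mypy and IDEs understand.
    model_config = ConfigDict(arbitrary_types_allowed=True)

    name: str

model = MyModel(name="test")
print(model.model_config["arbitrary_types_allowed"])  # True
```

Either form is valid in pydantic v2; ConfigDict is simply the more tooling-friendly spelling.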

Preventing it in production

Use dependency management tools like pip-tools or poetry to lock LangChain and pydantic versions together. Validate your environment with pip freeze to ensure pydantic>=2.0 and LangChain v0.2+ are installed. Add automated tests that instantiate your models and LangChain clients to catch compatibility issues early. Consider adding retry logic around API calls to handle transient errors gracefully.
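One way to lock compatible versions together is to constrain both packages to matching major versions at install time (the exact bounds below are an assumption; adjust them to the release line you target):

```shell
# Pin LangChain and pydantic to compatible major versions
pip install "langchain>=0.2,<0.3" langchain-openai "pydantic>=2,<3"

# Verify what the resolver actually installed
pip freeze | grep -iE "^(langchain|pydantic)"
```

With pip-tools or poetry, place the same constraints in requirements.in or pyproject.toml so the lock file enforces them on every environment.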

Key Takeaways

  • LangChain v0.2+ requires pydantic v2-compatible model definitions using model_config.
  • Update imports from deprecated langchain.llms to langchain_openai and other new packages.
  • Lock your dependencies and test model instantiation to catch compatibility issues early.
Verified 2026-04 · gpt-4o