How to · Intermediate · 3 min read

How to use JsonOutputParser in LangChain

Quick answer
Use JsonOutputParser in LangChain to convert AI-generated text responses into structured JSON automatically. Instantiate JsonOutputParser, instruct the model in your prompt to return JSON (optionally embedding a schema or the parser's format instructions), then run the model's output through the parser to get a Python dictionary.

PREREQUISITES

  • Python 3.8+
  • pip install langchain>=0.2 langchain-openai
  • OpenAI API key (free tier works)
  • pip install openai>=1.0

Setup

Install the required packages and set your OpenAI API key as an environment variable.

  • Install LangChain and OpenAI SDK:
bash
pip install langchain langchain-openai openai

Step by step

This example shows how to use JsonOutputParser to parse a model's JSON response into a Python dictionary. We define a JSON schema for the expected output, embed it in a prompt template that instructs the model to respond with JSON only, then parse the model's reply.

python
import json

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Set up the chat model (ChatOpenAI reads OPENAI_API_KEY from the environment)
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Define the JSON schema for the expected output
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string", "format": "email"},
    },
    "required": ["name", "age", "email"],
}

# Instantiate the JsonOutputParser; it converts the model's JSON text to a dict
parser = JsonOutputParser()

# Create a prompt template that embeds the schema and demands JSON-only output
prompt = ChatPromptTemplate.from_messages([
    ("human",
     "Provide info for a fictional user as JSON matching this schema, "
     "with no text outside the JSON:\n{schema}"),
])

# Call the model, keeping the raw text so we can inspect it before parsing
raw_output = (prompt | llm).invoke({"schema": json.dumps(json_schema)}).content

# Parse the JSON output into a Python dictionary
parsed_output = parser.parse(raw_output)

print("Raw output from model:", raw_output)
print("Parsed JSON output:", parsed_output)
output
Raw output from model: {"name": "Alice", "age": 30, "email": "alice@example.com"}
Parsed JSON output: {'name': 'Alice', 'age': 30, 'email': 'alice@example.com'}

Common variations

You can use JsonOutputParser asynchronously via LangChain's ainvoke methods, or pass it a Pydantic model for schema-aware format instructions. Different models such as gpt-4o-mini or Anthropic's claude-3-5-sonnet-20241022 work the same way. Adjust the JSON schema to match your expected output structure.

python
import asyncio

from langchain_core.output_parsers import JsonOutputParser
from langchain_openai import ChatOpenAI

async def async_example():
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    parser = JsonOutputParser()

    # ainvoke is the async counterpart of invoke
    response = await llm.ainvoke(
        "Respond with JSON only: an object with a single string field named result."
    )
    parsed = parser.parse(response.content)
    print("Async parsed output:", parsed)

asyncio.run(async_example())
output
Async parsed output: {'result': 'Success'}

Troubleshooting

  • If parsing fails with a JSONDecodeError, ensure the model output is strictly valid JSON by refining your prompt to emphasize JSON format.
  • Use temperature=0 to reduce randomness and improve output consistency.
  • If the output contains extra text around the JSON (JsonOutputParser already strips markdown code fences), consider a regex pre-pass or LangChain's OutputFixingParser to clean it before parsing.
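As a plain-Python fallback for that last point, a regex pre-pass can pull the first JSON object out of noisy output before parsing. A sketch, where extract_json is a hypothetical helper and not part of LangChain:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the outermost {...} block out of noisy model output and parse it.

    Naive sketch: a greedy brace match covers common cases but not every
    edge case; prefer OutputFixingParser when outputs are badly malformed.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model output")
    return json.loads(match.group(0))

noisy = 'Sure! Here is the data:\n{"name": "Alice", "age": 30}\nHope that helps.'
print(extract_json(noisy))  # → {'name': 'Alice', 'age': 30}
```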

Key Takeaways

  • Use JsonOutputParser with a JSON schema embedded in your prompt to reliably parse AI outputs into structured Python dictionaries.
  • Always instruct the model clearly in your prompt to output valid JSON and set temperature=0 for deterministic results.
  • Combine JsonOutputParser with LangChain prompt templates for seamless integration in chat workflows.
Verified 2026-04 · gpt-4o, gpt-4o-mini, claude-3-5-sonnet-20241022