How-to · Beginner · 3 min read

How to create ChatPromptTemplate in LangChain

Quick answer
Use ChatPromptTemplate from langchain_core.prompts to define structured chat prompts. Build one with ChatPromptTemplate.from_messages(), passing a list of message prompt templates such as HumanMessagePromptTemplate, then call format_messages() with your input data to produce messages ready for a model call.

Prerequisites

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install langchain-openai langchain-core

Setup

Install the required LangChain packages and set your OpenAI API key as an environment variable.

bash
pip install langchain-openai langchain-core

# Set your API key in your shell environment
export OPENAI_API_KEY="your-api-key"

Step by step

Create a ChatPromptTemplate by defining message templates with input variables, then format it with data to produce chat messages ready for model input.

python
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

# Define a human message prompt template with a variable
human_template = HumanMessagePromptTemplate.from_template("Tell me a joke about {topic}.")

# Create the chat prompt template with the human message
chat_prompt = ChatPromptTemplate.from_messages([human_template])

# Format the prompt with a specific topic
messages = chat_prompt.format_messages(topic="computers")

# Print the formatted messages
for message in messages:
    print(f"Role: {message.type}, Content: {message.content}")
output
Role: human, Content: Tell me a joke about computers.

Common variations

  • Use other message prompt templates like SystemMessagePromptTemplate or AIMessagePromptTemplate to build multi-turn conversations.
  • Integrate with ChatOpenAI from langchain_openai to send the formatted messages to an OpenAI chat model.
  • Use async methods if your environment supports asynchronous calls.
python
from langchain_core.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from langchain_openai import ChatOpenAI
import os

# Define system and human messages
system_msg = SystemMessagePromptTemplate.from_template("You are a helpful assistant.")
human_msg = HumanMessagePromptTemplate.from_template("Explain {topic} in simple terms.")

# Create chat prompt template
chat_prompt = ChatPromptTemplate.from_messages([system_msg, human_msg])

# Format messages
messages = chat_prompt.format_messages(topic="quantum computing")

# Initialize OpenAI client
client = ChatOpenAI(model="gpt-4o", openai_api_key=os.environ["OPENAI_API_KEY"])

# Send messages to the model
response = client.invoke(messages)
print(response.content)
output
Quantum computing is a type of computation that uses quantum bits, or qubits, which can be in multiple states at once, enabling powerful processing capabilities.

Troubleshooting

  • If you see ImportError, ensure you installed langchain_core and langchain_openai packages.
  • If formatting fails, verify your input variables match the template placeholders exactly.
  • For API errors, confirm your OPENAI_API_KEY is set correctly in the environment.

Key Takeaways

  • Use ChatPromptTemplate.from_messages() with message prompt templates to build structured chat prompts.
  • Format the prompt with input variables using format_messages() to generate messages for model input.
  • Combine system, human, and AI message templates for multi-turn conversations.
  • Integrate with ChatOpenAI to send formatted messages to OpenAI chat models.
  • Always match template variables with input keys and set API keys via environment variables.
Verified 2026-04 · gpt-4o