🧩 Full Code:
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

contextualize_question_prompt = ChatPromptTemplate.from_messages([
    ("system", (
        "Given a conversation history and the most recent user query, rewrite the query as a standalone question "
        "that makes sense without relying on the previous context. Do not provide an answer—only reformulate the "
        "question if necessary; otherwise, return it unchanged."
    )),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])
```
This prompt template is designed to rewrite a user’s query so that it’s self-contained — meaning the question can stand on its own, without needing the earlier chat context.
It’s often used in retrieval-augmented generation (RAG) or chat-memory pipelines in LangChain to make the user’s query unambiguous before it is used to retrieve documents or generate a response.
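For example, here is a minimal sketch of how this template is typically wired into a RAG pipeline. It assumes an `llm` (any chat model) and a `vectorstore` are already configured; `create_history_aware_retriever` is LangChain's built-in helper for this pattern:

```python
# A minimal sketch, assuming `llm` and `vectorstore` are already set up.
from langchain.chains import create_history_aware_retriever

retriever = vectorstore.as_retriever()

# The returned retriever first rewrites the incoming question with
# `contextualize_question_prompt`, then searches the vector store
# using the standalone question.
history_aware_retriever = create_history_aware_retriever(
    llm, retriever, contextualize_question_prompt
)
```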
ChatPromptTemplate
From: `langchain_core.prompts.ChatPromptTemplate`
This is a LangChain class used to define structured prompts for chat-based models (like GPT).
It helps combine multiple message types — system, user (human), AI, etc. — into one prompt sequence that an LLM can understand.
Why use it:
It separates the roles and content of each message (like system instructions, previous messages, user input) — making the prompt modular and reusable.
.from_messages([...])
This creates the ChatPromptTemplate from a list of message tuples or message-like objects.
Each element in the list defines a role and the content for that message.
The roles can be (a combined example follows this list):
"system" → sets the rules or behavior of the AI.
"human" → represents user input.
"ai" → represents model responses (if used).
MessagesPlaceholder(...) → placeholder for dynamically inserted messages.
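Here is a toy template (separate from the code above, with an arbitrary `question` variable name) that uses all four kinds of entries together:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

demo_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),   # rules / behavior
    MessagesPlaceholder("chat_history"),          # prior turns inserted here
    ("human", "{question}"),                      # latest user input
    ("ai", "Let me check that for you."),         # a fixed AI turn (rarely needed)
])
```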
("system", "...")This defines the system message.
🗣️ The model reads it like a “rulebook” before responding.
In your code:
("system", (
"Given a conversation history and the most recent user query, rewrite the query as a standalone question "
"that makes sense without relying on the previous context. Do not provide an answer—only reformulate the "
"question if necessary; otherwise, return it unchanged."
))
This tells the model:
It will receive a chat history and the latest user message.
Its job is not to answer, but to reformulate the user’s question so it can stand alone.
If the question already makes sense, return it unchanged.
✅ Example:
Chat history:
Human: Who is Elon Musk?
AI: He is the CEO of Tesla.
Human: What’s his age?
Reformulated output:
"What is the age of Elon Musk?"
MessagesPlaceholder("chat_history")This acts as a dynamic placeholder for past messages (the conversation history).
When you later run this template (e.g. using .format() or .invoke()), you can insert actual chat messages here.
Example of runtime usage (the message classes come from `langchain_core.messages`):

```python
from langchain_core.messages import AIMessage, HumanMessage

messages = contextualize_question_prompt.format_messages(
    chat_history=[
        HumanMessage(content="Who is Elon Musk?"),
        AIMessage(content="He is the CEO of Tesla."),
    ],
    input="What's his age?",
)
```
→ The placeholder named "chat_history" is replaced by the two history messages.
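Printing the formatted messages makes the substitution visible (a quick sanity check that continues the snippet above):

```python
for m in messages:
    print(type(m).__name__, "->", m.content)
# SystemMessage -> Given a conversation history and the most recent user query, ...
# HumanMessage  -> Who is Elon Musk?
# AIMessage     -> He is the CEO of Tesla.
# HumanMessage  -> What's his age?
```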
("human", "{input}")This defines the latest user message (the new query to contextualize).
{input} is a variable placeholder — it will be filled with the latest user input text during runtime.
For example:
input = "What’s his age?"
The prompt becomes:
Human: What’s his age?
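A toy template (separate from the main example) isolates just this slot and shows the substitution:

```python
from langchain_core.prompts import ChatPromptTemplate

single = ChatPromptTemplate.from_messages([("human", "{input}")])
print(single.format_messages(input="What's his age?"))
# roughly: [HumanMessage(content="What's his age?")]
```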
At runtime, LangChain merges everything into a structured message sequence like this:
System: Given a conversation history and the most recent user query, rewrite the query as a standalone question...
Human: Who is Elon Musk?
AI: He is the CEO of Tesla.
Human: What’s his age?
Then the LLM receives this complete prompt and outputs something like:
"What is the age of Elon Musk?"
| Component | Type | Purpose |
|---|---|---|
| `ChatPromptTemplate` | LangChain class | Builds structured chat prompts |
| `.from_messages([...])` | Method | Creates the prompt from a message list |
| `("system", "...")` | System message | Instructs model behavior |
| `MessagesPlaceholder("chat_history")` | Dynamic placeholder | Inserts conversation history at runtime |
| `("human", "{input}")` | User message | Represents the latest user input variable |
| `contextualize_question_prompt` | Variable name | Stores the entire prompt template for reuse |