ChatPromptTemplate for a context-based (Document Chat) Question Answering (QA) system. 28 Oct, 2025

🔍 Full Code:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

context_qa_prompt = ChatPromptTemplate.from_messages([
    ("system", (
        "You are an assistant designed to answer questions using the provided context. Rely only on the retrieved "
        "information to form your response. If the answer is not found in the context, respond with 'I don't know.' "
        "Keep your answer concise and no longer than three sentences.\n\n{context}"
    )),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])

🧠 Explanation by Component

🧩 1. ChatPromptTemplate

This is a LangChain class used to create structured, multi-turn chat prompts.

It allows you to define prompts with:

  • Roles: "system", "human", "ai", etc.

  • Dynamic placeholders (like {input} and {context})

  • Message history slots (via MessagesPlaceholder)

When used with an LLM, it formats messages properly for a chat-based model like GPT.
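
For instance, here is a minimal, self-contained sketch (greeting_prompt and name are illustrative names, not part of the QA code above):

from langchain_core.prompts import ChatPromptTemplate

# A tiny two-role template with one dynamic placeholder.
greeting_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly assistant."),
    ("human", "Say hello to {name}."),
])

# format_messages() fills the placeholders and returns a list of
# role-tagged message objects ready to send to a chat model.
messages = greeting_prompt.format_messages(name="Alice")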


🧩 2. .from_messages([...])

This method takes a list of messages. Each entry is either a tuple of the form:

(role, message_content)

or a prompt object such as MessagesPlaceholder (used here for the chat history).

It builds a chat prompt with specific roles and templates.
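
The tuple form is shorthand. As a sketch of what it expands to, the same structure can be written with explicit message-prompt classes (the template strings here are illustrative):

from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

# Equivalent to from_messages([("system", ...), ("human", ...)]).
explicit_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("Answer using only this context:\n{context}"),
    HumanMessagePromptTemplate.from_template("{input}"),
])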


🧱 3. System message

("system", (
    "You are an assistant designed to answer questions using the provided context. ..."
))

Role:

The system message defines the rules and personality of the assistant.

Purpose:

It instructs the model to:

  • Use only the given {context} to answer.

  • Say "I don't know" if the context doesn’t contain the answer.

  • Keep responses short (≤ 3 sentences).

{context}:

This is a placeholder variable that will later be filled with retrieved text (for example, from a document or database).

Example:

context = "The capital of France is Paris."

Then the system message becomes:

You are an assistant designed to answer questions...
...
The capital of France is Paris.
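
In a retrieval pipeline, {context} is usually built from retrieved documents. A minimal sketch (docs here is a hypothetical retrieval result, not code from above):

from langchain_core.documents import Document

# Hypothetical retrieval output; in practice this comes from a retriever.
docs = [
    Document(page_content="The capital of France is Paris."),
    Document(page_content="Paris is known as the City of Light."),
]

# Join the retrieved texts into one string to fill the {context} placeholder.
context = "\n\n".join(doc.page_content for doc in docs)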

🧩 4. MessagesPlaceholder("chat_history")

This tells the prompt that there may be previous messages (conversation history) between the user and the assistant.

LangChain uses it to dynamically insert past conversation when building the prompt.

So if the user asked:

Who founded Microsoft?

and later asked:

When was it founded?

The model can use chat_history to recall that “it” refers to Microsoft.
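
In code, that history is passed as a list of message objects under the chat_history key, for example:

from langchain_core.messages import AIMessage, HumanMessage

# Prior turns as real message objects; MessagesPlaceholder splices this
# list into the prompt between the system message and the new question.
chat_history = [
    HumanMessage(content="Who founded Microsoft?"),
    AIMessage(content="Microsoft was founded by Bill Gates and Paul Allen."),
]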


🧩 5. Human message

("human", "{input}")

This defines the latest user query (the actual question the user asks).

  • {input} is a variable placeholder.

  • It will be replaced by the user’s latest question at runtime.

Example:

input = "What is the capital of France?"
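
At runtime the template turns that variable into the final human message. A quick sketch (the context string is a stand-in for retrieved text):

messages = context_qa_prompt.format_messages(
    context="The capital of France is Paris.",
    chat_history=[],  # no prior turns in this sketch
    input="What is the capital of France?",
)
# The last message in the list is the HumanMessage built from {input}.
print(messages[-1].content)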

🧩 Putting it all together

When executed, LangChain builds a structured chat like this:

System:
You are an assistant designed to answer questions using the provided context...
(context text here)

Chat History:
Human: Who founded Microsoft?
AI: Microsoft was founded by Bill Gates and Paul Allen.

Human:
When was it founded?
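
To actually run this, the prompt is typically piped into a chat model. A minimal sketch, assuming the langchain-openai package is installed; the model name is an illustrative choice:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
qa_chain = context_qa_prompt | llm     # LCEL: prompt output feeds the model

answer = qa_chain.invoke({
    "context": "Microsoft was founded in 1975.",
    "chat_history": chat_history,  # the list built in section 4
    "input": "When was it founded?",
})
print(answer.content)  # e.g. "Microsoft was founded in 1975."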

🧩 Final Behavior Summary

Component           | Role                              | Example
System              | Defines rules and inserts context | "You are an assistant… {context}"
MessagesPlaceholder | Carries the chat history          | Prior user & AI messages
Human               | Inserts the current question      | "{input}"

🧾 Example Runtime

If you call:

context_qa_prompt.format(
    context="The capital of France is Paris.",
    chat_history=[],  # required by the MessagesPlaceholder; empty here
    input="What is the capital of France?"
)

With no prior history, the final formatted prompt will look like:

System: You are an assistant designed to answer questions using the provided context...
The capital of France is Paris.

Human: What is the capital of France?

✅ The LLM will then answer:

The capital of France is Paris.