PromptTemplate
PromptTemplate is LangChain's template system for building dynamic prompts; similar template systems exist in other LLM-based tools. It lets you define a prompt structure with placeholders, which are filled in with actual values at runtime.
```python
from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["name", "age"],
    template="My name is {name} and I am {age} years old."
)

output = template.format(name="Abhi", age=30)
print(output)
```
🖨️ Output:
```
My name is Abhi and I am 30 years old.
```
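Conceptually, a PromptTemplate is little more than named-placeholder substitution plus a check that every declared variable was supplied. The toy class below is not LangChain's implementation, just a plain-Python sketch of the idea:

```python
# Toy illustration of the PromptTemplate concept -- NOT LangChain code.
class ToyPromptTemplate:
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # Refuse to render if a declared variable is missing.
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"Missing variables: {missing}")
        return self.template.format(**kwargs)

toy = ToyPromptTemplate(
    input_variables=["name", "age"],
    template="My name is {name} and I am {age} years old.",
)
print(toy.format(name="Abhi", age=30))
# → My name is Abhi and I am 30 years old.
```

This is also why `input_variables` matters: the real class uses it to validate inputs up front instead of failing mid-chain.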
| Parameter | Description |
|---|---|
| `template` | The prompt string with `{variables}` as placeholders |
| `input_variables` | List of variable names used in the prompt |
| `partial_variables` | (Optional) Predefine some values that always stay the same |
Why use PromptTemplate?
✅ Reusable prompts with dynamic content
✅ Separates prompt logic from data
✅ Helps in prompt engineering with LLM chains
✅ Works with output parsers, agents, and tools
```python
from langchain.prompts import PromptTemplate
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel

# Define the target data model
class Person(BaseModel):
    name: str
    age: int

# Parser that generates format instructions for the LLM
parser = PydanticOutputParser(pydantic_object=Person)

# Prompt with format instructions baked in via partial_variables
prompt = PromptTemplate(
    input_variables=["text"],
    template="""
Extract the person's details from the text below.
Text: {text}
{format_instructions}
""",
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

formatted_prompt = prompt.format(text="My name is Abhi and I am 30.")
print(formatted_prompt)
```
You can also bind some variables ahead of time with `.partial()`, which returns a new template that only expects the remaining variables. Using the name/age template from the first example:

```python
partial_prompt = template.partial(name="Abhi")
output = partial_prompt.format(age=30)
```
If you want more control (e.g., loops, conditions), LangChain supports Jinja2 templates too.
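LangChain enables this via the `template_format="jinja2"` option on PromptTemplate. The snippet below previews what a looping prompt renders to using the `jinja2` library directly (the product list is made up for illustration):

```python
from jinja2 import Template  # the same engine LangChain delegates to for template_format="jinja2"

# A prompt with a loop -- something plain {placeholder} templates can't express.
jinja_prompt = Template(
    "Write one-line ads for the following products:\n"
    "{% for p in products %}- {{ p }}\n{% endfor %}"
)

print(jinja_prompt.render(products=["eco-friendly toothbrush", "bamboo comb"]))
# → Write one-line ads for the following products:
# → - eco-friendly toothbrush
# → - bamboo comb
```

The same template string can be passed to `PromptTemplate` with `template_format="jinja2"` to get loops and conditionals inside a LangChain prompt.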
```python
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

llm = ChatOpenAI()

prompt = PromptTemplate(
    input_variables=["product"],
    template="Write an advertisement for the following product: {product}"
)

chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run("eco-friendly toothbrush")
print(response)
```
| Feature | Benefit |
|---|---|
| Dynamic Prompting | Fill in variables at runtime |
| Input Validation | Ensures required variables are supplied |
| Reusability | One template, many use cases |
| Integration | Works with LangChain chains, agents, tools, output parsers |