Install core and provider-specific packages:
pip install langchain # Core package
pip install langchain-openai # OpenAI chat models
pip install langchain-google-vertexai # Google VertexAI chat
pip install langchain-anthropic # Anthropic chat
# ...other providers (aws, cohere, groq, huggingface, etc.)
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(streaming=True)  # OpenAI chat model with token streaming
from langchain.chat_models import init_chat_model
model = init_chat_model("gemini-2.0-flash", model_provider="google_genai")  # Google Gemini
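Both initializers return a chat model that supports streaming. A stdlib-only sketch of consuming a token stream (the `fake_stream` generator is a stand-in for a real `llm.stream(...)` call, not the LangChain API):

```python
def fake_stream(text):
    # Stand-in for llm.stream(...): yields tokens one at a time
    for token in text.split():
        yield token + " "

chunks = []
for chunk in fake_stream("Hello from a streamed reply"):
    chunks.append(chunk)  # a real app might print(chunk, end="") instead

print("".join(chunks).strip())
# → Hello from a streamed reply
```

With a real model, the loop body is the same; only the source of the chunks changes.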
Use the LangChain Expression Language (LCEL) for clean chain building:
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Translate {input} to {output_language}:")
chain = prompt | llm
response = chain.invoke({"input": "Hello", "output_language": "Spanish"})
(Note: older chain constructors like LLMChain are deprecated in favor of | chaining)
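The `|` operator works because LCEL components overload `__or__` to build a sequence. A minimal stdlib-only sketch of the same composition pattern (the `Step` class is an illustration, not the real Runnable implementation):

```python
class Step:
    """Minimal stand-in for a Runnable: wraps a function and supports | chaining."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, feed its output into the next step
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A toy "prompt" step and a toy "model" step
prompt = Step(lambda d: f"Translate {d['input']} to {d['output_language']}:")
model = Step(str.upper)  # placeholder for an LLM call

chain = prompt | model
print(chain.invoke({"input": "Hello", "output_language": "Spanish"}))
# → TRANSLATE HELLO TO SPANISH:
```

The real LCEL classes add batching, streaming, and async on top, but the composition mechanics are the same.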
Built-in tools for models to call:
Search tools: BingSearch, DuckDuckGoSearch, GoogleSearch, SerpAPI, etc.
Code interpreters: Azure Container Apps, Bearly Code Interpreter, etc.
Productivity & automation: e.g. Twilio, IFTTT WebHooks, Zapier, etc.
Instantiate a tool:
from langchain_community.tools import DuckDuckGoSearchRun  # requires: pip install langchain-community duckduckgo-search
tool = DuckDuckGoSearchRun()
result = tool.run("Latest news on LangChain")
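Under the hood, a tool is essentially a named callable with a description the model can read when deciding what to invoke. A stdlib-only sketch of that pattern (the `SimpleTool` class and `fake_search` function are hypothetical, not LangChain's implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleTool:
    """Stdlib stand-in for a LangChain tool: a name, a description, and a callable."""
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, query: str) -> str:
        return self.func(query)

def fake_search(query: str) -> str:
    # Placeholder: a real tool would call a search API here
    return f"Results for: {query}"

search_tool = SimpleTool("search", "Look things up on the web", fake_search)
print(search_tool.run("LangChain"))
# → Results for: LangChain
```

The name and description matter: the model uses them to decide which tool to call and with what input.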
Bootstrap a new provider package:
pip install langchain-cli poetry
langchain-cli integration new --name parrot-link
cd parrot-link
poetry add langchain-core
# implement integration modules (chat_models.py, vectorstores.py)
Use the langchain-cli template for consistent package structure and tests.
Define tools that return commands to update agent state:
from typing import Annotated
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.types import Command
from langchain_core.messages import ToolMessage

@tool
def update_user_name(
    new_name: str,
    tool_call_id: Annotated[str, InjectedToolCallId],
) -> Command:
    # 'messages' goes inside the update so the graph's message reducer appends it
    return Command(update={
        "user_name": new_name,
        "messages": [ToolMessage(f"Updated name to {new_name}", tool_call_id=tool_call_id)],
    })
Use with create_react_agent or custom graph execution.
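A Command is a declarative state delta that the graph runtime merges into agent state. A stdlib-only sketch of that merge step (illustrative only, not LangGraph's internals):

```python
def apply_command(state: dict, update: dict) -> dict:
    """Merge a Command-style update into agent state.
    Plain keys are overwritten; the 'messages' key is appended to,
    mirroring how a message-channel reducer accumulates history."""
    new_state = dict(state)
    for key, value in update.items():
        if key == "messages":
            new_state[key] = new_state.get(key, []) + value
        else:
            new_state[key] = value
    return new_state

state = {"user_name": "?", "messages": []}
state = apply_command(state, {"user_name": "Ada",
                              "messages": ["Updated name to Ada"]})
print(state)
# → {'user_name': 'Ada', 'messages': ['Updated name to Ada']}
```

This is why the tool above puts its ToolMessage inside the update dict rather than returning it directly: the runtime, not the tool, owns the merge.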
from langgraph.store.memory import InMemoryStore
store = InMemoryStore()
store.put(("users",), "user123", {"name": "John", "language": "English"})
Retrieve state in tools:
from langchain_core.runnables import RunnableConfig
from langgraph.config import get_store
@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up the current user's profile from the store."""
    store = get_store()
    data = store.get(("users",), config["configurable"]["user_id"])
    return str(data.value) if data else "Unknown"
This persists conversation state and context across runs.
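InMemoryStore keys items by a namespace tuple plus a string key. A stdlib-only sketch of those semantics (the `TinyStore` class is hypothetical, not the LangGraph implementation, and it returns the raw value rather than a wrapper with a `.value` attribute):

```python
class TinyStore:
    """Namespaced key-value store, mimicking the (namespace, key, value) API shape."""
    def __init__(self):
        self._data = {}

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        self._data[(namespace, key)] = value

    def get(self, namespace: tuple, key: str):
        # Returns None when the item is missing, like the real store
        return self._data.get((namespace, key))

store = TinyStore()
store.put(("users",), "user123", {"name": "John", "language": "English"})
print(store.get(("users",), "user123"))
# → {'name': 'John', 'language': 'English'}
```

Namespacing by tuple lets one store hold per-user, per-thread, and global memory side by side without key collisions.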
Check these essential resources for hands-on examples:
How-to guides: chaining runnables, streaming, parallelism, inspection, adding memory, fallbacks
Integration authoring: detailed guide for custom component creation
| Category | Key Commands/Patterns |
|---|---|
| Install | `pip install langchain` + provider packages |
| Chat Setup | `ChatOpenAI(...)`, `init_chat_model(...)` |
| Chain Composition | `prompt \| llm` |
| Tools | Search, code execution, productivity tools |
| Custom Integrations | `langchain-cli integration new` |
| Agent Workflow | `@tool` functions, `create_react_agent` |
| Memory | `InMemoryStore`, `get_store()` |
| Docs | LangChain official how-to / contrib guides |