1️⃣ Synchronous vs Asynchronous execution
2️⃣ Coroutine
3️⃣ Event Loop
Once these are clear, the difference between:
await self.app.ainvoke()
and
asyncio.run(self.app.ainvoke())
becomes very simple.
Most Python code runs synchronously.
Example:
```python
import time

def task():
    print("Start")
    time.sleep(3)
    print("End")

task()
print("Done")
```
Execution flow:
Start
(wait 3 seconds)
End
Done
Python waits for each task to finish.
Task1 → finish → Task2 → finish → Task3
This is called blocking execution.
Async allows multiple tasks to run without waiting.
Example:
```python
import asyncio

async def task():
    print("Start")
    await asyncio.sleep(3)
    print("End")

asyncio.run(task())
```
Key difference:
await asyncio.sleep(3)
Instead of blocking, Python says:
"While waiting, I can do something else."
Execution model:
Task1 waiting
Task2 running
Task3 running
This improves performance in I/O operations like:
API calls
database queries
LLM requests
web scraping
Your Groq call is an I/O operation.
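The benefit is easiest to see with two I/O waits running at once. Here is a minimal sketch using `asyncio.gather`, with `asyncio.sleep` standing in for real API or database calls:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O call (API, DB, LLM request): a non-blocking wait.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    start = time.perf_counter()
    # Both waits overlap, so total time is ~2s, not ~4s.
    results = await asyncio.gather(fetch("api", 2), fetch("db", 2))
    print(f"{results} in {time.perf_counter() - start:.1f}s")
    return results

results = asyncio.run(main())
```

Run sequentially, the same two calls would take about four seconds; concurrently, about two.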
A coroutine is an async function.
Example:
```python
async def fetch_data():
    return "data"
```
Calling it:
fetch_data()
does NOT run it.
It returns a coroutine object:
<coroutine object fetch_data at 0x123>
To execute it you must use:
await
or
asyncio.run()
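You can see this directly: calling the function only builds a coroutine object, and nothing executes until an event loop runs it.

```python
import asyncio

async def fetch_data() -> str:
    return "data"

coro = fetch_data()           # nothing has executed yet
print(type(coro).__name__)    # coroutine
result = asyncio.run(coro)    # an event loop actually runs it
print(result)                 # data
```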
The event loop is the engine that runs async code.
Think of it like a task manager.
Event Loop
│
├── Task 1
├── Task 2
├── Task 3
└── Task 4
It decides:
which task runs
which task waits
which task resumes
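The scheduling above can be sketched with `asyncio.create_task`: each task suspends at its `await`, and the loop resumes the others in the meantime.

```python
import asyncio

async def worker(n: int) -> int:
    await asyncio.sleep(0.1)  # this task pauses; the loop resumes another
    return n * 2

async def main() -> list[int]:
    # The loop manages all three tasks; each sleep suspends one task
    # and lets the others run while it waits.
    tasks = [asyncio.create_task(worker(i)) for i in range(3)]
    return [await t for t in tasks]

values = asyncio.run(main())
print(values)
```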
result = await self.app.ainvoke(initial_question)
This means:
"Run this coroutine inside the existing event loop."
But it only works inside an async function.
Example:
```python
async def run_pipeline():
    result = await self.app.ainvoke(initial_question)
    print(result)
```
Flow:
Async function
│
▼
Event loop already running
│
▼
await ainvoke()
│
▼
Return result
result = asyncio.run(self.app.ainvoke(initial_question))
This means:
"Create an event loop and execute this coroutine."
Used in normal synchronous scripts.
Example:
```python
def run_pipeline():
    result = asyncio.run(self.app.ainvoke(initial_question))
    print(result)
```
Flow:
Normal Python script
│
▼
asyncio.run()
│
▼
Create event loop
│
▼
Run coroutine
│
▼
Close loop
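Both patterns can be shown side by side with a hypothetical stub standing in for `self.app` (this `StubApp` and its `ainvoke` are assumptions for illustration, not LangGraph's real class):

```python
import asyncio

class StubApp:
    # Hypothetical stand-in for self.app; a plain async method,
    # not LangGraph's real implementation.
    async def ainvoke(self, question: str) -> str:
        await asyncio.sleep(0.1)
        return f"answer to {question!r}"

app = StubApp()

# Pattern 1: inside an async function — reuse the already-running loop.
async def run_pipeline_async() -> str:
    return await app.ainvoke("hi")

# Pattern 2: in a plain script — create a fresh loop just for this call.
def run_pipeline_sync() -> str:
    return asyncio.run(app.ainvoke("hi"))

print(asyncio.run(run_pipeline_async()))
print(run_pipeline_sync())
```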
await:

Async function
│
▼
Existing event loop
│
▼
Run coroutine
asyncio.run():

Normal script
│
▼
asyncio.run()
│
▼
Create event loop
│
▼
Run coroutine
│
▼
Destroy loop
This is illegal:

```python
def run():
    result = await my_function()
```
Error:
SyntaxError: 'await' outside async function
Because await is only valid inside an async def function.
Outside one, there is no coroutine for an event loop to schedule, so Python rejects it at parse time.
FastAPI already runs an event loop.
Client Request
│
▼
FastAPI event loop
│
▼
await ainvoke()
Example:
```python
@app.post("/chat")
async def chat(req: Request):
    result = await app.ainvoke({
        "messages": [HumanMessage(content=req.question)]
    })
    return result
```
Here asyncio.run() would crash with RuntimeError: asyncio.run() cannot be called from a running event loop.
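You can reproduce that crash with plain asyncio, no FastAPI needed: call asyncio.run() from inside a coroutine that is already running on a loop.

```python
import asyncio

async def inner() -> str:
    return "ok"

async def outer() -> str:
    coro = inner()
    try:
        # A loop is already running inside outer(), so nesting is refused.
        asyncio.run(coro)
    except RuntimeError as e:
        coro.close()  # avoid a "coroutine was never awaited" warning
        return str(e)
    return "no error"

msg = asyncio.run(outer())
print(msg)  # asyncio.run() cannot be called from a running event loop
```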
Suppose you are running:

```shell
python -m prod_assistant.workflow.agentic_workflow
```

So this is a synchronous environment.
Best options:
```python
result = self.app.invoke(initial_question)
# or
result = asyncio.run(self.app.ainvoke(initial_question))
```
| Feature | await | asyncio.run |
|---|---|---|
| Event loop | existing | created |
| Speed | faster | slightly slower |
| Use case | async apps | scripts |
| Concurrency | high | moderate |
LangGraph supports:
invoke()
ainvoke()
stream()
astream()
Example:
invoke → sync
ainvoke → async
stream → sync streaming
astream → async streaming
Modern AI assistants use async streaming:
User Question
│
▼
LangGraph
│
▼
astream()
│
▼
Tokens stream to UI
This creates the ChatGPT typing effect.
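The streaming flow can be sketched with an async generator (this `astream_tokens` is a hypothetical stand-in for LangGraph's astream(), not its real API):

```python
import asyncio
from typing import AsyncIterator

async def astream_tokens(answer: str) -> AsyncIterator[str]:
    # Hypothetical token stream: yields one token at a time.
    for token in answer.split():
        await asyncio.sleep(0.05)  # simulate per-token LLM latency
        yield token

async def main() -> list[str]:
    received = []
    async for token in astream_tokens("streaming feels like typing"):
        print(token, end=" ", flush=True)  # tokens appear one by one
        received.append(token)
    print()
    return received

tokens = asyncio.run(main())
```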
✅ Simple Rule
Normal Python script → asyncio.run()
Async framework (FastAPI, LangServe) → await
🔥 Why LangGraph uses async internally
This allows the LLM, retriever, web search, and tool calls to run concurrently, which can make I/O-bound agent steps several times faster than running them one after another.
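A rough sketch of that speedup, with hypothetical stubs standing in for the agent's I/O-bound steps:

```python
import asyncio
import time

# Hypothetical stubs for an agent's I/O-bound steps.
async def call_llm() -> str:
    await asyncio.sleep(1.0)
    return "llm"

async def query_retriever() -> str:
    await asyncio.sleep(0.8)
    return "docs"

async def web_search() -> str:
    await asyncio.sleep(1.2)
    return "results"

async def agent_step() -> list[str]:
    start = time.perf_counter()
    # The three waits overlap: total ~1.2s instead of ~3.0s sequentially.
    out = await asyncio.gather(call_llm(), query_retriever(), web_search())
    print(out, f"in {time.perf_counter() - start:.1f}s")
    return out

steps = asyncio.run(agent_step())
```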