15 LangGraph for AI Workflows
“LangChain builds pipelines. LangGraph builds agents that can think, loop, branch, and decide.”
graph TD
    A[State<br/>Shared data structure] --> B[Nodes<br/>Functions that transform state]
    B --> C[Edges<br/>Connections between nodes]
    C --> D[Conditional Edges<br/>Branching based on state]
    D --> E[Checkpointing<br/>Save & resume state]
15.1 Why LangGraph?
LangChain’s LCEL is excellent for linear pipelines: A → B → C. But real-world AI workflows need:
- Loops: Retry when quality is insufficient
- Branching: Take different paths based on LLM decisions
- State persistence: Remember what happened across turns
- Human-in-the-loop: Pause and ask for approval
- Parallel execution: Run multiple tasks simultaneously
LangGraph adds all of this through a graph-based workflow model inspired by state machines.
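Before looking at the LangGraph API itself, the state-machine idea can be sketched in plain Python (a minimal illustration, not LangGraph code): nodes are functions that transform a shared state dict, and a router decides which node runs next, which is what makes loops possible.

```python
def increment(state: dict) -> dict:
    """A 'node': takes the shared state, returns the updated state."""
    return {**state, "count": state["count"] + 1}

def route(state: dict) -> str:
    """A 'conditional edge': loop until the count reaches 3, then stop."""
    return "increment" if state["count"] < 3 else "END"

nodes = {"increment": increment}
state, current = {"count": 0}, "increment"
while current != "END":
    state = nodes[current](state)
    current = route(state)

print(state)  # {'count': 3}
```

A linear pipeline cannot express the `while` loop above; a graph with a conditional edge can.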
15.2 Core Concepts
15.2.1 The State Machine Model
from typing import TypedDict, Annotated
from operator import add

class WorkflowState(TypedDict):
    """The shared state for our workflow."""
    user_query: str
    search_results: list[str]
    draft_answer: str
    quality_score: float
    revision_count: int
    final_answer: str
    messages: Annotated[list, add]  # Accumulates with the add operator
15.3 Your First LangGraph Workflow
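One detail from the state definition carries into everything that follows: the `add` reducer annotated on `messages` tells LangGraph to combine updates with the existing value (`new = add(old, update)`) rather than overwrite it. In plain Python terms:

```python
from operator import add

# With Annotated[list, add], LangGraph applies the reducer instead of
# overwriting: new_value = add(old_value, update)
old_messages = ["user: hi"]
update = ["assistant: hello"]
merged = add(old_messages, update)  # for lists, add is concatenation
print(merged)  # ['user: hi', 'assistant: hello']
```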
A simple research assistant with quality review:
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# --- Node Definitions ---

def research_node(state: WorkflowState) -> WorkflowState:
    """Search for information."""
    response = llm.invoke(
        f"Research the following topic and provide 5 key facts:\n{state['user_query']}"
    )
    return {"search_results": [response.content], "revision_count": 0}

def draft_node(state: WorkflowState) -> WorkflowState:
    """Draft an answer from the research."""
    context = "\n".join(state["search_results"])
    response = llm.invoke(
        f"Based on this research:\n{context}\n\nWrite a clear, concise answer to: {state['user_query']}"
    )
    return {"draft_answer": response.content}

def quality_check_node(state: WorkflowState) -> WorkflowState:
    """Score the draft answer."""
    response = llm.invoke(
        f"Score this answer from 0-10 on accuracy and clarity. Return only a number.\n\nAnswer: {state['draft_answer']}"
    )
    try:
        score = float(response.content.strip())
    except ValueError:
        score = 7.0  # fall back to a middling score if the LLM returns non-numeric text
    return {"quality_score": score}

def revise_node(state: WorkflowState) -> WorkflowState:
    """Revise the draft if quality is low."""
    response = llm.invoke(
        f"Improve this answer to make it clearer and more accurate:\n{state['draft_answer']}"
    )
    return {
        "draft_answer": response.content,
        "revision_count": state["revision_count"] + 1,
    }

def finalise_node(state: WorkflowState) -> WorkflowState:
    """Mark the answer as final."""
    return {"final_answer": state["draft_answer"]}
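Notice that each node returns only the keys it changed. For keys without a reducer, LangGraph merges that partial dict over the existing state, so the effect is roughly (a plain-Python sketch, not LangGraph internals):

```python
# State before the node runs
state = {"user_query": "q", "draft_answer": "", "revision_count": 0}

# A node returns only the keys it updates...
node_return = {"draft_answer": "v1", "revision_count": 1}

# ...and keys without a reducer are simply overwritten (last write wins)
state = {**state, **node_return}
print(state)  # {'user_query': 'q', 'draft_answer': 'v1', 'revision_count': 1}
```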
# --- Routing Logic ---

def quality_router(state: WorkflowState) -> str:
    """Route based on quality score."""
    if state["quality_score"] >= 8.0 or state["revision_count"] >= 2:
        return "finalise"
    return "revise"
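Because the router is a plain function of state, its thresholds can be sanity-checked in isolation before wiring up the graph (duplicating the function here so the check runs standalone):

```python
def quality_router(state: dict) -> str:
    if state["quality_score"] >= 8.0 or state["revision_count"] >= 2:
        return "finalise"
    return "revise"

assert quality_router({"quality_score": 9.0, "revision_count": 0}) == "finalise"
assert quality_router({"quality_score": 5.0, "revision_count": 1}) == "revise"
# The revision cap is what guarantees the loop terminates:
assert quality_router({"quality_score": 5.0, "revision_count": 2}) == "finalise"
```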
# --- Build the Graph ---
workflow = StateGraph(WorkflowState)
workflow.add_node("research", research_node)
workflow.add_node("draft", draft_node)
workflow.add_node("quality_check", quality_check_node)
workflow.add_node("revise", revise_node)
workflow.add_node("finalise", finalise_node)

# Define edges
workflow.set_entry_point("research")
workflow.add_edge("research", "draft")
workflow.add_edge("draft", "quality_check")
workflow.add_conditional_edges(
    "quality_check",
    quality_router,
    {"finalise": "finalise", "revise": "revise"},
)
workflow.add_edge("revise", "quality_check")  # Loop back!
workflow.add_edge("finalise", END)

app = workflow.compile()

# Run it!
result = app.invoke({"user_query": "What are the main benefits of AI in healthcare?"})
print(f"Quality Score: {result['quality_score']}")
print(f"Revisions: {result['revision_count']}")
print(f"Answer: {result['final_answer']}")
15.4 Human-in-the-Loop
One of LangGraph’s most powerful features: pause the workflow and wait for human input.
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command, interrupt

# Add a human approval node
def human_approval_node(state: WorkflowState) -> WorkflowState:
    """Pause and request human review."""
    # interrupt() pauses the workflow here until a human resumes it
    approval = interrupt({
        "message": "Please review this draft:",
        "draft": state["draft_answer"],
        "quality_score": state["quality_score"],
    })
    if approval.get("approved"):
        return {"final_answer": state["draft_answer"]}
    return {"draft_answer": approval.get("revised_text", state["draft_answer"])}

# Register the node (and wire it into the graph, e.g. between quality_check
# and finalise), then compile with a checkpointer — interrupts require one
workflow.add_node("human_approval", human_approval_node)
checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# Run until the interrupt fires
config = {"configurable": {"thread_id": "session_1"}}
state = app.invoke({"user_query": "Explain quantum computing"}, config=config)

# Resume with the human's decision
app.invoke(Command(resume={"approved": True}), config=config)
15.5 Parallel Execution
def parallel_research_node(state: WorkflowState) -> WorkflowState:
    """Run multiple research prompts concurrently."""
    prompts = [
        f"Technical aspects of: {state['user_query']}",
        f"Business impact of: {state['user_query']}",
        f"Risks and challenges of: {state['user_query']}",
    ]
    # batch() executes the calls concurrently; async nodes with
    # await llm.ainvoke(...) are another route to true parallelism
    technical, business, risks = llm.batch(prompts)
    combined = "\n\n".join([
        f"Technical: {technical.content}",
        f"Business: {business.content}",
        f"Risks: {risks.content}",
    ])
    return {"search_results": [combined]}
15.6 Interactive Simulation: Workflow Designer
15.7 Chapter Summary
- LangGraph enables stateful, cyclic, and conditional AI workflows
- Nodes transform state; edges define flow; conditional edges handle branching
- Checkpointing enables persistence and human-in-the-loop patterns
- Loops allow quality-controlled iteration (retry until good enough)
- Parallelism speeds up multi-step research tasks
15.8 What’s Next
Chapter 14: Build a Stateful AI Workflow — a complete autonomous research agent.