6  How LangChain Works

Note📍 Chapter Overview

Time: ~75 minutes | Level: Intermediate | Prerequisites: Chapter 5

“LangChain is the plumbing that makes AI pipelines possible.”

6.1 The Problem LangChain Solves

Imagine you want to build an AI assistant that:

  1. Takes a user’s question
  2. Searches your company documents for relevant information
  3. Summarises the findings
  4. Formats the answer in a specific style
  5. Logs the interaction for analysis

Without LangChain, you’d write all the glue code yourself — managing API calls, handling errors, chaining outputs, managing memory. LangChain abstracts this into composable building blocks.


6.2 Core Concepts

graph TD
    A[LangChain Core Concepts] --> B[Models<br/>LLMs & Chat Models]
    A --> C[Prompts<br/>Templates & Variables]
    A --> D[Chains<br/>Connect components]
    A --> E[Memory<br/>Conversation history]
    A --> F[Tools<br/>External capabilities]
    A --> G[Agents<br/>Autonomous decision-makers]
    A --> H[Retrievers<br/>Fetch relevant context]

6.2.1 Models

LangChain wraps many LLM providers (OpenAI, Anthropic, Google, Ollama) with a unified interface.
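To see why a unified interface matters, here is a stdlib-only sketch of the adapter idea behind it: every wrapped provider exposes the same `invoke` method, so swapping providers changes one line of application code. The class names here are illustrative stand-ins, not LangChain's real classes.

```python
class FakeOpenAIChat:
    """Stand-in for one provider's client, wrapped to a common interface."""
    def invoke(self, prompt: str) -> str:
        return f"[openai] answer to: {prompt}"

class FakeAnthropicChat:
    """A different provider, same interface."""
    def invoke(self, prompt: str) -> str:
        return f"[anthropic] answer to: {prompt}"

def ask(model, question: str) -> str:
    # Application code depends only on .invoke(), never on the provider.
    return model.invoke(question)

print(ask(FakeOpenAIChat(), "What is RAG?"))
print(ask(FakeAnthropicChat(), "What is RAG?"))
```

Because everything downstream calls `invoke`, moving from one provider to another is a one-line change where the model is constructed.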

6.2.2 Prompt Templates

Instead of hardcoding prompts, templates let you inject dynamic content safely.
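As a rough stdlib analogue of what a prompt template does, `string.Template` injects named variables into a fixed prompt skeleton. LangChain's `ChatPromptTemplate` builds on the same idea, adding message roles, validation, and chat-message output.

```python
from string import Template

# A prompt skeleton with named slots, analogous to the {subject} and
# {question} placeholders in a LangChain template.
prompt_template = Template(
    "You are a helpful AI professor teaching $subject.\n"
    "Question: $question"
)

prompt = prompt_template.substitute(
    subject="Artificial Intelligence",
    question="What is overfitting?",
)
print(prompt)
```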

6.2.3 Chains (LCEL)

LangChain Expression Language (LCEL) uses the pipe operator (|) to chain components together elegantly.
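Under the hood, the pipe is just Python's `__or__` operator, the same dunder method the R example later in this chapter calls explicitly. Here is a minimal sketch of the idea, not LangChain's actual implementation:

```python
class Runnable:
    """Minimal stand-in for an LCEL component: wraps a function and
    supports composition with the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # a | b builds a new Runnable that feeds a's output into b.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

prompt = Runnable(lambda q: f"Q: {q}")
llm = Runnable(lambda p: p.upper())   # pretend "model"
parser = Runnable(lambda s: s.strip())

chain = prompt | llm | parser
print(chain.invoke("what is lcel?"))  # → Q: WHAT IS LCEL?
```

The payoff is that a whole pipeline reads left to right, in the order data flows through it.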

6.2.4 Memory

Buffers, summaries, and vector-store-backed memory give LLMs conversational context.

6.2.5 Tools & Agents

Agents can call tools (web search, calculators, APIs) autonomously to complete tasks.


6.3 Your First LangChain Pipeline

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Define the model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# 2. Define the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI professor teaching {subject}. Be concise and use examples."),
    ("human", "{question}")
])

# 3. Define the output parser
parser = StrOutputParser()

# 4. Build the chain using LCEL (pipe operator)
chain = prompt | llm | parser

# 5. Run it!
response = chain.invoke({
    "subject": "Artificial Intelligence",
    "question": "What is the difference between supervised and unsupervised learning?"
})

print(response)

The same pipeline can be built from R by calling LangChain through the reticulate package:

library(reticulate)

# Use LangChain via reticulate
langchain_openai <- import("langchain_openai")
langchain_prompts <- import("langchain_core.prompts")
langchain_parsers <- import("langchain_core.output_parsers")

llm <- langchain_openai$ChatOpenAI(model = "gpt-4o-mini", temperature = 0.7)

prompt <- langchain_prompts$ChatPromptTemplate$from_messages(list(
  list("system", "You are a helpful AI professor teaching {subject}."),
  list("human", "{question}")
))

parser <- langchain_parsers$StrOutputParser()
chain <- prompt$`__or__`(llm)$`__or__`(parser)

response <- chain$invoke(list(
  subject = "Artificial Intelligence",
  question = "What is supervised vs unsupervised learning?"
))

cat(response)

6.4 Chains: The LCEL Pipeline

LCEL (LangChain Expression Language) transforms component composition into a readable, functional style:

from langchain_core.runnables import RunnableParallel

# Simple chain
chain = prompt | llm | parser

# With branching (summary_chain, sentiment_chain, and format_output are
# assumed to be runnables defined elsewhere)
chain = (
    RunnableParallel({
        "summary": summary_chain,
        "sentiment": sentiment_chain
    })
    | format_output
)

# With fallbacks: try primary_chain first, fall back to backup_chain on error
chain = primary_chain.with_fallbacks([backup_chain])
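To make the branching pattern concrete, here is a stdlib sketch of what `RunnableParallel` does: run each named sub-chain on the same input and collect the results in a dict keyed by branch name (the real class also supports async execution and streaming).

```python
def run_parallel(branches, x):
    # Each branch receives the same input; results are keyed by branch
    # name, mirroring RunnableParallel's dict output.
    return {name: fn(x) for name, fn in branches.items()}

summary_chain = lambda text: text[:20] + "..."
sentiment_chain = lambda text: "positive" if "great" in text else "neutral"

result = run_parallel(
    {"summary": summary_chain, "sentiment": sentiment_chain},
    "LangChain makes great pipelines out of small parts.",
)
print(result)
```

Downstream components like `format_output` then receive this dict as a single input.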

6.5 Memory: Giving AI a Short-Term Brain

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

llm = ChatOpenAI(model="gpt-4o-mini")

# In-memory chat history store
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}")
])

chain = prompt | llm | StrOutputParser()

# Wrap with memory
chain_with_memory = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history"
)

# Multi-turn conversation
config = {"configurable": {"session_id": "user_123"}}
print(chain_with_memory.invoke({"input": "My name is Bongo."}, config=config))
print(chain_with_memory.invoke({"input": "What's my name?"}, config=config))

6.6 Tools: Extending AI Capabilities

Tools give LLMs the ability to call external services, run calculations, or search the web.

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def calculate_roi(investment: float, returns: float) -> str:
    """Calculate Return on Investment (ROI) as a percentage."""
    roi = ((returns - investment) / investment) * 100
    return f"ROI: {roi:.2f}%"

@tool
def get_exchange_rate(from_currency: str, to_currency: str) -> str:
    """Get the current exchange rate between two currencies."""
    # In reality, you'd call an API here
    return f"1 {from_currency} = 1.08 {to_currency} (example)"

# Bind tools to the model
llm_with_tools = ChatOpenAI(model="gpt-4o").bind_tools([
    calculate_roi,
    get_exchange_rate
])
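Note that `bind_tools` only advertises the tools to the model; your code (or an agent loop) still executes whichever call the model requests. Here is a hedged, stdlib-only sketch of that dispatch step, reusing the ROI function from above as plain Python; the dict shape loosely mirrors the entries in an AIMessage's `tool_calls`, but the loop itself is illustrative, not LangChain's agent implementation.

```python
def calculate_roi(investment: float, returns: float) -> str:
    """Calculate Return on Investment (ROI) as a percentage."""
    roi = ((returns - investment) / investment) * 100
    return f"ROI: {roi:.2f}%"

# Registry mapping tool names to their implementations.
TOOLS = {"calculate_roi": calculate_roi}

def run_tool_call(tool_call: dict) -> str:
    # A model's tool call arrives as a name plus keyword arguments;
    # look up the function and execute it with those arguments.
    return TOOLS[tool_call["name"]](**tool_call["args"])

# Pretend the model asked for ROI on a 1000 -> 1500 investment.
print(run_tool_call({"name": "calculate_roi",
                     "args": {"investment": 1000.0, "returns": 1500.0}}))
# → ROI: 50.00%
```

The tool's result is then sent back to the model as a tool message so it can compose a final answer.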

6.7 Interactive Simulation: LangChain Pipeline Builder

Note🎮 Live Simulation

Drag and drop components to build your own LangChain pipeline visually. See the LCEL code generated in real time.


6.8 Chapter Summary

  • LangChain provides composable building blocks for LLM applications
  • LCEL enables clean, readable pipeline construction with |
  • Memory gives conversations context across multiple turns
  • Tools extend LLM capabilities to the real world
  • Agents use tools autonomously to complete tasks

6.9 What’s Next

Time to get your hands dirty! The next chapter (Practice Lab) walks you through making your very first AI API call from scratch.


Note📚 Further Reading