```mermaid
mindmap
  root((Prompt<br/>Engineering))
    Zero-Shot
      Direct instruction
      No examples needed
    Few-Shot
      Examples in prompt
      Pattern recognition
    Chain-of-Thought
      Step-by-step reasoning
      Show your work
    Role Prompting
      Persona assignment
      Domain expertise
    Structured Output
      JSON/XML formats
      Consistent parsing
    Tree-of-Thought
      Multiple reasoning paths
      Best path selection
```
# 7 Prompt Engineering Techniques
> “The quality of your prompt determines the quality of your AI output. Garbage in, garbage out — brilliance in, brilliance out.”
## 7.1 What is Prompt Engineering?
Prompt engineering is the art and science of crafting inputs to AI models to reliably produce high-quality, specific outputs. As AI becomes more capable, the ability to communicate precisely with AI systems is a critical professional skill.
Think of it like managing a brilliant but very literal intern: they’ll do exactly what you say, so you need to say exactly what you mean.
## 7.2 The Six Core Techniques
## 7.3 Zero-Shot Prompting
Simply ask the model what you want — no examples needed. Works well for common, well-defined tasks.
```python
# Zero-shot: direct instruction, no examples
zero_shot_prompt = """
Classify the following customer email as: COMPLAINT, INQUIRY, or COMPLIMENT.
Respond with only one word.

Email: "Your delivery was three days late and the package was damaged. Very disappointed."
"""
```

Best for: classification, translation, summarisation, simple Q&A.
## 7.4 Few-Shot Prompting
Provide examples to show the model the pattern you want.
```python
few_shot_prompt = """
Classify emails as COMPLAINT, INQUIRY, or COMPLIMENT.

Email: "My order arrived on time and was perfectly packaged!"
Classification: COMPLIMENT

Email: "I've been waiting 2 weeks for my refund, this is unacceptable."
Classification: COMPLAINT

Email: "Can you tell me if this item comes in blue?"
Classification: INQUIRY

Email: "The app keeps crashing every time I try to checkout. Very frustrating."
Classification:
"""
# Expected: COMPLAINT
```

Best for: nuanced classification, style matching, format specification.
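Few-shot prompts follow a mechanical pattern (example, label, example, label, then the query), so they are easy to generate from data. A sketch with a hypothetical `build_few_shot_prompt` helper:

```python
# Hypothetical helper: builds a few-shot prompt from labelled example pairs.
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    lines = ["Classify emails as COMPLAINT, INQUIRY, or COMPLIMENT.", ""]
    for email, label in examples:
        lines += [f'Email: "{email}"', f"Classification: {label}", ""]
    # End with the unlabelled query so the model completes the pattern.
    lines += [f'Email: "{query}"', "Classification:"]
    return "\n".join(lines)

examples = [
    ("My order arrived on time and was perfectly packaged!", "COMPLIMENT"),
    ("Can you tell me if this item comes in blue?", "INQUIRY"),
]
prompt = build_few_shot_prompt(examples, "Where is my refund?")
print(prompt)
```

This also makes it easy to swap example sets in and out when testing which examples steer the model best.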
## 7.5 Chain-of-Thought (CoT) Prompting
Ask the model to reason step by step before answering. This dramatically improves performance on complex reasoning tasks.
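As a sanity check on the train problem used in this section: the average speed is total distance divided by total time, not the mean of the two speeds.

```python
# Average speed = total distance / total time (not the mean of the speeds).
distance = 80 * 2.5 + 120 * 1.5   # 200 km + 180 km = 380 km
time = 2.5 + 1.5                  # 4 hours
average_speed = distance / time
print(average_speed)  # → 95.0 km/h; note (80 + 120) / 2 = 100 would be wrong
```

A model answering without showing its work will often produce the tempting-but-wrong 100 km/h.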
```python
# Without CoT - often wrong on complex problems
bad_prompt = "If a train travels at 80 km/h for 2.5 hours, then speeds up to 120 km/h for 1.5 hours, what is the average speed for the whole journey?"

# With CoT - much more reliable
cot_prompt = """
Solve this problem step by step, showing all calculations.

If a train travels at 80 km/h for 2.5 hours, then speeds up to 120 km/h for 1.5 hours,
what is the average speed for the whole journey?

Let me think through this:
Step 1:
"""
```

## 7.6 Role Prompting
Assign the model a persona or expertise role to shape its responses.
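In chat-style APIs, the persona conventionally goes in the system message and the task in the user message. A sketch of that message structure (the `role_messages` helper is illustrative):

```python
# Hypothetical helper: pairs a persona (system message) with the user's task.
def role_messages(persona: str, task: str) -> list[dict]:
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = role_messages(
    "You are a patient economics teacher.",
    "Explain inflation to a 16-year-old student.",
)
```

Keeping the persona in the system message means it persists across turns, rather than being restated in every user prompt.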
```python
# Generic prompt
generic = "Explain inflation."

# Role-prompted for different audiences
for_executive = """You are a seasoned CFO with 25 years' experience.
Explain inflation to a board of directors in 3 bullet points,
focusing on business impact."""

for_student = """You are a patient economics teacher.
Explain inflation to a 16-year-old student using
a relatable everyday example."""

for_investor = """You are a hedge fund manager.
Explain inflation risks for a portfolio heavy in
fixed-income instruments."""
```

## 7.7 Structured Output Prompting
Instruct the model to respond in a specific format (JSON, XML, Markdown table) for reliable parsing.
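If your model or SDK lacks native structured outputs, the fallback is to request JSON in the prompt and validate the reply yourself. A minimal stdlib-only sketch (`reply` stands in for the model's raw text response; the expected keys mirror the analysis fields used in this section):

```python
import json

# `reply` is a stand-in for the model's raw text response to a
# "respond only with JSON" prompt.
reply = '{"product_name": "FinApp", "risk_level": "MEDIUM", "recommendation": "Pilot in one market first."}'

data = json.loads(reply)  # raises json.JSONDecodeError on malformed JSON
required = {"product_name", "risk_level", "recommendation"}
missing = required - data.keys()
if missing:
    raise ValueError(f"Model reply missing keys: {missing}")
assert data["risk_level"] in {"LOW", "MEDIUM", "HIGH"}
```

In production you would retry (or re-prompt with the error message) when parsing or validation fails.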
```python
from openai import OpenAI
from pydantic import BaseModel
from typing import List

client = OpenAI()

class ProductAnalysis(BaseModel):
    product_name: str
    strengths: List[str]
    weaknesses: List[str]
    market_opportunity: str
    risk_level: str  # LOW, MEDIUM, HIGH
    recommendation: str

# Use structured outputs (OpenAI feature)
response = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a product analyst."},
        {"role": "user", "content": "Analyse: AI-powered personal finance app for African markets"},
    ],
    response_format=ProductAnalysis,
)

analysis = response.choices[0].message.parsed
print(f"Risk Level: {analysis.risk_level}")
print(f"Recommendation: {analysis.recommendation}")
```

## 7.8 The CLEAR Framework
A memorable framework for crafting excellent prompts:
| Letter | Stands For | What It Means |
|---|---|---|
| C | Context | Background information the model needs |
| L | Length | How long should the response be? |
| E | Examples | Show the pattern you want |
| A | Audience | Who is this for? |
| R | Role | What persona should the model adopt? |
### 7.8.1 Example: CLEAR in Practice
```text
[ROLE] You are an experienced management consultant.
[CONTEXT] Our company is a 200-person fintech startup considering entering the Nigerian insurance market.
[AUDIENCE] The audience is our executive team — MBA-educated, data-driven, time-poor.
[TASK] Analyse the top 3 risks of this market entry.
[LENGTH] Keep each risk to 2 sentences max, with a one-line mitigation strategy.
[EXAMPLES]
Risk 1: [Risk name] — [2-sentence explanation]. Mitigation: [One line].
```
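Because CLEAR prompts are built from fixed slots, they lend themselves to a small template function. A sketch (the `clear_prompt` helper and its parameter names are illustrative):

```python
# Hypothetical builder that assembles a prompt from the CLEAR slots.
def clear_prompt(role: str, context: str, audience: str,
                 task: str, length: str, examples: str) -> str:
    return "\n".join([
        f"[ROLE] {role}",
        f"[CONTEXT] {context}",
        f"[AUDIENCE] {audience}",
        f"[TASK] {task}",
        f"[LENGTH] {length}",
        f"[EXAMPLES]\n{examples}",
    ])

prompt = clear_prompt(
    role="You are an experienced management consultant.",
    context="200-person fintech startup weighing entry into the Nigerian insurance market.",
    audience="Executive team: data-driven, time-poor.",
    task="Analyse the top 3 risks of this market entry.",
    length="2 sentences per risk, plus a one-line mitigation.",
    examples="Risk 1: [Risk name] - [explanation]. Mitigation: [one line].",
)
```

A builder like this makes it harder to forget a slot, and lets you vary one slot (say, audience) while holding the rest constant.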
## 7.9 Common Prompt Engineering Mistakes
| Mistake | Better Approach |
|---|---|
| “Write me something about AI” | Specify topic, length, audience, format |
| “Make it better” | Define what “better” means specifically |
| No system prompt | Always set the model’s role and context |
| Asking too many things at once | Break complex tasks into steps |
| Ignoring temperature | Set low (0–0.3) for factual, high (0.7–1.0) for creative |
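The temperature guidance in the last row can be encoded as a tiny helper so it is applied consistently. The thresholds below are the table's rule of thumb, not an API default:

```python
# Rule-of-thumb temperatures from the table above; values are illustrative.
def pick_temperature(task_type: str) -> float:
    return {
        "factual": 0.2,    # classification, extraction, factual Q&A
        "balanced": 0.5,   # general-purpose writing
        "creative": 0.9,   # brainstorming, fiction, marketing copy
    }[task_type]
```

The returned value would be passed as the `temperature` parameter of your chat-completion request.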
## 7.10 Interactive Simulation: Prompt Lab
## 7.11 Chapter Summary
- Zero-shot: Direct instructions for well-defined tasks
- Few-shot: Examples guide the pattern
- Chain-of-Thought: Step-by-step reasoning for complex problems
- Role prompting: Persona shapes expertise and tone
- Structured output: JSON/Pydantic for reliable parsing
- CLEAR framework: Context, Length, Examples, Audience, Role
## 7.12 What’s Next
Chapter 8 is a full Practice Lab where you’ll systematically test prompt techniques and build a prompt comparison tool.