7  Practice Lab: Your First AI API Call

Important: 🧪 Lab Overview

Duration: 60–90 minutes | Difficulty: ⭐☆☆☆☆ (Beginner)
Goal: Make your first successful call to an AI API and understand every part of the request/response cycle.

7.1 What You’ll Build

By the end of this lab, you will have built a command-line AI assistant that can:

  • Accept questions from the user
  • Call the OpenAI API
  • Display formatted responses
  • Track token usage and costs
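Since costs scale directly with tokens, it helps to estimate spend as you go. Here is a minimal sketch of a cost estimator; the per-million-token rates below are placeholders, not current pricing, so check the provider's pricing page before relying on them:

```python
# Hypothetical per-million-token rates (placeholders; verify against
# the provider's current pricing page before using in practice).
PRICES_PER_MILLION = {
    "gpt-4o-mini": {"prompt": 0.15, "completion": 0.60},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated cost in USD for one API call."""
    rates = PRICES_PER_MILLION[model]
    return (prompt_tokens * rates["prompt"]
            + completion_tokens * rates["completion"]) / 1_000_000

print(f"${estimate_cost('gpt-4o-mini', 1000, 500):.6f}")  # → $0.000450
```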

7.2 Step 1: Set Up Your Environment

7.2.1 Install Dependencies

# Create a virtual environment (recommended)
python -m venv ai-env
source ai-env/bin/activate  # Mac/Linux
# ai-env\Scripts\activate   # Windows

# Install Python packages
pip install openai python-dotenv rich

# Install R packages (if following along in R)
install.packages(c("httr2", "jsonlite", "dotenv", "cli"))

7.2.2 Configure Your API Key

Warning: 🔐 Security First

Never hardcode API keys in your code. Always use environment variables.

Create a file called .env in your project folder:

# .env file (NEVER commit this to git)
OPENAI_API_KEY=sk-your-key-here
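To demystify what `load_dotenv()` does with this file, here is a stdlib-only sketch of a minimal `.env` loader (the python-dotenv package you installed handles quoting, interpolation, and more; this only shows the idea):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: set KEY=VALUE lines as environment variables.

    Skips blank lines and comments; does not overwrite variables that
    are already set, mirroring load_dotenv()'s default behaviour.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```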

7.3 Step 2: Your First API Call

# file: first_ai_call.py
from openai import OpenAI
from dotenv import load_dotenv
import os

# Load environment variables from .env
load_dotenv()

# Initialise the client
client = OpenAI()  # Automatically reads OPENAI_API_KEY

# Make your first API call!
response = client.chat.completions.create(
    model="gpt-4o-mini",      # The model to use
    messages=[
        {
            "role": "system",
            "content": "You are a helpful AI assistant."
        },
        {
            "role": "user",
            "content": "Hello! Can you explain what an API is in one sentence?"
        }
    ],
    temperature=0.7,           # Creativity (0 = most focused, 2 = most random)
    max_tokens=150             # Maximum response length
)

# Extract and print the response
print(response.choices[0].message.content)
print(f"\nTokens used: {response.usage.total_tokens}")
# file: first_ai_call.R
library(httr2)
library(jsonlite)

# Load environment variables from .env (uses the dotenv package installed above)
dotenv::load_dot_env()

# Make your first API call
response <- request("https://api.openai.com/v1/chat/completions") |>
  req_headers(
    "Authorization" = paste("Bearer", Sys.getenv("OPENAI_API_KEY")),
    "Content-Type" = "application/json"
  ) |>
  req_body_json(list(
    model = "gpt-4o-mini",
    messages = list(
      list(role = "system", content = "You are a helpful AI assistant."),
      list(role = "user", content = "Hello! Explain what an API is in one sentence.")
    ),
    temperature = 0.7,
    max_tokens = 150L
  )) |>
  req_perform() |>
  resp_body_json()

# Extract the response
cat(response$choices[[1]]$message$content, "\n")
cat("Tokens used:", response$usage$total_tokens, "\n")

7.4 Step 3: Understand the Response Object

graph TD
    A[API Response] --> B[id: unique call identifier]
    A --> C[model: which model responded]
    A --> D[choices: list of completions]
    D --> E[message.role: 'assistant']
    D --> F[message.content: the actual text]
    D --> G[finish_reason: 'stop' or 'length']
    A --> H[usage]
    H --> I[prompt_tokens]
    H --> J[completion_tokens]
    H --> K[total_tokens]
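The raw JSON body behind that diagram can be mocked as a plain dictionary, which is handy for testing your parsing code without spending tokens (the values below are made up):

```python
# A mocked chat-completion response body with the fields from the diagram.
mock_response = {
    "id": "chatcmpl-abc123",
    "model": "gpt-4o-mini",
    "choices": [{
        "message": {"role": "assistant", "content": "An API is a contract ..."},
        "finish_reason": "stop",   # "length" means max_tokens cut it off
    }],
    "usage": {"prompt_tokens": 25, "completion_tokens": 12, "total_tokens": 37},
}

# Navigate to the pieces you almost always need:
reply = mock_response["choices"][0]["message"]["content"]
finished = mock_response["choices"][0]["finish_reason"] == "stop"
print(reply, finished, mock_response["usage"]["total_tokens"])
```

The Python SDK wraps this JSON in objects (hence `response.choices[0].message.content`), while the R code in Step 2 works with the equivalent nested list.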


7.5 Step 4: Build the Full CLI Assistant

# file: ai_assistant.py
from openai import OpenAI
from dotenv import load_dotenv
from rich.console import Console
from rich.markdown import Markdown
from rich.panel import Panel
from rich.text import Text

load_dotenv()
client = OpenAI()
console = Console()

def chat_with_ai(conversation_history: list) -> tuple[str, dict]:
    """Send messages and get a response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=conversation_history,
        temperature=0.7
    )
    return (
        response.choices[0].message.content,
        {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens
        }
    )

def run_assistant():
    """Main chat loop."""
    console.print(Panel.fit(
        "[bold blue]🤖 AI Assistant — Powered by GPT-4o Mini[/bold blue]\n"
        "[dim]Type 'quit' to exit | 'clear' to reset conversation[/dim]"
    ))

    conversation = [
        {"role": "system", "content":
         "You are a knowledgeable AI professor teaching Fundamentals of AI. "
         "Be helpful, clear, and use concrete examples."}
    ]

    total_tokens_used = 0

    while True:
        user_input = console.input("\n[bold green]You:[/bold green] ").strip()

        if user_input.lower() == "quit":
            console.print(f"\n[dim]Total tokens used: {total_tokens_used}[/dim]")
            console.print("[bold]Goodbye! 👋[/bold]")
            break
        elif user_input.lower() == "clear":
            conversation = [conversation[0]]  # Keep system message
            console.print("[dim]Conversation cleared.[/dim]")
            continue
        elif not user_input:
            continue

        conversation.append({"role": "user", "content": user_input})

        with console.status("[bold blue]Thinking...[/bold blue]"):
            response_text, usage = chat_with_ai(conversation)

        conversation.append({"role": "assistant", "content": response_text})
        total_tokens_used += usage["total_tokens"]

        console.print("\n[bold blue]AI:[/bold blue]")
        console.print(Markdown(response_text))
        console.print(f"[dim](Tokens: {usage['total_tokens']} | Total: {total_tokens_used})[/dim]")

if __name__ == "__main__":
    run_assistant()

7.6 Step 5: Test Your Assistant

Run the assistant and try these prompts to test it:

python ai_assistant.py

Try asking:

  • “What is machine learning in simple terms?”
  • “Give me a 3-step plan to learn Python for AI”
  • “What are the most in-demand AI skills for business professionals?”


7.7 Lab Challenges 🏆

Try these extensions to deepen your understanding:

  1. Easy: Add a --model command-line argument to switch between gpt-4o-mini and gpt-4o
  2. Medium: Save conversation history to a JSON file so you can resume it
  3. Hard: Add streaming so the response appears word-by-word instead of all at once
Tip: 💡 Streaming Hint
# Streaming response
for chunk in client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    stream=True
):
    print(chunk.choices[0].delta.content or "", end="", flush=True)
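For the medium challenge, here is a minimal sketch of saving and resuming the conversation list with the stdlib json module (the file name is an arbitrary choice):

```python
import json

def save_conversation(conversation: list, path: str = "conversation.json") -> None:
    """Write the message list to disk so a session can be resumed later."""
    with open(path, "w") as f:
        json.dump(conversation, f, indent=2)

def load_conversation(path: str = "conversation.json") -> list:
    """Read a previously saved message list back in."""
    with open(path) as f:
        return json.load(f)
```

Because each message is already a plain dict of strings, the list round-trips through JSON without any custom serialisation.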

7.8 Lab Debrief

What you just built:

  • A working AI API client
  • A multi-turn conversational assistant
  • Token usage tracking

Key concepts reinforced:

  • The role system (system, user, assistant)
  • Temperature and token limits
  • Stateless API calls with manual history management
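That last point deserves emphasis: because the API is stateless, your client decides how much history to resend on every call, and long conversations can grow expensive. A simple sketch of one common mitigation, trimming old messages while keeping the system prompt:

```python
def trim_history(conversation: list, max_messages: int = 20) -> list:
    """Keep the system message plus only the most recent messages.

    The API has no memory between calls, so the client controls how much
    context (and therefore how many prompt tokens) each request carries.
    """
    system, rest = conversation[:1], conversation[1:]
    return system + rest[-max_messages:]
```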


Note: ✅ Lab Complete!

You’ve made your first AI API call. In the next lab (Chapter 6), you’ll level up to building LangChain pipelines.