Module 08 · The Bigger Picture

Essential AI Vocabulary

Build your AI literacy with interactive visual flashcards — each term anchored to a memorable analogy rather than a dry definition. Click any card to flip it.

⏱ 45 min · 📊 3 Diagrams · 🧩 4 Exercises · ✅ 4-Question Quiz
📖
Lesson Content
Read & Understand

AI comes with its own vocabulary — and using terms correctly matters. It helps you read articles without getting lost, have informed conversations, and critically evaluate claims made about AI systems.

But vocabulary is only useful when it's attached to meaning, not just memorized as a definition. Every term in this module comes with an analogy: a real-world parallel that makes the concept intuitive and memorable.

A few of the most important: A model is the finished product of the training process. Parameters are the model's internal settings — billions of numbers adjusted during training. Training is the learning process. Inference is when you actually use the finished model.

A token is how LLMs break up text — chunks that might be whole words, parts of words, or punctuation. Fine-tuning takes a general model and trains it further on specific data — like a generalist doctor doing a cardiology residency.
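To make the token idea concrete, here is a toy greedy longest-match tokenizer. The subword vocabulary is invented for this example; real LLM tokenizers learn their vocabularies from data (byte-pair encoding and similar methods), so actual token boundaries will differ:

```python
def toy_tokenize(word, vocab):
    """Greedily split a word into the longest subword pieces found in vocab.
    Unknown characters fall back to single-character tokens."""
    tokens = []
    i = 0
    while i < len(word):
        # try the longest remaining piece first, shrinking until a match
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

vocab = {"un", "believ", "able"}
toy_tokenize("unbelievable", vocab)  # → ['un', 'believ', 'able']
```

Note how "unbelievable" comes out as 3 tokens rather than 1 word or 12 letters — exactly the in-between granularity the definition describes.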

Key Takeaways

Model = finished product of the training process
Parameters = internal settings adjusted during training
Training = teaching; Inference = using the finished model
Token = how LLMs chunk text (not exactly words)
Fine-tuning = specializing a general model on specific data
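The training/inference split in the takeaways above can be sketched in a few lines. This toy fits a single parameter by nudging it to reduce prediction error; real training applies the same idea to billions of parameters at once:

```python
def train_one_parameter(examples, steps=200, lr=0.1):
    """Minimal 'training': fit y = w * x by repeatedly adjusting w
    to reduce squared prediction error (gradient descent)."""
    w = 0.0  # the single 'parameter', starting untrained
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad  # adjust the parameter to shrink the error
    return w

def infer(w, x):
    """Inference: apply the trained parameter; no more learning happens."""
    return w * x

examples = [(1, 3), (2, 6), (3, 9)]  # hidden rule: y = 3x
w = train_one_parameter(examples)
infer(w, 4)  # close to 12
```

Training is the expensive loop that adjusts `w`; inference is the cheap one-line lookup afterward.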
🃏
Visual Flashcards
Click any card to flip it and reveal the definition and analogy
🏗
Model
A trained AI system — the finished product of the training process. Takes inputs and produces outputs based on learned patterns.
Like a finished cookbook: training gathered all the recipes (patterns), the model is the completed book you use to cook (infer).
🔧
Parameters
The internal numerical values of a model — billions of numbers adjusted during training to minimize prediction error.
Like the settings on a complex mixing board — each knob is tuned during training to minimize the 'wrong' signal.
📚
Training
The process of showing a model many examples and adjusting its parameters to minimize prediction error. Can take weeks on thousands of specialized chips.
Like a student studying for an exam — except the student is a neural network and the exam is repeated billions of times.
🔮
Inference
Using a trained model to generate outputs — asking it a question, analyzing an image, getting a translation. This is what happens when you use an AI product.
Like a doctor applying their medical knowledge to diagnose a patient — the learning is done; now they're applying it.
🧩
Token
The chunks LLMs process text as — not exactly words, but fragments that may be whole words, word-parts, or punctuation. 'Unbelievable' might be 3 tokens.
Like syllables in speech — language broken into digestible pieces that are easier to process than individual letters or full sentences.
🎓
Fine-Tuning
Taking a general pre-trained model and training it further on specific data to specialize it for a narrower task or domain.
Like a general practitioner completing a cardiology residency — same foundation, now specialized for a specific domain.
💬
Prompt
Everything you give to an AI model before it responds — your question, instructions, context, and examples. Prompt quality directly shapes output quality.
Like a job brief given to a contractor — the clearer and more detailed the brief, the better the work delivered.
🪟
Context Window
The amount of text an AI model can 'see' at once — its working memory. Modern models can handle entire books.
Like a desk — you can only work with what fits on the surface. Older models had a small desk; modern ones have a conference table.
🎭
Hallucination
When an AI model generates false information confidently — fabricating citations, statistics, events, or quotes that don't exist.
Like a very confident person who fills in gaps in their knowledge with made-up details — and genuinely can't tell the difference.
🌡
Temperature
A setting that controls how random or creative an AI's outputs are. Low = precise and predictable. High = creative and unpredictable.
Like a radio tuner — low gives the clearest signal; high lets you drift and discover unexpected stations.
🏛
Foundation Model
A large, general-purpose model trained on massive data that can be adapted for many downstream tasks. GPT-4, Claude, and Gemini are foundation models.
Like a generalist university education — broad, expensive, and the foundation that specialized training builds on top of.
🕸
Neural Network
A system of interconnected layers that processes information loosely inspired by the brain. Modern AI is built on deep (many-layered) neural networks.
Like a relay race with many runners — input passes through each layer, each one transforming it slightly, until the final layer produces an output.
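Several of these cards connect directly: under the hood, temperature typically rescales a model's raw output scores before they become probabilities over the next token. A minimal sketch of that softmax-with-temperature step (the score values here are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities; temperature controls sharpness."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # raw scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # low temp: near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # high temp: flatter, more random
```

With low temperature almost all probability piles onto the top-scoring token (precise, predictable); with high temperature the distribution flattens, so sampling wanders more (creative, unpredictable).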
🕸
Concept Relationship Web
How all these terms connect
Central concept: The Model (everything else connects here)
Training creates the Model
Parameters are stored inside the Model
Inference uses the Model
Fine-Tuning specializes the Model
Tokens are the Model's input format
A Prompt triggers the Model
The Context Window limits what the Model can see
Temperature controls the Model's output
Hallucination is the Model's failure mode
Self-Check Quiz
Click an answer to check your understanding
Q1 of 4
What is the difference between "training" and "inference"?
A
Training is faster; inference is slower
B
Training is the learning process; inference is using the trained model to generate outputs
C
Training happens online; inference happens offline
D
They mean the same thing in modern AI
✓ Training = learning (adjusting parameters from data). Inference = applying the learned model to new inputs.
✗ Training is the learning phase. Inference is the application phase — prompt in, output out.
Q2 of 4
What does "fine-tuning" do to a foundation model?
A
Makes it faster at inference
B
Resets it to a blank state and retrains from scratch
C
Trains it further on specific data to specialize it for a narrower task
D
Reduces its hallucination rate to zero
✓ Fine-tuning starts with a general model and adapts it — like a specialist residency on top of a general medical degree.
✗ Fine-tuning continues training a general model on specific data — preserving general knowledge while adding specialized expertise.
Q3 of 4
What does raising the "temperature" of an AI model do?
A
Makes the model run faster
B
Makes outputs more accurate and factual
C
Makes outputs more random and creative
D
Increases the context window size
✓ High temperature = more randomness and creativity. Low temperature = more precision and predictability.
✗ Temperature controls output randomness. High = creative and unpredictable. Low = focused and consistent.
Q4 of 4
Tokens are best described as:
A
Individual characters processed one at a time
B
Full sentences the model processes at once
C
Text chunks — often word fragments or whole words — that LLMs use as their basic unit of input
D
The cost units AI companies charge per API call
✓ Tokens are sub-word chunks — 'unbelievable' might be 3 tokens. They're the atomic units LLMs read and write.
✗ Tokens are text fragments — not full words, not individual characters. They're the basic processing units LLMs use.
🧩
Exercises & Worksheets
Apply what you learned
1

Define Without Jargon

Choose 5 vocabulary terms from this module. For each one, write a definition in plain language that a 12-year-old would understand — no technical words allowed. Share with someone and see if they get it on the first try.

✍️ Writing
2

Spot the Terms in the Wild

Find a recent AI news article. Read it and highlight every vocabulary term from this module that appears. For each: Does the author use it correctly? Could their explanation be clearer?

🔍 Analysis
3

Create Your Own Analogy

Pick 3 terms you found hardest to remember. For each one, create your own analogy from your personal experience. Test your analogy on someone else — does it help them understand?

💡 Creative
4

Course Reflection Diagram

Return to the mental model drawing from Module 1. Draw a new version of how you now picture AI. Add labels using this module's vocabulary. Write a paragraph comparing the two: What changed? What surprised you most?

🎨 Synthesis
🎓

Course Complete

You've completed all 8 modules of the AI4 Academy visual curriculum. You now have the foundational knowledge to use AI thoughtfully and talk about it confidently.
