Module 05 · Using AI Well

What AI Can and Can't Do

An honest visual map of AI's genuine strengths and real blind spots — including why it confidently states things that are completely wrong.

⏱ 45 min · 📊 3 Diagrams · 🧩 4 Exercises · ✅ 4-Question Quiz
📖
Lesson Content
Read & Understand

AI is genuinely extraordinary at certain things and genuinely terrible at others. The problem is that it doesn't always know which is which. That makes it more dangerous to use carelessly than a tool that simply breaks: when AI fails, the output still reads smoothly and sounds confident.

Where AI excels: pattern matching at scale. Summarizing large volumes of text. Translating between languages. Writing first drafts. Recognizing objects in images. These tasks have one thing in common: there's a pattern in existing data that can be learned and applied to new examples.

Where AI struggles: anything requiring genuine reasoning, common sense, or awareness of the real world. AI has no concept of time passing, no ability to verify what's true, and no lived experience to draw from.

The most important failure mode is hallucination: when AI generates plausible-sounding but completely fabricated information, such as citations that don't exist, statistics that were never measured, and quotes nobody ever said. It doesn't know it's making things up.
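One practical habit this suggests: before trusting AI output, scan it for the claim types most prone to fabrication. Below is a minimal sketch of that idea in Python; the patterns and the flag_risky_claims helper are hypothetical illustrations, not a real fact-checker.

```python
# Crude pre-verification filter: flag sentences in AI output that contain
# the claim types most prone to hallucination (citations, statistics, quotes).
# Patterns here are illustrative heuristics, not a real fact-checking tool.
import re

RISKY_PATTERNS = {
    "citation": re.compile(r"\bet al\.|\bdoi:|\b(19|20)\d{2}\b", re.IGNORECASE),
    "statistic": re.compile(r"\d+(\.\d+)?\s*%"),
    "quote": re.compile(r'"[^"]+"'),
}

def flag_risky_claims(text: str) -> list[tuple[str, str]]:
    """Return (claim_type, sentence) pairs worth verifying by hand."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for claim_type, pattern in RISKY_PATTERNS.items():
            if pattern.search(sentence):
                flagged.append((claim_type, sentence))
    return flagged

sample = 'A 2019 study found that 73% of users agreed, saying "it just works."'
for claim_type, sentence in flag_risky_claims(sample):
    print(f"[verify: {claim_type}] {sentence}")
```

Anything a filter like this flags still has to be checked by a human; the point is only to make the risky claim types visible before you rely on them.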

Key Takeaways

AI excels at pattern-based tasks: summarizing, classifying, generating
AI struggles with true reasoning, common sense, and real-world awareness
Hallucination is when AI confidently generates false information
AI doesn't know when it's wrong — it always sounds confident
Verify AI outputs for anything factual or consequential
📊
Strengths vs. Weaknesses
AI capability quadrant
✓ Strengths
Summarizing long text
Language translation
Writing first drafts
Classification at scale
Pattern recognition
Generating variations
✗ Weaknesses
Counting / precise math
Real-time information
Spatial reasoning
Verifying facts
Common sense edge cases
Knowing what it doesn't know
📉
Hallucination Risk by Task
When to be most skeptical (ranked from highest to lowest risk)
Specific citations
Exact statistics
Quotes from people
Recent events
General summaries
Creative writing
Brainstorming ideas
🧩
Common Sense Failure Gallery
Edge cases where AI trips up
Self-Check Quiz
Click an answer to check your understanding
Q1 of 4
What is "hallucination" in the context of AI?
A
When AI produces visual distortions
B
When AI confidently generates false or fabricated information
C
When AI refuses to answer a question
D
When AI produces random-looking outputs
✓ Hallucination is when AI generates plausible-sounding but completely made-up information — confidently and without warning.
✗ Hallucination means AI generates false information confidently — like fabricating a citation, statistic, or quote that doesn't exist.
Q2 of 4
Which task would an LLM most reliably perform well?
A
Counting the exact number of words in a document
B
Telling you today's stock prices
C
Summarizing a long article into 3 key points
D
Verifying whether a news story is true
✓ Summarization is a pattern-based task — exactly what LLMs are designed for and reliably do well.
✗ Summarization plays to AI's core strength. The other options require real-time access or precise computation.
Q3 of 4
Why does an LLM sometimes fail at counting letters in a word?
A
It doesn't know the alphabet
B
LLMs process tokens, not individual characters, so character-level tasks can trip them up
C
Counting requires the internet
D
This is a bug that has been fixed in newer models
✓ LLMs chunk text into tokens (often whole words or word-parts), so character-level counting is architecturally hard. The sketch after the quiz shows this directly.
✗ LLMs process tokens (chunks of text), not individual characters. This makes character-level tasks like letter counting challenging.
Q4 of 4
When should you be MOST skeptical of an AI's output?
A
When it writes a fictional short story
B
When it brainstorms ideas for a project
C
When it cites a specific statistic, study, or quote
D
When it rewrites a paragraph you wrote
✓ Specific citations, statistics, and quotes are the highest hallucination risk — always verify independently.
✗ Specific factual claims — especially citations, stats, and quotes — carry the highest risk of hallucination and should always be verified.
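Q3's tokenization point is easy to see for yourself. Here's a minimal sketch assuming the open-source tiktoken tokenizer library is installed (pip install tiktoken); other tokenizers would show the same effect:

```python
# Why letter-counting is hard for LLMs: the model never sees individual
# characters, only integer IDs for chunks of text ("tokens").
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a tokenizer used by OpenAI models

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]

print(f"{word!r} -> {len(token_ids)} token(s): {pieces}")
# The model receives those chunk IDs, not ten letters, so "how many r's
# are in strawberry?" must be answered without ever seeing the letters.
```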
🧩
Exercises & Worksheets
Apply what you learned
1

Catch a Hallucination

Ask any AI chatbot to cite 3 academic papers on a specific niche topic. Then search Google Scholar for each citation. How many were real? What did the fake ones look like? (A rough automated spot-check sketch appears after Exercise 4.)

🔍 Experiment
2

Task Sorting

Create two columns: 'Good AI task' and 'Bad AI task.' Sort: writing a legal contract, counting words, translating a menu, verifying a tweet is real, summarizing a meeting, giving medical advice, generating product names, predicting weather.

📝 Categorization
3

Break the AI

Try to find 3 questions that trip up an AI chatbot. Document: the question, the AI's response, and why the response was wrong or unreliable. Look for failures around spatial reasoning, counting, real-time info, or common sense.

🧪 Experiment
4

Build a Verification Checklist

Create a personal 5-step checklist for deciding when and how to verify AI-generated information. Include: criteria for when to verify, how to verify efficiently, and what to do when the AI was wrong.

🛠 Practical
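
For Exercise 1, you can partially automate the lookup step. Here's a rough sketch that queries Crossref's public REST API (api.crossref.org) for each suspect title. The matching is a loose heuristic, the example citation below is made up, and "none found" is a signal to dig further, not proof of fabrication:

```python
# Spot-check AI-suggested paper titles against Crossref's public API.
# Requires: pip install requests. No API key is needed for light use.
import requests

def crossref_lookup(title: str, rows: int = 3) -> list[str]:
    """Return the closest-matching real titles Crossref knows for a query."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# Replace with the citations the chatbot actually gave you (placeholder below):
suspect_citations = [
    "A Hypothetical Paper Title The Chatbot Invented (2021)",
]

for citation in suspect_citations:
    matches = crossref_lookup(citation)
    print(f"\nAI claimed: {citation}")
    print("Closest Crossref matches:", matches or "none found")
```

Compare each claimed title against the returned matches by eye: real citations usually surface near-identical titles, while hallucinated ones return only loosely related papers.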