A visual field guide to the AI systems you interact with every day — what makes each one different, and how they work under the hood.
Not all AI is the same. The voice assistant on your phone, the algorithm that recommends your next Netflix show, and the chatbot you use for work are three completely different systems — trained differently, on different data, for different tasks.
Large Language Models (LLMs) like Claude and GPT are trained on massive amounts of text. They predict what words should come next, developing a broad ability to write, reason, summarize, and converse. They don't look things up — they generate.
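The "predict the next word" idea can be sketched with a toy bigram model. This is a drastic simplification for illustration only: real LLMs use neural networks over subword tokens, not word-count tables, and the corpus below is made up.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on trillions of tokens.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count which word follows which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return candidate next words with estimated probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model sees "cat" twice and four other words once each,
# so "cat" gets the highest probability. Generation means sampling from
# this distribution, one word at a time -- no lookup, no database.
print(next_word_probs("the"))
```

Notice that the model never "looks up" an answer: it only knows which continuations were common in its training text, which is why it can confidently produce fluent but outdated or wrong statements.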
Image recognition systems use layered filters to detect edges, then shapes, then objects. Each layer builds on the last — like a visual assembly line from pixels to meaning.
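The first layer of that assembly line can be shown with one hand-written edge filter. This is a sketch of the convolution idea only: real networks learn thousands of filters from data rather than using hard-coded ones, and the 4x4 "image" below is a toy.

```python
import numpy as np

# Toy grayscale image: dark left half, bright right half.
image = np.array([
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
], dtype=float)

# A vertical-edge filter: responds strongly where brightness
# jumps from left to right, and stays near zero on flat regions.
edge_filter = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def convolve(img, kernel):
    """Slide the kernel across the image, recording its response at each spot."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Every position straddles the dark-to-bright boundary, so the
# filter fires strongly everywhere in this tiny example.
print(convolve(image, edge_filter))
```

Later layers apply the same trick to the outputs of earlier layers, which is how edge responses combine into shapes and shapes into objects.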
Recommendation systems match you to content by finding people who liked similar things. They're powered by your behavior — clicks, watches, purchases — not your stated preferences.
Predictive models look at historical data to forecast outcomes like credit scores or weather.
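The "people who liked similar things" idea is collaborative filtering, and it can be sketched in a few lines. The users, shows, and ratings below are invented for illustration; production systems use far larger matrices and learned embeddings, not this direct computation.

```python
import math

# Made-up viewing ratings standing in for clicks/watches/purchases.
ratings = {
    "alice": {"Stranger Things": 5, "The Crown": 4, "Dark": 5},
    "bob":   {"Stranger Things": 5, "Dark": 4, "Black Mirror": 5},
    "carol": {"The Crown": 5, "Bridgerton": 4},
}

def similarity(a, b):
    """Cosine similarity over the items both users rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in shared)
    norm_a = math.sqrt(sum(v * v for v in ratings[a].values()))
    norm_b = math.sqrt(sum(v * v for v in ratings[b].values()))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Rank unseen items by how much similar users liked them."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

# Alice's tastes overlap more with Bob than with Carol, so Bob's
# unseen pick outranks Carol's.
print(recommend("alice"))  # → ['Black Mirror', 'Bridgerton']
```

Note what the system never asks: what Alice says she likes. Everything flows from recorded behavior, which is why your recommendations can feel eerily accurate or oddly stale.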
🗺 Mapping: List 10 apps or services you use regularly. For each one, identify which AI type powers it (LLM, image recognition, recommendation, predictive). If unsure, make your best guess and explain your reasoning.
🔍 Experiment: Open any chatbot and ask it a question requiring real-time information (e.g., today's stock price). Observe how it responds. What does this reveal about how LLMs work vs. search engines?
📝 Reflection: Look at your current Netflix or Spotify recommendations. Pick 3 items and write: What past behavior triggered this? What does the algorithm 'think' you like? Do you agree?
🎨 Creative: Using the LLM token prediction diagram as inspiration, create your own example. Pick a sentence fragment, list 4 possible next words with rough probability estimates, and explain why the model would rank them that way.