Teacher’s Edition

Module 01

How Claude Thinks

Build the right mental model. Claude is a reasoning engine that processes language, follows instructions, and generates responses based on patterns. Once people see that clearly, everything else in the program clicks.

60 minutes · All roles

Charter Oak Strategic Partners · Claude Mastery Program · Version 1.0 · Confidential · Not for distribution to participants

Concept: Why This Module Exists

Most participants arrive with one of two mental models. The first: Claude is Google with better sentences. The second: Claude is an alien intelligence that understands everything. Both models produce bad prompts. The search-engine model produces keyword queries. The magic model produces vague wishes.

This module installs a third model: Claude is a reasoning engine that processes language, follows instructions, and generates responses based on patterns. It does not retrieve information from a database. It does not remember previous conversations. It works with what you give it in the moment. Once people understand this, every prompting skill that follows has a foundation to sit on.

Opening — 10 minutes

Script: Room Survey

“Let me see hands. Who has used Claude or ChatGPT in the last month?”

Wait. Count visibly. Nod.

“Good. Now keep your hand up if you used it for something that actually mattered at work. Not testing. Not asking it to write a limerick. Something you sent to someone, submitted, or acted on.”

Watch the hands drop. They will. This gap between casual use and real use is the reason this program exists. Say so.

“That gap is what we close today. By the time you leave this room, you will have a working prompt for a real task you do every week. And you will understand why your current approach to Claude is leaving 90% of its capability on the table.”

Concept: The Five Task Categories

Every task you hand Claude falls into one of five categories. Generation: creating something from scratch (drafts, code, plans). Analysis: examining something that exists (data, documents, arguments). Transformation: changing the shape or format of content (summaries, translations, restructuring). Extraction: pulling specific information from a larger body (numbers from contracts, names from transcripts, dates from reports). Reasoning: working through logic, calculations, or multi-step problems where the answer requires thought, not just retrieval.

These categories matter because each one has different prompting requirements. A generation task needs voice, audience, and constraints. An extraction task needs precision about what to pull and what format to return it in. Reasoning tasks benefit from chain-of-thought prompting, which participants will learn in Module 06. For now, the categories are a sorting tool. They help people see that “use Claude” is not one skill. It is five.

Script: Introducing the Categories

“Everything you will ever ask Claude to do falls into one of five buckets. Generation: you need something created. Analysis: you need something examined. Transformation: you need something reshaped. Extraction: you need specific pieces pulled out of something larger. Reasoning: you need Claude to think through a problem step by step.”

“Do not memorize these. The exercise we are about to do will burn them into your brain faster than any definition.”

Ask the Room: Prime the Room

“Before we move on: someone give me one task you did this week at work. Just one. Doesn’t matter how small.”

Take the answer. Categorize it on the spot. If someone says “I wrote a status update,” that is Generation. If someone says “I reviewed a vendor proposal,” that is Analysis. Do two or three of these. It primes the room for the sorting exercise.

Live Demo: Three Versions of the Same Question — 15 minutes

Concept: Why This Demo Works

This demo is the single most effective moment in the entire Tier I curriculum. It takes one topic (employee turnover) and shows three levels of prompting quality. Version A is a search query: “employee turnover.” Claude responds with a generic encyclopedia entry. Version B is a vague request: “Help me understand our turnover problem.” Claude gives general advice that could apply to any company. Version C is a structured prompt with role, company details, specific data, and format instructions. Claude produces work that could walk into a boardroom.

The teaching power comes from the contrast. Do not explain the lesson before the demo. Let the outputs speak. The room will see it.

demo-data/module-01/three-versions-same-question.md — All three prompts with expected outputs and teaching notes.

Script: Running the Demo

“I am going to show you the same question asked three different ways. Watch what happens to the output.”

Open Claude on the projector. Paste Version A: “employee turnover.” Wait for the response. Let the room read it. Do not comment.

New conversation. Paste Version B: “Help me understand our turnover problem.” Wait. Let them read.

New conversation. Paste Version C: the full structured prompt with role, company details, specific data, and format instructions. Wait. Let them read.

“Same topic. Same tool. Three completely different results.”

Ask the Room: The Exchange Rate

“Which of those three outputs would you bring to a meeting with your CEO?”

The answer is obvious. Let someone say it. Then ask the follow-up:

“How long did it take to write that third prompt? Maybe two minutes? Three? How long would it take to produce that analysis from scratch?”

This is the exchange rate. Two minutes of writing for two hours of work. Let that sit.

Watch For: “But It Made Things Up”

Someone will say this. It happens in almost every session. They will point out that Claude invented details in the response. Clarify: “Claude worked with the data we gave it. If we had uploaded a real turnover spreadsheet, it would have analyzed real numbers. The quality of the output tracks the quality of the input. That is the whole point.”

If the question is about hallucination more broadly, park it briefly: “Good instinct. Knowing when Claude is reliable and when to verify is a skill we build throughout this program. For now, know this: Claude is most reliable when you give it facts to work with and least reliable when you ask it to generate facts from nothing.”

Guided Exercise: Task Category Sort — 20 minutes

demo-data/module-01/task-categories-exercise.md — 20 workplace tasks with answer key and facilitator notes.

Script: Launching the Exercise

“You have a list of twenty tasks. Real ones. Things like ‘write a product FAQ,’ ‘find all mentions of delivery dates in a contract,’ ‘calculate whether we should lease or buy equipment.’ Each task belongs to one of the five categories: Generation, Analysis, Transformation, Extraction, Reasoning.”

“Work in pairs. You have eight minutes. Sort all twenty. Some tasks could fit more than one category. Pick the best fit and note the ones you debated.”

“Go.”

Concept: The Ambiguous Tasks

Several tasks on the worksheet are deliberately ambiguous. “Summarize a 40-page report” could be Transformation (reshaping long content into short content) or Extraction (pulling key points). “Compare two vendor proposals” could be Analysis (evaluating each one) or Reasoning (deciding which one is better). The ambiguity is the point.

When pairs debate which category a task belongs in, they are developing the habit of thinking about what they are actually asking Claude to do. That thinking is more valuable than getting the “right” answer on the worksheet.

The answer key in the demo file provides the primary category for each task and notes where secondary categories apply. Use it to guide the debrief, not to declare winners.

Script: Debrief the Exercise

“Time. Let us go through these.”

Read through the answer key. For each task, ask the room which category they picked. When there is disagreement, ask both sides to explain. Spend extra time on the ambiguous ones.

Ask the Room: Category Surprise

“Which category surprised you the most? Which tasks did you not expect to fall where they did?”

Reasoning tasks surprise people most often. They do not expect Claude to handle math, logic, or decision frameworks. This sets up the break-even demo that follows.

Live Demo: One Task From Each Category — 10 minutes

demo-data/module-01/sample-interview-transcript.txt — Interview with Maria Flores, Greenfield Manufacturing.
demo-data/module-01/sample-contract-excerpt.txt — Master Services Agreement with dollar amounts.
demo-data/module-01/break-even-data.md — Aluminum bracket scenario with all figures.
Concept: The Break-Even Math

The break-even calculation uses the scenario from the demo file: Greenfield Manufacturing is considering a new product line of custom aluminum brackets for aerospace. The numbers: $472,000 in annual fixed costs (equipment, facility, operators, certification, insurance, marketing), variable costs of $26.69 per unit (raw material $14.20, consumables $3.85, inspection $2.10, shipping $4.50, 3% sales commission on $68 = $2.04), and a selling price of $68.00 per unit.

The contribution margin is $68.00 minus $26.69 equals $41.31 per unit. Break-even is $472,000 divided by $41.31, which equals approximately 11,426 units.
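If you want to sanity-check the demo numbers before class, the arithmetic can be sketched in a few lines. The helper name and structure here are illustrative, not from the demo file; the commission rate is parameterized so the 5%-versus-3% follow-up can be re-run instantly.

```python
import math

def break_even_units(fixed_costs, price, variable_ex_commission, commission_rate):
    """Units needed for the contribution margin to cover fixed costs."""
    variable_cost = variable_ex_commission + commission_rate * price
    contribution_margin = price - variable_cost
    return math.ceil(fixed_costs / contribution_margin)

FIXED = 472_000.00          # annual fixed costs
PRICE = 68.00               # selling price per unit
VAR_EX_COMMISSION = 24.65   # 14.20 + 3.85 + 2.10 + 4.50 (everything but commission)

print(break_even_units(FIXED, PRICE, VAR_EX_COMMISSION, 0.03))  # 11426
print(break_even_units(FIXED, PRICE, VAR_EX_COMMISSION, 0.05))  # 11815, the follow-up scenario
```

Changing one argument and re-running mirrors the point of the follow-up question: you tell Claude what changed rather than redoing the analysis by hand.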

Why this demo works: the math is straightforward enough that people in the room can follow it, but complex enough that doing it by hand would take ten minutes and invite arithmetic errors. Claude handles the computation correctly and shows every step. That transparency is the teaching moment. Claude is not pulling a number from a database. It is working through the logic, which is exactly what the Reasoning category means.

When the room sees the break-even answer, ask the follow-up from the demo file: “What if I told you the sales commission was 5% instead of 3%? Would you re-run the whole analysis yourself, or would you just tell Claude what changed?” This demonstrates that conversation (Module 03’s topic) is the natural complement to a good first prompt.

Common questions: “Can Claude handle more complex financial models?” Yes, but accuracy decreases with complexity. For models with many interdependent variables, Claude should be treated as a fast first draft, not a calculator of record. This program does not teach participants to trust Claude’s math blindly. It teaches them to use Claude’s math as a starting point and verify the numbers that matter.

Script: Running the Rapid Fire

“Five demos. Two minutes each. One from every category. Watch the range.”

Generation: “Write a job posting for a senior quality engineer at a Midwest manufacturing company. The role reports to the Director of Operations and requires 5+ years of experience in ISO 9001 environments.”

Analysis: Use a brief prompt analyzing a competitor pricing page. Keep this one simple.

Transformation: Upload the interview transcript. “Turn this interview into a 500-word Q&A article. Preserve the speaker’s voice. Cut the filler.”

Extraction: Upload the contract excerpt. “List every dollar amount in this document. Include the clause number, the amount, and what the payment is for. Format as a table.”

Reasoning: Paste the break-even data. “Calculate the break-even volume for this product line. Show your work step by step.”

Save the break-even demo for last. It gets the biggest reaction.

Watch For: Speed

Two minutes per demo means you cannot wait for slow outputs. If Claude is generating a long response, narrate while it writes. “See how it structured the job posting with required qualifications separated from preferred qualifications? That is because the prompt specified the role clearly.” Keep the room’s eyes on the screen and their attention on the pattern: specificity in, quality out.

Facilitator Note

Pre-load all five demo files in separate browser tabs before the session starts. The rapid-fire demo loses its energy if you pause to find files. Tab order: (1) a blank Claude chat for the job posting generation prompt, (2) a competitor pricing page screenshot or text you have prepared, (3) the interview transcript, (4) the contract excerpt, (5) the break-even data. Open a new Claude conversation for each demo so context from previous demos does not bleed into the next one.

If you are short on time, cut the Analysis demo (competitor pricing) and the Generation demo (job posting). Keep Transformation, Extraction, and Reasoning. The transcript-to-Q&A and contract-extraction demos are quick and high-impact, and the break-even demo is the anchor of the segment.

Debrief — 5 minutes

Ask the Room: Four Closing Questions

“Four questions to close this module. First: which of the five categories would save you the most time at work?”

“Second: what was the most surprising thing Claude did in any of those demos?”

“Third: where in your own work are you currently sending Version A prompts, the kind we saw fail at the beginning?”

“Fourth: if Claude can do all five of those tasks in under two minutes each, what does your next week look like if you actually use it?”

Let the third question linger. It connects today’s module to tomorrow’s behavior. People need to see their own habits reflected back to them before they change.

Script: Transition to Module 02

“You saw the difference between a bad prompt and a good one. Module 02 gives you the structure. Five components. Every time. It takes sixty seconds longer to write. It saves you six rounds of revision. Let us take a five-minute break, and then we build the skill.”

Segment   | Activity                               | Time
Opening   | Hands-up survey, five task categories  | 10 min
Demo      | Three versions of the same question    | 15 min
Exercise  | Task category sort (pairs)             | 20 min
Demo      | One task per category, rapid fire      | 10 min
Debrief   | Discussion questions                   | 5 min