
Student Guide

Module 01

How Claude Thinks

Build the right mental model. Claude is a reasoning engine that processes language, follows instructions, and generates responses based on patterns. Once people see that clearly, everything else in the program clicks.

60 minutes · All roles

Charter Oak Strategic Partners · Claude Mastery Program

What You Will Learn
  • The mental model that makes Claude useful: reasoning engine, not search bar
  • Five task categories and how to recognize which one you need
  • Why prompt structure determines output quality
  • How to move from Version A prompts to Version C prompts

The Mental Model

Claude is not a search engine. It does not look up answers in a database. Claude is a reasoning engine that processes language, follows instructions, and generates responses based on patterns. It works with what you give it in the moment. It does not remember previous conversations. The quality of its output depends entirely on the quality of your input.

Two common mental models hold people back. The first treats Claude like Google: type a few keywords, hope for a useful link. The second treats Claude like magic: type a vague wish, expect it to read your mind. The model that works: give Claude a clear role, specific context, a defined task, and explicit constraints. It will reason with what you provide.

The Five Task Categories

Every task you give Claude falls into one of five categories. Recognizing the category helps you write better prompts, because each category has different requirements.

Category       | What Claude Is Doing                                | Example
Generation     | Creating new content from instructions              | Writing a project update email
Analysis       | Examining information and drawing conclusions       | Reviewing a contract for risk
Transformation | Converting content from one form to another         | Turning meeting notes into action items
Extraction     | Pulling specific information from a larger source   | Finding all dates in a 40-page report
Reasoning      | Working through logic, math, or multi-step problems | Calculating whether a hiring plan fits the budget

Some tasks blend categories. A summary could be Transformation or Extraction. A comparison could be Analysis or Reasoning. The goal is to develop the habit of thinking about what kind of work you are asking Claude to do, because the prompting approach changes by category.

Three Versions of the Same Question

The same underlying question — “What should we do about employee turnover?” — produces radically different results depending on how you ask.

Version A — The Search Query

“employee turnover solutions”

Generic listicle. Reads like the first page of Google results.

Version B — The Vague Request

“We’re having trouble keeping people. Any ideas?”

Better, but Claude is guessing at your situation. Surface-level output.

Version C — The Structured Prompt

“You are an HR strategist advising a 200-person manufacturing company in the Midwest. We’ve lost 34 employees in the last 12 months, mostly from our second-shift production floor. Exit interviews cite three recurring themes: inconsistent scheduling, lack of advancement opportunities, and a feeling of being ‘invisible’ to management. Our budget for retention initiatives is approximately $150K annually.

Analyze these three exit interview themes and recommend specific, implementable retention strategies. For each strategy, include: what it addresses, estimated cost, timeline to implement, and how we would measure whether it’s working.”

Same question. Specific, structured, actionable output. Ready for a leadership meeting.
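
This module only needs the chat interface, but the same structure carries over if your team ever calls Claude through the API. The sketch below is a minimal Python example using the Anthropic SDK, assuming the anthropic package is installed and an API key is set in the environment; the model name is illustrative. The role goes into the system prompt, and the context, task, and constraints go into the user message, exactly as they do in Version C.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name; check current models
        max_tokens=1500,
        # The role from Version C becomes the system prompt.
        system="You are an HR strategist advising a 200-person manufacturing "
               "company in the Midwest.",
        messages=[
            {
                "role": "user",
                # Context, task, and constraints go in the user message.
                "content": (
                    "We've lost 34 employees in the last 12 months, mostly from our "
                    "second-shift production floor. Exit interviews cite three recurring "
                    "themes: inconsistent scheduling, lack of advancement opportunities, "
                    "and a feeling of being 'invisible' to management. Our budget for "
                    "retention initiatives is approximately $150K annually.\n\n"
                    "Analyze these three exit interview themes and recommend specific, "
                    "implementable retention strategies. For each strategy, include: "
                    "what it addresses, estimated cost, timeline to implement, and how "
                    "we would measure whether it's working."
                ),
            }
        ],
    )

    print(response.content[0].text)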

The Exchange Rate

The structured prompt takes about two minutes longer to write than the vague one. That investment saves you multiple rounds of “no, that’s not what I meant.” Two minutes of writing for two hours of work. That is the exchange rate for every interaction you have with Claude.

Exercise: Task Category Sort

Activity — 20 minutes

Instructions: You have a list of 20 real workplace tasks. Working in pairs, sort each task into one of the five categories: Generation (G), Analysis (A), Transformation (T), Extraction (E), or Reasoning (R).

Some tasks could fit more than one category. Pick the best fit and note the ones you debate. The conversation about why a task belongs in a particular category is the point of the exercise.

Time: 8 minutes to sort. Then we debrief together.

Reflection

Questions to Consider

  1. Which of the five categories would save you the most time at work?
  2. What was the most surprising thing Claude did in today’s demos?
  3. Where in your own work are you currently sending Version A prompts?
  4. If Claude can handle a task in any of the five categories in under two minutes, what does your next week look like once you actually use it?

What’s Next

You saw the difference between a vague prompt and a structured one. Module 02: Prompt Anatomy gives you the framework behind that difference: five components that turn any request into a structured prompt. Role, Context, Task, Constraints, Format. A prompt built that way takes sixty seconds longer to write. It saves you six rounds of revision.

Quick Reference — Module 01

The Mental Model

Claude is a reasoning engine. It processes language, follows instructions, and works with what you give it. Give it more, get more back.

Five Task Categories

Generation · Analysis · Transformation · Extraction · Reasoning

The Exchange Rate

Two minutes of structured prompting saves hours of revision. Structure your input. Get usable output.

Version C, Every Time

Include role, context, task, constraints, and format in every prompt. The structured prompt wins every time.