Teacher’s Edition

Module 14

The Automation Audit

The final capstone. Audit your organization's operations, score the highest-value automation opportunities, build one end-to-end, calculate the ROI, and walk out with a business case ready for leadership.

120 minutes · All Tier III participants · Prerequisite: Modules 10–13

Charter Oak Strategic Partners · Claude Mastery Program · Version 1.0 · Confidential · Not for distribution to participants

Concept: Why This Module Exists

Thirteen modules of capability without a plan for implementation is training that evaporates. The Automation Audit is the bridge between learning and doing. It produces a deliverable that participants take back to their desks: a prioritized list of automation opportunities, scored against concrete criteria, organized into a 90-day implementation roadmap.

This is the module that justifies the training investment to leadership. The audit quantifies the time recovered in hours and shows where that time goes instead — toward higher-value work, strategic initiatives, and the tasks that require human judgment. It names specific processes, assigns them to specific tools (Chat, Projects, Cowork, Claude Code, API), and sequences them into an actionable plan.

Concept: The Four-Phase Audit Framework

The audit follows a structured sequence that participants learned across the program:

Phase 1: Inventory. List every recurring process that consumes time. This is brainstorming — no filtering, no judging, no prioritizing. The goal is volume. Most teams underestimate how many recurring processes they run. Ten is typical on first pass. Twenty is common after prompting. The real number is usually forty or more.

Phase 2: Score. Rate each process against five criteria that predict automation success. These criteria are drawn from the tools and concepts taught across all four tiers. The scoring is designed to surface the highest-value opportunities objectively, preventing the common failure mode of automating the wrong things first.

Phase 3: Rank. Sort processes by total score. The highest-scoring processes are the best automation candidates. This is where participants see the pattern: the processes they find most tedious (high pain) are often highly repeatable (high score), which makes them ideal Claude candidates.

Phase 4: Plan. Assign ranked processes to 30-day sprints. Days 1-30: quick wins (highest score, lowest complexity). Days 31-60: core automations (high score, moderate complexity). Days 61-90: advanced workflows (moderate score, higher complexity or integration requirements). This sequencing builds momentum: early wins generate buy-in for harder projects.

Opening: Setting the Frame — 15 minutes

demo-data/module-14/audit-template.md: Blank audit template with scoring rubric, process inventory, and 90-day plan format.
demo-data/module-14/sample-completed-audit.md: A filled-in example audit for a mid-size operations team.
Script: Opening the Capstone

“This is the last module. Everything you have learned — prompt engineering, system prompts, chain-of-thought, writing with Claude, compound workflows, Cowork, Claude Code, Skills, scheduling, the API — comes together here. You are going to audit your own work, identify what Claude can take off your plate, and build a 90-day plan to make it happen.”

“The deliverable from this exercise is not a hypothetical. It is a document you take back to your desk, share with your manager, and use to justify the tools and time needed to implement what you have learned. If you do this well, this exercise pays for the entire training program.”

Script: Showing the Sample Audit

Display the completed sample audit on screen. Walk through it section by section:

“This is an audit from a mid-size operations team. They found 22 recurring processes. After scoring, eight were ranked as prime candidates. The top three: weekly status report compilation (saved 4 hours/week, automated with a Cowork Skill), customer onboarding packet assembly (saved 3 hours per hire, automated with Claude Code), and monthly compliance check (saved 6 hours/month, automated with an API integration).”

“Total annual time savings: 480 hours. That is twelve full working weeks returned to the team. Weeks that used to go to report compilation, data formatting, and compliance checklists — now available for the strategic work the team was hired to do.”

“This is the document that got their team a Claude for Teams subscription and dedicated implementation time.”

Phase 1: Process Inventory — 20 minutes

Script: Launching the Inventory

“Open the audit template. Phase 1: list every recurring process you do at work. Everything that happens on a schedule or in response to a trigger. Everything that takes more than ten minutes. Everything that makes you think ‘I wish I did not have to do this again.’”

“For each process, capture five things: the name, how often it happens, who does it, how many minutes it takes per occurrence, and your pain level from 1 to 5.”

“Do not filter. Do not judge. Do not worry about whether Claude can handle it. Just list. We filter later. You have fifteen minutes.”

Watch For: Participants Who Say “I Do Not Have Recurring Tasks”

Every knowledge worker has recurring tasks. They just do not think of them that way because they are habitual. Use these prompts to surface hidden processes:

“What do you do every Monday morning?” Status updates, inbox triage, meeting prep.

“What do you do before every meeting?” Pull data, draft agendas, prepare talking points.

“What do you do at the end of every month?” Reports, reconciliation, reviews.

“What do you do every time a new employee starts?” Onboarding docs, access setup, training schedules.

“What do you do every time a customer complains?” Case logging, response drafting, escalation.

“What do you do that you wish you could hand to someone else?” This last one often surfaces the highest-value candidates.

Facilitator Note

Walk the room during the inventory phase. If anyone has fewer than five processes at the ten-minute mark, sit with them and run through the prompts above. Most participants end up with 8-15 processes. Anything under 5 means they need help seeing their own work patterns.

At the fifteen-minute mark, give a five-minute extension if the room is still writing actively. Momentum matters more than the schedule here.

Phase 2: Scoring — 25 minutes

Concept: The Five Scoring Criteria

Each criterion is rated 1-5. Total possible score: 25. Walk through each one with examples:

1. Repeatability (1-5). How consistent is the process? Same steps every time = 5. Completely different every time = 1. A weekly sales report compiled the same way from the same data source scores 5. A creative marketing campaign that is different every quarter scores 1.

2. Structured Input (1-5). Does the process start with structured data? CSVs, forms, templates, database exports = 5. Free-form conversations, ambiguous emails, unstructured brainstorming = 1. Processing an expense report from a standard template scores 5. Interpreting a vague client request scores 1.

3. Clear Rules (1-5). Are the decision rules explicit and documentable? Written procedures, compliance checklists, defined criteria = 5. Tribal knowledge, “you just know,” intuition-based = 1. Categorizing support tickets by the existing taxonomy scores 5. Deciding which sales leads to prioritize based on gut feeling scores 1.

4. Error-Prone (1-5). How often do mistakes happen when humans do this? Frequent copy-paste errors, missed deadlines, inconsistent formatting = 5. Rarely wrong = 1. High scores here indicate high automation value: the process is both tedious and error-prone. Data entry from one system to another often scores 4-5. Strategic planning scores 1.

5. Time Value (1-5). Is the person doing this overqualified for the task? A VP spending two hours compiling a report that a junior analyst could do = 5. The task requires the exact expertise of the person doing it = 1. This criterion captures the opportunity cost: senior people doing junior work is the most expensive form of waste.
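For facilitators who want to show the arithmetic concretely, the rubric can be sketched in a few lines of Python (a hypothetical illustration, not part of the audit template; the class and field names are invented for this sketch):

```python
from dataclasses import dataclass

@dataclass
class ProcessScore:
    """One inventoried process, rated 1-5 on each of the five criteria."""
    name: str
    repeatability: int
    structured_input: int
    clear_rules: int
    error_prone: int
    time_value: int

    def total(self) -> int:
        """Sum of the five criteria. Maximum possible score is 25."""
        return (self.repeatability + self.structured_input
                + self.clear_rules + self.error_prone + self.time_value)

# Illustrative ratings for the weekly sales report example above:
weekly_report = ProcessScore("Weekly sales report", 5, 5, 4, 4, 3)
print(weekly_report.total())  # 21
```

The ratings shown are examples, not canonical answers; the point is that five criteria rated 1-5 always yield a total between 5 and 25.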

Script: Running the Scoring

“Go through your inventory. Score each process on all five criteria. Be honest. A 3 is average — use the full range. If something scores 1 on repeatability, give it a 1. If something scores 5 on time value, give it a 5. The scoring only works if you use the full scale.”

“When you are done, add up the five scores for each process. Maximum is 25.”

Give fifteen minutes for scoring. Then five minutes for table discussion: “Compare your top-scored process with your neighbor. Do you agree with each other’s scores? Challenge anything that seems too high or too low.”

Watch For: Inflated Scores

Participants want their processes to score high because high scores feel like validation. Push back on inflated scores: “If you scored everything 4 or 5 on repeatability, that means every process you do is exactly the same every time. Is that really true?” The scoring is most useful when it differentiates clearly between candidates. A flat distribution (everything scores 18-22) means the criteria are not being applied rigorously enough.

Concept: Interpreting Scores

Share this interpretation framework:

20-25: Prime candidate. Automate this first. It is repeatable, structured, rule-based, error-prone, and being done by someone overqualified for it. This is the kind of process that a Cowork Skill or Claude Code script handles on day one.

15-19: Strong candidate. Automate soon. One or two criteria are lower — maybe the input is semi-structured or the rules have some ambiguity — but the overall profile is strong. These processes may need some redesign before automation (e.g., standardizing the input format).

10-14: Moderate candidate. Consider after the high-scorers are done. These processes often benefit from partial automation: Claude handles the structured portions while a human handles the judgment calls.

Below 10: Low priority. The process may need redesign before automation makes sense. Or it may genuinely require human judgment, creativity, or relationship skills that Claude cannot replicate. Putting these on the backlog is the right call.
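The interpretation bands above translate directly into a simple lookup, sketched here in Python (a hypothetical helper; the band labels mirror the framework as stated):

```python
def interpret_score(total: int) -> str:
    """Map a total audit score (5-25) to its priority band."""
    if not 5 <= total <= 25:
        raise ValueError("Five criteria rated 1-5 give totals from 5 to 25")
    if total >= 20:
        return "Prime candidate: automate first"
    if total >= 15:
        return "Strong candidate: automate soon"
    if total >= 10:
        return "Moderate candidate: consider partial automation"
    return "Low priority: redesign or backlog"

print(interpret_score(21))  # Prime candidate: automate first
```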

Phase 3: Ranking and Tool Assignment — 15 minutes

Script: Building the Ranked List

“Sort your processes by total score, highest to lowest. This is your automation priority list.”

“Now, for each of your top five, assign a Claude interface. Which tool handles this process?”

Remind the room of the decision matrix from Module 11:

Chat: one-off tasks, quick analysis, text generation.
Projects: ongoing work with persistent context.
Cowork: multi-step file-based work, document creation, scheduled tasks.
Claude Code: building tools, scripts, automations.
API: high-volume, system-integrated, real-time processing.

“If you are unsure, ask yourself: does the output get read (Cowork), run (Claude Code), or processed at scale (API)?”

Concept: Calculating Annual Impact

For each ranked process, calculate annual time savings:

Formula: (minutes per occurrence) × (occurrences per year) ÷ 60 = annual hours saved

Frequency multipliers: Daily = 250/year. Weekly = 52/year. Bi-weekly = 26/year. Monthly = 12/year. Quarterly = 4/year. Per event = estimate occurrences.

Sum the top five for total addressable time. This is the number of hours per year your team can redirect from routine processing to higher-value work.

“A process that takes 30 minutes per week adds up to 26 hours per year. Five processes like that is 130 hours — more than three full working weeks. That is time your team gets back for the work that requires their judgment, creativity, and expertise.”
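The formula and frequency multipliers can be verified with a short calculation, sketched here in Python (the multiplier table and the 30-minutes-per-week example come from the section above; the function name is illustrative):

```python
# Working-year multipliers from the frequency list above.
OCCURRENCES_PER_YEAR = {
    "daily": 250, "weekly": 52, "bi-weekly": 26,
    "monthly": 12, "quarterly": 4,
}

def annual_hours_saved(minutes_per_occurrence: float, frequency: str) -> float:
    """(minutes per occurrence) x (occurrences per year) / 60 = annual hours."""
    return minutes_per_occurrence * OCCURRENCES_PER_YEAR[frequency] / 60

# The example from the script: 30 minutes per week.
print(annual_hours_saved(30, "weekly"))      # 26.0 hours per year
print(5 * annual_hours_saved(30, "weekly"))  # 130.0 hours for five such processes
```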

Phase 4: The 90-Day Plan — 15 minutes

Script: Building the Roadmap

“Take your ranked list and assign each process to a sprint.”

“Days 1-30: Quick Wins. Your highest-scored processes with the simplest implementation. These should be things you can build with a Cowork Skill or a simple Claude Code script. One to three processes. The goal is fast results that build momentum and demonstrate value.”

“Days 31-60: Core Automations. Your next tier of processes. These may require more setup — connecting to data sources, building more complex Skills, coordinating with IT for API access. Two to four processes. The goal is establishing the infrastructure for ongoing automation.”

“Days 61-90: Advanced Workflows. The processes that require integration, coordination across teams, or more sophisticated tool use. One to three processes. The goal is tackling the harder problems now that you have credibility from the first two sprints.”

“For each sprint, note: what you will build, which tool you will use, who needs to be involved, and what success looks like.”

Facilitator Note

The 90-day plan is the deliverable that participants present to their managers. Spend time here. Walk the room. Help participants make their plans specific enough to be actionable. “Automate reporting” is not a plan. “Build a Cowork Skill that reads the weekly CRM export and generates the Monday pipeline report, targeting 2 hours/week saved, ready by March 15” is a plan.

Watch For: Plans That Are Too Ambitious

Participants energized by the training will try to automate everything in the first month. Redirect: “Pick one or two quick wins for Sprint 1. Prove the concept. Get a result your manager can see. Then expand.” The biggest risk to automation programs is not lack of ambition — it is trying to do too much before proving the value of doing anything at all.

Presentations — 20 minutes

Script: Table Presentations

“Each table: pick the strongest audit from your group. That person presents to the room. Three minutes: what you found, what you are going to automate first, and the numbers.”

Three minutes per presenter, three to four tables. After each: one question from the room, one piece of feedback from you.

Feedback to give: Was the scoring rigorous? Is the 90-day plan specific? Are the time savings realistic? Is the tool assignment correct?

Program Debrief — 10 minutes

Script: The 14-Module Journey

“Let me take you back to where we started.”

“In Module 01, you classified a task for the first time. You learned the difference between generation, analysis, transformation, extraction, and reasoning. That vocabulary gave you a way to think about what Claude does.”

“In Tier I, you learned to prompt. Single tasks, structured requests, iterative refinement. You went from ‘help me write this’ to precise instructions that produce predictable output.”

“In Tier II, you learned to think in systems. System prompts that define behavior. Chain-of-thought that improves reasoning. Compound workflows that chain steps together. You went from one-shot prompts to multi-step processes.”

“In Tier III, you learned to build. Cowork creates documents from your files. Claude Code builds tools from your descriptions. Skills encode your procedures. Schedules automate your recurring work. The API scales everything to production volume.”

“And today, you learned to prioritize. The audit turns everything you learned into a plan with numbers, timelines, and accountability.”

Script: Closing

“Mastery is not knowing everything Claude can do. Mastery is knowing what to delegate and what to keep. Your audit is the starting line, not the finish line. Execute the plan. Measure the results. Then audit again in 90 days. The second audit will be bigger because you will see automation opportunities everywhere now.”

Aggregate the room’s total annual time savings one last time. Write it on the board. “That number is why this program exists. Go make it real.”

Concept: Post-Program Resources

Before participants leave, share these resources:

Claude Help Center (support.claude.com): Product documentation, feature guides, troubleshooting. Bookmark the articles on Projects, Skills, and Cowork — these are the features participants will use most.

Anthropic Documentation (docs.anthropic.com): API reference, tool use guides, the Anthropic Cookbook. Share with engineering teams who will build the API integrations from participants’ audits.

Skills Directory: Partner-built Skills for Notion, Figma, Atlassian, and financial services. Check for Skills that match participants’ audit items before building from scratch.

AgentSkills.io: The open standard for Skills. Reference for anyone building custom Skills for their organization.

The audit template: Participants keep the template and can run the audit with their teams. The exercise works for groups of any size — a team of three or a department of thirty.

Segment | Activity | Time
Opening | Capstone frame, sample audit walkthrough | 15 min
Phase 1 | Process inventory (brainstorming) | 20 min
Phase 2 | Scoring against five criteria | 25 min
Phase 3 | Ranking, tool assignment, ROI calculation | 15 min
Phase 4 | 90-day implementation plan | 15 min
Presentations | Table presentations to the room | 20 min
Program Debrief | 14-module recap, closing, resources | 10 min