ASSESS — Seeing What's Actually There
Learning to identify hidden friction before spending money
Module 2A: Theory
Learning to see what's real before deciding what to change
R — Reveal
Case Study: The Process That Wasn't There
The consulting engagement started with a confident statement from the COO.
"We know exactly where the problem is," Rachel Mendez said, sliding a process flowchart across the conference table. "Order fulfillment. It's taking us four days when our competitors do it in two. We need you to help us automate it."
The flowchart was beautiful—color-coded swimlanes, clean decision diamonds, numbered steps from order receipt to shipment confirmation. It had been created eighteen months ago during a Lean Six Sigma initiative and lived in a binder that Rachel kept on her credenza.
The consultant, a woman who had spent fifteen years watching the gap between documentation and reality, nodded politely and asked a question that would change the trajectory of the engagement.
"May I spend a day watching orders actually move through your building?"
Rachel hesitated. "The flowchart shows you everything. We documented it thoroughly."
"I'm sure you did. But I find I understand better when I can see it happen."
What the Flowchart Showed
According to the documentation, Meridian Industrial Products processed customer orders through a clean seven-step sequence:
- Order received via email, phone, or web portal
- Order entered into ERP system by customer service
- Credit check performed automatically
- Inventory allocated from available stock
- Pick ticket generated and sent to warehouse
- Order picked, packed, and shipped
- Shipping confirmation sent to customer
Average documented cycle time: 36 hours. Target: 24 hours. Current reality, according to the metrics dashboard: 4.2 days.
The gap seemed obvious. Step 4 through Step 6 must be the bottleneck. The warehouse was slow. That's where automation would help.
What Actually Happened
The consultant arrived at 7:30 AM and positioned herself near the customer service team. She carried a notebook and a simple instruction for herself: write down everything that happens, not what's supposed to happen.
By 9 AM, she had filled six pages.
The first order of the day came in by email at 7:47 AM. A longtime customer named Hartfield Construction needed fifty units of a specialty valve—standard item, good margin, no complexity.
Customer service rep Dana Okafor opened the email, read the request, and immediately minimized the ERP system. She opened a different application—a homegrown Access database that someone had built years ago—and typed in the customer name.
Assessment Signal: This is the first reveal of a shadow system—the unofficial infrastructure propping up official systems.
"Checking the history," Dana explained when asked. "The ERP doesn't show their preferred shipping method or their dock hours. If I don't check this, the order will get rejected at receiving and we'll have to reship."
The Access database had been created in 2019 by a customer service manager who had since retired. It contained 340 customer records with notes like "ALWAYS use FedEx—UPS deliveries get refused" and "Call Jim directly if order is over $5K—he needs to approve before we ship." None of this information existed in the official ERP system.
Dana found the Hartfield record, noted the shipping preferences, and then—instead of entering the order immediately—she opened Outlook and sent an email to someone named Marcus in the warehouse.
"Marcus knows if we actually have those valves," Dana said. "The inventory system says we have 200, but half of them are probably the old version that got superseded. If I enter the order and we don't have stock, it's a whole thing to fix."
Marcus replied twelve minutes later. Yes, they had the new version. Forty-seven units available.
Dana frowned. "They need fifty. Let me check if we can pull three from the Columbus warehouse." This involved a phone call to Columbus, a hold time of eight minutes, and a conversation with someone named Terri who had to physically walk to a shelf to verify.
By 8:52 AM—over an hour after the email arrived—Dana finally entered the order into the ERP system. Step 2 of the documented process.
The Invisible Architecture
Over the next six hours, the consultant documented similar patterns across every function.
Assessment Signal: Here we see a verification ritual—manual judgment compensating for unreliable system logic.
In the credit department, she watched an analyst named Jerome run credit checks using a process that bore no resemblance to the flowchart. The "automatic" credit check was automatic only for customers with less than $10,000 in outstanding receivables. For everyone else—about 40% of orders—Jerome had to manually pull aging reports from three different systems, reconcile them (they never matched), and make a judgment call based on experience and relationship knowledge.
"The system flags everything," Jerome said. "So I have to manually clear things that are obviously fine. Big customer pays late sometimes but always pays? Flagged. Customer with one invoice from three years ago that got disputed? Flagged. I spend most of my day unflagging things that shouldn't have been flagged."
Assessment Signal: This introduces tribal knowledge—personal systems that become critical but invisible.
In the warehouse, the consultant watched pick tickets emerge from a printer and immediately get annotated with handwritten notes. The warehouse supervisor, a man named Eddie Tran who had been there nineteen years, had developed a personal coding system: a star meant the customer was picky about packaging, a circle meant the item was actually stored in a different location than the system showed, a triangle meant "call before shipping—there might be a change coming."
"The system doesn't know where things really are," Eddie explained. "It knows where they're supposed to be. But supposed-to-be and actually-are diverge pretty quick around here."
The consultant asked how long it would take to correct the location data in the system. Eddie laughed. "We tried that twice. Takes about six weeks to get it accurate, and then it starts drifting again immediately. The system can't keep up with reality."
The Questions No One Had Time For
At the end of the day, the consultant had documented forty-seven distinct activities that didn't appear on the process flowchart. She had identified eleven different "shadow systems"—spreadsheets, Access databases, paper notes, bookmark folders, and email chains that employees relied on because the official systems didn't contain what they needed.
She had also identified twelve recurring questions that employees answered dozens of times daily—questions that required judgment, institutional knowledge, or information retrieval from unofficial sources:
- Do we actually have this in stock, and is it the right version?
- Can this customer's credit be approved despite the flag?
- What are this customer's actual shipping requirements?
- Is this price correct, or does this customer have a special arrangement?
- Who needs to approve this exception?
- Where is this item actually located in the warehouse?
- Is this order complete, or is the customer expecting something else?
- Has anyone already talked to this customer today about this issue?
- Which carrier should we use for this destination?
- Is this backorder still valid, or did the customer find another source?
- Does this return need inspection before credit?
- Who owns this customer relationship when there's a problem?
These questions represented the real work—the cognitive labor that kept the operation functional despite systems that didn't reflect reality.
The Moment of Recognition
The consultant presented her findings the next morning. Rachel Mendez listened with an expression that shifted from skepticism to recognition to something approaching dismay.
"None of this is in the flowchart," Rachel said quietly.
"No."
"We built that flowchart with input from every department. We had a consultant facilitate it. We validated it with the teams."
"You documented the process you designed. You didn't document the process you run."
Rachel stared at the list of shadow systems. "Dana's Access database. I had no idea that existed."
"Dana has been running it for five years. She maintains it on her own time. It's become critical infrastructure. If Dana leaves, that knowledge leaves with her."
"And the warehouse location problem—Eddie's been complaining about that for years. I thought he was just resistant to the system."
"He's not resistant. He's compensating. The difference matters."
Rachel sat back. "So when you came in and I said we needed to automate order fulfillment—"
"You were proposing to automate a process that doesn't exist. The flowchart process, the clean seven-step sequence, isn't what actually happens. If you automate that, you'll accelerate dysfunction. You'll break all the workarounds that are currently keeping you functional."
"Then what do we do?"
"First, you see what's actually there. You map the real process—the one your people actually run, including the shadow systems and the judgment calls and the institutional knowledge. Then you understand why the gap exists. Then—and only then—you decide what to change."
Rachel looked at the list of twelve questions. "These questions no one has time for. That's where the time goes."
"That's where the four days go. Not in the warehouse. Not in the steps on your flowchart. In the cognitive work of answering questions that your systems should answer but don't."
The Assessment That Changed Everything
Over the following two weeks, the consultant led Meridian through a systematic assessment—not of their documented process, but of their actual operation. They mapped every workaround, catalogued every shadow system, identified every question that required human judgment because systems failed to provide answers.
What emerged was a picture of an organization that had, over years of incremental adaptation, built an invisible architecture of human compensation. The official systems were scaffolding; the real structure was held together by institutional knowledge, relationship memory, and individual heroics.
The order fulfillment problem wasn't a warehouse problem. It was an information problem. The four-day cycle time came from:
- 47 minutes average waiting for inventory confirmation (because the system couldn't be trusted)
- 23 minutes average waiting for credit clarification (because the rules didn't match reality)
- 38 minutes average for shipping specification lookup (because preferences weren't captured)
- 2.1 days average for exception handling (because exceptions were the norm, not the edge case)
None of these problems could be solved by automating the flowchart process. All of them could be solved by closing the gap between what the systems knew and what the work required.
The engagement shifted from "automate order fulfillment" to "make the official process match the real process." The solution wasn't faster automation. It was better architecture—systems that captured what people actually needed to know, workflows that reflected how work actually happened, and elimination of the shadow systems that had become organizational scar tissue.
Eighteen months later, Meridian's order cycle time was 1.8 days. They had automated nothing. They had simply stopped asking people to work around systems that didn't serve them.
O — Observe
The Map Is Not the Territory
Every organization operates two processes simultaneously: the one they document and the one they run.
The documented process exists in flowcharts, procedure manuals, training materials, and system configurations. It represents how work was designed to happen—rational, sequential, clean.
The actual process exists in behavior—what people do when they sit down to work. It includes the workarounds, the shortcuts, the shadow systems, the tribal knowledge, and the accumulated adaptations that have developed over years of reality not matching design.
Alfred Korzybski's famous observation—"the map is not the territory"—applies directly to organizational process. The process map is not the process. And when organizations try to improve or automate based on the map rather than the territory, they optimize for a fiction.
This is why the ASSESS phase must precede any other work. Before calculating ROI, before designing workflows, before building anything—you must see what's actually there.
Before we go deeper, here is the principle that elevates this module from interesting to indispensable: If you skip the assessment step and move directly into automation, AI, tooling, or workflow redesign, every dollar you spend will amplify your existing dysfunction. Speed multiplies whatever architecture it's attached to. This is why seeing the real process—before improving or automating it—is non-negotiable.
Shadow Systems
Waste hides in the gap between documented and actual process. The most visible manifestation is the shadow system—unofficial resources that supplement gaps in official systems.
What Shadow Systems Look Like
- Personal spreadsheets or databases (Dana's Access database)
- Browser bookmark collections
- Paper notes, cheat sheets, or reference cards (Eddie's pick ticket annotations)
- Email folders or saved messages
- Informal contact networks ("Call Marcus to verify")
- Physical artifacts (annotated printouts, color-coded folders)
Why Shadow Systems Emerge
Shadow systems aren't signs of inconsistency or resistance. They're the natural adaptations people create when official systems fail to give them what the work requires. They emerge when:
- Official systems don't contain required information
- Official processes don't match actual work requirements
- System updates lag behind business changes
- Edge cases aren't handled by standard workflows
The Shadow System Paradox
Shadow systems are simultaneously invaluable and dangerous:
- Invaluable: They keep operations running despite system inadequacies
- Dangerous: They create single points of failure, undocumented dependencies, and institutional fragility
The assessment must surface shadow systems, but the goal isn't necessarily to eliminate them—it's to understand what gaps they fill and decide whether those gaps should be closed formally.
Module 1 introduced cognitive tax—the mental overhead that drains practitioners without appearing in efficiency metrics. That module taught you to recognize friction by how it feels: decision fatigue, context switching, uncertainty load. This module teaches you to identify friction by what it looks like. The waste patterns below are the structural causes of cognitive tax.
The Invisible Waste Taxonomy
Beyond shadow systems, waste takes other predictable forms:
- Copy-Paste Loops — Information manually transferred between systems
- Verification Rituals — Activities to confirm system outputs can be trusted
- Tribal Knowledge — Information that exists only in people's heads
- Human Routers — People serving as traffic controllers for information
- Exception Theaters — Elaborate handling of "exceptions" that are common
- Reconciliation Ceremonies — Periodic efforts to align systems with reality
When you observe copy-paste loops, verification rituals, or human router dependencies, you're seeing the process-level mechanisms that generate the exhaustion practitioners reported in your Cognitive Tax Assessment.
Copy-Paste Loops
Information that exists in one system but must be manually transferred to another. Every copy-paste is a failure of integration—and an opportunity for error. Dana's process of checking the Access database before entering orders in the ERP is a copy-paste loop: information exists, but not where the workflow needs it.
Copy-paste loops are easy to identify: look for any activity where someone types information that already exists somewhere else. The labor isn't the problem; the error rate and the cognitive load of remembering to do it are the problems.
Primary cognitive tax: decision fatigue (each transfer requires verification decisions) and context switching (moving between system logics).
Verification Rituals
Activities performed to confirm that systems can be trusted—or to compensate for the fact that they can't. Jerome's manual credit review process was a verification ritual: the system generated an output, but the output couldn't be trusted, so human judgment was inserted to verify.
Verification rituals indicate a breakdown in system reliability. The question isn't "how do we automate the verification?" but "why can't we trust the system?"
Primary cognitive tax: uncertainty load (the ritual confirms distrust) and decision fatigue (judging how thoroughly to verify).
Tribal Knowledge
Information that exists only in people's heads—customer preferences, exception handling procedures, location of items, relationship history. Eddie's handwritten annotations on pick tickets were tribal knowledge made visible. Dana's Access database was tribal knowledge made semi-permanent.
Tribal knowledge is simultaneously invaluable (it keeps operations running) and dangerous (it creates single points of failure). The assessment must surface it, but the goal isn't necessarily to eliminate it—it's to understand what knowledge is critical and ensure it isn't locked in one person's head.
Primary cognitive tax: hidden dependencies (knowledge exists only in people) and uncertainty load (anxiety when the knowledge-holder is unavailable).
Human Routers
People who serve as traffic controllers for information or decisions—not because routing is their job, but because they're the only ones who know where things should go. Questions like "Who owns this customer relationship when there's a problem?" indicate a human router dependency.
Human routers represent process design failures. When routing logic can't be articulated systematically, it can't be scaled, trained, or automated.
Primary cognitive tax: hidden dependencies (the router is a single point of failure) and context switching (every consultation interrupts workflow).
Exception Theaters
Elaborate processes that exist to handle "exceptions" that are actually the norm. Meridian's credit review process treated 40% of orders as exceptions requiring manual handling. When exceptions exceed 15-20% of volume, the exception process is the real process.
Exception theaters often indicate that original process design didn't account for business reality. The "normal" path was designed for ideal cases that rarely occur.
Primary cognitive tax: workaround maintenance (tracking exception rules) and decision fatigue (determining which exception path applies).
Reconciliation Ceremonies
Periodic activities devoted to making systems match each other or match reality. Monthly inventory reconciliation, quarterly customer data cleanup, annual system audits—all are symptoms of systems that diverge from truth over time.
Reconciliation ceremonies consume enormous time and indicate that source-of-truth problems haven't been solved. The assessment should quantify reconciliation effort across the organization.
Primary cognitive tax: workaround maintenance (repeated corrective effort) and context switching (shifting from normal work to reconciliation mode).
Cognitive Burden vs. Operational Burden
The Cognitive Tax Assessment from Module 1 identified mental overhead. The Opportunity Audit adds a complementary distinction: the difference between cognitive burden and operational burden.
Operational burden is the time and effort required to execute a task. It's measurable, visible, and often the focus of efficiency initiatives. "This process takes 6 hours" is an operational burden statement.
Cognitive burden is the mental overhead required to figure out how to execute a task, to verify that execution is correct, or to compensate for system inadequacies. It's less visible and rarely measured, but often more exhausting than the operational work itself.
This distinction matters because teams don't burn out from tasks—they burn out from navigating uncertainty.
The Meridian case illustrates this distinction. The operational burden of entering an order into the ERP was perhaps 3 minutes. The cognitive burden—checking the Access database, verifying inventory, determining shipping preferences, navigating credit exceptions—was 47 minutes on average.
Automation typically addresses operational burden. It makes execution faster. But it rarely addresses cognitive burden—and sometimes increases it by adding new systems to navigate or new verification requirements.
An effective assessment distinguishes between these burdens and quantifies both. An opportunity that reduces operational burden but increases cognitive burden may not be an improvement.
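To make the distinction concrete, here is a minimal sketch (in Python, not drawn from the Meridian engagement itself) that converts both burdens into annual hours. The 3-minute and 47-minute figures echo the case above; the daily order volume and workday count are assumed placeholders.

```python
# Minimal sketch: separating operational from cognitive burden when sizing an
# opportunity. Per-order minutes echo the Meridian example; the daily order
# volume is a hypothetical figure chosen only for illustration.

OPERATIONAL_MIN_PER_ORDER = 3    # time to execute the ERP entry itself
COGNITIVE_MIN_PER_ORDER = 47     # shadow-system checks, verification, judgment calls
ORDERS_PER_DAY = 60              # assumed volume; replace with real audit data
WORKDAYS_PER_YEAR = 250          # assumed working calendar

def annual_hours(minutes_per_cycle: float, cycles_per_day: int) -> float:
    """Convert a per-cycle burden into annual hours for comparison."""
    return minutes_per_cycle * cycles_per_day * WORKDAYS_PER_YEAR / 60

operational = annual_hours(OPERATIONAL_MIN_PER_ORDER, ORDERS_PER_DAY)
cognitive = annual_hours(COGNITIVE_MIN_PER_ORDER, ORDERS_PER_DAY)

print(f"Operational burden: {operational:,.0f} hours/year")
print(f"Cognitive burden:   {cognitive:,.0f} hours/year")
print(f"Cognitive share:    {cognitive / (operational + cognitive):.0%}")
```

Under these assumptions the cognitive burden dwarfs the operational burden, which is exactly the imbalance that task-level automation leaves untouched.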
The Questions No One Has Time For
Every organization carries unanswered questions—not because answers don't exist, but because no one has time to find, document, and systematize them.
These questions fall into predictable categories:
Information Location Questions: "Where do I find X?" "Which system has the current version?" "Who knows about Y?"
Interpretation Questions: "What does this status mean?" "Is this an exception?" "How should I handle this situation?"
Authority Questions: "Who can approve this?" "Do I need permission?" "What's the escalation path?"
History Questions: "What happened last time?" "What's the context here?" "Why is it this way?"
Relationship Questions: "Who's the right contact?" "What's the history with this customer?" "Who owns this?"
These questions represent decision points that require human judgment because systems don't provide answers. Each question answered is cognitive work performed—work that doesn't appear in any process documentation or efficiency metric.
Assessment reveals these questions through direct observation—watching work as it happens and asking, "What decision are you making right now, and why?" The goal is to surface the real mental workload and see which parts can be made systematic.
What People Think Assessment Is vs. What It Actually Is
Many leaders believe they already understand their processes. They don't. They understand the documented version, not the real one. This table reframes the most common misconceptions:
| Myth / Common Belief | Why It's Wrong | What Assessment Actually Requires |
|---|---|---|
| "We already know how our process works." | People know the designed workflow, not the lived workflow. | Direct observation, behavior mapping, and evidence—not assumptions. |
| "We can automate first and fix later." | Automation accelerates dysfunction when built on inaccurate architecture. | Fix the information and process foundation before adding speed. |
| "Shadow systems mean employees are resisting." | They're compensating for system gaps, not rebelling. | Surface shadow systems to understand what the official system fails to provide. |
| "Documentation updates will fix this." | Documentation always lags reality. | Only tight operational design and continuous observation keep systems aligned with practice. |
| "We just need a better tool." | Tools reflect process—they don't repair it. | Improve the process and architecture first; tools come last. |
Reframed this way, assessment is a discipline, not an administrative task.
Module 2A establishes the theoretical foundation. Module 2B provides the practical methodology for conducting an Opportunity Audit.
Module 2B: Practice
A systematic methodology for seeing what's actually there
Why This Module Exists
Module 2A established the theory: the map is not the territory. Organizations operate two processes simultaneously—the one they document and the one they run. Shadow systems, tribal knowledge, and invisible waste accumulate in the gap between design and reality.
This module provides the methodology to close that gap.
The Opportunity Audit is not an academic exercise. It is a field discipline—a structured way to observe, document, and prioritize the hidden architecture that keeps organizations running despite their official systems. Every framework in this module has been tested in real operations, refined through failure, and validated by practitioners who needed answers, not theories.
What You Will Learn
By the end of Module 2B, you will be able to:
- Conduct a complete Opportunity Audit — from initial observation through prioritized portfolio delivery
- Identify and catalogue shadow systems — the unofficial infrastructure that supplements official systems
- Map the questions no one has time for — the cognitive labor that doesn't appear in any process documentation
- Quantify invisible waste — using the taxonomy from Module 2A to categorize and measure friction
- Build an Opportunity Portfolio — a prioritized inventory of improvement possibilities ranked by impact and feasibility
- Validate findings with practitioners — ensuring your audit reflects reality, not your assumptions about reality
The Practitioner's Challenge
Most assessments fail for the same reason: they document what people say happens instead of what actually happens.
A senior operations director described it this way: "We hired consultants to map our processes. They interviewed everyone, built beautiful flowcharts, and presented findings that looked exactly like our existing documentation. We paid six figures for a mirror."
The Opportunity Audit avoids this trap by inverting the standard approach:
| Traditional Assessment | Opportunity Audit |
|---|---|
| Start with documentation | Start with observation |
| Interview first, observe second | Observe first, interview to understand why |
| Trust what people say | Trust what people do |
| Map the designed process | Map the actual process |
| Identify problems | Identify opportunities |
| Deliver a report | Deliver a prioritized portfolio |
The difference isn't just methodological—it's philosophical. Traditional assessments ask "How should this work?" The Opportunity Audit asks "How does this actually work, and what does that reveal?"
Field Note: The First Hour
A practitioner conducting her first Opportunity Audit described the experience:
"I positioned myself near the accounts payable team at 8 AM. Within the first hour, I had filled four pages of notes. I watched invoices get printed, annotated by hand, walked to another department for signature, walked back, scanned, emailed, printed again, and filed. The same information touched seven systems and twelve hands before payment could be issued.
"None of this was in the process documentation. The documented process showed three steps: receive invoice, approve invoice, pay invoice. The actual process had twenty-three steps, and most of them existed because different systems couldn't talk to each other.
"By 9 AM, I understood more about their operations than their process maps had shown in three years."
This is what direct observation reveals: the invisible architecture that organizations build around their official systems.
Module Structure
Module 2B covers the remaining phases of the ROOTS framework:
- O — OPERATE: The eight-step Opportunity Audit methodology, with detailed guidance for each phase
- T — TEST: Metrics and validation approaches to confirm audit accuracy and measure improvement
- S — SHARE: Reflection prompts, peer exercises, and discussion questions for consolidation
Supporting materials include:
- Reading list with academic and practitioner sources
- Slide deck outline for presentation
- Assessment questions with model answers
- Instructor notes for facilitation
Before You Begin
The Opportunity Audit requires:
- Access — You must be able to observe real work happening in real time
- Time — Minimum 4-6 hours of direct observation, plus interview and analysis time
- Humility — Your assumptions about how work happens are probably wrong
- Curiosity — Every workaround has a story; your job is to find it
The audit is not about finding fault. It's about finding truth. Shadow systems exist because capable people solved problems. Tribal knowledge exists because official systems failed to capture what work requires. Your job is to see this clearly, document it accurately, and prioritize wisely.
The goal is not to eliminate workarounds—it's to understand what gaps they fill and decide which gaps are worth closing.
Proceed to the Opportunity Audit methodology.
O — Operate
Deliverable: The Opportunity Audit
The Opportunity Audit is a systematic assessment of one process, documenting the gap between designed and actual operation, and producing a prioritized portfolio of improvement opportunities.
What the Audit Produces
A completed Opportunity Audit delivers:
- Reality Map — A documented picture of how work actually happens, including all steps, systems, handoffs, and decision points that don't appear in official documentation
- Shadow System Inventory — A catalogue of every unofficial resource people rely on, with analysis of what gaps each fills and what risks each creates
- Question Inventory — A list of recurring questions that require human judgment, categorized by type and quantified by frequency and resolution time
- Waste Pattern Analysis — Classification of observed friction using the invisible waste taxonomy, with quantified impact estimates
- Opportunity Portfolio — A prioritized list of improvement possibilities, ranked by impact (Time, Throughput, Focus) and feasibility
- Recommended Actions — Specific interventions for the top 3-5 opportunities, with estimated effort and expected returns
Why This Structure Matters
The audit structure is designed to answer the questions that leadership actually needs answered:
| Leadership Question | Audit Component That Answers It |
|---|---|
| "How does this process really work?" | Reality Map |
| "What unofficial systems are we depending on?" | Shadow System Inventory |
| "Where is time actually going?" | Question Inventory + Waste Pattern Analysis |
| "What should we fix first?" | Opportunity Portfolio |
| "What will it take to fix it?" | Recommended Actions |
Without this structure, assessments produce awareness without direction. Leaders learn that problems exist but don't know which ones matter most or what to do about them.
Scope: One Process at a Time
The Opportunity Audit examines one process thoroughly rather than multiple processes superficially. This constraint is intentional.
Why single-process focus works:
- Deep observation reveals patterns that surveys miss
- Practitioners open up when they see genuine interest in their work
- Root causes become visible through repeated observation
- Findings are specific enough to act on
How to choose which process to audit:
| Selection Criteria | Questions to Ask |
|---|---|
| Pain visibility | Where do complaints concentrate? |
| Volume | How often does this process execute? |
| Cross-functional touch | How many departments or systems are involved? |
| Strategic importance | Does this process affect customers, revenue, or compliance? |
| Improvability | Is there organizational will to change this? |
A process that scores high on volume and cross-functional touch will typically reveal the most systemic issues. A process with strategic importance ensures findings will receive attention.
The Audit Timeline
A complete Opportunity Audit typically requires 12-20 hours over 3-5 days:
| Phase | Time Required | Activities |
|---|---|---|
| Preparation | 1-2 hours | Gather documentation, identify observation points, schedule access |
| Observation | 4-6 hours | Watch 5+ complete process cycles, document everything |
| Shadow System Mapping | 1-2 hours | Identify and catalogue unofficial resources |
| Question Cataloguing | 1 hour | List and categorize recurring judgment calls |
| Documentation Comparison | 30-45 minutes | Compare observed reality to official process |
| Root Cause Interviews | 1-2 hours | Explore why gaps exist with 3-5 practitioners |
| Analysis & Portfolio Building | 2-3 hours | Synthesize findings, prioritize opportunities |
| Validation | 1 hour | Review findings with practitioners |
| Recommendation Development | 1-2 hours | Detail top opportunities with action plans |
The time investment is front-loaded in observation. This is intentional—the quality of findings depends entirely on the quality of observation.
Inputs Required Before Beginning
Before starting an Opportunity Audit, gather:
1. Process Documentation
Whatever exists: flowcharts, procedure manuals, training materials, system guides, policy documents. Don't read them yet—just collect them for later comparison.
2. System Access
Ability to observe how systems are actually used. This means sitting with practitioners while they work, not reviewing screenshots or recordings. Real-time observation captures hesitation, frustration, workarounds, and consultation that recordings miss.
3. Observation Time
Minimum 4 hours watching the process execute in real conditions. Schedule during normal operations, not during slow periods or crises. You need to see typical work, not edge cases.
4. Interview Access
Plan for 30-45 minute conversations with at least 3 people who perform the work regularly. Include a mix of tenure levels—new employees see friction that veterans have normalized, while veterans understand history that explains why workarounds exist.
5. Volume Data
Basic metrics about process frequency: How many times per day/week does this process execute? How many people touch it? What's the documented versus actual cycle time? This data helps quantify impact during analysis.
The Cardinal Rule
Do not start with the documentation.
This is counterintuitive. Most assessments begin by reviewing existing process maps, then observing to validate them. The Opportunity Audit inverts this sequence deliberately.
If you read documentation first, you'll see what's supposed to happen and unconsciously filter your observations to match. You'll notice the steps that exist in the flowchart and miss the steps that don't. You'll interpret workarounds as deviations rather than signals.
Start with observation. See what actually happens. Then read the documentation to understand what the gap reveals.
The gap between documentation and reality is not a flaw to be corrected—it's a diagnostic signal. It tells you where systems failed to meet work requirements, where business conditions changed faster than processes adapted, and where tribal knowledge became critical infrastructure.
Field Note: The Documentation Trap
A practitioner described falling into the documentation trap on an early audit:
"I was assessing a procurement process and made the mistake of reading their SOPs first. The documentation showed a clean approval workflow: request submitted, manager approves, purchasing places order, goods received, invoice paid.
"When I observed, I kept seeing this workflow. Request, approval, order, receipt, payment. It matched. I was about to conclude that their process was well-documented when a purchasing agent asked me, 'You're not going to mention the spreadsheet, are you?'
"That's when I learned about the master tracking spreadsheet that everyone used because the official system couldn't handle partial shipments, backorders, or vendor substitutions—which happened on 60% of orders. The 'clean workflow' I observed was actually a parallel process running alongside an invisible infrastructure I had completely missed.
"I had to start over. The second time, I didn't read anything first. I just watched. Within an hour, I saw the spreadsheet, the email chains, the phone calls to vendors that weren't logged anywhere. The real process was twice as complex as the documented one, and most of the complexity existed to handle situations the official system couldn't."
This is why the cardinal rule exists. Documentation shapes perception. Observe first.
Proceed to the eight-step audit methodology.
Step-by-Step Process
Step 1: Observe Without Judgment (2-4 hours)
Position yourself where the work happens. Watch at least 5 complete process cycles from start to finish. Document:
- Every action taken (not just the major steps)
- Every system or tool touched
- Every pause, wait, or handoff
- Every consultation with another person or resource
- Every workaround, shortcut, or adaptation
Write what you see, not what you expect. Note timestamps. Capture the actual sequence, including the messy parts.
Step 2: Map the Shadow Systems (1-2 hours)
Through observation and interview, identify every unofficial resource people use:
- Personal spreadsheets or databases
- Browser bookmark collections
- Paper notes, cheat sheets, or reference cards
- Email folders or saved messages
- Informal contact networks
- Physical artifacts (annotated printouts, color-coded folders)
For each shadow system:
- What information does it contain?
- Why doesn't this information live in official systems?
- Who maintains it?
- What happens if it disappears?
Step 3: Catalogue the Questions (1 hour)
List every recurring question that employees must answer during the process:
- Information location questions
- Interpretation questions
- Authority questions
- History questions
- Relationship questions
For each question:
- How often is it asked?
- How long does it take to answer?
- What resources are consulted?
- What happens if the answer is wrong?
Step 4: Identify Waste Patterns (1 hour)
Review your observations and categorize waste:
- Copy-paste loops: Information transferred manually between systems
- Verification rituals: Activities to confirm system outputs can be trusted
- Knowledge locked in heads: Information existing only in people's minds
- Human router dependencies: People serving as traffic controllers
- Exception theaters: Elaborate handling of "exceptions" that are actually common
- Reconciliation ceremonies: Periodic efforts to align systems with reality
Quantify where possible: How many instances? How much time? How many people affected?
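A minimal sketch of this quantification step follows, assuming hypothetical observation counts; every number is a placeholder to be replaced with figures from your own audit.

```python
# Minimal sketch for the "quantify where possible" step: turning one observed
# waste pattern into an annualized estimate. All numbers are hypothetical
# placeholders, not audit findings.

def annual_waste_hours(instances_per_day: float,
                       minutes_per_instance: float,
                       people_affected: int,
                       workdays_per_year: int = 250) -> float:
    """Estimate yearly hours consumed by one waste pattern."""
    daily_minutes = instances_per_day * minutes_per_instance * people_affected
    return daily_minutes * workdays_per_year / 60

# Example: a copy-paste loop observed ~12 times a day, ~4 minutes each,
# performed by 3 customer service reps (assumed values).
print(f"{annual_waste_hours(12, 4, 3):,.0f} hours/year")  # -> 600 hours/year
```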
Step 5: Compare to Documentation (30-45 minutes)
Now read the official process documentation. Compare it to what you observed:
- What steps exist in documentation but not in practice?
- What steps exist in practice but not in documentation?
- Where does the sequence differ?
- What systems are used that documentation doesn't mention?
- What decision points aren't documented?
The gap between documentation and reality is your primary finding.
Step 6: Conduct Root Cause Interviews (1-2 hours)
With 3-5 practitioners, explore why the gaps exist:
- "I noticed you use [shadow system]. Tell me about why that exists."
- "The documentation says [X], but I saw you do [Y]. What's the story there?"
- "When did this workaround start? What problem was it solving?"
- "If you could change one thing about this process, what would it be?"
- "What do new people struggle with most when learning this?"
Listen for the history. Most workarounds have rational origins that reveal system limitations.
Step 7: Build the Opportunity Portfolio (1-2 hours)
Synthesize findings into a clear portfolio. For each opportunity:
| Element | Description |
|---|---|
| Gap ID | Unique identifier |
| Gap Description | What's the difference between designed and actual? |
| Waste Category | Which pattern(s) from the taxonomy? |
| Root Cause | Why does this gap exist? |
| Time Impact | Hours per day/week/month consumed |
| Throughput Impact | Effect on volume, completion rate, error rate |
| Focus Impact | Mental load imposed |
| Affected Scope | People, transactions, or frequency |
| Dependencies | What's connected? What else changes? |
| Feasibility | Low/Medium/High difficulty to address |
| Priority Score | Composite of impact and feasibility |
Step 8: Rank and Recommend (30 minutes)
Prioritize opportunities using a weighted model:
Priority Score =
(Time Impact × 2) +
(Throughput Impact × 2) +
(Focus Impact × 3) +
(Scope × 1) -
(Feasibility Difficulty × 2)
Higher scores indicate higher priority opportunities. The weighting emphasizes Focus (hardest to recover) and penalizes high-difficulty interventions.
Select the top 3-5 opportunities for detailed recommendation.
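As a worked illustration of Steps 7 and 8 (a sketch, not a prescribed tool), the snippet below encodes a portfolio entry and the weighted priority formula. The 1-5 rating scale and the two sample entries are assumptions added for demonstration; the module specifies the weights but not the scale.

```python
# Sketch combining the Step 7 portfolio record with the Step 8 weighted
# priority formula. The 1-5 rating scale and the example entries are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Opportunity:
    gap_id: str
    description: str
    waste_category: str
    time_impact: int             # 1 (minor) .. 5 (severe)
    throughput_impact: int       # 1 .. 5
    focus_impact: int            # 1 .. 5
    scope: int                   # 1 (few people) .. 5 (organization-wide)
    feasibility_difficulty: int  # 1 (easy) .. 5 (very hard)

    def priority_score(self) -> int:
        """Weighted model from Step 8: Focus weighted highest, difficulty subtracted."""
        return (self.time_impact * 2
                + self.throughput_impact * 2
                + self.focus_impact * 3
                + self.scope * 1
                - self.feasibility_difficulty * 2)

portfolio = [
    Opportunity("SS-01", "Returns Bible not in ERP", "Tribal knowledge", 4, 3, 5, 4, 3),
    Opportunity("CP-01", "Order data re-keyed from email", "Copy-paste loop", 3, 2, 2, 3, 1),
]

# Rank the portfolio: highest score first.
for opp in sorted(portfolio, key=lambda o: o.priority_score(), reverse=True):
    print(opp.gap_id, opp.priority_score())
```

Because the difficulty term is subtracted, a high-impact but very hard intervention can rank below an easier one, which is the intended behavior of the weighting.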
Shadow System Analysis: Deep Dive
Shadow systems are the most visible signal of the gap between documented and actual process. This section provides detailed methodology for identifying, cataloguing, and analyzing shadow systems during the Opportunity Audit.
What Counts as a Shadow System
A shadow system is any unofficial resource that supplements gaps in official systems. The key word is "supplements"—shadow systems don't replace official systems; they fill holes that official systems leave.
Common Shadow System Types:
| Type | Examples | What It Usually Signals |
|---|---|---|
| Personal databases | Access databases, Excel files with lookup tables, personal CRM notes | Official system lacks required data fields or relationships |
| Reference documents | Word docs with decision rules, PDF cheat sheets, annotated printouts | Process logic is too complex or changes too frequently for system configuration |
| Email archives | Saved threads for reference, inbox folders organized by topic or customer | No central repository for correspondence or decision history |
| Browser bookmarks | Organized link collections, frequently-used URLs | Information scattered across systems, no single entry point |
| Physical artifacts | Sticky notes, annotated forms, color-coded folders, desk references | Critical information not in any digital system |
| Contact networks | Known experts for specific questions, "who to call" knowledge | Routing logic not documented, expertise not distributed |
| Workaround procedures | Unofficial steps inserted into official process | System limitations requiring human compensation |
The Shadow System Discovery Protocol
Use this systematic approach to surface shadow systems during observation:
Step 1: Watch for System Switching
When practitioners switch between applications or reach for non-digital resources, note:
- What triggered the switch?
- What information were they seeking?
- Where did they find it?
- Why isn't that information in the system they were using?
Step 2: Listen for Reference Language
Phrases that signal shadow systems:
- "Let me check my spreadsheet..."
- "I keep a list of..."
- "The system doesn't show this, but..."
- "I always look at X before doing Y..."
- "There's a document that [person] maintains..."
- "The real answer is in..."
Step 3: Ask Direct Questions
During observation or interviews:
- "What do you consult besides the main system?"
- "Where do you keep information that doesn't fit in [official system]?"
- "What would you grab if you had to train someone to do your job tomorrow?"
- "What files would you absolutely need if your computer died?"
Step 4: Look at the Physical Workspace
Examine desks, monitors, walls, and shared spaces:
- What's posted on monitors or walls?
- What reference materials are within arm's reach?
- What's in the desk drawer that gets opened frequently?
- What's printed out and annotated?
Shadow System Documentation Template
For each shadow system identified, document:
SHADOW SYSTEM INVENTORY ENTRY
Name/Description: ________________________________
(What practitioners call it or how they describe it)
Type: [ ] Database [ ] Document [ ] Email [ ] Bookmarks
[ ] Physical [ ] Contact Network [ ] Procedure
Location: ________________________________
(Where it lives—file path, desk location, person's name)
Owner/Maintainer: ________________________________
(Who created it, who updates it, how often)
Information Contained:
________________________________
________________________________
________________________________
Gap It Fills:
What information or capability does the official system lack that this
resource provides?
________________________________
________________________________
Usage Frequency:
[ ] Multiple times daily [ ] Daily [ ] Weekly [ ] As needed
Estimated consultations per week: _______
Dependency Level:
[ ] Critical (process stops without it)
[ ] Important (significant delay without it)
[ ] Helpful (minor inconvenience without it)
Single Point of Failure Risk:
[ ] High (one person maintains, no backup)
[ ] Medium (limited access, some documentation)
[ ] Low (multiple people know, could recreate)
Age: ________________________________
(How long has this existed? What triggered its creation?)
Notes:
________________________________
________________________________
Field Note: The Returns Bible
During an audit of a returns processing operation, the practitioner discovered what employees called "The Returns Bible"—a 47-page Word document maintained by a senior customer service representative named Patricia.
The document had grown over eight years to include:
- 156 vendor-specific return policies (official system tracked 12)
- Customer exception agreements not recorded in the CRM
- Product-specific handling rules based on past problems
- Seasonal adjustment factors for certain product categories
- Contact names and direct lines for vendor return departments
Patricia updated the document monthly and kept it on a shared drive. When asked what would happen if Patricia retired, her manager paused for a long moment before saying, "I try not to think about that."
The Returns Bible represented approximately 200 hours of accumulated institutional knowledge. It existed because the official ERP system had 12 fields for return policies; reality required 156 variations. Rather than fight the system, Patricia had built a parallel knowledge base that became essential infrastructure.
The audit recommended converting the document into structured data within the ERP, but the deeper insight was architectural: the official system had been designed for simplicity rather than for the complexity of real vendor relationships.
Analyzing Shadow System Patterns
After cataloguing shadow systems, analyze patterns:
Concentration Analysis
Where do shadow systems cluster?
- By function: Are shadow systems concentrated in certain departments?
- By system: Do they cluster around specific official systems?
- By data type: Are certain kinds of information consistently missing?
Clustering reveals systematic gaps rather than individual preferences.
Age Analysis
When were shadow systems created?
- Old shadow systems (3+ years) indicate persistent, unaddressed gaps
- Recent shadow systems may indicate recent system changes that broke workflows
- Shadow systems created by departed employees signal institutional knowledge risk
Dependency Analysis
Map the critical path:
- Which shadow systems would stop work if unavailable?
- Which shadow systems are consulted most frequently?
- Which shadow systems contain information that exists nowhere else?
Critical-path shadow systems are highest priority for formalization.
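One way to run the dependency analysis is to capture each template entry as a structured record and filter for critical-path, single-maintainer systems. The sketch below assumes a simple Python record; the entries mix examples from this module with an invented third item.

```python
# Minimal sketch of the shadow-system inventory as structured records, using
# fields from the documentation template above. Entries are illustrative;
# labels mirror the template's checkbox options.

from dataclasses import dataclass

@dataclass
class ShadowSystem:
    name: str
    owner: str
    gap_filled: str
    consultations_per_week: int
    dependency: str   # "critical" | "important" | "helpful"
    spof_risk: str    # "high" | "medium" | "low"
    age_years: float

inventory = [
    ShadowSystem("Returns Bible", "Patricia", "Vendor return policies", 120, "critical", "high", 8),
    ShadowSystem("Vendor Matrix", "Purchasing", "RMA and restocking rules", 90, "important", "medium", 3),
    ShadowSystem("Desk cheat sheet", "CS reps", "Status code meanings", 40, "helpful", "low", 1),
]

# Dependency analysis: critical-path systems with a single maintainer are the
# highest-priority candidates for formalization.
critical_path = [s for s in inventory
                 if s.dependency == "critical" and s.spof_risk == "high"]
for s in sorted(critical_path, key=lambda s: s.consultations_per_week, reverse=True):
    print(f"{s.name}: {s.consultations_per_week} consultations/week, {s.age_years} years old")
```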
The Shadow System Paradox: Revisited
Module 2A introduced the shadow system paradox: shadow systems are simultaneously invaluable and dangerous. In practice, this means the audit must avoid two opposite errors:
Error 1: Recommending elimination without understanding value
Shadow systems exist because capable people solved problems. Eliminating them without closing the gaps they fill will create new shadow systems—or worse, will leave gaps unfilled and work undone.
Before recommending elimination, ask: What need does this serve? How will that need be met if this disappears?
Error 2: Accepting shadow systems as permanent
Shadow systems create risk: single points of failure, undocumented dependencies, knowledge locked in individuals. Accepting them as permanent institutionalizes fragility.
Before accepting continuation, ask: What's the cost of formalizing this? What's the risk of not formalizing it? What happens when the maintainer leaves?
The Right Question
The question is not "Should we eliminate shadow systems?" but rather "Which shadow systems represent gaps worth closing, and which represent elegant adaptations that would be costly to formalize?"
Some shadow systems are scar tissue from old wounds. Others are load-bearing adaptations that keep operations running. The audit must distinguish between them.
Shadow System Findings in the Opportunity Portfolio
Shadow systems enter the Opportunity Portfolio as gaps with specific characteristics:
| Portfolio Field | How to Complete for Shadow Systems |
|---|---|
| Gap ID | SS-01, SS-02, etc. |
| Gap Description | "[Shadow system name] contains information not available in [official system]" |
| Waste Category | Usually "Knowledge in heads" or "Copy-paste loop" |
| Root Cause | What capability is the official system missing? |
| Time Impact | Consultation time × frequency |
| Throughput Impact | Errors or delays when shadow system unavailable |
| Focus Impact | Cognitive load of maintaining parallel systems |
| Affected Scope | Who uses it? How often? |
| Dependencies | What connects to this? What else would need to change? |
| Feasibility | Difficulty of adding capability to official system |
| Priority Score | Calculate using standard formula |
Continue to Question Inventory methodology.
Example: Completed Opportunity Audit
Process Audited: Customer Returns Processing
Organization: Regional Electronics Distributor (85 employees)
Audit Period: 3 days observation + 6 interviews
Auditor: Operations Improvement Lead
Documentation vs. Reality Summary:
| Documented Process | Actual Process |
|---|---|
| 5 steps, linear flow | 11 steps with 3 decision loops |
| 1 system (ERP returns module) | 4 systems + 2 spreadsheets |
| Average time: 15 minutes | Average time: 47 minutes |
| 3 roles involved | 6 roles + 2 informal consultants |
Shadow Systems Identified:
- "The Returns Bible" — 47-page Word document maintained by a senior CS rep, containing exception-handling rules built up over 8 years. Used for ~60% of returns.
- "Vendor Matrix" — Excel spreadsheet tracking which vendors require RMA numbers, which accept returns without authorization, and which charge restocking fees. Updated monthly by purchasing.
- "Customer Notes" — Sticky notes on monitors with customer-specific return policies. 12 reps have different notes; no consolidated version exists.
- "Marcus's email folder" — Warehouse supervisor keeps an email thread for every disputed return. Pulled up when customers claim damage in transit.
Question Inventory:
| Question | Frequency | Avg. Time to Answer | Resource Consulted |
|---|---|---|---|
| Is this item returnable? | 23/day | 4 min | Returns Bible + judgment |
| Does this vendor require RMA? | 18/day | 3 min | Vendor Matrix |
| What's this customer's return history? | 15/day | 6 min | ERP + personal memory |
| Is this within return window? | 31/day | 2 min | ERP (but often wrong) |
| Who approves exceptions over $500? | 8/day | 12 min | Phone calls, email chains |
| Was this item damaged on receipt? | 6/day | 15 min | Marcus's email folder |
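Multiplying frequency by average resolution time turns the inventory above into a daily cost figure. The short sketch below uses the table's numbers as given; the total works out to roughly eight hours of question-answering per day across the team.

```python
# Quick arithmetic on the question inventory above: frequency x average
# resolution time shows where the cognitive labor concentrates. Figures are
# taken directly from the example table.

questions = {
    "Is this item returnable?":               (23, 4),
    "Does this vendor require RMA?":          (18, 3),
    "What's this customer's return history?": (15, 6),
    "Is this within return window?":          (31, 2),
    "Who approves exceptions over $500?":     (8, 12),
    "Was this item damaged on receipt?":      (6, 15),
}

total = 0
for question, (per_day, minutes) in questions.items():
    daily = per_day * minutes
    total += daily
    print(f"{daily:4d} min/day  {question}")

print(f"Total: {total} min/day (~{total / 60:.0f} hours of question-answering per day)")
```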
T — Test
Measuring Assessment Quality and Impact
The Opportunity Audit produces a portfolio. Measurement proves whether the portfolio is accurate and whether addressing opportunities delivers expected value.
Validating the Audit
Before acting on audit findings, validate them:
Practitioner Review
Share findings with the people you observed. Ask:
- "Does this accurately describe how you work?"
- "Did I miss anything significant?"
- "Is my time estimate for [activity] realistic?"
Practitioners should recognize their experience. If they don't, the audit missed something.
Volume Verification
Cross-check your frequency estimates against system data where possible:
- Orders processed per day
- Exceptions logged
- Returns handled
- Escalations recorded
If your observation-based estimates differ significantly from system data, investigate why.
Gap Confirmation
For each identified gap, verify:
- The gap exists consistently, not just during your observation period
- The root cause you identified is accurate
- The impact estimate is reasonable based on volume and frequency
Time Lens Metrics for Assessment
Metric: Documentation vs. Reality Time Gap
- Baseline: Documented process time versus observed actual time
- Measure: Ratio of actual to documented (Meridian was 4.2 days vs. 36 hours = 2.8x)
- Target: Reduce ratio toward 1.0 as gaps are closed
- Frequency: Remeasure monthly during improvement efforts
Metric: Shadow System Time
- Baseline: Time spent consulting unofficial resources per process cycle
- Measure: Sum of all shadow system interactions during observation
- Target: Reduce shadow system time by 50% through formalization
- Frequency: Monthly
Metric: Question Resolution Time
- Baseline: Time spent answering recurring questions per cycle
- Measure: Sum of question time from audit
- Target: Reduce by building systematic answers to high-frequency questions
- Frequency: Monthly
Throughput Lens Metrics for Assessment
Metric: First-Pass Completion Rate
- Baseline: Percentage of process cycles that complete without loops, rework, or exceptions
- Measure: Track completions that follow the happy path versus those requiring intervention
- Target: Increase first-pass rate by reducing exception triggers
- Frequency: Weekly
Metric: Handoff Count
- Baseline: Number of handoffs per process cycle (the audit should document this)
- Measure: Count transfers between people, systems, or departments
- Target: Reduce handoffs by eliminating unnecessary routing
- Frequency: Monthly reassessment
Metric: Exception Rate
- Baseline: Percentage of cycles requiring exception handling
- Measure: Exceptions divided by total volume
- Target: Reduce exception rate by expanding "normal" path coverage
- Frequency: Weekly
Focus Lens Metrics for Assessment
Metric: Shadow System Dependency Count
- Baseline: Number of unofficial resources documented in the audit
- Measure: Count of spreadsheets, databases, notes, etc. in active use
- Target: Reduce by 50% through consolidation or formalization
- Frequency: Quarterly
Metric: Single-Person Risk Score
- Baseline: Count of critical knowledge items held by only one person
- Measure: Items from the audit flagged as "single point of failure"
- Target: Reduce single-dependency items by documenting or spreading knowledge
- Frequency: Quarterly
Metric: Question Systematization Rate
- Baseline: Percentage of recurring questions that now have systematic answers
- Measure: Questions from the audit inventory that have been addressed
- Target: Systematize answers to the top 50% of questions by frequency
- Frequency: Monthly
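A minimal sketch of how three of these metrics can be computed, using the Meridian figures where the module supplies them and placeholder cycle counts where it does not.

```python
# Sketch of three assessment metrics from the sections above. The time-gap
# inputs come from the Meridian case; the cycle counts are placeholders.

def doc_vs_reality_ratio(actual_hours: float, documented_hours: float) -> float:
    """Documentation vs. Reality Time Gap: ratio of actual to documented time."""
    return actual_hours / documented_hours

def first_pass_rate(completed_without_intervention: int, total_cycles: int) -> float:
    """Share of cycles that follow the happy path with no loops or exceptions."""
    return completed_without_intervention / total_cycles

def exception_rate(exception_cycles: int, total_cycles: int) -> float:
    """Share of cycles requiring exception handling."""
    return exception_cycles / total_cycles

print(f"Time gap:        {doc_vs_reality_ratio(4.2 * 24, 36):.1f}x")  # Meridian: 2.8x
print(f"First-pass rate: {first_pass_rate(132, 220):.0%}")            # placeholder counts
print(f"Exception rate:  {exception_rate(88, 220):.0%}")              # placeholder counts
```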
Leading Indicators
These signals suggest the opportunity portfolio is accurate and improvements are working:
- Practitioners cite specific findings: People reference the audit when discussing process problems
- Shadow systems begin consolidating: Informal resources merge as official systems improve
- Documentation gets updated: Process maps change to reflect actual work
- Exception rate decreases: Fewer cycles require special handling
- Onboarding improves: New employees ramp faster with better documentation
Red Flags
These signals suggest the audit missed something or improvements aren't working:
- New shadow systems emerging: People creating new workarounds to compensate for changes
- Practitioners dispute findings: Those observed don't recognize the audit's picture
- Improvement metrics don't move: Interventions complete but Time/Throughput/Focus unchanged
- Exception rate holds or increases: "Normal" path still doesn't match reality
- Resistance to using new systems: People reverting to old workarounds despite new tools
S — Share
Reflection Prompts
Complete these individually before group discussion:
- Your process gap: Think of a process you perform regularly. What's the documented version? What's the actual version? Where does reality differ from the design, and why?
- Your shadow systems: What unofficial resources do you rely on that don't exist in official systems? A spreadsheet, a notes file, a trusted colleague? What would happen if those resources disappeared?
- Your undocumented knowledge: What do you know about your work that isn't written down anywhere? How did you learn it? How would you transfer it to a replacement?
- The questions you answer: What recurring questions do you answer mentally during your work? Which ones should have systematic answers but don't?
- The Meridian moment: Rachel was confident the problem was in the warehouse. She was wrong. Where in your organization is there similar misplaced confidence about where problems live?