CALCULATE — Proving Value Before Building
Translating intuition into evidence that earns budget
Module 3A: Theory
R — Reveal
Case Study: The Best Work That Never Got Funded
Nadia Reyes had done everything right.
As operations director at Whitmore & Associates—a 200-person management consulting firm—she had spent three months conducting the most thorough operational assessment the firm had ever seen. She'd observed how engagements actually flowed through the organization. She'd documented the shadow systems consultants used to track client work. She'd catalogued the tribal knowledge that lived in senior partners' heads and nowhere else.
Her Opportunity Portfolio contained eighteen prioritized findings, each one documented with waste patterns, time impact, root causes, and feasibility estimates. The top three opportunities alone represented over 400 hours of monthly friction—consultants doing administrative work instead of billable work, knowledge being recreated instead of reused, proposals being built from scratch when 80% of the content already existed somewhere.
She walked into the quarterly resource allocation meeting confident that her work would speak for itself.
It didn't.
The Meeting That Changed Nothing
The meeting followed the firm's standard format: each director presented initiatives competing for the same pool of discretionary budget. Four proposals. Enough funding for two, maybe two and a half.
Nadia presented third.
"The assessment identified significant friction in how we manage engagement knowledge," she began, advancing to a slide showing her portfolio summary. "Consultants spend an average of six hours per engagement recreating deliverables that already exist in slightly different forms. Partner review cycles take three times longer than they should because reviewers lack context about client history. And every time a senior consultant leaves, we lose institutional knowledge that took years to accumulate."
She walked through the findings methodically: the shadow systems, the waste patterns, the cognitive tax on billable staff. She showed the prioritized portfolio. She explained the root causes.
"Based on this assessment, I'm recommending we invest in a knowledge management system—specifically, a structured repository with intelligent search that can surface relevant prior work during proposal development and engagement delivery."
The managing partner, a man named Gerald Morrison who had run the firm for eighteen years, leaned forward. "What's the ROI?"
Nadia had anticipated this question. "The time savings alone would be substantial. If we reduce proposal development time by even 30%, we're talking about thousands of hours annually. And the reduction in cognitive load—"
"What's the number?"
"It's difficult to quantify precisely, but—"
"Ballpark."
"Significant. Probably hundreds of thousands of dollars when you factor in the fully-loaded cost of consultant time."
Gerald nodded slowly—the kind of nod that meant he was already moving on. "Thank you, Nadia. Let's hear from David."
The Proposal That Won
David Mensah was the firm's IT director. His proposal followed immediately after Nadia's.
"I'm requesting $180,000 for a client relationship management system upgrade," David began. "Our current CRM is seven years old. The new system integrates with our proposal tools, our time tracking system, and our financial reporting."
He clicked to a slide with a single table:
| Metric | Current State | Projected State | Annual Value |
|---|---|---|---|
| Time to generate client reports | 45 min | 12 min | $67,000 |
| Proposal auto-population | 0% | 65% | $112,000 |
| Data entry reduction | 0 hours/week | 8 hours/week | $41,600 |
| Total Annual Value | | | $220,600 |
| Investment Required | | | $180,000 |
| Payback Period | | | 9.8 months |
"Sub-one-year payback," David said. "And that's conservative. I haven't included the soft benefits—better client intelligence, improved forecasting accuracy, reduced errors in financial reporting."
Gerald asked two clarifying questions about implementation timeline and vendor selection. Then he turned to the CFO. "Thoughts?"
"The numbers work. If David's projections are even 70% accurate, we're looking at positive ROI within fourteen months."
Nadia watched the room's energy shift. David's proposal had numbers. His numbers had sources. His payback period was a specific calculation, not a gesture toward "significant" savings.
She wanted to object—to point out that David's "current state" baseline was self-reported, that his "projected state" came from vendor marketing materials, that his annual value calculations assumed 100% adoption from day one. But she had no counter-numbers to offer. She had findings. She had insights. She had a portfolio of opportunities.
She didn't have proof.
What Nadia Didn't Know
Two weeks after the meeting, Nadia learned that David's CRM project had been approved with full funding. Her knowledge management initiative had been "noted for future consideration"—the polite corporate phrase for unfunded.
What she didn't learn until months later: David's assessment work had been thin. He had spent two weeks gathering requirements—mostly by asking department heads what features they wanted. He hadn't observed how consultants actually tracked client relationships. He hadn't documented the workarounds that had developed around the existing system. He hadn't measured the gap between documented process and reality.
His baseline numbers weren't measured; they were assumed. "Time to generate client reports: 45 minutes" wasn't based on observation. It was David's estimate of how long reports should take, based on how the current system was designed to work. The actual time, as Nadia would later discover, was closer to 25 minutes—because consultants had already built workarounds that the official metrics didn't capture.
His projected savings were vendor claims, not organizational evidence. The "65% proposal auto-population" figure came directly from the vendor's marketing deck. It assumed clean data, full adoption, and use cases that matched Whitmore's work—none of which had been validated.
His payback calculation compared future-state benefits to nothing. The $67,000 in "time to generate reports" savings compared the new system to the current system without accounting for the workarounds people had already developed. He was claiming credit for solving problems that had already been partially solved through informal adaptation.
David's proposal won not because it was better-founded than Nadia's. It won because it looked better-founded. He had numbers; she had narratives. He had a payback period; she had possibilities.
The Aftermath
Eighteen months later, the CRM project was declared a qualified success. Adoption was at 40%, not the assumed 100%. The time savings were approximately 30% of projection—still positive ROI, but barely. The "improved forecasting accuracy" benefit never materialized because the data quality issues that plagued the old system had migrated to the new one.
Meanwhile, two senior consultants had left the firm. Between them, they took an estimated 4,000 hours of institutional knowledge about key client relationships, industry expertise, and engagement methodologies. The knowledge gaps showed up in proposal win rates, engagement quality scores, and client renewal patterns.
Nadia had been right. Her opportunity portfolio had correctly identified the highest-value interventions. Her assessment work had surfaced real problems that David's surface-level requirements gathering had missed.
But being right hadn't mattered. Her good ideas had died in that budget meeting, killed not by bad judgment but by the absence of proof.
The Moment of Clarity
The turning point came during an unrelated conversation with Gerald Morrison six months after the CRM go-live.
Gerald had called Nadia in to discuss another initiative—routine budgeting for the upcoming year. Near the end of the meeting, he mentioned her knowledge management proposal.
"I've been thinking about that presentation you gave," he said. "The assessment work was impressive. I've never seen anyone map our operations that thoroughly."
Nadia felt a flicker of validation, then frustration. "But it didn't get funded."
"No." Gerald paused, choosing his words. "You told me we had a problem. You told me the problem was significant. You told me what we should do about it. But you never told me what the problem was costing us—not in a way I could use."
"I said it was hundreds of thousands—"
"You said 'probably.' You said 'significant.' You said 'difficult to quantify.'" Gerald leaned back. "I run a consulting firm. We teach clients how to make decisions with evidence. And you came to me asking for $200,000 based on 'probably significant.'"
"But David's numbers weren't—"
"David's numbers might have been wrong. But they were numbers. He told me what we spend now, what we'd spend after, and when we'd break even. I could check his work. I could challenge his assumptions. I could adjust his projections and see what happened to the payback."
Gerald looked at her directly. "With your proposal, I had nothing to check. You asked me to trust your judgment that this was important. I do trust your judgment. But I can't allocate $200,000 based on trust when someone else is offering me a spreadsheet I can interrogate."
Nadia sat with that for a moment. "So the assessment wasn't enough."
"The assessment was necessary. But it wasn't sufficient. You did the hard work of understanding the problem. Then you stopped one step short of making it real."
The Lesson She Couldn't Unlearn
That evening, Nadia pulled out her original presentation and examined it with new eyes.
She had documented the friction. She had catalogued the waste patterns. She had prioritized the opportunities. All of that work was solid—better than solid.
But when she looked at how she'd communicated value, she saw what Gerald saw:
"Trust me, this will save time." She had asserted savings without establishing baselines. How much time did proposal development actually take today? Not documented. She had asked leadership to believe her estimate of future improvement without verifying present reality.
Comparing benefits to nothing. She had described the future state—intelligent search, structured repositories, less time spent recreating work—without quantifying the current cost of the alternative. Her implicit comparison was "having knowledge management" versus "not having knowledge management." The real comparison should have been "current cost of finding and recreating knowledge" versus "projected cost after implementation."
Hidden assumptions. Her projections depended on assumptions she had never stated: adoption rates, behavior change, maintenance requirements. These assumptions were embedded in her thinking but invisible in her presentation. When Gerald asked for specifics, she had nothing to share because she hadn't made her own assumptions explicit.
Activity metrics masquerading as outcomes. She had talked about "reducing proposal development time" as though it were an end in itself. But time reduction only matters if it translates to outcomes: more proposals submitted, higher win rates, consultants reassigned to billable work. She had measured the intervention without connecting it to results that mattered to the business.
David's proposal had been weaker in foundation but stronger in structure. His numbers might have been wrong, but they were examinable. His assumptions might have been optimistic, but they were visible. His methodology might have been shallow, but it looked rigorous.
Proof wasn't about being right. It was about being checkable.
What She Did Next
Over the following month, Nadia rebuilt her knowledge management proposal from scratch. Not the assessment—that work stood. The translation layer.
She established baselines: actual measured time for proposal development (not estimates), documented frequency of knowledge recreation, observed hours spent searching for prior work that existed but couldn't be found.
She built a model: current-state costs in the left column, projected future-state costs in the right column, with every assumption called out in a separate assumptions log. Adoption rate: 60% in year one, 80% by year two (conservative, based on comparable implementations at peer firms). Time savings per search: 23 minutes (based on her observation data, not vendor claims).
She calculated outcomes, not activities: billable hours recovered, proposal capacity increase, reduced time-to-revenue on new engagements.
Her revised payback period was 14 months—longer than David's CRM projection, but built on evidence that could be verified rather than claims that had to be believed.
When she re-presented six months later, Gerald approved the project in the same meeting. Not because the numbers were bigger, but because they were real.
"Now I can see what you're asking me to invest in," he said. "And I can see how we'll know if it's working."
The assessment had given her clarity. The calculation gave her credibility. Together, they gave her funding.
Module 3A: CALCULATE — Theory
O — Observe
The Core Principle
Proof isn't about being right. It's about being checkable.
This principle separates proposals that earn funding from proposals that earn polite deferrals. Nadia Reyes had better assessment work than David Mensah. She had deeper understanding of the real problems. She had identified higher-value opportunities. But David's flawed proposal won because it could be examined, challenged, and verified—while Nadia's could only be believed.
The asymmetry is uncomfortable but real: bad ideas with good numbers beat good ideas with no numbers. Not because leadership is foolish, but because leadership is rational. Given limited resources and competing claims on those resources, decision-makers favor investments they can evaluate over investments they must accept on faith.
This isn't a failure of organizational judgment. It's a failure of proof.
Why Proof Matters
The discipline of calculation exists for three reasons:
1. Good ideas die without evidence.
Every improvement professional has watched worthy initiatives fail in budget meetings. The friction was real. The waste patterns were documented. The opportunity was genuine. But the proposal asked for money based on qualitative assertions—"significant savings," "improved efficiency," "reduced burden"—and lost to a competing initiative that offered specific numbers with documented sources.
Assessment reveals opportunity. Calculation makes opportunity visible in the language organizations use to allocate resources.
2. Bad ideas survive on persuasion.
The inverse is equally true. Proposals with weak foundations but polished business cases get funded regularly. David's CRM project delivered 30% of its projected value because his baselines were wrong and his assumptions were optimistic—but it got funded because his numbers looked rigorous.
The David trap is common: shallow assessment paired with confident projections. It works often enough to be dangerous. But projects built this way fail at higher rates, damage credibility when they underdeliver, and consume resources that could have addressed real problems.
3. The discipline protects everyone.
Good value calculation isn't manipulation. It's translation. The work of quantification forces clarity about what's actually being claimed, what assumptions underlie those claims, and what would need to be true for the benefits to materialize.
When assumptions are visible, leadership can challenge them. When calculations are transparent, finance can verify them. When projections are explicit, everyone can track whether they come true.
The goal isn't to win proposals. It's to fund the right work and learn from the results.
The Three ROI Lenses Applied to Calculation
Module 1 introduced the three lenses—Time, Throughput, and Focus—as frameworks for seeing value. Module 3 applies these lenses to quantification.
Time: Quantifying Recovered Hours
Time is the most intuitive value dimension. People understand "save 15 hours per week." But the calculation requires precision that intuition doesn't provide:
- Whose time? Different roles have different costs.
- Doing what? Time spent on rework differs from time spent on value-adding activity.
- Recovered for what? Saved time that becomes slack isn't the same as saved time that becomes capacity.
The time calculation requires baseline measurement (how long does this actually take today?), projected improvement (how much time will the intervention save?), and value translation (what is that time worth in organizational terms?).
Throughput: Quantifying Capacity and Flow
Throughput measures how much work moves through a system in a given period. The calculation addresses:
- Volume: How many transactions, orders, requests, or cycles?
- Completion rate: What percentage finish without intervention?
- Quality: What percentage require correction?
- Cycle time: How long from start to finish?
Throughput improvements often matter more than time improvements because they unlock capacity. Doing the same work faster is efficiency. Doing more work with the same resources is growth.
Focus: Quantifying Cognitive Load Reduction
Focus is the hardest lens to quantify—and often the most valuable. The cognitive tax that doesn't appear in efficiency metrics has real organizational costs:
- Errors: Cognitive overload increases mistake rates.
- Turnover: Exhausted employees leave.
- Onboarding: Complex processes take longer to learn.
- Innovation: Overwhelmed teams don't improve.
Focus quantification uses proxies: error rates, retention data, training time, self-reported burden. The numbers are softer than time or throughput metrics, but they represent value that's real even when it's hard to count.
Single-Lens Analysis Misleads
The discipline requires all three lenses because optimizing for one dimension often degrades another. David's CRM project projected time savings but ignored the focus cost of learning a new system. Vance's automation projected throughput gains but destroyed the focus conditions that made quality possible.
A complete value model addresses: What happens to time? What happens to throughput? What happens to focus? The intervention that improves all three is rare. The intervention that improves one at the expense of others may not be an improvement at all.
The Baseline Imperative
You cannot prove improvement without proving current state.
This seems obvious, but it's routinely violated. Nadia's proposal asserted that the future state would be better without documenting what the present state actually cost. David's proposal claimed specific current-state numbers that weren't based on measurement—and those numbers were wrong.
The baseline must be measured, not assumed.
Documented process times are not baselines. System-reported metrics are not baselines. Estimates of how long things "should" take are not baselines. A baseline is an observed, verified measurement of current-state reality.
The baseline must happen before intervention.
Once an improvement begins, the current state starts changing. Measuring baseline after implementation creates comparison problems: Was this the real starting point? Did anticipation of the change affect behavior? The baseline measurement must occur before the intervention begins.
The baseline must match what you'll measure later.
If you plan to track cycle time after implementation, you must measure cycle time before. If you plan to track error rates, you must establish error rates first. The comparison requires consistent metrics across time.
Module 2 taught that the map is not the territory—documented processes differ from actual processes. Module 3 extends this principle: assumed performance differs from actual performance. The baseline imperative is about grounding calculation in observation.
Value Types That Organizations Recognize
Not all value is equal in organizational decision-making. The practitioner must understand what kinds of value resonate with different stakeholders.
Hard Savings
Money that leaves the budget: vendor costs eliminated, headcount reduced, expenses avoided. Hard savings are the gold standard because they appear in financial statements. Leadership trusts hard savings because they're verifiable after the fact.
Soft Savings
Time recovered, capacity increased, efficiency improved—but no dollars leave the budget. The 15 hours saved per week are real, but they don't appear as a line-item reduction. Soft savings require explaining what the organization will do with recovered capacity.
Risk Reduction
Avoided failures, reduced exposure, eliminated single points of failure. Risk reduction is valuable but probabilistic: the benefit is "this bad thing is now less likely to happen." Some organizations value risk reduction highly; others discount it because the cost avoided never materializes visibly.
Strategic Value
Capability enablement, competitive positioning, organizational learning. Strategic value is hardest to quantify but sometimes most important. "This investment enables us to do something we couldn't do before" matters when that capability is strategically significant.
Speaking Finance's Language
CFOs think in cash flows, payback periods, and return ratios. Operations leaders think in capacity, reliability, and quality. The practitioner must translate operational improvements into financial language without losing operational truth.
The hourly rate calculation is the basic translation: recovered hours × fully-loaded labor cost = value. But this calculation has limits. If no one is actually eliminated or redeployed, the "value" exists only as recovered capacity—which finance may discount.
The Compound Value of Restored Focus
Cognitive load reduction has multiplicative effects that single-transaction calculations miss.
When practitioners spend less mental energy on friction, they make fewer errors. Fewer errors mean less rework, fewer customer complaints, reduced exception handling downstream. One hour of cognitive tax often generates two or three hours of downstream consequence.
When practitioners experience less burden, they stay longer. Turnover costs—recruiting, hiring, training, ramping—often exceed six months of salary. A single retention success can exceed the annual value of many efficiency improvements.
When processes are clearer, new hires learn faster. Reduced onboarding time isn't just training cost—it's time to full productivity. Complex, poorly-documented processes extend the period when new employees are costly liabilities rather than productive contributors.
These compound effects are real. They're also hard to quantify without overstating. The discipline is making invisible value visible without inflating it—using conservative estimates, stating assumptions clearly, and allowing stakeholders to adjust projections based on their own judgment.
The Calculation That Follows
The theory establishes why proof matters. The practice that follows teaches how to build proof that earns trust:
- Establishing baselines that ground calculation in reality
- Building value models using all three lenses
- Documenting assumptions transparently
- Assembling business cases that invite scrutiny
- Recognizing when the numbers say "don't proceed"
The goal isn't optimistic projection. It's honest analysis that helps organizations make better decisions about where to invest limited resources.
Proof isn't persuasion. It's evidence that can be examined, challenged, and verified.
Economic Frameworks for Value Quantification
Value calculation requires economic models that translate operational improvements into financial terms. This section covers the frameworks practitioners use to build defensible ROI projections.
The ROI Calculation Framework
The basic formula is simple:
ROI = (Benefit - Cost) / Cost
A $50,000 benefit from a $40,000 investment yields 25% ROI: (50,000 - 40,000) / 40,000 = 0.25.
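The same arithmetic, as a minimal Python sketch (the payback line assumes the $50,000 benefit recurs annually and accrues evenly, which the example above does not specify):

```python
def roi(benefit, cost):
    """Simple return on investment: (benefit - cost) / cost."""
    return (benefit - cost) / cost

def payback_months(cost, annual_benefit):
    """Months to recover the investment, assuming benefits accrue evenly across the year."""
    return cost / (annual_benefit / 12)

# Figures from the example above: a $50,000 benefit on a $40,000 investment.
print(f"ROI: {roi(50_000, 40_000):.0%}")                         # 25%
print(f"Payback: {payback_months(40_000, 50_000):.1f} months")   # 9.6 months
```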
But this simplicity hides complexity:
Defining Benefit
Benefit is the value created by the improvement. It may include hard savings (reduced costs), soft savings (recovered capacity), risk reduction (avoided losses), and strategic value (enabled capabilities). Each component requires its own calculation method and carries different levels of certainty.
Defining Cost
Cost is not just the implementation price. It includes ongoing maintenance, training, change management, and the productivity dip during transition. Incomplete cost accounting inflates ROI artificially.
Time Horizon
Benefits rarely begin immediately or continue indefinitely. When do the benefits start? How long do they last? At what point do they decay or disappear? A three-year benefit from a one-year investment looks different from a one-year benefit from the same investment.
Discounting Future Value
A dollar received next year is worth less than a dollar received today. Present value calculations account for this, but the mechanics matter less than the principle: distant benefits are worth less than near-term benefits. For practical purposes, many organizations use a simple cutoff—ignore benefits beyond year three—rather than complex discounting.
The formula's simplicity makes it accessible. Its hidden complexity makes it dangerous for practitioners who use it without understanding what it contains.
Total Cost of Ownership
Implementation cost is what you pay upfront. Total cost of ownership is what you pay over the life of the improvement.
Implementation Costs (Obvious)
Direct costs to build, buy, or configure the solution. These are typically well-understood because they require budget approval.
Maintenance Costs (Often Forgotten)
Systems require updates. Processes require refinement. People require ongoing training. A solution that costs $50,000 to implement and $15,000 annually to maintain costs $95,000 over three years—nearly double the implementation price.
Opportunity Costs (Rarely Calculated)
The resources used for this project can't be used for others. If the implementation team could have delivered another project with higher ROI, the opportunity cost is real. This is harder to quantify but matters when comparing alternatives.
Transition Costs (Underestimated)
New systems and processes don't operate at full efficiency from day one. There's a productivity dip while people learn, adapt, and work through early problems. This dip is a cost—sometimes substantial—that optimistic projections ignore.
A complete business case addresses all four cost categories. Proposals that include only implementation costs invite skepticism from anyone who has managed a project through completion.
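A rough three-year sketch shows how quickly the categories add up. The implementation and maintenance figures come from the example above; the transition-dip and opportunity-cost figures are hypothetical placeholders for illustration, not numbers from this module:

```python
# Three-year total cost of ownership, in dollars.
years = 3
implementation = 50_000          # upfront build/buy/configure (from the example above)
maintenance_per_year = 15_000    # updates, refinement, ongoing training (from the example above)
transition_dip = 8_000           # hypothetical: productivity lost while people adapt
opportunity_cost = 0             # hard to quantify; include when comparing alternatives

tco = implementation + maintenance_per_year * years + transition_dip + opportunity_cost
print(f"Three-year TCO: ${tco:,}")   # $103,000 vs. the $50,000 "implementation price"
```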
Value Stream Economics
A value stream follows work from trigger to completion. The insight from value stream thinking: most time in a process is not spent creating value.
Where Value Is Created
Value is created when work product advances toward completion. A returns processor creates value when verifying that an item is returnable, entering the return authorization, or confirming the credit. These are touch-time activities.
Where Time Is Spent
Time is spent waiting for approvals, searching for information, correcting errors, and navigating exceptions. In the Module 2 Returns Processing example, the documented 15-minute process actually took 47 minutes—with 32 minutes consumed by non-value-adding activity.
The Insight
Most operational improvements don't speed up value-adding work. They reduce time spent on waste. Understanding this distinction helps practitioners identify where intervention creates real value versus where it speeds up activity that shouldn't exist in the first place.
Capacity vs. Efficiency
Efficiency is doing the same work faster. If a process takes 45 minutes and an improvement reduces it to 30 minutes, the process is more efficient.
Capacity is the ability to do more work. If recovered time enables handling additional volume—more orders, more customers, more transactions—capacity has increased.
The distinction matters because efficiency gains may not translate to organizational value. If the same work gets done faster but no additional work fills the gap, the organization has gained slack, not value. Finance may discount efficiency claims that don't come with credible answers to: "What will people do with the recovered time?"
Capacity gains are more compelling because they connect to growth: more revenue, more customers, more output without proportional cost increase. When building value models, explicitly address whether the opportunity creates efficiency, capacity, or both—and what the organization will do with either.
Marginal vs. Average Cost Thinking
Average cost divides total cost by total volume: if a process costs $100,000 annually and handles 10,000 transactions, the average cost per transaction is $10.
Marginal cost asks: what does one more transaction cost? If the infrastructure is already in place and the only added cost is incremental labor, the marginal cost might be $2—far less than the average.
Why This Matters for Automation Decisions
Proposals often justify automation by multiplying average transaction cost by projected volume reduction. But if most of that cost is fixed overhead, eliminating transactions doesn't eliminate the cost.
The correct question: What costs are actually variable with volume? What costs disappear if we reduce transactions? For many operational improvements, the answer is labor—and only the portion of labor that's truly reallocated or eliminated, not the portion that becomes slack.
When "Cost Per Transaction" Misleads
A process with $10 average transaction cost might have only $3 in variable cost per transaction. An automation that eliminates 1,000 transactions doesn't save $10,000—it saves $3,000 unless overhead is also reduced.
Practitioners must separate fixed costs from variable costs and project savings based on what actually changes, not on arithmetic that treats all costs as variable.
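A minimal sketch of the same arithmetic, using the figures from this section:

```python
total_annual_cost = 100_000     # process cost per year
annual_volume = 10_000          # transactions per year
variable_cost_per_txn = 3       # the portion of cost that actually scales with volume

average_cost = total_annual_cost / annual_volume            # $10 per transaction
transactions_eliminated = 1_000

naive_saving = transactions_eliminated * average_cost                 # $10,000 -- overstated
realistic_saving = transactions_eliminated * variable_cost_per_txn    # $3,000

print(f"Average cost per transaction: ${average_cost:.2f}")
print(f"Claimed saving (average-cost arithmetic): ${naive_saving:,.0f}")
print(f"Realistic saving (variable cost only):    ${realistic_saving:,.0f}")
```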
Practical Application
These frameworks aren't academic exercises. They structure how practitioners build defensible value models:
Start with total cost of ownership. What will this really cost over three years, including maintenance, training, and transition?
Map the value stream. Where is time actually spent? What portion is value-adding versus waste?
Distinguish capacity from efficiency. Is this improvement enabling more work or just faster work? What happens to recovered resources?
Separate fixed from variable. Which costs actually change with volume? Which persist regardless of transaction count?
Apply the ROI formula honestly. Include all costs, realistic benefit timing, and appropriate time horizons.
The goal isn't sophistication for its own sake. It's building models that survive scrutiny—because models that can't survive scrutiny shouldn't earn investment.
Module 3B: CALCULATE — Practice
A systematic methodology for proving value before spending money
Why This Module Exists
Module 3A established the theory: proof isn't about being right—it's about being checkable. Good ideas die without evidence. Bad ideas survive on persuasion. The discipline of value calculation separates funded proposals from polite deferrals.
This module provides the methodology to build proof that earns trust.
The ROI Model is not an advocacy document. It is an analysis tool—a structured way to establish baselines, project value, document assumptions, and present findings that invite scrutiny rather than deflect it. Every framework in this module has been tested in real budget meetings, refined through rejection, and validated by practitioners who needed funding, not sympathy.
What You Will Learn
By the end of Module 3B, you will be able to:
- Establish baseline metrics — measured current-state performance, not assumed
- Build value models using the three ROI lenses — Time, Throughput, and Focus
- Document assumptions transparently — making every projection verifiable
- Rank opportunities across a portfolio — comparing alternatives systematically
- Assemble business cases that earn budget — structured for decision-makers
- Recognize when numbers say "don't proceed" — and have the discipline to accept it
The Practitioner's Challenge
Most ROI models are advocacy documents disguised as analysis.
A finance director described the pattern: "I review maybe twenty business cases a year. Eighteen of them have the same structure: optimistic assumptions buried in spreadsheets, benefits calculated against a fictional current state, risks mentioned briefly and then dismissed. The presenters want approval, not examination. They're surprised when I ask where the numbers came from."
The ROI Model methodology avoids this trap by inverting the standard approach:
| Advocacy Document | ROI Model |
|---|---|
| Assumes current-state cost | Measures current-state cost |
| Projects benefits from vendor claims | Projects benefits from organizational evidence |
| Buries assumptions | Documents assumptions prominently |
| Presents point estimates | Presents ranges with confidence levels |
| Deflects scrutiny | Invites verification |
| Advocates for approval | Enables informed decision |
The difference isn't just methodological—it's about whose interests the model serves. Advocacy documents serve the proposer's interest in getting funded. ROI models serve the organization's interest in making good decisions.
What You're Receiving as Input
Module 3B builds on work completed in Modules 1 and 2:
From Module 1 — Cognitive Tax Assessment:
- Baseline cognitive load scores for affected activities
- Identified hidden dependencies and single points of failure
- Uncertainty ratings that quantify current-state burden
From Module 2 — Opportunity Audit:
- Prioritized Opportunity Portfolio with ranked improvement candidates
- Quantified time impact per occurrence
- Documented frequency and volume data
- Waste pattern analysis with root causes
- Feasibility estimates for interventions
The R-01 Example: Throughout Module 3B, we'll continue with the Returns Processing audit example from Module 2. The R-01 opportunity (Returns Bible not in system) becomes the worked example for every methodology step:
| ID | Opportunity | Impact Score | Feasibility | Priority |
|---|---|---|---|---|
| R-01 | Returns Bible not in system | 9/10 | 7/10 | 1 |
This continuity allows you to see how assessment findings transform into quantified value models.
Field Note: The Model That Earned Trust
A practitioner described building her first rigorous ROI model:
"I had proposed knowledge management improvements three times before. Each time, I showed the problem clearly—the shadow systems, the tribal knowledge, the repeated work. Each time, leadership nodded sympathetically and funded something else.
"The fourth time, I built the model differently. I spent two weeks measuring actual time spent on knowledge searches—not estimates, real observation. I documented every assumption on a separate page. I built three scenarios: conservative, base, and optimistic. I calculated what would happen if my worst assumption was 50% wrong.
"When the CFO asked about the numbers, I had sources for everything. When she challenged the adoption assumption, I showed her the sensitivity analysis. When she asked why this was better than the competing proposal, I showed her the comparison on the same basis.
"She didn't approve it because I persuaded her. She approved it because she could verify the logic herself. That's the difference."
Module Structure
Module 3B follows the ROOTS framework:
- O — OBSERVE: The valuation methodology overview
- O — OPERATE: Five-step process for building defensible ROI models
- Baseline metrics establishment
- Value modeling across three lenses
- Opportunity ranking
- Business case assembly
- Risk and assumption documentation
- T — TEST: Quality metrics and validation approaches
- S — SHARE: Reflection prompts, peer exercises, and discussion questions
Supporting materials include:
- Reading list with academic and practitioner sources
- Slide deck outline for presentation
- Assessment questions with model answers
- Instructor notes for facilitation
The Deliverable
Module 3B produces the ROI Model with Baseline Metrics—the third artifact in the A.C.O.R.N. cycle.
A complete ROI Model includes:
- Documented current-state baseline (measured, not assumed)
- Quantified improvement potential across Time, Throughput, and Focus
- Implementation cost estimate (total cost of ownership)
- ROI calculation with payback period
- Assumption inventory with confidence levels
- Risk assessment with mitigation strategies
- Clear recommendation (proceed / modify / don't proceed)
This deliverable feeds Module 4: ORCHESTRATE, where the top opportunity becomes a workflow design project.
Proceed to the valuation methodology overview.
O — Observe
The Valuation Methodology Overview
Before diving into individual steps, this section overviews the complete ROI Model methodology—what it produces, what it requires, and how the pieces fit together.
What the ROI Model Produces
A complete ROI Model delivers six components:
1. Documented Current-State Baseline
Measured performance of the process as it exists today. Not documented performance. Not assumed performance. Observed, verified, quantified performance across Time, Throughput, and Focus dimensions.
The baseline answers: "What is this actually costing us right now?"
2. Quantified Improvement Potential
Projected performance after intervention, with explicit calculations for each ROI lens. Time recovery. Throughput increase. Focus restoration. Each projection tied to specific changes and supported by evidence.
The improvement potential answers: "How much better could this be?"
3. Implementation Cost Estimate
Total cost of ownership, not just implementation price. Includes upfront costs, ongoing maintenance, training, change management, and the productivity dip during transition.
The cost estimate answers: "What will this actually cost?"
4. ROI Calculation with Payback Period
Net value (benefit minus cost), ROI ratio (return per dollar invested), and time to recover the investment. Presented with confidence ranges, not false precision.
The calculation answers: "Is this worth doing?"
5. Assumption Inventory
Every assumption that affects the model, documented explicitly. Volume assumptions. Adoption assumptions. Performance assumptions. Each with stated confidence level and impact if wrong.
The inventory answers: "What would have to be true for these numbers to hold?"
6. Risk Assessment
Implementation risks, adoption risks, and residual operational risks. Each with probability estimate, impact assessment, and mitigation strategy.
The assessment answers: "What could go wrong, and what will we do about it?"
Together, these components enable informed decision-making. Leadership can verify the logic, challenge the assumptions, and make resource allocation decisions based on evidence rather than faith.
The Valuation Timeline
Building a rigorous ROI Model requires time. Rushing produces the advocacy documents that finance directors learn to distrust.
| Phase | Duration | Activities |
|---|---|---|
| Baseline Measurement | 1-2 weeks | Observation, time studies, volume verification, focus assessment |
| Value Modeling | 2-4 hours per opportunity | Calculate Time, Throughput, Focus value; aggregate; sensitivity analysis |
| Ranking | 1-2 hours | Compare opportunities; apply ranking criteria; build portfolio view |
| Business Case Assembly | 2-3 hours | Structure findings; write executive summary; prepare appendices |
| Risk/Assumption Documentation | 1-2 hours | Extract assumptions; assess risks; document mitigation |
Total for top 3 opportunities: 2-3 weeks
This timeline assumes the Opportunity Audit (Module 2) has already identified and prioritized candidates. The ROI Model deepens the analysis for the highest-priority opportunities.
Inputs Required
The ROI Model builds on prior work. Before beginning, ensure you have:
From Module 1 — Cognitive Tax Assessment:
- Cognitive Load Scores for activities affected by the opportunity
- Identified decision points, context switches, hidden dependencies
- Uncertainty ratings (1-5 scale) for affected processes
- Documented workarounds and their maintenance burden
These inputs establish the Focus dimension baseline and help quantify cognitive load reduction value.
From Module 2 — Opportunity Audit:
- Opportunity Portfolio with impact and feasibility scores
- Quantified time impact per occurrence (from observation, not assumption)
- Volume data: frequency per day/week/month
- Waste pattern classification for each opportunity
- Root cause analysis explaining why the gap exists
- Shadow system inventory showing what the opportunity might replace
These inputs establish the Time and Throughput baselines and provide the foundation for improvement projections.
New Data to Gather:
- Labor costs: Fully-loaded hourly rates by role (from HR or finance)
- Volume verification: Confirm audit estimates with system data where available
- Implementation costs: Vendor quotes, internal resource estimates
- Stakeholder input: What value dimensions matter most to decision-makers
The Methodology Sequence
The ROI Model methodology follows five steps:
Step 1: Establish Baseline Metrics
Measure current-state performance across all three lenses. This step converts Module 2's observations into verified, quantified baselines that can support before/after comparison.
Key activities:
- Conduct time studies for affected activities
- Verify volume data from system logs
- Assess cognitive load using Module 1 methodology
- Document measurement methods and sources
Step 2: Build Value Model
Project improvement and calculate value across Time, Throughput, and Focus. This step converts baseline measurements into projected benefits with explicit calculations.
Key activities:
- Calculate time value (saved hours × labor cost)
- Calculate throughput value (capacity gain, error reduction)
- Calculate focus value (cognitive load reduction effects)
- Aggregate across lenses, avoiding double-counting
Step 3: Rank Opportunities
Compare opportunities within the portfolio. This step ensures the highest-value opportunities receive attention first and creates a defensible prioritization for leadership.
Key activities:
- Apply ranking criteria (net value, ROI ratio, payback, risk)
- Build portfolio matrix (quick wins, major projects, etc.)
- Consider sequencing and dependencies
- Create executive summary view
Step 4: Assemble Business Case
Package the analysis for decision-makers. This step structures findings for the audience that will approve or reject the investment.
Key activities:
- Write executive summary (one page)
- Structure supporting detail for different reading depths
- Prepare the numbers page with key metrics
- Include recommendation with rationale
Step 5: Document Risks and Assumptions
Make the model's foundations visible. This step enables verification and builds credibility by acknowledging uncertainty.
Key activities:
- Extract all assumptions from the model
- Document each with basis, confidence, and impact
- Identify implementation and adoption risks
- Develop mitigation strategies
Quality Standard
The ROI Model methodology produces analysis that meets a specific quality bar:
Every number has a source.
No number appears without documentation of where it came from. "47 minutes per return" links to observation records. "$28/hour fully-loaded rate" links to HR data. "60% of returns affected" links to audit sampling.
Every assumption is visible.
Assumptions don't hide in spreadsheet cells. They appear in a dedicated assumption inventory, each one stated explicitly with basis, confidence level, and impact if wrong.
Every calculation can be verified.
Someone who wasn't involved in building the model can trace every output to its inputs. The logic is transparent. The arithmetic is checkable. The conclusions follow from the evidence.
The model invites challenge rather than deflecting it.
The goal isn't to prevent questions—it's to welcome them. A rigorous model gets stronger when challenged because the challenges reveal where assumptions might need adjustment. A weak model avoids scrutiny because scrutiny would expose its foundations.
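One lightweight way to keep assumptions visible is to hold them as structured records rather than burying them in spreadsheet cells. A minimal sketch, with illustrative field names and entries (this is not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str        # what must be true for the model to hold
    basis: str            # where the value came from
    confidence: str       # e.g. "high" / "medium" / "low"
    impact_if_wrong: str  # what happens to the model if this fails

# Illustrative entries; the figures echo the R-01 example used throughout this module.
inventory = [
    Assumption("58% of returns require a policy lookup",
               "audit sample of 100 returns", "high",
               "value scales roughly linearly with the affected percentage"),
    Assumption("Lookup time drops from 14.2 to 3 minutes after integration",
               "estimate based on comparable system lookups", "medium",
               "time value falls proportionally if the projection is optimistic"),
]

for a in inventory:
    print(f"[{a.confidence}] {a.statement} (basis: {a.basis})")
```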
The R-01 Example Throughout
The Returns Processing audit from Module 2 provides continuity through Module 3B. The R-01 opportunity (Returns Bible not in system) becomes the worked example for every methodology step:
What we know from Module 2:
- The ERP has 12 fields for return policies; reality requires 156 variations
- A 47-page Word document ("The Returns Bible") fills the gap
- Patricia, a senior CS rep, maintains it; she's a single point of failure
- ~60% of returns require consulting the Bible
- Consultation adds approximately 15 minutes per affected return
- 100 returns per day; 60 affected = 900 minutes (15 hours) daily
What Module 3B will establish:
- Verified baseline metrics (are these numbers accurate?)
- Calculated value across all three lenses
- Comparison to other portfolio opportunities
- Complete business case for leadership
- Documented assumptions and risks
By the end of Module 3B, you'll have seen the complete transformation from opportunity identification to investment proposal.
Proceed to baseline metrics methodology.
O — Operate
Step 1: Establish Baseline Metrics
The baseline is the foundation of every calculation that follows. Get it wrong, and everything built on top is wrong. This section details how to establish baselines that can withstand scrutiny.
Purpose of Baseline Measurement
You can't prove improvement without proving current state.
David Mensah's CRM proposal claimed 45 minutes per report as the baseline. It was an assumption—how long reports should take based on system design. The actual time was 25 minutes because people had built workarounds. His projected savings were calculated against a fictional current state.
Baseline measurement prevents this error. Before claiming improvement, document what exists. Before projecting savings, verify what's being spent.
Baselines make future comparison possible.
After implementation, you'll want to demonstrate results. Without a measured baseline, you'll have no credible "before" to compare against. Leadership will ask: "How do we know this improved anything?" The baseline provides the answer.
Baselines often reveal the problem is worse (or different) than assumed.
The Module 2 audit estimates time impact through observation sampling. Baseline measurement verifies these estimates with systematic data collection. Sometimes the problem is bigger than the audit suggested. Sometimes it's smaller. Sometimes it's different—time isn't the issue, but error rates are. The baseline reveals reality.
Time Lens Baselines
Time baselines quantify how long activities currently take. The measurements must be specific enough to support before/after comparison.
Cycle Time
End-to-end duration from trigger to completion. For returns processing: from return request received to credit issued. Cycle time includes waiting time, not just work time.
Measurement method: Track start and end timestamps for a representative sample. Minimum 30 instances across different conditions (time of day, day of week, operator).
Touch Time
Actual work time within the cycle—when someone is actively engaged. A 4-hour cycle might contain 45 minutes of touch time and 3 hours 15 minutes of waiting.
Measurement method: Time studies with direct observation. Record when work starts and stops for each step. Document waiting periods and their causes.
Wait Time
Delays between activities. Waiting for approval. Waiting for information. Waiting for availability. Wait time often dominates cycle time.
Measurement method: Calculate as (Cycle Time - Touch Time). Document causes of waiting to identify improvement opportunities.
Rework Time
Time spent correcting errors. If 15% of returns require correction after initial processing, that rework time belongs in the baseline.
Measurement method: Track error frequency and correction time separately. Some errors are quick fixes; others require significant effort. Capture the distribution.
Example — R-01 Time Baseline:
The Module 2 audit estimated 15 minutes of Returns Bible consultation per affected return. Baseline measurement verifies:
| Metric | Measurement Method | Finding |
|---|---|---|
| Total return processing time | Time study, 50 returns | 47 minutes average |
| Bible consultation time | Direct observation, 35 affected returns | 14.2 minutes average |
| Wait time during consultation | Observation | 3.1 minutes (finding document, locating policy) |
| Touch time during consultation | Observation | 11.1 minutes (reading, interpreting, deciding) |
The audit estimate of 15 minutes was close. The baseline provides verified numbers with documented measurement method.
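For the aggregation itself, a minimal sketch of how time-study observations become baseline averages. The observation values below are invented for illustration; the actual R-01 study recorded 35 observations:

```python
from statistics import mean

# Each observation: (touch_minutes, wait_minutes) for one Bible consultation.
# Illustrative values only; a real baseline records at least 30 observations.
observations = [
    (11.0, 3.5), (12.5, 2.0), (9.5, 4.0), (11.8, 2.8), (10.7, 3.2),
]

touch = mean(t for t, _ in observations)
wait = mean(w for _, w in observations)

print(f"n = {len(observations)} observations")
print(f"Average touch time: {touch:.1f} min")
print(f"Average wait time:  {wait:.1f} min")
print(f"Average consultation time: {touch + wait:.1f} min")
```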
Throughput Lens Baselines
Throughput baselines quantify volume, completion, and quality. These metrics establish current-state capacity and error rates.
Volume
Transactions per period. Returns per day. Orders per week. Invoices per month. Volume establishes the multiplier for per-transaction improvements.
Measurement method: System data when available. Manual counting when necessary. Verify system data against reality—systems sometimes count differently than practitioners experience.
Completion Rate
Percentage of transactions that finish without intervention. If 70% of returns process smoothly and 30% require exception handling, the completion rate is 70%.
Measurement method: Sample tracking through full cycle. Count exceptions, escalations, and special handling.
Error Rate
Percentage requiring correction. Errors caught internally differ from errors caught by customers. Both matter.
Measurement method: Defect tracking, quality audits, customer complaint logs. Distinguish error types—data entry errors, judgment errors, system errors.
First-Pass Yield
Percentage correct the first time. High first-pass yield means less rework. Low first-pass yield means the process generates its own additional work.
Measurement method: Track items through cycle; flag any that require correction or rerouting. First-pass yield = (Total - Reworked) / Total.
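The throughput formulas are simple counts over a tracked sample. A minimal sketch, with an illustrative five-item sample (a real baseline would track at least 30 items):

```python
# Each tracked item: did it need exception handling, did it need rework?
sample = [
    {"exception": False, "rework": False},
    {"exception": True,  "rework": False},
    {"exception": False, "rework": True},
    {"exception": False, "rework": False},
    {"exception": False, "rework": False},
]

total = len(sample)
completion_rate = sum(not s["exception"] for s in sample) / total
error_rate = sum(s["rework"] for s in sample) / total
first_pass_yield = sum(not (s["exception"] or s["rework"]) for s in sample) / total

print(f"Completion rate:  {completion_rate:.0%}")   # finished without intervention
print(f"Error rate:       {error_rate:.0%}")        # required correction
print(f"First-pass yield: {first_pass_yield:.0%}")  # correct the first time, no rerouting
```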
Example — R-01 Throughput Baseline:
| Metric | Measurement Method | Finding |
|---|---|---|
| Return volume | ERP system data, 30 days | 98 returns/day average |
| Returns requiring Bible | Sample of 100 returns | 58% (not 60% as estimated) |
| Error rate (Bible-related) | Quality audit, 50 returns | 4.3% wrong policy applied |
| Returns requiring supervisor review | Exception log | 12% of Bible-dependent returns |
The baseline reveals that volume is slightly lower and affected percentage slightly lower than audit estimates—but error rate is higher than expected. These verified numbers replace the estimates in the value model.
Focus Lens Baselines
Focus baselines quantify cognitive load—the hardest dimension to measure but often the most valuable to address.
Cognitive Load Score
Module 1's Cognitive Tax Assessment produces a score based on decision points, context switches, hidden dependencies, and uncertainty. This score serves as the focus baseline.
Measurement method: Apply Module 1 methodology to the specific activity affected by the opportunity.
Interruption Frequency
How often is work interrupted? Interruptions fragment attention and prevent deep engagement.
Measurement method: Track interruptions during time studies. Count phone calls, system alerts, colleague questions, and other attention breaks.
Context Switch Count
How many system or task transitions occur within a workflow? Each switch has cognitive cost.
Measurement method: Count distinct systems touched during one process cycle. Count mental mode shifts (e.g., from data entry to judgment call to communication).
Self-Reported Clarity
How confident are practitioners that they're doing the right thing? Uncertainty creates ongoing anxiety.
Measurement method: Structured interview or pulse survey. "On a scale of 1-5, how confident are you that you have the information needed to handle this correctly?"
Example — R-01 Focus Baseline:
| Metric | Measurement Method | Finding |
|---|---|---|
| Cognitive Load Score (Bible consultation) | Module 1 methodology | 67/100 (high) |
| Decision points per consultation | Process mapping | 4 (is item returnable? which policy applies? any exceptions? who approves?) |
| Hidden dependencies | Dependency inventory | 2 (Patricia's knowledge, document version currency) |
| Uncertainty rating | Practitioner interviews, n=6 | 3.8/5 (moderate-high uncertainty) |
The focus baseline confirms what the audit suggested: consulting the Returns Bible imposes significant cognitive load. The numerical baseline enables calculation of focus value.
The R-01 Complete Baseline
Assembling all three lenses produces a complete baseline for the R-01 opportunity:
Current State Summary:
| Dimension | Key Metric | Baseline Value | Source |
|---|---|---|---|
| Time | Bible consultation time | 14.2 min/return | Time study, n=35 |
| Time | Affected return volume | 57 returns/day | System data + sampling |
| Time | Daily time impact | 809 minutes (13.5 hours) | Calculation |
| Throughput | Error rate (policy errors) | 4.3% | Quality audit, n=50 |
| Throughput | Supervisor escalation rate | 12% | Exception log |
| Focus | Cognitive Load Score | 67/100 | Module 1 methodology |
| Focus | Uncertainty rating | 3.8/5 | Practitioner survey, n=6 |
| Risk | Single point of failure | Patricia | Dependency analysis |
Baseline Documentation:
- Measurement period: March 4-15, 2024
- Sample sizes: 35-100 depending on metric
- Data sources: Time studies, ERP system logs, quality audits, practitioner surveys
- Known limitations: Sample may not capture seasonal variation; uncertainty ratings self-reported
This baseline can withstand scrutiny. Every number has a source. Every measurement method is documented. The baseline is ready to support value calculation.
Common Baseline Mistakes
Using documented performance instead of actual.
Process documentation shows how things should work. Baselines measure how things do work. The gap is often substantial. Always measure; never assume.
Measuring during atypical periods.
A baseline taken during a crisis overstates the problem. A baseline taken during a slow period understates it. Choose measurement periods that represent normal operations. If variability is high, capture the range.
Relying solely on self-reporting.
Practitioners estimate time poorly—both directions. Some underestimate because they've normalized friction. Some overestimate because frustration skews perception. Validate self-reports with observation.
Forgetting to baseline what you'll need later.
If you plan to claim error reduction, you need a baseline error rate. If you plan to claim focus improvement, you need a baseline cognitive load score. Think ahead to what the value model will require.
Not documenting measurement method.
A number without methodology is an assertion, not a measurement. Document how each baseline metric was obtained so it can be verified and replicated.
Quality Checklist
Your baseline is ready when you can answer "yes" to each question:
- Is each metric clearly defined? (What exactly does "consultation time" include?)
- Is the measurement method documented? (Time study vs. self-report vs. system data)
- Is the data source identified? (Which system, which records, which sample)
- Is the measurement period specified? (When was data collected)
- Is the sample size adequate? (Minimum 30 for statistical reliability)
- Could someone else replicate this measurement?
- Have I baselined everything I'll need to claim improvement against?
- Have practitioners validated that these numbers reflect their experience?
A baseline that passes this checklist will survive the questions that finance directors ask.
Proceed to value modeling.
O — Operate
Step 2: Build the Value Model
With baselines established, the value model projects improvement and calculates return. This section covers the complete methodology for translating baseline measurements into defensible value projections.
The Value Model Structure
Every value model follows the same logic:
| Component | Question | Source |
|---|---|---|
| Current State | What does this cost us now? | Baseline metrics (Step 1) |
| Target State | What will performance be after? | Projected improvement |
| Gap | How much value is capturable? | Difference × volume |
| Confidence | How certain is this projection? | Assumption strength |
The model's power comes from transparency. Each component has visible inputs. Each calculation can be traced. Reviewers can challenge any element without dismissing the whole.
Time Value Calculation
Time value is the most straightforward lens—and the most commonly overstated.
The Basic Formula:
Time Value = (Current time - Projected time) × Volume × Labor cost
Example — R-01 Time Value:
From the baseline (98 total returns per day, 58% requiring Bible consultation = 57 affected returns):
- Current Bible consultation time: 14.2 minutes per affected return
- Projected time with system integration: 3 minutes (lookup + apply)
- Time saved per return: 11.2 minutes
- Affected returns per day: 57
- Daily time saved: 639 minutes (10.65 hours)
Converting to value:
- Annual working days: 250
- Annual time saved: 2,662 hours
- Fully-loaded hourly rate: $28
- Annual time value: $74,536
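The same arithmetic can be laid out in a few lines so reviewers can rerun it with their own inputs. A minimal sketch in Python using the R-01 figures above (variable names are illustrative; the small difference from $74,536 comes from rounding daily minutes):

```python
# Time value for R-01, using the baseline figures above.
current_min = 14.2        # minutes of Bible consultation per affected return
projected_min = 3.0       # minutes with system-guided lookup
affected_per_day = 57     # returns per day requiring consultation
working_days = 250        # working days per year
hourly_rate = 28.00       # fully-loaded cost of a CS representative, $/hour

minutes_saved = (current_min - projected_min) * affected_per_day   # ~638 minutes/day
annual_hours = minutes_saved * working_days / 60                   # ~2,660 hours/year
annual_time_value = annual_hours * hourly_rate                     # ~$74,500/year

print(f"Annual hours recovered: {annual_hours:,.0f}")
print(f"Annual time value: ${annual_time_value:,.0f}")
```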
But this calculation isn't complete until you answer: What happens to those 2,662 hours?
The Redeployment Question:
Finance directors ask: "Will you actually spend less money, or will people just have more slack?"
Options for addressing this:
- Headcount avoidance: "We can handle 15% volume growth without adding staff."
- Productivity reallocation: "Customer service reps will handle more complex inquiries currently escalated to supervisors."
- Quality improvement: "Representatives will have time for fuller customer interactions, improving first-call resolution."
The weakest answer is no answer. If you can't articulate what happens to recovered time, the value becomes suspect.
Hard Savings vs. Capacity Recovery:
- Hard savings mean actual dollars leaving the budget—eliminated positions, reduced overtime, canceled contracts. These are fully credible.
- Capacity recovery means ability to do more without spending more. These are credible if you can demonstrate what the capacity enables.
For R-01, the honest framing: "This recovers 2,662 hours annually, enabling current staff to handle projected 12% volume growth without adding headcount. At a fully-loaded cost of $56,000 per additional CS representative, this represents $56,000 in avoided hiring cost."
Throughput Value Calculation
Throughput value captures improved output quality and increased capacity.
Error Reduction Value:
From the R-01 baseline:
- Current error rate (wrong policy applied): 4.3%
- Projected error rate with system integration: 1.5%
- Error reduction: 2.8 percentage points
- Affected returns per day: 57
- Daily errors avoided: 1.6
- Annual errors avoided: 400
Cost per error:
- Rework time: 25 minutes average
- Customer complaint handling: 15 minutes (for 30% of errors)
- Refund adjustments: $12 average (for 20% of errors)
- Total cost per error: approximately $22
Annual error reduction value: 400 × $22 = $8,800
Escalation Reduction Value:
From the baseline:
- Current supervisor escalation rate: 12% of Bible-dependent returns
- Projected escalation rate: 4% (ambiguous cases only)
- Reduction: 8 percentage points
- Daily escalations avoided: 4.6
- Annual escalations avoided: 1,150
Cost per escalation:
- Representative wait time: 8 minutes
- Supervisor handling time: 12 minutes
- Total: 20 minutes at blended rate of $35/hour = $11.67
Annual escalation reduction value: 1,150 × $11.67 = $13,420
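Both throughput components follow the same pattern: rate reduction × volume × cost per event. A short sketch with the figures above (illustrative only; minor differences from the stated totals are rounding):

```python
# Throughput value for R-01: errors avoided plus escalations avoided.
affected_per_day = 57
working_days = 250

# Error reduction: 4.3% -> 1.5%, at roughly $22 per error (rework, complaints, refunds).
errors_avoided = affected_per_day * (0.043 - 0.015) * working_days      # ~400 per year
error_value = errors_avoided * 22.00                                    # ~$8,800

# Escalation reduction: 12% -> 4%, 20 minutes at a $35/hour blended rate.
escalations_avoided = affected_per_day * (0.12 - 0.04) * working_days   # ~1,140 per year
escalation_value = escalations_avoided * (20 / 60) * 35.00              # ~$13,300

print(f"Error reduction value:      ${error_value:,.0f}")
print(f"Escalation reduction value: ${escalation_value:,.0f}")
```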
Focus Value Calculation
Focus value is the hardest to quantify—and often the most significant. The methodology uses proxies because cognitive load itself isn't directly observable.
Cognitive Load Reduction Effects:
Module 1 established that high cognitive load produces measurable consequences:
- Increased error rates
- Higher turnover
- Longer onboarding time
- Decision fatigue degradation
Each consequence can be quantified.
Connecting Cognitive Load to Error Rates:
The Module 1 Cognitive Tax Assessment established a Cognitive Load Score of 67/100 for Returns Bible consultation. Research suggests a 10-point reduction in cognitive load correlates with approximately 15-20% reduction in error rates. This connection supports the error reduction projection in the Throughput calculation — the same intervention improves both Focus and Throughput through different mechanisms. To avoid double-counting, we attribute error reduction value to Throughput and attribute risk/sustainability value to Focus.
Turnover Risk Reduction:
Patricia maintains the Returns Bible. Her departure would create:
- Knowledge transfer gap: 2-4 weeks of elevated errors
- Reconstruction effort: estimated 40 hours to rebuild decision logic
- Institutional memory loss: some policies exist only in her experience
Probability of departure in next 12 months: 15% (industry average for tenure)
Expected annual cost of departure:
- Elevated error cost: $4,400 (2 weeks at 4× normal error rate)
- Reconstruction effort: 40 hours × $50/hour = $2,000
- Total exposure: $6,400
- Risk-adjusted cost: $6,400 × 15% = $960
With system integration, the Patricia dependency is eliminated:
- Risk-adjusted value: $960 annually
- Plus: reduced bus factor for the entire returns operation
Onboarding Time Reduction:
Current state:
- Returns Bible training: 3 days of shadowing + 2 weeks of reduced productivity
- Cost: approximately $2,200 per new hire
Projected state with system integration:
- System training: 4 hours + 1 week reduced productivity
- Cost: approximately $1,100 per new hire
With 2 new hires per year average:
- Annual onboarding value: 2 hires × $1,100 saved per hire = $2,200
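Focus value here is expected cost avoided: exposure times probability for the turnover risk, plus the per-hire saving for onboarding. A sketch with the figures above (illustrative only):

```python
# Focus value for R-01: risk-adjusted turnover exposure plus onboarding savings.
elevated_error_cost = 4_400        # elevated errors during the knowledge-transfer gap
reconstruction_cost = 40 * 50      # 40 hours to rebuild decision logic at $50/hour
departure_probability = 0.15       # chance of departure within 12 months
turnover_risk_value = (elevated_error_cost + reconstruction_cost) * departure_probability  # $960

onboarding_saving_per_hire = 2_200 - 1_100   # current cost minus projected cost per new hire
hires_per_year = 2
onboarding_value = onboarding_saving_per_hire * hires_per_year    # $2,200

print(f"Turnover risk reduction: ${turnover_risk_value:,.0f}")
print(f"Onboarding improvement:  ${onboarding_value:,.0f}")
```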
Presenting Focus Value:
Focus value should be presented separately from time and throughput value:
"Beyond direct operational savings, this opportunity reduces organizational risk by eliminating a single point of failure. The Patricia dependency currently represents $960 in annualized turnover risk and constrains hiring flexibility. System integration converts tacit knowledge to explicit process, protecting operations and enabling faster onboarding."
Total Value Aggregation
Aggregating value across lenses requires avoiding double-counting.
R-01 Total Value Summary:
| Lens | Component | Annual Value |
|---|---|---|
| Time | Labor cost of Bible consultation | $74,536 |
| Throughput | Error reduction | $8,800 |
| Throughput | Escalation reduction | $13,420 |
| Focus | Turnover risk reduction | $960 |
| Focus | Onboarding improvement | $2,200 |
| Total | | $99,916 |
Checking for Double-Counting:
Review each line item: Does any value appear twice?
- Time value counts the 11.2 minutes saved per return
- Error reduction counts rework time separately (different minutes, different activity)
- Escalation reduction counts supervisor time (different people, different minutes)
- No overlap detected
One-Time vs. Recurring Value:
All values above are annual and recurring. One-time values (like system implementation savings from avoiding a later, larger project) should be separated and not annualized.
Time Horizon:
This model projects three years of recurring value, assuming:
- Stable or growing volume
- Maintained system performance
- No major process redesign
Three-year gross value: $99,916 × 3 = $299,748
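Keeping each lens as a separate line item makes the double-counting check and the horizon math mechanical. A sketch using the R-01 components above:

```python
# Aggregate R-01 value across lenses and project over the analysis horizon.
components = {
    "Time: Bible consultation labor":    74_536,
    "Throughput: error reduction":        8_800,
    "Throughput: escalation reduction":  13_420,
    "Focus: turnover risk reduction":       960,
    "Focus: onboarding improvement":      2_200,
}
annual_value = sum(components.values())        # $99,916 recurring
horizon_years = 3
gross_value = annual_value * horizon_years     # $299,748 over the horizon

print(f"Annual recurring value: ${annual_value:,}")
print(f"{horizon_years}-year gross value: ${gross_value:,}")
```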
Determining Labor Costs
Accurate labor costs are essential. Here's how to obtain them:
Fully-Loaded Hourly Rate Calculation:
Fully-loaded rate = (Annual salary + Benefits + Overhead) / Annual work hours
Components:
- Base salary: Annual compensation
- Benefits: Typically 25-35% of salary (health, retirement, PTO)
- Overhead: Facility, equipment, management allocation (varies widely)
Example:
- Annual salary: $45,000
- Benefits (30%): $13,500
- Overhead (15%): $6,750
- Total: $65,250
- Work hours: 2,080 (52 weeks × 40 hours)
- Fully-loaded rate: $31.37/hour
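The formula translates directly into a small helper that finance can sanity-check. The default percentages below are the example values from this section, not universal constants:

```python
def fully_loaded_rate(salary, benefits_pct=0.30, overhead_pct=0.15, annual_hours=2_080):
    """(Salary + benefits + overhead) divided by annual work hours."""
    return salary * (1 + benefits_pct + overhead_pct) / annual_hours

print(f"${fully_loaded_rate(45_000):.2f}/hour")   # $31.37 for the example above
```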
Getting the Numbers:
- From HR: Salary bands by role, benefits percentage
- From Finance: Standard overhead allocation, fully-loaded rates if already calculated
- Industry benchmarks: When internal data isn't available, use published ranges
Blended vs. Role-Specific Rates:
- Use role-specific rates when one role is primarily affected
- Use blended rates when multiple roles are involved or when the mix varies
For R-01, customer service representatives cost $28/hour fully loaded and supervisors $42/hour. Since roughly 85% of affected time is representative time, a blended rate of about $30/hour is reasonable.
When Exact Figures Aren't Available:
Professional services typically range $40-80/hour fully loaded. Administrative roles typically range $25-45/hour. Manufacturing floor workers typically range $30-55/hour. Use conservative estimates and document the assumption.
Sensitivity Analysis
Single-point estimates create false confidence. Sensitivity analysis reveals how conclusions change with different inputs.
The Key Variables:
For R-01, test sensitivity to:
- Volume (affected returns per day)
- Time savings per return
- Labor cost rate
- Adoption rate
Volume Sensitivity:
| Volume | Annual Time Value | Total Value |
|---|---|---|
| 45/day (-20%) | $59,629 | $85,009 |
| 57/day (base) | $74,536 | $99,916 |
| 68/day (+20%) | $89,443 | $114,823 |
Adoption Sensitivity:
What if only 80% of representatives use the system correctly?
- Effective time savings: 11.2 × 0.8 = 9.0 minutes
- Adjusted annual time value: $59,629
- Total value at 80% adoption: $85,009
Breakeven Analysis:
At what point does the investment not pay off?
- Implementation cost estimate: $35,000
- Annual value required for 1-year payback: $35,000
- Current projection: $99,916
The model can tolerate 65% reduction in benefits and still achieve one-year payback. This provides significant margin for assumption error.
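One way to run these scenarios consistently is a small function that scales only the volume- and adoption-sensitive time value while holding the other components constant, which is the simplification used in the tables above. The split below is an assumption of this sketch, not a rule:

```python
# Sensitivity of R-01 annual value to volume and adoption, plus the breakeven margin.
TIME_VALUE = 74_536        # scales with volume and adoption
OTHER_VALUE = 25_380       # error, escalation, and focus components, held constant here
IMPLEMENTATION_COST = 35_000

def annual_value(volume_factor=1.0, adoption=1.0):
    return TIME_VALUE * volume_factor * adoption + OTHER_VALUE

for label, factor in [("-20% volume", 0.8), ("base case  ", 1.0), ("+20% volume", 1.2)]:
    print(f"{label}: ${annual_value(volume_factor=factor):,.0f}")
print(f"80% adoption: ${annual_value(adoption=0.8):,.0f}")

# Breakeven: how far can benefits fall before one-year payback is lost?
tolerable_reduction = 1 - IMPLEMENTATION_COST / annual_value()
print(f"Tolerable benefit reduction: {tolerable_reduction:.0%}")
```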
Presenting Ranges:
Rather than claiming $99,916, present:
- Conservative case (low volume, 70% adoption): $67,000
- Base case: $100,000
- Optimistic case (high volume, full adoption): $125,000
"We project annual value between $67,000 and $125,000, with our base case at approximately $100,000."
Value Model Template
Use this structure for each opportunity:
OPPORTUNITY: [ID and Name]
Baseline Summary:
- Key metric 1: [value] ([source])
- Key metric 2: [value] ([source])
- Key metric 3: [value] ([source])
Time Value:
- Current time: [X] per [unit]
- Projected time: [Y] per [unit]
- Time saved: [X-Y] per [unit]
- Volume: [N] [units] per [period]
- Labor rate: $[Z]/hour ([source])
- Annual time value: $[calculated]
- Redeployment plan: [what happens to recovered time]
Throughput Value:
- Error reduction: [current] to [projected], [N] errors avoided
- Cost per error: $[X] ([components])
- Annual error value: $[calculated]
- Other throughput gains: [if applicable]
Focus Value:
- Risk reduction: [specific risks addressed]
- Estimated value: $[calculated] ([methodology])
- Qualitative benefits: [described, not inflated]
Total Value:
- Annual recurring: $[sum]
- One-time: $[if any]
- Time horizon: [N] years
- Gross value over horizon: $[calculated]
Sensitivity:
- Variable 1 range: [low] to [high] = $[range]
- Variable 2 range: [low] to [high] = $[range]
- Breakeven: [condition]
Confidence Assessment:
- Overall confidence: [High/Medium/Low]
- Highest-risk assumption: [identified]
- What would change our projection most: [identified]
Quality Checklist
Your value model is ready when you can answer "yes" to each question:
- Is every number traceable to a source or documented assumption?
- Is the time value calculation explicit about what happens to recovered time?
- Is throughput value based on measurable outcomes, not activities?
- Is focus value presented as risk reduction, not vague "better experience"?
- Have I checked for double-counting across lenses?
- Have I distinguished one-time from recurring value?
- Have I specified the time horizon and its justification?
- Have I conducted sensitivity analysis on key variables?
- Could someone else verify my calculations?
- Would I bet my professional credibility on these projections?
A model that passes this checklist will survive finance review.
Proceed to opportunity ranking.
O — Operate
Step 3: Rank Opportunities
With value models built, the portfolio must be prioritized. Not every positive-ROI opportunity deserves resources. This section covers how to rank opportunities systematically and present them as a portfolio.
Why Ranking Matters
Resources are finite.
Organizations have limited budget, limited staff time, and limited change capacity. Pursuing every opportunity simultaneously guarantees none succeed. Ranking forces choices.
Not every positive ROI should proceed.
An opportunity with 15% ROI competes for the same resources as one with 80% ROI. Both are "positive," but funding the first means not funding the second. The question isn't "Is this worth doing?" but "Is this the best use of these resources?"
Sequencing affects success probability.
Some opportunities create foundations for others. Some compete for the same attention. A rushed implementation that fails poisons future initiatives. Strategic sequencing improves portfolio-level success, not just individual project success.
Ranking Criteria
Six criteria determine rank. Each captures a different dimension of opportunity attractiveness.
1. Net Value
Total benefit minus total cost over the analysis horizon.
- R-01 example: $299,748 (3-year gross) - $35,000 (implementation) = $264,748 net value
Net value answers: "How much value does this create?"
2. ROI Ratio
Return per dollar invested: (Benefit - Cost) / Cost
- R-01 example: ($299,748 - $35,000) / $35,000 = 7.56 or 756%
ROI ratio answers: "How efficiently does this use capital?"
High net value with low ROI means big opportunity but expensive. High ROI with low net value means efficient but small. Both dimensions matter.
3. Payback Period
Time to recover the investment.
- R-01 example: $35,000 / $99,916 annual value = 0.35 years (about 4 months)
Payback period answers: "How quickly do we get our money back?"
Shorter payback reduces exposure to assumption error and organizational change.
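These three quantitative criteria fall straight out of the value model. A minimal helper using the R-01 figures (illustrative, not a prescribed tool):

```python
def financial_metrics(annual_value, cost, horizon_years=3):
    """Net value, ROI ratio, and payback period for a ranked opportunity."""
    net_value = annual_value * horizon_years - cost
    roi = net_value / cost
    payback_months = cost / annual_value * 12
    return net_value, roi, payback_months

net, roi, payback = financial_metrics(annual_value=99_916, cost=35_000)
print(f"Net value ${net:,.0f} | ROI {roi:.0%} | payback {payback:.1f} months")
# Net value $264,748 | ROI 756% | payback 4.2 months
```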
4. Strategic Alignment
How well does this opportunity support organizational priorities?
- High: Directly enables stated strategic goals
- Medium: Supports operations without strategic connection
- Low: Improves efficiency in non-priority areas
Strategic alignment answers: "Does this matter beyond the numbers?"
5. Risk Profile
Combined implementation and outcome risk.
- Low risk: Proven approach, high adoption confidence, stable baseline
- Medium risk: Some uncertainty in execution or adoption
- High risk: New approach, complex implementation, uncertain adoption
Risk profile answers: "How likely are we to achieve projected value?"
6. Implementation Complexity
Resource requirements and organizational disruption.
- Low: Small team, short timeline, minimal change management
- Medium: Cross-functional effort, moderate timeline, some training
- High: Major initiative, long timeline, significant organizational change
Implementation complexity answers: "What does this demand from us?"
The Ranking Matrix
A multi-criteria framework enables systematic comparison.
Scoring Scale:
Each criterion receives a score from 1-5:
- 5: Exceptional
- 4: Strong
- 3: Acceptable
- 2: Weak
- 1: Poor
Weighting:
Organizations weight criteria differently. A cash-constrained organization may weight ROI ratio heavily. A growth-stage organization may weight strategic alignment heavily. Default weights for balanced portfolios:
| Criterion | Weight | Rationale |
|---|---|---|
| Net Value | 25% | Total value creation matters most |
| ROI Ratio | 20% | Capital efficiency matters |
| Payback Period | 15% | Risk reduction through quick recovery |
| Strategic Alignment | 20% | Organizational priorities guide choices |
| Risk Profile | 10% | Confidence in achieving projections |
| Implementation Complexity | 10% | Execution feasibility |
Calculating Weighted Score:
Weighted score = Sum of (Score × Weight) for all criteria
Example — R-01 Scoring:
| Criterion | Score | Weight | Weighted |
|---|---|---|---|
| Net Value | 5 | 25% | 1.25 |
| ROI Ratio | 5 | 20% | 1.00 |
| Payback Period | 5 | 15% | 0.75 |
| Strategic Alignment | 4 | 20% | 0.80 |
| Risk Profile | 4 | 10% | 0.40 |
| Implementation Complexity | 4 | 10% | 0.40 |
| Total | | | 4.60 |
R-01 scores 4.60 out of 5.00 — a strong opportunity.
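The weighted score is a plain sum of score × weight. A sketch with the default weights and R-01 scores from the tables above:

```python
# Weighted ranking score (default weights and R-01 scores from the tables above).
weights = {"net_value": 0.25, "roi": 0.20, "payback": 0.15,
           "strategic_alignment": 0.20, "risk_profile": 0.10, "complexity": 0.10}
r01 = {"net_value": 5, "roi": 5, "payback": 5,
       "strategic_alignment": 4, "risk_profile": 4, "complexity": 4}

weighted_score = sum(r01[criterion] * weight for criterion, weight in weights.items())
print(f"R-01 weighted score: {weighted_score:.2f} / 5.00")   # 4.60
```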
Handling Ties and Close Calls:
When weighted scores are within 0.3 points, consider:
- Which opportunity creates more organizational learning?
- Which builds foundation for future improvements?
- Which has more executive sponsorship?
- Which aligns better with current organizational energy?
Numbers inform; judgment decides.
Portfolio View
The portfolio matrix positions opportunities by value and effort.
Quadrant Model:
| | High Effort | Low Effort |
|---|---|---|
| High Value | Major Projects (plan carefully) | Quick Wins (do first) |
| Low Value | Questionable (reconsider) | Fill-Ins (do when able) |
Quick Wins (High Value, Low Effort)
Start here. These opportunities build momentum, demonstrate capability, and generate resources for larger initiatives. R-01 belongs in this quadrant.
Major Projects (High Value, High Effort)
Plan carefully. These require significant investment but deliver significant return. Success depends on organizational readiness and sustained commitment.
Fill-Ins (Low Value, Low Effort)
Do when able. These are worth doing but shouldn't displace higher-priority work. Useful for building skills or maintaining momentum between major efforts.
Questionable (Low Value, High Effort)
Reconsider. Unless strategic factors override the numbers, these shouldn't proceed. Document why they're questionable so the reasoning is preserved.
Sequencing Considerations
Order affects outcome. Several factors determine optimal sequence.
Prerequisites
Some opportunities enable others. Building a data foundation enables analytics. Standardizing processes enables automation. Identify dependencies:
- R-01 (Returns Bible) enables R-03 (Return routing automation) because automated routing requires policy logic in the system
- R-02 (Supervisor approval workflow) is independent — can proceed in parallel
Resource Conflicts
Multiple opportunities may require the same resources:
- Same subject matter experts
- Same technical resources
- Same change management capacity
When resources conflict, sequence by priority rank. Don't attempt simultaneous implementation when resources can't support it.
Change Fatigue
Organizations have finite capacity for change. Three major implementations simultaneously may all fail where two sequential implementations would all succeed.
Consider:
- How much change has the organization absorbed recently?
- How much change resilience exists in affected teams?
- What's the organizational appetite for more change?
Building a Realistic Roadmap:
| Quarter | Initiative | Rationale |
|---|---|---|
| Q1 | R-01 (Returns Bible) | Quick win, builds foundation |
| Q2 | R-04 (Exception tracking) | Low effort, independent |
| Q3 | R-02 (Approval workflow) | Medium effort, requires Q1 learning |
| Q4 | R-03 (Return routing) | Depends on R-01 completion |
The roadmap shows what, when, and why — enabling leadership to track progress and adjust as conditions change.
The R-01 in Portfolio Context
The Returns Processing audit identified four opportunities. Here's how they compare:
Portfolio Summary:
| ID | Opportunity | Net Value (3yr) | ROI | Payback | Weighted Score |
|---|---|---|---|---|---|
| R-01 | Returns Bible in system | $264,748 | 756% | 4 mo | 4.60 |
| R-02 | Supervisor approval workflow | $87,400 | 349% | 6 mo | 3.85 |
| R-03 | Return routing automation | $156,200 | 224% | 9 mo | 3.95 |
| R-04 | Exception tracking dashboard | $42,800 | 428% | 4 mo | 3.70 |
Why R-01 Ranks First:
- Highest net value ($264,748)
- Highest ROI ratio (756%)
- Shortest payback (4 months)
- Enables R-03 (foundational)
- Eliminates critical single point of failure (Patricia)
The numbers and strategic factors align. R-01 is the clear first priority.
Why R-03 Ranks Second Despite Lower ROI:
R-03 has lower ROI than R-04 but higher net value and strategic importance (customer experience impact). The weighted scoring reflects these multiple dimensions.
Portfolio Recommendation:
"We recommend proceeding with R-01 immediately. R-04 can proceed in parallel given independent resource requirements. R-02 and R-03 should be sequenced for Q2-Q3 after R-01 demonstrates value and team capacity is verified."
Presenting the Ranked Portfolio
The portfolio presentation must serve multiple audiences.
Executive Summary (One Page):
RETURNS PROCESSING IMPROVEMENT PORTFOLIO
Prepared: [Date]
SUMMARY
Four opportunities identified with combined 3-year net value of $551,148.
Recommended sequence prioritizes quick wins and foundations.
TOP PRIORITY: R-01 — Returns Bible in System
- Investment: $35,000
- Annual Value: $99,916
- Payback: 4 months
- Strategic: Eliminates single point of failure
PORTFOLIO MATRIX
[Visual quadrant showing all four opportunities positioned]
RECOMMENDED SEQUENCE
Q1: R-01 + R-04 (parallel)
Q2: R-02
Q3: R-03
RESOURCE REQUIREMENTS
- Q1: 1 developer (0.5 FTE), CS lead (0.2 FTE)
- Q2-Q3: Scale based on Q1 results
DECISION REQUESTED
Approve R-01 implementation and Q1 resource allocation.
Supporting Detail:
Below the executive summary, include:
- Individual opportunity summaries (one paragraph each)
- Ranking matrix with scores
- Dependency diagram
- Risk summary by opportunity
- Assumption highlights
The One-Page Portfolio View:
Leaders need to see the whole portfolio at once. A single page showing:
- All opportunities with key metrics
- Quadrant positioning
- Recommended sequence
- Total portfolio value
This enables informed discussion: "Why is R-03 after R-02?" "What if we have more budget?" "What's the minimum viable portfolio?"
Quality Checklist
Your ranking is ready when you can answer "yes" to each question:
- Have I scored each opportunity against all six criteria?
- Have I used consistent scoring definitions across opportunities?
- Have I applied appropriate weights for my organization's priorities?
- Have I identified dependencies between opportunities?
- Have I checked for resource conflicts?
- Have I considered organizational change capacity?
- Can I explain why #1 ranks above #2?
- Does the sequence make operational sense?
- Have I prepared both summary and detail views?
- Could leadership make a resource decision from this presentation?
A portfolio that passes this checklist enables informed resource allocation.
Proceed to business case assembly.
O — Operate
Step 4: Assemble the Business Case
The business case packages analysis for decision-makers. This section covers structure, audience calibration, and the complete R-01 example.
Purpose of the Business Case
Enable decision, not persuade.
A business case presents evidence so leadership can decide. It is not a sales document. The goal is informed choice, not approval at any cost.
If the evidence supports proceeding, the business case makes that clear. If the evidence is mixed, the business case presents the complexity. If the evidence says don't proceed, the business case says so.
Provide what leadership needs.
Decision-makers need:
- Clear problem statement (what's broken, what it costs)
- Proposed solution (what you want to do)
- Expected return (what they get for the investment)
- Risks and assumptions (what could go wrong)
- Recommendation (your professional judgment)
They don't need every calculation. They need confidence that the calculations are sound.
Document reasoning for future reference.
Business cases become organizational records. When someone asks "Why did we do R-01?" the business case provides the answer. Clear documentation prevents revisionist history and enables learning from outcomes.
Invite scrutiny rather than deflect it.
A strong business case welcomes questions. "I'd like to understand your adoption assumption" is a good outcome — it means the reviewer is engaging with the analysis. A business case that discourages questions is a business case that lacks confidence.
Business Case Structure
The complete business case contains eight sections, ordered for how decision-makers read.
1. Executive Summary (1 page)
The entire case condensed. A busy executive should be able to make a preliminary decision from this page alone.
Contents:
- Problem (2-3 sentences)
- Solution (1-2 sentences)
- Investment required
- Return expected
- Payback period
- Recommendation
2. Problem Statement
What friction exists and what it costs. This section establishes the "why now" — the organizational pain that justifies attention.
Include:
- Current state description
- Quantified impact (time, errors, risk)
- Root cause (why the problem exists)
- Consequence of inaction
3. Proposed Solution
What intervention will address the problem. Be specific enough to cost accurately, general enough to allow implementation flexibility.
Include:
- Solution description
- How it addresses root cause
- What changes for practitioners
- Dependencies and prerequisites
4. Value Analysis
The ROI model, presented accessibly. This is where the three lenses appear.
Include:
- Baseline summary (current state metrics)
- Projected improvement (target state)
- Value calculation (Time, Throughput, Focus)
- Total value over analysis horizon
- Key assumptions (visible, not buried)
5. Implementation Approach
How the solution will be delivered. Enough detail to evaluate feasibility, not a full project plan.
Include:
- Major phases
- Timeline estimate
- Resource requirements
- Key milestones
- Success criteria
6. Risk Assessment
What could go wrong and how you'll address it.
Include:
- Implementation risks
- Adoption risks
- Outcome risks
- Mitigation strategies
- Contingency triggers
7. Recommendation
Your professional judgment based on the analysis.
Options:
- Proceed as proposed
- Proceed with modifications
- Defer pending [specific conditions]
- Don't proceed
Include brief rationale. The recommendation should follow logically from the analysis.
8. Appendix
Detailed calculations, data sources, and supporting evidence. Reviewers who want to verify can dig here; others can skip it.
Include:
- Complete value model calculations
- Baseline data sources
- Assumption documentation
- Sensitivity analysis
- Comparison alternatives (if evaluated)
Writing for the Audience
Different readers engage with different sections.
What Executives Read:
- Executive summary (always)
- Recommendation (always)
- Problem statement (if summary hooks them)
- Bottom-line numbers (investment, return, payback)
Executives decide whether to proceed. They need confidence in the analysis without needing to verify every number.
What Finance Reviews:
- Value analysis (detailed)
- Assumptions (challenged)
- Sensitivity analysis (probed)
- Cost estimates (scrutinized)
Finance validates the model. They'll ask: "Where did this number come from?" Every number should have an answer.
What Operations Needs:
- Proposed solution (detailed)
- Implementation approach (realistic?)
- Resource requirements (feasible?)
- What changes for practitioners (manageable?)
Operations evaluates whether this will actually work in their environment.
Structuring for Different Reading Depths:
Layer 1 (30 seconds): Executive summary only
Layer 2 (5 minutes): Summary + problem + recommendation
Layer 3 (20 minutes): All main sections
Layer 4 (1 hour): Everything including appendix
Make each layer self-contained. A reader at any depth should have a complete-enough picture to engage appropriately.
The Numbers Page
The value analysis section deserves special attention. This is where trust is won or lost.
Current State Summary:
| Metric | Value | Source |
|---|---|---|
| Bible consultation time | 14.2 min/return | Time study, n=35 |
| Affected returns | 57/day | System data + sampling |
| Error rate | 4.3% | Quality audit |
| Escalation rate | 12% | Exception log |
Target State Projection:
| Metric | Current | Target | Improvement |
|---|---|---|---|
| Consultation time | 14.2 min | 3.0 min | 11.2 min saved |
| Error rate | 4.3% | 1.5% | 2.8 points |
| Escalation rate | 12% | 4% | 8 points |
Investment Required:
| Category | Amount |
|---|---|
| System integration | $25,000 |
| Training and change management | $6,000 |
| Contingency (15%) | $4,000 |
| Total | $35,000 |
Return Expected:
| Value Component | Annual | 3-Year |
|---|---|---|
| Time value | $74,536 | $223,608 |
| Error reduction | $8,800 | $26,400 |
| Escalation reduction | $13,420 | $40,260 |
| Focus/risk reduction | $3,160 | $9,480 |
| Gross value | $99,916 | $299,748 |
Summary Metrics:
- Net value (3-year): $264,748
- ROI: 756%
- Payback period: 4.2 months
Key Assumptions:
- Affected return volume remains stable (57/day)
- Time savings of 11.2 minutes achieved (±20%)
- 90% practitioner adoption within 60 days
- Error reduction to 1.5% with system guidance
Each assumption has a documented basis in the appendix.
The R-01 Business Case (Complete Example)
BUSINESS CASE: Returns Policy Integration (R-01)
EXECUTIVE SUMMARY
Problem: Returns processing requires consulting a 47-page Word document ("Returns Bible") for 58% of returns, adding 14 minutes per return and creating dependency on one employee (Patricia) who maintains the document.
Solution: Integrate return policy logic into the ERP system, replacing document lookup with system-guided decisions.
Investment: $35,000 (one-time)
Return: $99,916 annual value ($264,748 net over 3 years)
Payback: 4 months
Recommendation: Proceed. High ROI, short payback, eliminates critical single point of failure.
PROBLEM STATEMENT
The ERP system contains 12 fields for return policy decisions. Reality requires 156 policy variations based on product category, purchase timing, customer status, and promotional rules. A Word document maintained by Patricia Walsh bridges this gap.
Currently:
- 57 returns per day (58% of 98 total) require Bible consultation
- Consultation adds 14.2 minutes per return (809 minutes daily)
- 4.3% of Bible-dependent returns have wrong policy applied
- 12% require supervisor escalation for interpretation
- Patricia is the only person who understands all policy interactions
If Patricia is unavailable (vacation, illness, departure), returns processing degrades significantly. This dependency constrains hiring, complicates training, and creates operational risk.
The root cause is a system design decision made when return policies were simpler. The business evolved; the system didn't. The Bible is a workaround that has become load-bearing infrastructure.
PROPOSED SOLUTION
Integrate return policy decision logic into the ERP system as a guided workflow:
- Encode all 156 policy variations as decision rules
- Build a decision-tree interface that guides representatives through policy selection
- Include exception flagging for genuinely ambiguous cases
- Maintain audit trail showing policy applied and reasoning
This addresses the root cause (system gap) rather than the symptom (document consultation). Patricia's knowledge becomes organizational knowledge. New hires learn the system, not the document.
The solution requires ERP configuration, not custom development. Similar implementations at peer organizations have taken 6-8 weeks.
VALUE ANALYSIS
See Numbers Page above for complete breakdown.
Summary: This opportunity delivers $99,916 in annual value through time recovery ($74,536), error reduction ($8,800), escalation reduction ($13,420), and risk reduction ($3,160).
The model is conservative. Sensitivity analysis shows positive ROI even with 35% reduction in projected benefits.
IMPLEMENTATION APPROACH
Phase 1 (Weeks 1-3): Policy documentation and rule definition
- Extract all policy logic from Bible with Patricia
- Define decision tree structure
- Validate against sample returns
Phase 2 (Weeks 4-6): System configuration
- Configure ERP decision workflow
- Build exception handling process
- Create audit/reporting capability
Phase 3 (Weeks 7-8): Training and rollout
- Train CS representatives (4 hours each)
- Parallel operation (Bible available as backup)
- Monitor adoption and error rates
Success criteria:
- 90% of returns processed via system workflow by Week 10
- Error rate ≤2% by Week 12
- Bible consultation <5% of returns by Week 12
RISK ASSESSMENT
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Policy logic errors in configuration | Medium | High | Extensive testing with historical returns |
| Low practitioner adoption | Low | Medium | Parallel operation period; gradual Bible retirement |
| Patricia unavailable during documentation | Medium | High | Begin immediately; document iteratively |
| Scope creep (additional policy complexity) | Medium | Medium | Fixed scope; defer enhancements to Phase 2 |
The highest risk is incomplete policy capture during Phase 1. Mitigation: validate against 200 historical returns before configuration begins.
RECOMMENDATION
Proceed as proposed.
R-01 offers exceptional return (756% ROI, 4-month payback) while eliminating a critical operational risk. The implementation is straightforward, leveraging existing ERP capability. Delaying increases exposure to the Patricia dependency.
Recommend immediate approval with target start date of [Month 1].
APPENDIX
[Would contain: complete value model spreadsheet, baseline measurement documentation, assumption inventory, sensitivity analysis tables, comparison to alternative approaches]
Common Business Case Failures
These patterns undermine credibility.
Advocacy Disguised as Analysis
The business case that starts with a conclusion and builds backwards. Numbers selected to support a predetermined answer. Assumptions chosen for their conclusions, not their basis.
Tell: All assumptions are optimistic. No sensitivity analysis. Risks minimized or absent.
Assumptions Hidden in Appendix
Key assumptions buried where reviewers won't find them. The main document presents projections as facts.
Tell: Numbers appear without basis. "We project 90% adoption" with no explanation of why.
Missing the "Do Nothing" Comparison
Presenting benefits without acknowledging that "do nothing" has costs too — but also has zero investment risk.
Fix: Include explicit "cost of inaction" section. What happens if we don't do this?
Overclaiming
Benefits inflated through double-counting, theoretical maximums, or unsubstantiated multipliers.
Tell: Value projections that exceed industry benchmarks without explanation. Benefits that assume perfect execution.
Underclaiming
Benefits minimized to appear conservative, but actually hiding value that could justify the investment.
Tell: Focus benefits ignored or labeled "intangible." Risk reduction not quantified.
The "Don't Proceed" Recommendation
Sometimes the analysis says no.
How to Write It:
"Based on this analysis, we recommend not proceeding with [opportunity] at this time.
The projected return of $X does not justify the investment of $Y given:
- [Specific concern 1]
- [Specific concern 2]
- [Specific concern 3]
This analysis has value: it prevents investment in an opportunity that would likely underperform, freeing resources for higher-return alternatives."
This Is Not Failure:
A well-constructed analysis that recommends against proceeding has succeeded. It prevented resource misallocation. It demonstrated analytical rigor. It protected organizational credibility.
Practitioners who occasionally recommend against their own proposals build trust. Their "proceed" recommendations carry more weight because leadership knows they're not reflexive advocates.
Business Case Template
BUSINESS CASE: [Opportunity Name and ID]
EXECUTIVE SUMMARY
Problem: [2-3 sentences on the friction and its cost]
Solution: [1-2 sentences on the intervention]
Investment: $[amount]
Return: $[annual] annual / $[total] over [N] years
Payback: [N] months
Recommendation: [Proceed / Modify / Don't Proceed]
PROBLEM STATEMENT
[Current state, quantified impact, root cause, consequence of inaction]
PROPOSED SOLUTION
[Description, how it addresses root cause, what changes]
VALUE ANALYSIS
[Baseline summary, target state, value by lens, total value, key assumptions]
IMPLEMENTATION APPROACH
[Phases, timeline, resources, success criteria]
RISK ASSESSMENT
[Table: Risk, Probability, Impact, Mitigation]
RECOMMENDATION
[Professional judgment with brief rationale]
APPENDIX
[Detailed calculations, data sources, sensitivity analysis]
Quality Checklist
Your business case is ready when you can answer "yes" to each question:
- Could an executive make a preliminary decision from the summary alone?
- Is the problem statement quantified, not just described?
- Does the solution clearly address the root cause?
- Are all value calculations traceable to documented sources?
- Are key assumptions visible in the main document, not just the appendix?
- Have I included risk assessment with mitigation strategies?
- Is my recommendation clear and supported by the analysis?
- Would I be comfortable if this document were reviewed a year from now?
- Does this business case invite scrutiny rather than deflect it?
- If the numbers said "don't proceed," would I have written that?
A business case that passes this checklist earns the right to be funded.
Proceed to risk and assumption documentation.
O — Operate
Step 5: Document Risks and Assumptions
Every value model rests on assumptions. Hidden assumptions are landmines. This section covers systematic assumption documentation and risk assessment.
Why Assumptions Matter
Every model rests on assumptions.
The R-01 value model assumes 57 affected returns per day. It assumes 11.2 minutes saved per return. It assumes 90% adoption. Change any assumption, and the output changes.
Assumptions aren't weaknesses — they're how models handle uncertainty. The weakness is hiding them.
Hidden assumptions are landmines.
When assumptions hide in spreadsheet cells, reviewers can't evaluate them. The model looks more certain than it is. When reality differs from hidden assumptions, the model fails without warning.
David Mensah's CRM proposal assumed 45 minutes per report. The assumption was hidden; the actual time was 25 minutes. His projected savings were calculated against a fictional baseline. The project delivered 30% of projection because the foundation was false.
Transparent assumptions enable verification.
When assumptions are visible, reviewers can challenge them: "Why do you assume 90% adoption? Industry benchmarks suggest 70%." This is healthy. The challenge may improve the model. Or the response may validate the assumption. Either outcome strengthens the analysis.
Transparent assumptions also enable post-implementation learning. When you can compare assumed vs. actual, you improve future estimates.
Assumption Categories
Six categories capture most value model assumptions.
1. Volume Assumptions
How much activity flows through the affected process?
Examples:
- Returns per day
- Transactions per week
- Reports generated per month
Volume directly multiplies other value. A 20% volume error creates 20% value error.
2. Adoption Assumptions
Will people use the new approach?
Examples:
- Percentage of practitioners using new system
- Time to reach full adoption
- Compliance with new process
Low adoption is the most common reason projected benefits don't materialize. If 50% of people use the old approach, 50% of projected value disappears.
3. Performance Assumptions
Will the solution perform as expected?
Examples:
- Time required per transaction (after)
- Error rate (after)
- System reliability
Performance assumptions project target state. Optimistic performance projections inflate value.
4. Timing Assumptions
When will benefits begin and how long will they last?
Examples:
- Implementation completion date
- Time to reach full productivity
- Benefit duration (before refresh needed)
Timing affects present value and payback calculations. Benefits that start six months late or decay in two years instead of three significantly change the model.
5. Sustainability Assumptions
Will benefits persist?
Examples:
- Process stability (no regression to old ways)
- System maintenance adequacy
- Continued organizational commitment
Many improvements show early gains that erode over time. If sustainability is low, multi-year projections are questionable.
6. Cost Assumptions
What will implementation and operation actually cost?
Examples:
- Vendor pricing stability
- Internal effort estimates
- Change management requirements
Cost underestimation is nearly universal. Optimistic cost assumptions inflate ROI.
Assumption Documentation Template
Each assumption deserves systematic documentation.
Template:
| Field | Content |
|---|---|
| Assumption ID | [Unique identifier for reference] |
| Category | [Volume / Adoption / Performance / Timing / Sustainability / Cost] |
| Statement | [The assumption expressed clearly] |
| Basis | [Why we believe this — evidence, benchmarks, expert judgment] |
| Confidence | [High / Medium / Low] |
| Impact if Wrong | [What happens to the model if this assumption is significantly off] |
| Validation Plan | [How and when we'll know if this was accurate] |
Example — R-01 Volume Assumption:
| Field | Content |
|---|---|
| Assumption ID | A-01 |
| Category | Volume |
| Statement | 57 returns per day require Bible consultation |
| Basis | 30-day ERP data showing 98 returns/day; sampling indicates 58% require Bible |
| Confidence | High |
| Impact if Wrong | Linear impact: 20% fewer returns = 20% less value (~$20K annually) |
| Validation Plan | Monitor system data monthly post-implementation |
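Teams that keep the inventory in a script or spreadsheet export sometimes find a lightweight record structure useful. A sketch whose field names mirror the template above (the class itself is illustrative, not part of the methodology):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One row of the assumption inventory, mirroring the template fields."""
    assumption_id: str
    category: str          # Volume / Adoption / Performance / Timing / Sustainability / Cost
    statement: str
    basis: str
    confidence: str        # High / Medium / Low
    impact_if_wrong: str
    validation_plan: str

a01 = Assumption(
    assumption_id="A-01",
    category="Volume",
    statement="57 returns per day require Bible consultation",
    basis="30-day ERP data (98 returns/day) x 58% sampling rate",
    confidence="High",
    impact_if_wrong="Linear: 20% fewer returns means roughly 20% less value",
    validation_plan="Monitor system data monthly post-implementation",
)
print(a01.assumption_id, a01.confidence)
```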
The R-01 Assumption Inventory
Complete documentation for the R-01 value model:
A-01: Affected Return Volume
- Statement: 57 returns per day require policy consultation
- Basis: ERP system data (98/day) × sampling rate (58%)
- Confidence: High
- Impact if wrong: Linear — volume directly multiplies time value
- Validation: Monthly volume tracking
A-02: Time Savings Per Return
- Statement: System integration reduces consultation from 14.2 to 3.0 minutes (11.2 min saved)
- Basis: Time study for current state; comparable implementations for target state
- Confidence: Medium — target state is projected, not measured
- Impact if wrong: Linear — time savings directly affect value
- Validation: Time study at Week 10 post-implementation
A-03: Practitioner Adoption Rate
- Statement: 90% of practitioners will use system workflow within 60 days
- Basis: Similar ERP changes achieved 85-95% adoption; mandatory process with manager visibility
- Confidence: Medium — depends on training effectiveness and management reinforcement
- Impact if wrong: High — 70% adoption reduces value by ~22%
- Validation: System usage reports at Days 30, 60, 90
A-04: Error Rate Reduction
- Statement: Policy application errors decrease from 4.3% to 1.5%
- Basis: Comparable system implementations; error source analysis shows 65% are lookup errors
- Confidence: Medium — target assumes system guidance addresses primary error sources
- Impact if wrong: Moderate — error value is ~9% of total
- Validation: Quality audit at Week 12
A-05: Implementation Cost
- Statement: Total implementation cost is $35,000
- Basis: Vendor quote ($25K), internal estimate for training/change management ($6K), 15% contingency ($4K)
- Confidence: Medium — vendor quote firm; internal estimate less certain
- Impact if wrong: 20% cost increase reduces ROI from 756% to 620% (still strong)
- Validation: Track actual costs against budget
A-06: Benefit Sustainability
- Statement: Benefits persist for 3 years without significant decay
- Basis: Policy change frequency is low; system will require minor maintenance only
- Confidence: Medium — assumes no major policy overhaul or system replacement
- Impact if wrong: If benefits decay 20% annually, 3-year value drops by ~$50K
- Validation: Annual value reassessment
High-Impact Assumptions:
Review the inventory for assumptions that are both uncertain and high-impact:
| Assumption | Confidence | Impact | Priority |
|---|---|---|---|
| A-03 (Adoption) | Medium | High | Monitor closely |
| A-02 (Time savings) | Medium | High | Validate early |
| A-04 (Error reduction) | Medium | Moderate | Track |
| A-05 (Cost) | Medium | Moderate | Track |
| A-01 (Volume) | High | High | Low concern |
| A-06 (Sustainability) | Medium | Moderate | Annual review |
Adoption and time savings assumptions require active monitoring. If adoption at Day 30 is below 75%, escalate immediately.
Risk Documentation
Risks differ from assumptions. Assumptions are inputs to the model; risks are events that could affect outcomes.
Risk Categories:
Implementation Risks
- Technical: Will the solution work as designed?
- Resource: Will we have the people and budget to complete?
- Timeline: Will we finish when projected?
- Scope: Will requirements remain stable?
Adoption Risks
- Resistance: Will practitioners accept the change?
- Capability: Can practitioners learn the new approach?
- Support: Will management reinforce adoption?
Outcome Risks
- Performance: Will the solution deliver projected benefits?
- Sustainability: Will benefits persist?
- Unintended consequences: What could go wrong that we haven't anticipated?
Risk Documentation Template:
| Field | Content |
|---|---|
| Risk ID | [Unique identifier] |
| Category | [Implementation / Adoption / Outcome] |
| Description | [What could happen] |
| Probability | [High / Medium / Low] |
| Impact | [High / Medium / Low] |
| Risk Score | [Probability × Impact: 1-9] |
| Mitigation | [How we reduce probability or impact] |
| Contingency Trigger | [When do we know this risk is materializing] |
| Contingency Action | [What we do if it materializes] |
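Where the template asks for a 1-9 risk score, a common convention (assumed here) maps High/Medium/Low to 3/2/1 and multiplies:

```python
# Risk score as Probability x Impact, mapping High/Medium/Low to 3/2/1.
LEVELS = {"High": 3, "Medium": 2, "Low": 1}

def risk_score(probability: str, impact: str) -> int:
    return LEVELS[probability] * LEVELS[impact]   # ranges from 1 (Low/Low) to 9 (High/High)

print(risk_score("Medium", "High"))   # 6, e.g. incomplete policy capture
print(risk_score("Medium", "Low"))    # 2, e.g. mid-project policy changes
```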
R-01 Risk Inventory
R-01: Incomplete Policy Capture
- Category: Implementation
- Description: Not all policy variations captured during documentation phase
- Probability: Medium
- Impact: High (system gives wrong guidance)
- Risk Score: 6
- Mitigation: Validate against 200 historical returns before go-live
- Contingency trigger: >2% of test returns handled incorrectly
- Contingency action: Extend Phase 1; add policy review sessions
R-02: Patricia Unavailable During Documentation
- Category: Implementation
- Description: Patricia illness, departure, or competing priorities delay knowledge capture
- Probability: Medium
- Impact: High (cannot proceed without her knowledge)
- Risk Score: 6
- Mitigation: Begin immediately; document iteratively; identify secondary sources
- Contingency trigger: Patricia unavailable >5 consecutive days during Phase 1
- Contingency action: Pause project; identify alternative knowledge holders
R-03: Low Practitioner Adoption
- Category: Adoption
- Description: Representatives continue using Bible instead of system
- Probability: Low
- Impact: Medium (reduced value; not project failure)
- Risk Score: 3
- Mitigation: Parallel operation period; manager monitoring; gradual Bible retirement
- Contingency trigger: <70% system usage at Day 30
- Contingency action: Additional training; investigate barriers; manager intervention
R-04: System Performance Issues
- Category: Outcome
- Description: System workflow is slower or less reliable than projected
- Probability: Low
- Impact: Medium (reduced time savings)
- Risk Score: 2
- Mitigation: Performance testing before go-live; user acceptance testing
- Contingency trigger: System response time >5 seconds or frequent errors
- Contingency action: Technical optimization; defer full rollout
R-05: Policy Changes During Implementation
- Category: Implementation
- Description: New return policies added or existing policies changed mid-project
- Probability: Medium
- Impact: Low (scope increase, not failure)
- Risk Score: 2
- Mitigation: Freeze scope; document change process for post-launch
- Contingency trigger: Major policy change requested during Phases 1-2
- Contingency action: Assess impact; extend timeline if >1 week effort
Risk Summary:
| Risk | Score | Priority |
|---|---|---|
| R-01: Incomplete policy capture | 6 | High |
| R-02: Patricia unavailable | 6 | High |
| R-03: Low adoption | 3 | Monitor |
| R-04: System performance | 2 | Low |
| R-05: Policy changes | 2 | Low |
High-priority risks require active management during implementation.
The Assumption Stress Test
The stress test asks: "What if our worst assumption is significantly wrong?"
Methodology:
- Identify the assumption with highest (Impact × Uncertainty)
- Adjust that assumption by 50% in the unfavorable direction
- Recalculate total value
- Determine if the business case still holds
R-01 Stress Test:
Worst assumption: A-03 (Adoption rate)
- Base case: 90% adoption
- Stress case: 45% adoption (50% reduction)
- Base annual value: $99,916
- Stress annual value: $54,959 (halving adoption removes roughly 45% of projected value)
- Stress 3-year net value: $129,877
- Stress ROI: 371%
- Stress payback: 7.6 months
Conclusion: Even with adoption at half the projected rate, R-01 delivers positive ROI with payback under one year. The business case is robust.
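The downstream decision metrics can be recomputed directly from the stressed annual value. A sketch using the base and stress figures above (illustrative only):

```python
# Decision metrics under base and stressed assumptions (figures from above).
def case_metrics(annual_value, cost=35_000, horizon_years=3):
    net = annual_value * horizon_years - cost
    roi = net / cost
    payback_months = cost / annual_value * 12
    return net, roi, payback_months

for label, value in [("Base case (90% adoption)", 99_916),
                     ("Stress case (45% adoption)", 54_959)]:
    net, roi, payback = case_metrics(value)
    print(f"{label}: net ${net:,.0f}, ROI {roi:.0%}, payback {payback:.1f} months")
```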
Breakeven Analysis:
At what point does the business case fail?
- Investment: $35,000
- Required annual value for 1-year payback: $35,000
- Current projection: $99,916
The model can sustain a 65% reduction in benefits and still achieve one-year payback. This margin provides confidence even with multiple assumptions proving optimistic.
Risk/Assumption Documentation Template
ASSUMPTION INVENTORY
====================
[Assumption table with all categories]
HIGH-IMPACT ASSUMPTIONS
[List assumptions requiring close monitoring]
RISK INVENTORY
==============
[Risk table with all categories]
HIGH-PRIORITY RISKS
[List risks requiring active management]
STRESS TEST RESULTS
===================
Worst-case assumption: [ID and description]
Base case value: $[X]
Stress case value: $[Y]
Business case holds: [Yes/No]
Breakeven point: [Condition under which business case fails]
MONITORING PLAN
===============
| Item | Metric | Frequency | Owner | Escalation Trigger |
|------|--------|-----------|-------|-------------------|
| [Assumption/Risk] | [How measured] | [Weekly/Monthly] | [Name] | [When to escalate] |
Quality Checklist
Your risk and assumption documentation is ready when you can answer "yes" to each question:
- Have I documented every assumption that affects the value model?
- Is each assumption categorized (Volume/Adoption/Performance/Timing/Sustainability/Cost)?
- Does each assumption have a stated basis — not just a number?
- Have I assessed confidence level for each assumption?
- Have I identified what happens if each assumption is wrong?
- Have I documented validation plans for uncertain assumptions?
- Have I identified all significant risks across categories?
- Does each risk have probability, impact, and mitigation?
- Have I defined contingency triggers and actions for high-priority risks?
- Have I stress-tested the model against worst-case assumptions?
- Does the business case survive reasonable assumption variation?
- Have I created a monitoring plan for ongoing tracking?
Documentation that passes this checklist demonstrates professional rigor and enables post-implementation learning.
Proceed to test and validation methodology.
T — Test
Measuring ROI Model Quality and Outcomes
The ROI Model produces value projections and business cases. Measurement validates model quality before implementation and tracks accuracy after.
Validating the ROI Model
Before presenting business cases, validate the model itself.
Baseline Accuracy Checks
Compare baseline measurements against independent sources:
- Do time study results align with system timestamps?
- Does observed volume match system reports?
- Do error rate samples reflect quality audit findings?
Discrepancies aren't necessarily problems — they may reveal legitimate variation. But significant gaps require investigation.
Calculation Verification
Trace each output to its inputs:
- Can you show the math for every value calculation?
- Do intermediate calculations sum correctly to totals?
- Are labor rates applied consistently?
- Are time horizons and frequencies properly multiplied?
A peer should be able to verify any calculation within five minutes.
Assumption Stress Tests
Systematically vary key assumptions:
- What if volume is 30% lower?
- What if time savings are 40% less?
- What if adoption reaches only 60%?
Document the stress test results. The business case should hold under reasonable adverse scenarios.
Peer Review Process
Have someone unfamiliar with the analysis review before presentation:
- Ask them to identify the three most uncertain assumptions
- Have them attempt to verify key calculations
- Request their assessment of overall credibility
Peer review catches errors and blind spots. If a peer finds problems, fix them before leadership does.
Model Quality Metrics
These metrics assess the ROI model itself, before any implementation.
Assumption Transparency Score
Count: How many assumptions are explicitly documented with basis, confidence, and impact?
| Score | Interpretation |
|---|---|
| 90%+ | Professional grade — assumptions visible and supported |
| 70-89% | Acceptable — most assumptions documented |
| 50-69% | Weak — significant assumptions hidden |
| <50% | Unacceptable — model is advocacy, not analysis |
R-01 target: 100% of material assumptions documented (A-01 through A-06).
Sensitivity Range
Measure: How much does the output vary when inputs vary within reasonable bounds?
A narrow range (±10%) suggests either precise inputs or hidden assumptions. A very wide range (±50%) suggests high uncertainty that should be acknowledged.
For R-01: Base case $99,916; stress case $54,959 — a 45% range. This is substantial but the business case holds throughout the range.
Source Documentation Completeness
Count: What percentage of numbers have explicit sources?
| Category | Target |
|---|---|
| Baseline metrics | 100% sourced to measurement method |
| Cost estimates | 100% sourced to quotes or benchmarks |
| Performance projections | 100% sourced to evidence or assumption |
Numbers without sources are assertions, not analysis.
Reproducibility
Test: Could someone else recreate this analysis from the documentation?
Provide the model to a colleague with only the written documentation. Can they:
- Understand each calculation?
- Trace outputs to inputs?
- Identify the key assumptions?
- Reach the same conclusion?
If not, the documentation is insufficient.
Leading Indicators (Before Implementation)
These signals suggest the ROI model will prove accurate:
Stakeholder Questions Addressed
When presenting the business case:
- Finance asks about assumptions — and you have documented answers
- Operations asks about feasibility — and your implementation plan addresses their concerns
- Leadership asks about alternatives — and you've considered them
Questions are healthy. Inability to answer questions signals model weakness.
Finance Review Passed
The finance team reviews the model and finds:
- Calculations are correct
- Assumptions are reasonable
- Cost estimates are credible
- ROI methodology is sound
Finance approval doesn't guarantee success, but finance rejection usually signals real problems.
Peer Validation Completed
A peer reviewer confirms:
- Key calculations verified
- Assumptions are explicitly stated
- Sensitivity analysis is reasonable
- Business case is defensible
Skipping peer review invites preventable errors.
Assumption Confidence Levels
Review the assumption inventory:
- How many assumptions are rated "high confidence"?
- Are high-impact assumptions well-supported?
- Have low-confidence assumptions been stress-tested?
A model with many low-confidence, high-impact assumptions is fragile regardless of the headline ROI.
Lagging Indicators (After Implementation)
These measurements track whether projections proved accurate.
Projected vs. Actual Baseline
Compare pre-implementation baseline to what was actually measured:
| Metric | Projected | Actual | Variance |
|---|---|---|---|
| Consultation time | 14.2 min | [actual] | [%] |
| Affected volume | 57/day | [actual] | [%] |
| Error rate | 4.3% | [actual] | [%] |
Baseline variance reveals assessment accuracy. If the baseline was wrong, even perfect implementation won't deliver projected value.
Projected vs. Actual Improvement
Compare projected target state to post-implementation measurement:
| Metric | Projected | Actual | Variance |
|---|---|---|---|
| New consultation time | 3.0 min | [actual] | [%] |
| New error rate | 1.5% | [actual] | [%] |
| Adoption rate | 90% | [actual] | [%] |
Improvement variance reveals projection accuracy. Systematic overestimation suggests optimistic assumptions.
Projected vs. Actual Cost
Compare budgeted to actual implementation cost:
| Category | Budgeted | Actual | Variance |
|---|---|---|---|
| System integration | $25,000 | [actual] | [%] |
| Training/change management | $6,000 | [actual] | [%] |
| Contingency used | $4,000 | [actual] | [%] |
| Total | $35,000 | [actual] | [%] |
Cost variance above 20% suggests estimation methodology needs improvement.
Assumption Accuracy
Review each assumption against actual outcomes:
| Assumption | Projected | Actual | Assessment |
|---|---|---|---|
| A-01: Volume | 57/day | [actual] | Accurate / Optimistic / Conservative |
| A-02: Time savings | 11.2 min | [actual] | Accurate / Optimistic / Conservative |
| A-03: Adoption | 90% | [actual] | Accurate / Optimistic / Conservative |
| ... | | | |
Track which assumption types consistently miss in which direction. This enables better future estimation.
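One way to make that tracking systematic is to classify each assumption automatically. The sketch below is illustrative only: the ±10% tolerance band, the higher-is-better flags, and the sample actuals are assumptions chosen for the example, not values from the module.

```python
# Sketch: classify assumption accuracy against actual outcomes.
# The tolerance band and sample values are assumptions for illustration.

def classify(projected: float, actual: float, higher_is_better: bool,
             tolerance: float = 0.10) -> str:
    """Label an assumption Accurate, Optimistic, or Conservative."""
    variance = (actual - projected) / projected
    if abs(variance) <= tolerance:
        return "Accurate"
    # "Optimistic" means reality turned out worse than the projection
    worse = variance < 0 if higher_is_better else variance > 0
    return "Optimistic" if worse else "Conservative"

assumptions = [
    # (id, projected, actual, higher_is_better) -- placeholder actuals
    ("A-01 Volume per day", 57, 49, True),
    ("A-02 Time savings (min)", 11.2, 12.0, True),
    ("A-03 Adoption rate", 0.90, 0.78, True),
]

for aid, projected, actual, hib in assumptions:
    print(f"{aid}: {classify(projected, actual, hib)}")
```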
Red Flags
These signals suggest model quality problems or projection failures:
No One Challenges the Numbers
If reviewers accept projections without questions, one of three things is usually true:
- The analysis is obviously correct (rare)
- Reviewers aren't engaging critically (common)
- The presentation discouraged scrutiny (problem)
Welcome challenges. Absence of challenge should itself raise questions.
Assumptions Aren't Documented
Business cases presented without explicit assumption inventories are advocacy documents. Numbers appear as facts rather than projections. This invites post-implementation surprise.
Sensitivity Analysis Shows Fragile Conclusions
If the business case flips from "proceed" to "don't proceed" when assumptions vary by 10-15%, the conclusion is fragile (a quick way to check this is sketched after the list below). When that happens:
- Strengthen assumptions with better evidence
- Present the decision as genuinely uncertain
- Acknowledge that this is a marginal opportunity
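A simple way to test fragility is to perturb the key assumptions by 10-15% in every combination and see whether any scenario drops the ROI below the approval threshold. The sketch below is only an illustration: the benefit formula, the 100% hurdle rate, the loaded hourly rate, and the input values are assumed for the example, not taken from a real business case.

```python
# Sketch: test whether the "proceed" conclusion survives 10-15% assumption swings.
# The ROI formula, hurdle rate, and input values are illustrative assumptions.
from itertools import product

def annual_benefit(volume_per_day, minutes_saved, adoption,
                   workdays=250, loaded_rate_per_hour=95):
    # Hours recovered per year, valued at a fully loaded hourly rate
    return volume_per_day * workdays * adoption * (minutes_saved / 60) * loaded_rate_per_hour

def roi(benefit, cost):
    return (benefit - cost) / cost

base = {"volume_per_day": 57, "minutes_saved": 11.2, "adoption": 0.90}
cost = 35_000
hurdle = 1.0  # proceed only if first-year ROI exceeds 100% (illustrative threshold)

failures = []
for deltas in product([-0.15, 0.0, 0.15], repeat=3):
    scenario = {k: v * (1 + d) for (k, v), d in zip(base.items(), deltas)}
    r = roi(annual_benefit(**scenario), cost)
    if r < hurdle:
        failures.append((deltas, round(r, 2)))

print("Scenarios below the hurdle:", failures or "none")
print("Conclusion is", "fragile" if failures else "robust", "at ±15% variation")
```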
Business Case Required Persuasion
If approval required selling rather than showing, the evidence was insufficient. Leaders who feel persuaded rather than informed may later feel deceived if results disappoint.
Projected vs. Actual Variance Exceeds 30%
After implementation, variance beyond 30% in either direction suggests:
- Baseline measurement was flawed
- Assumptions were systematically biased
- Implementation differed from plan
- External conditions changed significantly
Investigate variance to improve future accuracy.
The Feedback Loop
Tracking projected versus actual creates organizational learning.
For Every Business Case:
- Record projected metrics at approval
- Measure actual metrics at defined intervals
- Calculate variance for each metric
- Document explanations for significant variance
- Feed insights into future estimation
Building Estimation Accuracy:
Over time, patterns emerge:
- "We consistently overestimate adoption by 15%"
- "Our time savings projections are usually conservative"
- "Cost estimates are accurate for internal work, 20% low for vendor work"
These patterns become adjustment factors for future models. An organization that tracks accuracy systematically estimates better than one that doesn't.
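As a rough illustration of how those patterns can become adjustment factors, the sketch below averages the actual-to-projected ratio for each estimate category and applies it to a new model's raw figures. All records and category names here are invented for the example.

```python
# Sketch: derive per-category adjustment factors from past projected-vs-actual
# records, then apply them to a new estimate. All figures are illustrative.
from collections import defaultdict
from statistics import mean

# (category, projected, actual) from closed-out business cases -- placeholder data
history = [
    ("adoption", 0.90, 0.76),
    ("adoption", 0.85, 0.73),
    ("time_savings_min", 10.0, 11.5),
    ("time_savings_min", 8.0, 8.6),
    ("vendor_cost", 20_000, 24_500),
]

ratios = defaultdict(list)
for category, projected, actual in history:
    ratios[category].append(actual / projected)

adjustments = {category: mean(r) for category, r in ratios.items()}

# Apply the historical adjustment to a new model's raw estimates
new_estimate = {"adoption": 0.90, "time_savings_min": 12.0, "vendor_cost": 30_000}
adjusted = {k: v * adjustments.get(k, 1.0) for k, v in new_estimate.items()}

for k in new_estimate:
    print(f"{k}: raw {new_estimate[k]}, adjusted {round(adjusted[k], 2)}")
```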
The Estimation Accuracy Dashboard:
| Business Case | Year | Projected ROI | Actual ROI | Variance |
|---|---|---|---|---|
| R-01 Returns Bible | 2024 | 756% | [actual] | [%] |
| R-02 Approval workflow | 2024 | 349% | [actual] | [%] |
| ... | | | | |
| Average variance | | | | [%] |
Organizations with strong estimation cultures achieve average variance under 20%.
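Computing the dashboard's bottom line is simple arithmetic. In the sketch below the actual ROI figures are placeholders, since real actuals exist only after implementation; the 20% target is the one stated above.

```python
# Sketch: compute the dashboard's average absolute ROI variance.
# Actual ROI figures are placeholders; the 20% target comes from the text above.
cases = {
    # business case: (projected ROI %, actual ROI %) -- actuals are illustrative
    "R-01 Returns Bible": (756, 640),
    "R-02 Approval workflow": (349, 395),
}

variances = [abs(actual - projected) / projected * 100
             for projected, actual in cases.values()]
avg = sum(variances) / len(variances)
print(f"Average absolute variance: {avg:.1f}% "
      f"({'within' if avg < 20 else 'above'} the 20% target)")
```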
Quality Checklist
Your ROI model meets quality standards when you can answer "yes" to each:
Pre-Implementation:
- All calculations verified by peer review
- All assumptions documented with basis and confidence
- Sensitivity analysis completed and documented
- Stress test shows business case holds under adverse scenarios
- Finance has reviewed and found methodology sound
- Reproducibility test passed
Post-Implementation:
- Projected vs. actual baseline documented
- Projected vs. actual improvement measured
- Projected vs. actual cost tracked
- Assumption accuracy assessed
- Variance explanations documented
- Insights fed back into estimation methodology
A model that passes these checks—and an organization that tracks outcomes—builds credibility for future proposals.
Proceed to the Share section to consolidate this learning.
S — Share
Consolidating Learning Through Reflection and Practice
This section reinforces Module 3 concepts through structured reflection, peer exercises, and discussion.
Reflection Prompts
Complete these individually before group discussion. Write responses of 3-5 sentences each.
1. Business Cases You've Seen
Think about a business case you've seen (whether you created it, reviewed it, or were affected by it).
- Was it advocacy or analysis?
- What assumptions were visible? Which were hidden?
- Did the projected results match actual outcomes?
- What would you do differently knowing what you know now?
2. Value Types Your Organization Prioritizes
Consider how your organization makes investment decisions.
- Which types of value resonate most? (Cost reduction, capacity gain, risk reduction, strategic enablement)
- Which types are dismissed or discounted?
- What language does your finance team use?
- How does this shape which proposals succeed?
3. Assumptions You've Seen Buried
Recall a project or decision where assumptions proved wrong.
- What was assumed that turned out to be false?
- Was the assumption visible before the outcome was known?
- Could scrutiny have identified the risk?
- What would rigorous assumption documentation have changed?
4. Good Ideas That Died for Lack of Proof
Think about an improvement idea — yours or someone else's — that never happened despite being valuable.
- Why didn't it get funded or supported?
- What evidence was missing?
- Could Module 3's methodology have helped?
- What would you do differently now?
5. Your Comfort with "Don't Proceed"
Honestly assess your own tendencies.
- Have you ever recommended against your own idea when the numbers didn't support it?
- How would you feel presenting a "don't proceed" business case to leadership?
- What pressures might push you toward optimistic assumptions?
- How can you build the discipline to follow the evidence?