Module 4

ORCHESTRATE — Designing Human-AI Collaboration

Creating systems where people lead and machines follow


Module 4A: ORCHESTRATE — Theory

R — Reveal

Case Study: The System Everyone Hated

The discharge planning initiative at Lakewood Regional Medical Center had everything going for it.

Carmen Vasquez, the chief nursing officer, had spent four months conducting a rigorous assessment of the discharge process. She had observed how nurses, case managers, and social workers actually coordinated patient transitions. She had documented the shadow systems—the whiteboard in the breakroom where charge nurses tracked pending discharges, the shared spreadsheet that case managers used because the electronic health record couldn't show cross-functional status, the sticky notes on computer monitors with direct phone numbers that bypassed the official communication channels.

Her Opportunity Portfolio had identified the central friction: discharge coordination required eight different people to synchronize around information that existed in six different systems. The average patient discharge took 4.3 hours from physician order to actual departure—time that patients spent waiting in beds needed for incoming admissions, that nurses spent making follow-up calls instead of providing care, that the hospital paid for in blocked capacity and delayed revenue.

The business case was compelling. Carmen had established baselines through direct observation. She had calculated value across all three lenses: $1.2 million annually in capacity recovery, 340 nursing hours per week returned to patient care, and elimination of the single point of failure represented by the legendary discharge coordinator, Maria Santos, who had been doing this job for twenty-two years and whose retirement loomed eighteen months away.

The executive team approved full funding. The project had visible sponsorship from the CMO and CFO. Implementation began with enthusiasm and adequate resources.

Eighteen months later, the discharge coordination system sat mostly unused. Nurses had developed workarounds to avoid it. Case managers logged the minimum data required and then reverted to their spreadsheet. The whiteboard in the breakroom had been officially removed—and unofficially replaced with a new one behind a supply closet door.

Average discharge time had increased to 5.1 hours.


What the Executives Saw

The system looked beautiful from the top.

The executive dashboard showed every patient's discharge status in real time. Color-coded indicators flagged delays. Automated alerts notified administrators when patients exceeded target discharge windows. Reports generated automatically, showing trends by unit, physician, and day of week. The compliance team could pull audit trails instantly. Quality metrics were visible at a glance.

During the vendor demonstration, the COO had watched the dashboard populate and said, "Finally, we'll be able to see what's actually happening." The CFO had noted the reporting capabilities and observed that manual report compilation—currently consuming two FTEs—could be eliminated.

The system delivered exactly what it promised: visibility. Leadership could see every discharge in progress, every task completed or pending, every bottleneck and delay.

The problem was what the system required to produce that visibility.


What the Practitioners Experienced

Nurse manager Sarah Chen remembered the moment she knew the system would fail.

It was 7:15 AM on the third day after go-live. She had just received shift handoff from the night charge nurse—a conversation that used to take eight minutes and covered fourteen patients with pending or possible discharges. Under the new system, that handoff was supposed to be unnecessary. The system would show everything.

Except it didn't. The system showed data. It didn't show context.

Patient in 412: the system showed "discharge order pending" with a yellow indicator. What it didn't show was that the patient's daughter was driving in from three hours away and the family had requested a 2 PM target, which the day shift had already coordinated informally with pharmacy and transport. The system flagged 412 as a delay risk. Sarah knew 412 was actually ahead of schedule.

Patient in 408: the system showed "discharge complete—awaiting transport." Green indicator. What it didn't show was that transport had been called forty minutes ago for a patient who was confused and combative, that the transport aide had returned the patient to the unit after he became agitated in the elevator, and that psych consult was now involved. The system showed a success; Sarah had a crisis.

Patient in 415: the system showed nothing at all. The discharge order had been entered by a covering physician who didn't know the patient, had been immediately questioned by the case manager, and was pending attending review. In the old system—the whiteboard—this would have been noted with a question mark. In the new system, it was invisible until the formal discharge pathway was initiated.

"The system shows what happened," Sarah said during the post-implementation review. "It doesn't show what's happening. And it definitely doesn't show what's about to happen."


The Burden of Visibility

The core problem revealed itself within the first week: every piece of information that appeared on the executive dashboard had to be entered by someone.

The previous workflow had evolved over years to minimize documentation during the discharge process itself. The whiteboard required a nurse to write a room number and a one-word status. The case manager spreadsheet auto-populated from the EHR and required perhaps two minutes of updates per patient. Maria Santos kept most of the coordination in her head, making calls and adjustments in real time without stopping to document each interaction.

The new system required comprehensive documentation at every step. Task completion had to be logged. Status changes had to be recorded. Delays had to be explained with reason codes selected from a dropdown menu that never quite matched reality. Every communication about a discharge was supposed to occur through the system so it would appear in the audit trail.

Nurses estimated they spent an additional 12-15 minutes per discharge on documentation—documentation that added no value to patient care, that existed purely to populate dashboards and reports.

"I'm not taking care of patients," one nurse said during a focus group. "I'm feeding the beast."


The Surveillance Problem

The deeper issue emerged more slowly.

The system tracked everything: who completed which task, when, how long it took. The data was intended for process improvement, but practitioners experienced it differently.

Charge nurses noticed that their delay explanations were being reviewed in weekly management meetings. A nurse who documented "awaiting family arrival" was questioned about why the family hadn't been given an earlier window. A case manager who logged "physician order delayed" was asked to explain which physician and why.

The system had been designed to produce accountability. Practitioners experienced it as surveillance.

"I used to make judgment calls all day," said Diane Adeyemi, a case manager with fifteen years of experience. "Now I'm afraid to make any call that I might have to defend later. So I don't make calls—I wait for someone else to make them and then document that I was waiting."

The result was exactly backward: a system designed to accelerate discharges had introduced decision paralysis. Practitioners who previously exercised judgment now deferred to avoid documentation of their reasoning.


The Workaround Economy

Within six weeks, the informal systems had reconstituted themselves.

The whiteboard returned—relocated, unofficial, and more valuable than ever because it captured what the official system couldn't. Nurses developed a parallel communication channel through the hospital's internal messaging system, using coded language to coordinate without creating documentable records. Case managers began calling each other directly rather than updating the system, then batch-entering data at the end of their shifts to satisfy compliance requirements.

Maria Santos, whose knowledge was supposed to be captured by the system, became more essential than ever. She was the only person who could translate between what the system showed and what was actually happening. Her retirement, now twelve months away, had become an organizational emergency.

The system's adoption metrics looked reasonable: 78% task completion rate, 82% status accuracy, average documentation compliance above threshold. But these numbers measured data entry, not value. Practitioners were feeding the system enough to avoid scrutiny while doing their real work elsewhere.

The shadow systems hadn't been eliminated. They had been driven underground.


The Moment of Clarity

The breakthrough came from an unexpected source.

Maria Santos cornered Carmen Vasquez in the hallway one Tuesday afternoon. Maria had been notably silent during the implementation—cooperative but not enthusiastic, compliant but not engaged. Carmen had attributed this to resistance to change.

"Can I show you something?" Maria asked.

She led Carmen to the breakroom and pulled open the supply closet door. There was the whiteboard, covered with room numbers, names, arrows, and a notation system that made sense only to people who had learned it through years of use.

"This is how we actually coordinate," Maria said. "This is what the system was supposed to replace."

Carmen looked at the whiteboard, then at Maria. "Why didn't you tell us this wouldn't work?"

"I did. During requirements gathering, I explained how we actually discharge patients. I explained the judgment calls, the family coordination, the physician variability, the transport logistics. I explained that most of what I do is anticipate problems before they become problems."

"And?"

"And they said the system would handle all of that. They said I was describing a workaround that shouldn't exist. They said the new system would give me 'structured workflows' so I wouldn't have to keep everything in my head."

Maria paused. "They weren't wrong. Keeping everything in my head isn't sustainable. But they didn't understand what 'everything' meant. They thought I was tracking tasks. I'm actually tracking relationships, timing, family dynamics, physician preferences, and a hundred variables that don't fit in dropdown menus."

Carmen stared at the whiteboard. "So the system—"

"The system was designed for you. The executives. It shows you what you want to see: status, metrics, compliance. It wasn't designed for us. We need to coordinate, not document. We need to communicate, not log. We need to anticipate, not report."

Maria pointed at the whiteboard. "This is ugly. It's a mess. It doesn't generate reports or dashboards. But it shows us what we need to know to do our jobs. The new system shows you what you need to know to review our jobs. Those aren't the same thing."


The Redesign

Over the following three months, Carmen led a fundamental redesign of the discharge coordination system. The vendor had delivered exactly what was specified; the specification had been wrong.

The redesign started with a different question: What do practitioners need to do their work better?

The answers reshaped everything:

Visibility became passive, not active. The system pulled data from existing documentation rather than requiring separate entry. Task completion was inferred from actions that were already being recorded—medication reconciliation, transport requests, equipment orders. Practitioners no longer fed the dashboard; the dashboard assembled itself.

Status became contextual, not categorical. Instead of rigid dropdown menus, the system allowed free-text notes visible only to the care team. Patient in 412 could show "family en route, 2 PM target confirmed"—context that mattered to coordinators but didn't need executive review.

Communication happened in the workflow, not about the workflow. The messaging system was integrated directly, allowing practitioners to coordinate without switching applications or creating separate documentation. The audit trail existed, but it captured natural communication rather than requiring structured data entry.

Exception handling replaced exception documenting. When a discharge fell outside normal parameters, the system offered decision support—suggesting contacts, surfacing similar past cases, prompting relevant questions—rather than demanding explanation codes.

The whiteboard logic was digitized. Maria worked with the development team to translate her mental model into a visual interface that showed relationships and timing, not just tasks and status. The result looked nothing like the original dashboard and everything like an electronic whiteboard.

Six months after redesign, average discharge time had dropped to 3.8 hours—better than the original target. Documentation burden was 40% lower than it had been under the failed implementation. The executive dashboard still existed, still showed status and metrics, but it was generated from work that was happening rather than work that was being documented.

Maria Santos retired on schedule. The knowledge that lived in her head had finally been captured—not in a database, but in a workflow that made sense to the people who used it.


The Lesson That Cost Eighteen Months

The discharge coordination system had failed not because of technology limitations or change resistance or inadequate training. It had failed because it was designed for the wrong audience.

The original system was designed to answer executive questions: Where are we in the discharge process? Who is responsible for delays? What does our performance look like?

The redesigned system was designed to answer practitioner questions: What do I need to do next? Who do I need to talk to? What's about to become a problem?

Both are legitimate questions. But the first set can only be answered if the second set is answered first. A system that makes practitioners' work harder will never produce the visibility executives want—or if it does, the visibility will be an illusion built on workarounds and batch data entry and checkbox compliance.

Carmen framed the lesson in a way that stuck with her team: "We designed a system to watch work happen. We should have designed a system to help work happen. The watching would have taken care of itself."

The technology hadn't failed. The design had failed. And the design had failed because it started with the wrong audience.

You design for the person doing the work. The person reviewing the work gets their view as a byproduct.

Get that order wrong, and no amount of training, change management, or compliance pressure will save you.


Module 4A: ORCHESTRATE — Theory

O — Observe

Core Principles

The Lakewood Regional case illustrates a principle that applies across every workflow design project: systems designed to watch work will never improve work. You design for the practitioner first. Everything else follows.

This module's anchor principle:

Design for the person doing the work, not the person reviewing the work.

This principle sounds obvious. It is consistently violated. The violation is rarely intentional—it emerges from the reasonable instinct to create visibility, ensure accountability, and measure progress. But these are observer needs, not practitioner needs. When observer needs drive design, practitioners experience burden, not benefit.

The Lakewood discharge system did exactly what it was designed to do: produce dashboards, generate reports, enable oversight. It failed because no one asked what the nurses and case managers needed to do their jobs better. The design served the wrong audience.


The Invisible Automation Principle

The best automation is invisible to the people it serves.

When practitioners notice a system, something has already gone wrong. They should notice that their work is easier, that information appears when needed, that errors are caught before they cascade. They should not notice screens to navigate, data to enter, workflows to follow.

The Visibility Test:

Ask practitioners: "What technology are you using?"

If they can name specific systems and describe their interactions with them, the automation is visible—and probably burdensome. If they describe their work in terms of tasks and outcomes rather than tools and interfaces, the automation has become infrastructure.

Consider the difference:

Visible automation: "I log into the discharge system, update the patient status, enter the reason code, notify the downstream team through the message center, and then check back in thirty minutes to see if they've acknowledged."

Invisible automation: "I update the chart, and everyone who needs to know gets notified. If something's going to be a problem, the system flags it before it becomes one."

The same underlying technology can produce either experience depending on design.

When Visibility Becomes Burden:

Carmen Vasquez's original discharge system added 12-15 minutes of documentation per patient. None of this documentation helped practitioners coordinate better—it existed to populate dashboards and audit trails. The information was valuable to administrators; the documentation burden fell on nurses.

Invisible automation would have captured the same information from actions already being taken: medication reconciliation, transport requests, equipment orders. The dashboard would exist, but practitioners wouldn't feed it—they would simply do their work, and the system would observe.
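
To make the invisibility concrete, here is a minimal sketch of status inference from events practitioners already generate. The event names, the PatientRecord structure, and the four-signal checklist are illustrative assumptions, not a model of any particular EHR or of Lakewood's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Event types practitioners already generate in the course of normal work
# (assumed names; not from any specific EHR).
DISCHARGE_SIGNALS = {
    "med_reconciliation_complete",
    "transport_requested",
    "equipment_order_closed",
    "patient_education_documented",
}

@dataclass
class PatientRecord:
    room: str
    events: list = field(default_factory=list)  # (event_type, timestamp) pairs

    def log(self, event_type: str) -> None:
        """Record an action the practitioner performed anyway."""
        self.events.append((event_type, datetime.now()))

def inferred_discharge_status(record: PatientRecord) -> str:
    """Derive dashboard status from existing actions; no separate data entry."""
    seen = {event_type for event_type, _ in record.events}
    completed = DISCHARGE_SIGNALS & seen
    if not completed:
        return "not started"
    if completed == DISCHARGE_SIGNALS:
        return "ready for discharge"
    return f"in progress ({len(completed)}/{len(DISCHARGE_SIGNALS)} steps observed)"

# The dashboard assembles itself from work that already happened.
room_412 = PatientRecord(room="412")
room_412.log("med_reconciliation_complete")
room_412.log("transport_requested")
print(inferred_discharge_status(room_412))  # in progress (2/4 steps observed)
```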

The Paradox of Invisible Value:

The invisibility principle creates a communication challenge. How do you demonstrate value from something no one notices?

The answer is in outcomes, not features. Practitioners don't notice the system; they notice that discharges are smoother, that information appears when needed, that problems get flagged before they escalate. The system's value is measured in the work it enables, not the technology it deploys.


Design for Adoption, Not Perfection

Elegant designs that no one uses aren't elegant.

The 80% solution that gets adopted beats the 100% solution that doesn't. This is not a compromise—it's a recognition that adoption is a design requirement, not a training outcome.

The Adoption Hierarchy:

  1. Useful — Does the design solve a real problem practitioners have?
  2. Usable — Can practitioners accomplish their goals without friction?
  3. Findable — Can practitioners discover what they need when they need it?
  4. Tolerable — Does the design avoid creating new burdens?
  5. Adoptable — Will practitioners actually use it in their real work?

Each level depends on the levels below. A perfectly useful design that creates intolerable burden will not be adopted. A usable design that doesn't solve a real problem will be abandoned.

Building for Real Humans:

Real humans take shortcuts. They skip optional fields. They batch-enter data at end of day. They communicate through back channels when official channels are slow. They develop workarounds for edge cases the system doesn't handle.

A design that treats this behavior as compliance failure will fail. A design that anticipates this behavior and works with it will succeed.

The Lakewood nurses developed workaround systems within six weeks. This wasn't resistance to change—it was adaptation to design failure. The workarounds represented requirements that the official system didn't meet. Reading workarounds as design feedback, rather than discipline problems, would have surfaced the issues months earlier.

The Perfection Trap:

Complex designs fail more often than simple ones. Every additional feature is a potential point of friction. Every edge case handled in the system is complexity practitioners must navigate.

The discipline is ruthless prioritization: Which features are essential? Which can wait? Which should never exist?

A system that handles 80% of cases smoothly and requires human intervention for 20% is better than a system that handles 95% of cases with added complexity for everyone. The cost of the 80% solution's exceptions is lower than the cost of the 95% solution's universal friction.


The Simplicity Imperative

Every added step must earn its place. Complexity is the enemy of adoption.

The "One More Field" Problem:

Systems accumulate friction through reasonable requests.

Someone needs a new data point. It's just one more field. Practitioners can fill it in while they're already in the form. The marginal burden is small.

Multiply this by years of operation, and you have forms with forty fields, workflows with eighteen steps, processes that take twenty minutes for what used to take five.

Each addition was justified. The aggregate is unbearable. And removing fields is harder than adding them—every field has a stakeholder who needs that data.

The simplicity imperative requires a different approach: fields must justify their existence against the friction they create, and that justification must be renewed regularly. "We've always captured that" is not justification. "Someone might need it" is not justification. "What specific decision does this enable, and is that decision worth the burden?" is the right question.

Complexity Compounds:

Complexity in one area creates complexity elsewhere.

A workflow with eighteen steps requires training materials. It requires exception handling for each step. It requires audit processes to verify compliance. It requires maintenance as business rules change. It requires support resources when practitioners get confused.

A workflow with six steps requires less of all of these. The gap widens over time as the complex system accumulates technical debt, workarounds, and institutional frustration.

Elegant Solutions Survive Contact with Reality:

Complex solutions break under pressure. When volume spikes, when staff is short, when exceptions multiply—complex workflows degrade first. Practitioners skip steps, batch work, take shortcuts. The system's design assumes conditions that don't hold under stress.

Simple workflows bend without breaking. Fewer steps means fewer opportunities for degradation. Clear logic means easier recovery when something goes wrong. Simplicity is a form of resilience.


Practitioner-Centered Design

Design for the person doing the work, not the person reviewing the work.

This principle bears restating because it is violated so frequently and so unconsciously. Executive stakeholders fund projects. Executive stakeholders approve designs. Executive stakeholders evaluate success. Their needs naturally shape decisions—unless the design process deliberately resists this gravity.

What Executives Want vs. What Practitioners Need:

Executives Want           | Practitioners Need
Status visibility         | Context for decisions
Performance metrics       | Tools to perform
Audit trails              | Smooth workflows
Compliance documentation  | Error prevention
Exception reports         | Exception handling
Trend analysis            | Current information

Both columns contain legitimate needs. The error is prioritizing the left column in design and expecting the right column to follow. It doesn't.

A system designed for executive needs requires practitioners to document their work for observation. A system designed for practitioner needs produces executive visibility as a byproduct of work already being done.

The Surveillance Trap:

When practitioners experience a system as monitoring rather than helping, behavior changes—and not in the ways designers intended.

At Lakewood, practitioners began avoiding documentation that might require justification. Decision-making slowed as staff deferred rather than risked being questioned. The audit trail became an accountability threat rather than a quality tool.

Surveillance produces defensive behavior: covering tracks, avoiding documentation, deferring decisions. These are rational responses to perceived threat. The system became an adversary rather than a tool.

Serving Both Audiences:

Practitioner-centered design doesn't ignore executive needs—it sequences them correctly.

First: What do practitioners need to do their work better? Design for that.

Second: What visibility do executives need? Derive it from practitioner actions without adding burden.

The Lakewood redesign followed this sequence. Practitioners got contextual status displays, integrated communication, and decision support. Executives got their dashboard—populated from practitioner actions rather than separate documentation.

Both audiences were served. The order of priority made the difference.


Help, Not Surveillance

Automation should feel like assistance, not monitoring.

The same functionality can feel like either, depending on design. The difference is in who the system serves and how practitioners experience its presence.

Assistance feels like:

  • Information appearing when needed
  • Problems being flagged before they escalate
  • Routine work being handled automatically
  • Context being assembled for complex decisions
  • Communication being routed to the right people

Surveillance feels like:

  • Data entry required for observation
  • Actions being tracked for review
  • Exceptions requiring justification
  • Performance being measured for comparison
  • Delays being documented for accountability

Note that assistance and surveillance can involve identical underlying data. The difference is in purpose and presentation. An alert that says "Patient 412 may need pharmacy follow-up" feels like help. An alert that says "Discharge delayed in your unit—please document reason" feels like surveillance.

Trust as Design Requirement:

Practitioners need to trust that the system is on their side. This trust is earned through design, not assertion.

Systems earn trust by:

  • Reducing burden consistently
  • Providing accurate information
  • Flagging real problems (not generating false alerts)
  • Supporting decisions rather than second-guessing them
  • Protecting practitioners from error rather than documenting errors for review

Systems lose trust by:

  • Adding work without clear benefit
  • Providing unreliable information
  • Generating alert fatigue through false positives
  • Creating accountability exposure
  • Being used for performance evaluation without consent

Trust, once lost, is difficult to rebuild. Practitioners who have experienced surveillance will interpret even helpful features as monitoring. Design must earn trust from the first interaction and maintain it consistently.


The Principle in Practice

The Lakewood case demonstrates these principles in action—first through violation, then through correction.

The original system violated every principle:

  • Invisible automation — The system was visible and intrusive, requiring 12-15 minutes of documentation per discharge
  • Design for adoption — The 78% compliance rate masked workaround systems that did the real work
  • Simplicity imperative — Reason codes, status updates, and message threading added complexity that served dashboards, not practitioners
  • Practitioner-centered design — The system answered executive questions while making practitioner work harder
  • Help, not surveillance — Staff experienced the system as monitoring, creating defensive documentation and decision paralysis

The redesigned system embodied each principle:

  • Invisible automation — Data captured from actions already being taken; practitioners didn't feed the system
  • Design for adoption — The whiteboard logic was digitized, honoring how practitioners actually worked
  • Simplicity imperative — Free-text context replaced structured reason codes; integration replaced separate documentation
  • Practitioner-centered design — The design started with practitioner needs; executive visibility derived from practitioner actions
  • Help, not surveillance — Decision support replaced exception reporting; the system flagged problems, not people

The technology was largely the same. The design philosophy was opposite. The outcomes were transformative.

Design for the person doing the work. The person reviewing the work gets their view as a byproduct.


Module 4A: ORCHESTRATE — Theory

O — Observe

Workflow Patterns for Human-AI Collaboration

Every workflow design problem has been solved before—usually multiple times, in different contexts, by practitioners who discovered what works through trial and error. These solutions cluster into recognizable patterns.

A workflow pattern is a reusable template for how humans and intelligent systems collaborate. Each pattern defines who decides, who acts, and how information flows between them. Selecting the right pattern is the first design decision; implementing it well is everything that follows.

This section introduces five foundational patterns. Most workflows are either a single pattern or a combination of two or three.


Pattern 1: Decision Support

The Logic: AI provides recommendation; human decides.

In decision support workflows, the system assembles information, analyzes options, and suggests action. The human reviews the recommendation, applies judgment, and makes the final call. The system augments human capability; it doesn't replace human authority.

When to Use:

  • Judgment calls where context matters
  • Exceptions that require human interpretation
  • Customer-facing decisions where accountability is personal
  • Situations where multiple valid options exist
  • High-stakes choices that warrant deliberation

Design Considerations:

The central challenge is presenting recommendations without creating compliance pressure. If practitioners feel they must justify deviations from system recommendations, the pattern becomes automation in disguise—practitioners rubber-stamp suggestions to avoid documentation burden.

Good decision support designs:

  • Present recommendations as one option, not the option
  • Show the reasoning behind recommendations so humans can evaluate
  • Make disagreement easy—one click, no explanation required
  • Track when humans override and why (optional), but don't make this mandatory
  • Learn from human decisions over time without penalizing deviation

Example Application:

A bank's credit decision workflow presents loan applications with a system recommendation: approve, decline, or refer for review. The system shows its reasoning: credit score in this range, debt-to-income ratio at this level, similar applications had this outcome rate.

The loan officer reviews, applies contextual knowledge (the applicant's employer just announced expansion; the debt is from medical emergency, now resolved), and decides. The system records the decision. Over time, patterns in human override contribute to model refinement—but the human decision is final and doesn't require justification.
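
A minimal sketch of how these considerations might translate into data structures, using invented fields and names: the recommendation carries its reasoning, and an override is a single action with an optional, never mandatory, note.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str            # e.g. "approve", "decline", "refer"
    reasoning: list[str]   # shown to the human so the logic can be evaluated

@dataclass
class Decision:
    action: str
    overrode_recommendation: bool
    note: str | None = None  # optional context; never required

def decide(rec: Recommendation, human_action: str, note: str | None = None) -> Decision:
    """The human's call is final; disagreement is one action, no justification required."""
    return Decision(
        action=human_action,
        overrode_recommendation=(human_action != rec.action),
        note=note,  # captured only if the practitioner volunteers it
    )

rec = Recommendation(
    action="decline",
    reasoning=["credit score in 640-660 band", "debt-to-income above 43%"],
)
# Loan officer applies context the model cannot see and approves anyway.
decision = decide(rec, "approve", note="debt from resolved medical emergency")
print(decision.overrode_recommendation)  # True, recorded as a calibration signal
```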


Pattern 2: Automation with Override

The Logic: AI handles routine cases; human handles exceptions.

In automation with override, the system processes the common cases autonomously while flagging exceptions for human attention. The human's role shifts from processing everything to handling what the system can't handle—the edge cases, the ambiguities, the situations that require judgment.

When to Use:

  • High-volume processes with predictable rules
  • Situations where most cases are routine but some require judgment
  • Workflows where speed matters for the routine and accuracy matters for exceptions
  • Processes where human time is better spent on complex cases

Design Considerations:

The critical design decision is the override mechanism. If overriding automation is difficult—buried in menus, requiring documentation, subject to review—practitioners will accept bad automated decisions rather than fight the system. The path of least resistance must be correction, not compliance.

Good automation with override designs:

  • Make override as easy as acceptance—one click, not a process
  • Show what the automation did and why before asking for approval
  • Allow batch override when patterns of error emerge
  • Don't penalize override frequency (high override is a calibration signal, not a performance problem)
  • Feed overrides back to improve automation logic

Example Application:

An insurance claims workflow auto-adjudicates routine claims—those within coverage limits, matching standard diagnosis codes, from verified providers. These are paid without human review.

Complex claims—those with unusual codes, high dollar amounts, or provider flags—route to adjusters. The adjuster sees what the system would have done and can accept, modify, or reject. Modification is simple: change the amount, add a note, process. No form, no justification, no workflow.

Over time, the system learns from adjuster modifications. A diagnosis code that consistently gets modified has its auto-adjudication rule adjusted. The automation improves; the adjuster's time focuses on genuinely complex cases.
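
One possible shape for the routing and override logic, sketched below with invented thresholds, codes, and field names; the point is that the exception path shows the proposal and accepts a one-action override.

```python
from dataclasses import dataclass

ROUTINE_LIMIT = 5_000                    # assumed dollar threshold for auto-adjudication
STANDARD_CODES = {"A01", "A02", "B10"}   # assumed set of routine diagnosis codes

@dataclass
class Claim:
    claim_id: str
    amount: float
    diagnosis_code: str
    provider_verified: bool

def auto_adjudicate(claim: Claim) -> str | None:
    """Return a proposed outcome for routine claims, or None to route to a human."""
    if (claim.amount <= ROUTINE_LIMIT
            and claim.diagnosis_code in STANDARD_CODES
            and claim.provider_verified):
        return "pay"
    return None

def adjuster_review(claim: Claim, proposed: str | None, adjuster_action: str) -> dict:
    """Override is as cheap as acceptance: one action, no form, no justification."""
    return {
        "claim_id": claim.claim_id,
        "final_action": adjuster_action,
        "was_override": proposed is not None and adjuster_action != proposed,
    }

routine = Claim("C-1001", amount=180.0, diagnosis_code="A01", provider_verified=True)
print(auto_adjudicate(routine))          # "pay", handled with no human review

unusual = Claim("C-1002", amount=12_400, diagnosis_code="Z99", provider_verified=True)
proposal = auto_adjudicate(unusual)      # None: routed to an adjuster, proposal shown
print(adjuster_review(unusual, proposal, "pay_partial"))
```

Patterns in the overrides (a code that is consistently modified, for example) would feed back into tuning the auto-adjudication rules, as described above.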


Pattern 3: Preparation

The Logic: AI assembles context; human acts on prepared information.

In preparation workflows, the system's role is research and synthesis—gathering information from multiple sources, organizing it for human consumption, and surfacing what's relevant to the task at hand. The human arrives at a decision point with context already assembled, reducing cognitive load and improving decision quality.

When to Use:

  • Research-heavy tasks where information is scattered
  • Complex decisions requiring multi-source synthesis
  • Situations where time spent gathering information crowds out time spent thinking
  • Workflows where practitioners are expert decision-makers but inefficient researchers

Design Considerations:

The preparation pattern requires understanding what practitioners need to know—and, equally important, what they don't need to know. Over-preparation is as problematic as under-preparation. A system that surfaces everything surfaces nothing.

Good preparation designs:

  • Present information in priority order, not chronological or alphabetical
  • Surface the unusual, not just the complete—flag what's different about this case
  • Allow drill-down for detail without requiring it
  • Adapt to practitioner preferences over time
  • Make the preparation editable—let practitioners add context the system missed

Example Application:

Before a physician sees a patient, the system prepares a clinical summary: relevant history, recent lab trends, current medications, outstanding orders, and flags for potential interactions or concerns. The physician reviews for thirty seconds rather than searching for five minutes.

Critically, the preparation isn't just a data dump. It's curated: highlighting what's changed since the last visit, what's abnormal in recent results, what's relevant to today's chief complaint. The physician can click into any area for detail but doesn't wade through information that doesn't matter for this encounter.
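
The curation step can be sketched in a few lines. The scoring weights and field names below are invented for illustration; the principle is that relevance and abnormality decide what surfaces, and completeness does not.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    label: str
    value: str
    is_abnormal: bool
    changed_since_last_visit: bool
    relevant_to_complaint: bool

def prepare_summary(findings: list[Finding], max_items: int = 5) -> list[str]:
    """Surface the unusual and the relevant first; everything else stays one click away."""
    def priority(f: Finding) -> int:
        score = 0
        if f.relevant_to_complaint:
            score += 4
        if f.is_abnormal:
            score += 2
        if f.changed_since_last_visit:
            score += 1
        return score

    ranked = sorted(findings, key=priority, reverse=True)
    surfaced = [f for f in ranked if priority(f) > 0][:max_items]
    return [f"{f.label}: {f.value}" for f in surfaced]

findings = [
    Finding("Potassium", "5.9 mmol/L (high)", True, True, True),
    Finding("Height", "172 cm", False, False, False),            # complete, but not surfaced
    Finding("Creatinine", "1.4 mg/dL (rising)", True, True, False),
]
print(prepare_summary(findings))
# ['Potassium: 5.9 mmol/L (high)', 'Creatinine: 1.4 mg/dL (rising)']
```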

R-01 Application:

The Returns Bible integration from earlier modules maps primarily to the Preparation pattern. The system's role is to prepare return policy information—surfacing the relevant policy, showing prior similar cases, flagging exceptions—so the customer service representative can make and execute the decision quickly. The system prepares; the human acts.


Pattern 4: Verification

The Logic: Human initiates; AI checks for errors or omissions.

In verification workflows, the human performs the work; the system reviews it. This reverses the typical automation relationship—instead of the system acting and the human reviewing, the human acts and the system reviews. The system catches what humans miss: errors, inconsistencies, compliance gaps, forgotten steps.

When to Use:

  • Quality control for human-performed work
  • Compliance checking before submission
  • Risk identification in complex processes
  • Error detection in high-stakes decisions

Design Considerations:

Verification workflows walk a line between help and surveillance. When done well, they feel like a safety net—a second set of eyes that catches errors before they become problems. When done poorly, they feel like second-guessing—automated criticism of human judgment.

Good verification designs:

  • Verify before submission, not after—catch errors while they're still correctable
  • Flag issues specifically and actionably—"Section 3 is missing required disclosure" not "errors detected"
  • Distinguish between errors (must fix) and warnings (should review)
  • Avoid alert fatigue—if everything is flagged, nothing is flagged
  • Don't create documentation of human error—the point is prevention, not blame

Example Application:

A legal document system checks contracts before sending for signature. It verifies that all required clauses are present, that dates are consistent, that party names match throughout, and that negotiated terms are within authorized limits.

The attorney reviews flagged issues, corrects genuine errors, and clears false positives. The system learns which flags the attorney consistently overrides and adjusts its sensitivity. Over time, the verification becomes more precise—catching real issues, ignoring non-issues.
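
A sketch of the error/warning distinction with invented check names: flags are specific, raised before submission, and used for correction rather than stored as evidence of human error.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    severity: str   # "error" (must fix) or "warning" (should review)
    location: str
    message: str    # specific and actionable, never just "errors detected"

def verify_contract(contract: dict) -> list[Issue]:
    """Pre-submission checks; flags exist to be corrected, not to document blame."""
    issues = []
    if "limitation_of_liability" not in contract.get("clauses", []):
        issues.append(Issue("error", "Section 3",
                            "Missing required limitation-of-liability clause"))
    effective, signed = contract.get("effective_date"), contract.get("signature_date")
    if effective and signed and effective < signed:
        issues.append(Issue("warning", "Header",
                            "Effective date precedes signature date; confirm intent"))
    return issues

draft = {
    "clauses": ["confidentiality", "indemnification"],
    "effective_date": "2024-03-01",   # ISO dates compare correctly as strings
    "signature_date": "2024-03-15",
}
for issue in verify_contract(draft):
    print(f"[{issue.severity.upper()}] {issue.location}: {issue.message}")
```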


Pattern 5: Learning

The Logic: Human teaches AI through feedback; AI improves over time.

In learning workflows, the system's performance improves through human input. This isn't a separate workflow category so much as a capability layer that applies to other patterns—any pattern can incorporate learning to adapt to local context and evolve with changing requirements.

When to Use:

  • Processes with tacit knowledge that's hard to specify upfront
  • Situations where rules evolve based on experience
  • Contexts where local variation matters
  • Workflows where initial automation can't capture all relevant factors

Design Considerations:

Learning requires feedback, and feedback requires effort. The design challenge is capturing meaningful input without adding burden. The worst outcome is a learning system that doesn't learn because practitioners skip the feedback mechanisms.

Good learning designs:

  • Capture feedback as a byproduct of natural workflow, not a separate step
  • Learn from what practitioners do, not just what they say
  • Distinguish between "the system was wrong" (training data) and "this case is unusual" (exception)
  • Show practitioners how their feedback improved the system—close the loop
  • Allow local adaptation without requiring central model retraining

Example Application:

A content moderation system flags potentially problematic posts for human review. Moderators review and decide: remove, keep, or escalate.

Each decision is training data. Posts that moderators consistently keep despite system flags suggest over-sensitivity. Posts that moderators consistently remove despite system approval suggest under-sensitivity. The model adapts, becoming more aligned with human judgment over time.

Critically, the adaptation is visible. Moderators see "You've helped improve accuracy by 12% this quarter"—feedback on their feedback that motivates continued engagement with the learning loop.
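
A sketch of feedback captured as a byproduct, with invented names: the training label is derived from the moderator's normal decision, not from a separate rating step.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Review:
    post_id: str
    system_flagged: bool   # did the model flag this post?
    human_action: str      # "remove", "keep", or "escalate": the moderator's normal work

def feedback_signal(review: Review) -> str | None:
    """Derive a training label from the decision itself; no extra step for the moderator."""
    if review.system_flagged and review.human_action == "keep":
        return "false_positive"    # model over-sensitive here
    if not review.system_flagged and review.human_action == "remove":
        return "false_negative"    # model under-sensitive here
    return None                    # agreement: nothing new to learn from this case

reviews = [
    Review("p1", system_flagged=True, human_action="keep"),
    Review("p2", system_flagged=True, human_action="remove"),
    Review("p3", system_flagged=False, human_action="remove"),
]
signals = Counter(s for r in reviews if (s := feedback_signal(r)) is not None)
print(signals)   # Counter({'false_positive': 1, 'false_negative': 1})
```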


Selecting the Right Pattern

Pattern selection starts with understanding the work:

If the work requires...                    | Consider...
Human judgment on system-prepared options  | Decision Support
Handling volume with exceptions            | Automation with Override
Research before action                     | Preparation
Quality assurance on human work            | Verification
Continuous improvement from experience     | Learning (added to any pattern)

Decision Framework:

  1. Who knows best? If human judgment is essential, use Decision Support. If system rules cover most cases, use Automation with Override.

  2. Where is the burden? If gathering information is the burden, use Preparation. If checking work is the burden, use Verification.

  3. What improves over time? If the process should adapt, add Learning to whatever pattern fits.

  4. What's the cost of errors? High-cost errors favor human-primary patterns (Decision Support, Verification). Low-cost, high-volume contexts favor automation-primary patterns (Automation with Override).
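
The four questions can be read as a rough triage heuristic. The sketch below is deliberately simplistic and uses invented parameter names; real selection involves judgment that code cannot capture.

```python
def suggest_pattern(*, judgment_essential: bool, burden: str,
                    should_adapt: bool, error_cost: str) -> list[str]:
    """Map the four framework questions to candidate patterns.

    burden: "gathering_information" or "checking_work"
    error_cost: "high" or "low"
    """
    patterns = []
    if burden == "gathering_information":
        patterns.append("Preparation")
    elif burden == "checking_work":
        patterns.append("Verification")

    if judgment_essential or error_cost == "high":
        patterns.append("Decision Support")
    else:
        patterns.append("Automation with Override")

    if should_adapt:
        patterns.append("Learning (layered on the above)")
    return patterns

# R-01-style scenario: research burden, human keeps the decision, rules evolve.
print(suggest_pattern(judgment_essential=True, burden="gathering_information",
                      should_adapt=True, error_cost="low"))
# ['Preparation', 'Decision Support', 'Learning (layered on the above)']
```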

The R-01 Pattern:

The Returns Bible integration (R-01) uses the Preparation pattern primarily:

  • System prepares: Surfaces relevant return policy, shows prior similar cases, flags exceptions
  • Human acts: Representative reviews preparation, makes decision, handles customer
  • Outcome: Reduced search time, consistent policy application, decision authority remains with human

A Learning component could be added: when representatives override system-surfaced policy (marking "this case was different because..."), those exceptions feed back to improve future preparation.


Combining Patterns

Complex workflows often combine patterns at different stages:

Sequential Combination:

Preparation → Decision Support → Verification

A loan underwriting workflow might: (1) prepare by assembling applicant information, (2) support the decision by recommending approval/denial with rationale, and (3) verify the final package before submission.

Parallel Combination:

A healthcare workflow might run Preparation (assembling patient context) and Verification (checking for drug interactions) simultaneously—both completing before the physician acts.

Nested Combination:

The main workflow follows one pattern; specific steps within it follow another. A customer service workflow might follow Decision Support overall, but each decision point involves Preparation of relevant information.
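
The sequential combination above can be pictured as a small pipeline. Stage names and fields below are invented; the point is that each stage enriches a shared case object before the next acts on it, with the human decision sitting between recommendation and verification.

```python
from dataclasses import dataclass, field

@dataclass
class LoanCase:
    applicant_id: str
    context: dict = field(default_factory=dict)      # filled by Preparation
    recommendation: str | None = None                # filled by Decision Support
    issues: list[str] = field(default_factory=list)  # filled by Verification

def prepare(case: LoanCase) -> LoanCase:
    case.context = {"credit_band": "660-680", "dti": 0.38}   # assembled from source systems
    return case

def recommend(case: LoanCase) -> LoanCase:
    case.recommendation = "approve" if case.context["dti"] < 0.43 else "refer"
    return case

def verify(case: LoanCase) -> LoanCase:
    if "credit_band" not in case.context:
        case.issues.append("Missing credit information in final package")
    return case

def underwrite(case: LoanCase) -> LoanCase:
    """Each stage enriches the case; the underwriter reviews the recommendation before submission."""
    for stage in (prepare, recommend, verify):
        case = stage(case)
    return case

print(underwrite(LoanCase("A-42")).recommendation)   # approve
```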


Pattern Anti-Patterns

Each pattern has common misapplications:

Decision Support misused as rubber-stamp automation:

When deviation requires justification, decision support becomes compliance pressure. The human's "choice" is illusory.

Automation with Override misused as exception documentation:

When overrides require forms and explanations, practitioners accept bad automation rather than fight the system. Error correction becomes burden.

Preparation misused as information overload:

When preparation surfaces everything, nothing is surfaced. Practitioners drown in data rather than acting on insight.

Verification misused as surveillance:

When verification documents human error for review rather than catching error for correction, it becomes a threat rather than a tool.

Learning misused as training burden:

When learning requires explicit feedback on every transaction, it adds friction without corresponding improvement.


Selecting patterns is the first design decision. The second is implementing them without falling into these traps. The following sections address design failures and implementation methodology.


Module 4A: ORCHESTRATE — Theory

O — Observe

Design Failures: How Workflow Designs Go Wrong

Good intentions produce bad workflows with remarkable consistency. The failures follow patterns—recognizable shapes that repeat across industries, organizations, and technology generations. Learning to see these patterns is the first step toward avoiding them.

This section catalogs seven common failure modes. Each one seemed reasonable to someone at the time. Each one produces predictable dysfunction.


1. The Executive Dashboard Trap

The Pattern:

Design begins with a question: "What do we want to see on the dashboard?"

The answer shapes everything that follows. Workflows are designed to produce data points. Processes are structured around metrics. Features are added to enable reporting.

The dashboard looks beautiful. It shows status, trends, exceptions, performance. Executives can finally see what's happening.

What they can't see: the burden created to produce that visibility.

How It Manifests:

At Lakewood Regional, the discharge coordination system required practitioners to document status changes, log communications, and enter reason codes—all to populate a dashboard that executives reviewed weekly. The 12-15 minutes per discharge wasn't incidental; it was the cost of visibility.

The dashboard showed what executives wanted: discharge status by unit, delays by category, performance trends by shift. It couldn't show the workarounds that practitioners developed to minimize documentation burden—the parallel whiteboard, the back-channel communications, the batch data entry at shift end.

The Tell:

You're in the executive dashboard trap when:

  • Design discussions focus on "what do we want to see" before "what do practitioners need"
  • Features are justified by reporting value rather than workflow improvement
  • Data entry exists primarily to create records, not to support decisions
  • Practitioners spend significant time documenting work rather than doing work

The Escape:

Ask a different first question: "What data would practitioners capture naturally if we removed all reporting requirements?"

Design for that. Then derive executive visibility from practitioner actions without adding burden. The dashboard becomes a view into work, not a destination that work must reach.


2. The Compliance Theater Pattern

The Pattern:

Workflows designed to prove work was done rather than to help do work.

The system accumulates checkboxes, approvals, attestations, and documentation steps—not because they improve outcomes, but because they create evidence. If something goes wrong, the organization can demonstrate that process was followed.

Compliance theater optimizes for defensibility rather than effectiveness.

How It Manifests:

A pharmaceutical company's quality system requires 47 signatures to release a batch of medication. Each signature attests that a step was completed correctly. In theory, this creates accountability. In practice, signers are attesting to work they didn't observe, in areas they don't understand, at a scale that makes verification impossible.

The signatures don't prevent errors; they distribute blame. When something goes wrong, the investigation follows the signature chain looking for who failed to catch the problem. The actual root cause—process design, equipment limitation, training gap—is obscured by focus on documentation.

The Tell:

You're in compliance theater when:

  • Documentation steps outnumber work steps
  • Practitioners describe processes in terms of what to sign, not what to do
  • The same information is documented in multiple places "for the record"
  • Exception handling requires more documentation than routine processing
  • Audit preparation is a major operational burden

The Escape:

Distinguish between compliance requirements and compliance assumptions. What does regulation actually require? Often less than organizations assume. Regulatory frameworks typically require that controls exist and work—not that every transaction be documented from every angle.

Build compliance into workflow design rather than on top of it. A well-designed process creates compliance evidence as a byproduct of doing the work, not as a separate documentation layer.


3. The Exception Obsession

The Pattern:

Designing the entire workflow around edge cases.

Someone raises a scenario: "What if the customer wants to return an item they bought three years ago?" The workflow is modified to handle it. Another scenario: "What if the approval authority is on vacation?" More modification. Repeat until the 10% of exceptions drive the experience for the 90% of routine cases.

How It Manifests:

A procurement system was designed to handle complex, multi-department purchases with competing budget authorities. Every purchase—including $50 office supplies—flows through the same approval matrix, stakeholder notification, and documentation requirements.

The designers were solving real problems. Large purchases genuinely needed cross-functional coordination. But by applying the same solution to all purchases, they transformed routine transactions into bureaucratic exercises. Employees began hoarding office supplies to avoid the procurement system, or using personal cards and expensing later—workarounds that created different problems.

The Tell:

You're in exception obsession when:

  • Simple tasks require multiple steps "in case" of complexity
  • Practitioners ask "why do I need to do this?" and the answer is an edge case they've never encountered
  • The same workflow handles radically different transaction types
  • Process documentation is longer than anyone reads because it covers every possibility

The Escape:

Design two paths: a fast path for the 90% and an exception path for the 10%.

The fast path should be ruthlessly simple—minimum steps, minimum fields, minimum documentation. Exceptions route to a different flow with appropriate complexity.

The discipline is resisting the urge to merge paths "for consistency." Consistency that makes routine work harder isn't a virtue.


4. The "They'll Get Used to It" Fallacy

The Pattern:

Assuming training solves design problems.

The workflow is clunky, the interface is confusing, the steps don't match how work actually happens—but practitioners will adapt. They'll learn the system. Initial complaints will fade. Training investment will smooth the transition.

Sometimes this is true. More often, practitioners adapt by building workarounds that circumvent the design, creating parallel systems that eventually become the real workflow.

How It Manifests:

At Lakewood, the discharge system's complexity was dismissed as a training problem. Nurses would learn the workflow. Case managers would internalize the status codes. Resistance was change management, not design feedback.

Six months later, adoption metrics showed 78% compliance—respectable by most standards. But compliance meant data entry, not value creation. Practitioners entered minimum required information, then coordinated through their whiteboard and back-channels. They had "gotten used to" the system by reducing their interaction with it to the minimum necessary to avoid scrutiny.

The Tell:

You're in the fallacy when:

  • Launch plans allocate more time to training than to design iteration
  • Post-launch feedback is categorized as "needs more training"
  • Adoption metrics measure usage rather than value
  • "Power users" are defined by ability to navigate complexity rather than by outcomes achieved
  • Workarounds emerge within weeks of launch and persist indefinitely

The Escape:

Treat workarounds as design feedback, not discipline problems.

If practitioners find ways around the system, the system is failing them. The question isn't "how do we enforce compliance" but "what is the workaround telling us about unmet needs?"

Design should iterate until the official path is easier than the workaround. If you can't achieve that, the design is wrong—not the practitioners.


5. The Feature Accumulation Problem

The Pattern:

Workflows gain complexity over time through accumulation of reasonable requests.

No single feature breaks the system. Each addition is justified by a real need. But accumulated friction compounds until the workflow is significantly harder than when it started—and no one can point to the moment it happened.

How It Manifests:

A customer onboarding workflow launched with seven fields and a 3-minute completion time. Over two years:

  • Legal added a terms-of-service acknowledgment
  • Marketing added opt-in checkboxes for three communication channels
  • Compliance added identity verification questions
  • Product added feature preference selections for personalization
  • Support added emergency contact fields
  • Analytics added source tracking parameters

Each addition was approved independently. Each served a legitimate purpose. The workflow now has 34 fields and takes 12 minutes. Abandonment rates have tripled. No one owns the aggregate experience.

The Tell:

You're accumulating features when:

  • No one can explain when a field was added or why
  • Field removal requires multi-stakeholder negotiation
  • "Required" fields include data that's never used
  • Completion rates have degraded gradually without clear cause
  • New-hire onboarding includes learning workarounds for unnecessary steps

The Escape:

Implement a friction budget: every workflow has a complexity allocation. Adding a new step requires removing an existing one, or making a business case for budget expansion.

Conduct regular field audits: for each captured data point, identify who uses it, for what decision, and what happens if it's not available. Fields that can't answer these questions are candidates for removal.

Assign workflow owners responsible for aggregate experience, not just individual features.
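
A sketch of what a friction budget and field audit might look like, with invented fields and an assumed three-minute budget; the value is the discipline it encodes, not the code itself.

```python
from dataclasses import dataclass

FRICTION_BUDGET_SECONDS = 180   # assumed budget: the whole form may cost three minutes

@dataclass
class FormField:
    name: str
    seconds_to_complete: int
    decision_enabled: str | None   # what specific decision uses this data?
    last_used_by: str | None       # who actually consumes it?

def audit(fields: list[FormField]) -> None:
    total = sum(f.seconds_to_complete for f in fields)
    print(f"Total friction: {total}s (budget {FRICTION_BUDGET_SECONDS}s)")
    for f in fields:
        if f.decision_enabled is None or f.last_used_by is None:
            print(f"  Removal candidate: '{f.name}' (no decision or consumer identified)")
    if total > FRICTION_BUDGET_SECONDS:
        print("  Over budget: adding a field now requires removing one.")

audit([
    FormField("email", 10, "account creation", "onboarding team"),
    FormField("fax_number", 15, None, None),              # nobody can say who uses it
    FormField("feature_preferences", 90, "personalization", None),
    FormField("identity_check", 120, "KYC compliance", "compliance team"),
])
```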


6. The Automation Island

The Pattern:

Automating one step without considering the workflow it sits within.

The automated step works perfectly. It's faster, more accurate, more consistent. But it creates new handoff friction with adjacent steps, new format requirements for upstream processes, new interpretation challenges for downstream consumers.

The island of automation is surrounded by seas of new manual work.

How It Manifests:

A company automated invoice processing with impressive results: invoices were scanned, data was extracted, and entries were created in the accounting system in minutes rather than days.

But the automation was an island:

  • Upstream, vendors had to submit invoices in specific formats, creating friction that offset buyer efficiency gains
  • Downstream, extracted data required validation against purchase orders, which were still managed manually
  • Laterally, the automated entries didn't match the format expected by the month-end reconciliation process, requiring manual translation

The invoice processing step was faster. The end-to-end invoice lifecycle was barely improved because time was redistributed rather than eliminated.

How It Manifests at Lakewood:

The original discharge system automated status tracking beautifully—every status change was logged, timestamped, and attributed. But the automation was an island. Upstream, practitioners had to enter data the system couldn't capture itself. Downstream, the status data didn't integrate with transport scheduling, pharmacy dispensing, or patient education—the adjacent processes that actually required coordination.

The Tell:

You're building automation islands when:

  • Automation metrics show improvement, but end-to-end metrics don't
  • Manual steps appear immediately before and after automated steps
  • Format conversion or data translation is required at integration points
  • Different parts of the workflow use different systems that don't communicate
  • "Handoff" appears frequently in process descriptions

The Escape:

Map the end-to-end workflow before automating any step. Identify integration points. Design automation that receives inputs naturally from upstream and produces outputs usable downstream without conversion.

Sometimes the right answer is not to automate a step in isolation but to wait until adjacent steps can be addressed together.


7. Ignoring the Informal System

The Pattern:

Designing without understanding existing workarounds.

Every organization has shadow systems—the spreadsheets, the sticky notes, the tribal knowledge, the back-channel communications that make official systems tolerable. These informal systems exist because formal systems don't meet practitioner needs.

Ignoring them means ignoring requirements. Destroying them means destroying functionality.

How It Manifests:

At Lakewood, Maria Santos had spent twenty-two years building an informal coordination system: relationships with physicians, pattern recognition for discharge complications, shortcuts for common scenarios, workarounds for system limitations. This knowledge lived in her head and expressed itself through the whiteboard, the phone calls, the hallway conversations.

The new system was designed to replace this informal infrastructure with formal process. It succeeded in destroying the whiteboard. It failed to capture what the whiteboard represented: contextual, relational, adaptive coordination that couldn't be reduced to status codes and reason menus.

The Returns Bible as Informal System:

The Returns Bible from earlier modules is itself an informal system—a workaround that developed because formal systems didn't provide needed information. A design that replaces the Returns Bible without understanding why it emerged will repeat the dysfunction that created it.

Good design honors what practitioners have built. The informal system represents accumulated learning about what the work actually requires. Ignoring it discards organizational knowledge; honoring it accelerates design.

The Tell:

You're ignoring informal systems when:

  • Requirements gathering focuses on official process documentation
  • Key practitioners haven't been observed doing actual work
  • Shadow systems are described as "compliance problems" rather than requirements
  • Design assumes information lives where official records say it should
  • Launch plans include "retiring" unofficial tools without replacing their function

The Escape:

Map informal systems with the same rigor as formal ones. What spreadsheets exist? What tribal knowledge is essential? What workarounds have become standard practice?

Then design to absorb their function, not just replace their form. The new system should be easier than the workaround. If it's not, the workaround will persist—or be driven underground.


Recognizing Patterns in Your Own Design

These failure modes are easier to recognize in others' work than in your own. A few diagnostic questions:

For Executive Dashboard Trap:

  • Who is the primary beneficiary of this workflow? The person doing the work or the person reviewing it?
  • What percentage of steps exist to create visibility versus to accomplish the task?

For Compliance Theater:

  • What would happen if we removed half the documentation? What real risk would emerge?
  • Could an auditor distinguish between documented compliance and actual compliance?

For Exception Obsession:

  • What percentage of transactions use the full workflow complexity?
  • What would the workflow look like if we designed only for the common case?

For "They'll Get Used to It":

  • Are we solving resistance with training or iteration?
  • What workarounds have emerged, and what do they tell us?

For Feature Accumulation:

  • When was the last time we removed a step or field?
  • Who owns the aggregate user experience?

For Automation Island:

  • What manual work exists immediately upstream and downstream?
  • Do end-to-end metrics improve, or just step metrics?

For Ignoring Informal Systems:

  • What shadow systems exist, and what function do they serve?
  • Have we observed work, or only interviewed about it?

These questions don't prevent failure modes—they surface them early enough to correct course.


Module 4A: ORCHESTRATE — Theory

O — Observe

Technology Agnosticism: Why This Course Doesn't Teach Platforms

This course does not teach specific AI platforms, automation tools, or software systems. This is deliberate.

Technology changes; principles don't. A workflow designed well can be implemented in multiple tools. A workflow designed poorly will fail regardless of tool sophistication. The value is in the design, not the implementation.


The Approach Matters More Than the Platform

Consider two organizations implementing the same capability: automated document processing.

Organization A selected a leading AI platform based on vendor demonstrations. Implementation followed the platform's recommended workflow. Training covered the platform's features. Success was measured in platform adoption metrics.

Organization B started differently. They mapped how documents actually flowed through their organization—who touched them, what decisions were made, what information was extracted, where friction existed. They designed a future-state workflow on paper: what happens at each step, who decides, what information flows where. Only then did they evaluate platforms against their design.

Organization A's platform worked well. Documents were processed. AI capabilities were impressive. But the workflow replicated existing dysfunction with better technology. The fundamental design—who decides, when, with what information—was never examined.

Organization B's implementation was messier. Their design requirements didn't perfectly match any platform. They had to configure, customize, and in some places compromise. But the end result addressed their actual workflow needs, not their platform's preferred workflow.

Two years later, Organization A was considering a platform switch—a major undertaking with significant cost and disruption. Organization B had already migrated to a different platform with minimal friction: their design documentation specified what they needed, and the new platform met those specifications.

The lesson: Platform selection is a downstream decision. Design comes first.


The Tool Selection Trap

The most common failure pattern in workflow automation starts with a question: "Which AI should we use?"

This question seems practical. Budget decisions require vendor selection. Timelines require technology commitments. Stakeholders want to see demos.

But the question puts technology before design. And when technology comes first, the design follows the tool's assumptions rather than the organization's needs.

Vendor-Driven Design:

Vendors demonstrate capabilities. Impressive capabilities. The AI can read documents, extract data, make recommendations, automate decisions. Demo scenarios show transformation.

But demo scenarios are selected to highlight platform strengths. Real workflows have different shapes—edge cases the platform handles awkwardly, integration requirements that don't fit the demo architecture, practitioner needs that aren't part of the platform's value proposition.

When design follows vendor demonstration, the workflow is shaped by platform capabilities rather than organizational requirements. The platform works; the workflow doesn't.

The RFP That Should Have Been a Design Session:

Organizations issue RFPs specifying technology requirements: AI classification accuracy above X%, processing speed of Y documents per hour, integration with Z systems. Vendors respond. Selection committees compare.

The problem: these requirements describe technology capabilities, not workflow outcomes. An AI with 98% classification accuracy still needs a workflow design for the 2% it misses. Processing speed means nothing if the workflow creates downstream bottlenecks. System integration is an implementation detail, not a design specification.

The RFP process assumes the organization knows what it needs at a technology level. Usually, the organization knows what it needs at a workflow level—and needs help translating that into technology requirements.

The solution: Design the workflow first. Document what happens at each step, who decides, what information is needed, what outcomes matter. Then translate that design into technology requirements. Then evaluate vendors against those requirements.


Design First, Then Select

The workflow blueprint is tool-agnostic.

A blueprint specifies:

  • What happens at each step
  • Who (human or system) performs each action
  • What information flows between steps
  • What decisions are made and by whom
  • What exceptions exist and how they're handled
  • What outcomes indicate success

A blueprint does not specify:

  • Which platform executes the workflow
  • What API calls are made
  • Which data model stores information
  • What user interface presents options

This separation is valuable because:

Designs survive technology changes. The Lakewood discharge workflow will still need coordination, status visibility, and exception handling regardless of what platform implements it. A design document focused on these needs remains valid through technology migrations.

Evaluation becomes objective. With a design in hand, platform evaluation is straightforward: can this tool implement this design? What compromises are required? Which tool requires the fewest compromises? These are answerable questions.

Organizational knowledge is preserved. The design represents understanding of how work should happen. This understanding belongs to the organization, not to a vendor relationship. When platforms change, the design persists.

What the Blueprint Must Specify:

  • Workflow structure: Steps, sequences, branches, exceptions
  • Decision points: Who decides, with what information, under what criteria
  • Information requirements: What data is needed at each step, from what sources
  • Human-AI collaboration pattern: Which pattern applies (Decision Support, Automation with Override, Preparation, Verification, Learning)
  • Success criteria: What outcomes indicate the workflow is working
  • Adoption requirements: What must be true for practitioners to use the workflow

What the Blueprint Leaves Open:

  • Platform selection
  • Technical architecture
  • Implementation sequence
  • Vendor relationships
  • Specific feature usage
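
For teams that keep the blueprint in a structured, version-controlled form, the split between what is specified and what is left open can be made literal. The sketch below is illustrative only—the field names and R-01 values are hypothetical, not a prescribed schema—but notice what is deliberately absent: platform, APIs, data model, and UI.

```python
# Illustrative, tool-agnostic blueprint record. Field names and values are
# hypothetical; adapt them to your own blueprint template.
blueprint = {
    "workflow": "R-01: Returns policy preparation",
    "pattern": "Preparation",
    "steps": [
        {"name": "Gather return information", "actor": "representative",
         "information": ["order number", "item", "return reason"]},
        {"name": "Review surfaced policy", "actor": "representative",
         "information": ["applicable policies", "similar prior cases"]},
        {"name": "Decide and communicate", "actor": "representative",
         "decision": "representative decides; unusual cases escalate"},
    ],
    "success_criteria": [
        "policy information visible without searching",
        "lookup and interpretation time reduced versus baseline",
    ],
    "adoption_requirements": [
        "appears inside the existing interface, no new screens",
        "no additional data entry versus current state",
    ],
    # Deliberately not specified: platform, API calls, data model, UI layout.
}
```

Because nothing in the record names a product, the same document can be handed to vendors during evaluation or carried across a migration.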

Build vs. Buy vs. Configure

With a design in hand, implementation options clarify.

Configure:

Most workflows can be implemented by configuring existing systems. ERP workflows, CRM automation, document management rules—these are configuration exercises within tools the organization already owns.

Configuration is appropriate when:

  • Existing platforms support the required workflow pattern
  • Integration requirements align with platform capabilities
  • The workflow doesn't require AI capabilities beyond platform offerings
  • Speed of implementation matters more than custom optimization

Buy:

Specialized tools exist for many workflow categories. If the design reveals requirements that existing platforms can't meet, purchasing a purpose-built tool may be appropriate.

Buying is appropriate when:

  • The workflow is common enough that mature solutions exist
  • Configuration of existing platforms would require extensive customization
  • Vendor maintenance is preferable to internal development
  • The workflow category is outside organizational core competency

Build:

Custom development is necessary when no existing tool meets design requirements and the workflow is central enough to justify investment.

Building is appropriate when:

  • The workflow represents competitive differentiation
  • Integration requirements are complex and organization-specific
  • The design reveals requirements that no existing platform addresses
  • Long-term flexibility is more valuable than speed of implementation

Decision Framework:

Question | Configuration | Purchase | Build
Does existing platform support the pattern? | Yes → Configure | No → Consider purchase or build | N/A
Do mature solutions exist? | N/A | Yes → Evaluate purchase | No → Consider build
Is this core competency? | N/A | No → Purchase | Yes → Consider build
Does speed matter most? | Yes → Configure | Sometimes → Purchase | No → Build may be okay

The framework guides initial direction, not final decision. Detailed evaluation follows from the design specification.
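
For readers who find logic easier to check in code, the same first pass can be expressed as a small helper. This is a sketch of the table above, not a substitute for detailed evaluation; the function name and yes/no inputs are illustrative.

```python
def initial_direction(platform_supports_pattern: bool,
                      mature_solutions_exist: bool,
                      is_core_competency: bool,
                      speed_matters_most: bool) -> str:
    """First-pass direction only; detailed evaluation follows from the design."""
    if platform_supports_pattern:
        return "configure"   # existing platform supports the required pattern
    if is_core_competency and not mature_solutions_exist:
        return "build"       # differentiating work with no mature solution
    if mature_solutions_exist and (speed_matters_most or not is_core_competency):
        return "purchase"    # mature market, and speed or non-core work favors buying
    return "build"           # nothing fits off the shelf
```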


Future-Proofing Through Abstraction

Designs that depend on specific platform features are fragile. Designs that specify needs abstractly survive platform changes.

Platform-Dependent Design:

"The system uses Vendor X's AI classification API to route documents, storing results in Vendor Y's database with notifications through Vendor Z's messaging system."

This design is bound to three vendors. Changing any one requires rework. The design is a technical specification masquerading as workflow documentation.

Platform-Independent Design:

"Documents are classified by category and routed to appropriate handlers. Classification decisions are stored for audit. Handlers are notified when documents require attention."

This design could be implemented with any capable platform. Vendors can be evaluated against these requirements. Changing platforms means re-implementing the same design, not redesigning the workflow.
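
The same discipline can be carried into implementation by coding against an abstract capability rather than a vendor SDK. A minimal sketch, assuming nothing about any particular platform (the names below are invented for illustration):

```python
from typing import Protocol


class DocumentWorkflow(Protocol):
    """The capabilities the workflow needs, stated independently of any vendor."""

    def classify(self, document: bytes) -> str:
        """Return a category label used for routing."""

    def record_decision(self, document_id: str, category: str) -> None:
        """Persist the classification decision for audit."""

    def notify_handler(self, document_id: str, category: str) -> None:
        """Alert whoever handles this category that attention is required."""


def route_document(document_id: str, document: bytes, workflow: DocumentWorkflow) -> str:
    """Platform-independent routing: classify, record, notify."""
    category = workflow.classify(document)
    workflow.record_decision(document_id, category)
    workflow.notify_handler(document_id, category)
    return category
```

Switching vendors then means writing a new adapter that satisfies DocumentWorkflow; the routing logic—and the design it encodes—stays put.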

What to Document for Migration:

  • Workflow logic: decision rules, routing criteria, exception handling
  • Information requirements: what data is needed, in what format, from what sources
  • Integration points: where the workflow connects to other systems (abstractly)
  • Performance requirements: speed, volume, accuracy thresholds
  • Success metrics: what outcomes indicate the workflow works

With this documentation, an organization can:

  • Evaluate new platforms against documented requirements
  • Implement the same design in different technology
  • Preserve organizational learning through technology transitions
  • Avoid vendor lock-in at the design level

The R-01 Example

The Returns Bible integration (R-01) from earlier modules illustrates technology-agnostic design.

What R-01 Requires Functionally:

  1. When a customer service representative handles a return, relevant policy information should be surfaced automatically
  2. The system should identify the return type, customer history, and applicable policies
  3. Representatives should see recommended actions without searching
  4. Exceptions should be flagged for human judgment
  5. Decisions should be captured for learning and audit
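
Because these five requirements are functional, they can be written down as a narrow contract that any of the implementation paths below could satisfy. The sketch is hypothetical—the names and fields are invented for illustration—but it shows how little the contract needs to say about technology:

```python
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class ReturnRequest:
    order_id: str
    item_sku: str
    reason: str


@dataclass
class PolicyContext:
    """What the representative sees, assembled before they have to search."""
    applicable_policies: list[str]
    recommended_action: str
    needs_human_judgment: bool          # flag exceptions for escalation
    similar_cases: list[str] = field(default_factory=list)


class PolicySource(Protocol):
    """Functional contract for R-01; ERP rules, a standalone tool, or a custom build could each implement it."""

    def prepare(self, request: ReturnRequest) -> PolicyContext:
        """Surface relevant policy information for this return."""

    def record_decision(self, request: ReturnRequest, decision: str) -> None:
        """Capture the representative's decision for learning and audit."""
```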

Multiple Implementation Paths:

ERP Configuration: Many ERP systems support this through custom fields, business rules, and workflow configuration. Policy logic is encoded in the ERP's rule engine. Representatives see policy recommendations in their existing interface.

Standalone Tool: A purpose-built returns management system could provide this capability with specialized features for return processing, policy management, and analytics.

Custom Build: An integration layer could connect the existing customer service system to a policy database, with custom logic for surfacing recommendations. This provides maximum flexibility at higher development cost.

AI-Enhanced Approach: Any of the above could be enhanced with AI for policy interpretation, exception prediction, or learning from representative decisions.

The Design Is the Same:

Regardless of implementation path, the workflow is identical:

  • Customer initiates return
  • System prepares policy information (Preparation pattern)
  • Representative reviews and decides (Decision Support pattern)
  • Decision is executed and captured
  • Exceptions route to supervisors

The technology differs; the design doesn't. R-01's value is captured in the design. Implementation is a separate decision made against that design.


Why This Course Is Tool-Agnostic

This course teaches:

  • How to assess current-state workflows
  • How to calculate value and build business cases
  • How to design future-state workflows
  • How to prototype and test
  • How to implement and measure

None of these require specific technology knowledge. All of them produce artifacts that guide technology decisions without being bound to them.

Practitioners who complete this course will be able to:

  • Evaluate any platform against their design requirements
  • Implement their designs in whatever technology their organization uses
  • Migrate designs across platforms when circumstances change
  • Distinguish vendor claims from organizational needs

Platform-specific training has its place—but it comes after design, not before. This course provides the design capability that makes technology decisions intelligible.


Module 4A: ORCHESTRATE — Theory

O — Observe

Adoption as Design Outcome

Adoption is not a training problem. Adoption is not a change management problem. Adoption is a design problem.

If practitioners don't use the system, the design failed. This is the first truth about adoption—and it is consistently denied.

Organizations explain low adoption as resistance to change, inadequate training, cultural barriers, or insufficient executive sponsorship. These explanations locate the problem in people rather than design. They lead to interventions—more training, more communication, more pressure—that address symptoms while ignoring causes.

The design-centered view is simpler and more actionable: if practitioners find workarounds, the official system isn't serving them. The workaround is the feedback. The response is iteration, not enforcement.


Adoption Is a Design Metric

A system that practitioners don't use is a failed system, regardless of its technical capability.

This seems obvious but has radical implications. It means:

User adoption is a design specification, not a post-launch hope.

The blueprint must specify adoption requirements: What must be true for practitioners to use this workflow? What friction is acceptable? What competing alternatives must be displaced?

If these questions aren't answered during design, they'll be answered during implementation—usually by practitioners voting with their behavior.

Low adoption is design feedback, not user failure.

When adoption lags, the instinct is to push harder: more training, more reminders, more accountability. These interventions assume the design is correct and the users are wrong.

The alternative view: low adoption reveals design gaps. The design promised something practitioners don't experience. The value proposition isn't landing. The friction exceeds the benefit.

This reframe transforms low adoption from a problem to solve (push users) into information to use (improve design).

"Won't adopt" vs. "Can't adopt":

Resistance has two sources:

Won't adopt: The practitioner can use the system but chooses not to. This may be rational (the system makes their work harder) or irrational (change aversion, status quo bias). Design improvements address the former; change management addresses the latter.

Can't adopt: The practitioner lacks something required—skills, time, resources, access, clarity about how the system fits their work. These are design or implementation failures, not user failures.

Most "resistance" is can't masquerading as won't. The practitioner appears resistant when actually they're blocked by friction the designer didn't anticipate.


The Workaround Signal

Workarounds are the most valuable design feedback available. They reveal what the official system doesn't provide.

Workarounds as Requirements:

When a nurse creates a whiteboard to track discharge status, she's not violating process—she's expressing a requirement. The requirement: contextual, visual, flexible status tracking that the official system doesn't provide.

When a sales representative maintains a personal spreadsheet alongside the CRM, she's not being difficult—she's compensating for information the CRM doesn't capture or present usefully.

When a warehouse worker annotates pick tickets with handwritten notes, he's not undermining technology—he's adding context the system doesn't know.

Each workaround is a requirement. The question is whether the designer will read it.

The Returns Bible Was a Workaround:

The Returns Bible from earlier modules is itself a workaround. It emerged because official systems didn't provide return policy information in a usable form. Someone—likely "Patricia"—compiled the knowledge into a document because no other source met practitioner needs.

Understanding the Returns Bible as workaround reframes the R-01 opportunity. The goal isn't to replace the Returns Bible; it's to absorb its function into a system that serves the same need better. A design that doesn't understand why the Returns Bible emerged will repeat the dysfunction that created it.

Reading Workarounds:

For each workaround discovered during assessment, ask:

  • What need does this serve that official systems don't?
  • What information does this provide that practitioners can't get elsewhere?
  • What friction does this eliminate that official processes create?
  • What would happen if this workaround disappeared?

The answers are requirements. The workaround is a prototype solution built by practitioners who understand the work better than the system designers did.


Designing for Real Behavior

Humans take shortcuts. They skip optional fields. They batch work. They find easier paths.

Designs that fight this behavior fail. Designs that accommodate it succeed.

How People Actually Work:

Official processes describe how work should happen. Actual work happens differently:

  • Steps are skipped when they seem unnecessary
  • Information is entered at shift end, not in real-time
  • Fields marked "optional" are never completed
  • Communications happen through convenient channels, not official ones
  • Exceptions are handled through judgment, not documented procedures

These adaptations aren't dysfunction—they're efficiency. Practitioners discover what actually matters through experience and shed what doesn't.

Building for Shortcuts:

The design question isn't "how do we prevent shortcuts?" but "how do we make the shortcut the right path?"

If practitioners will batch data entry, design for batch entry. If practitioners will skip optional fields, make required fields rare and meaningful. If practitioners will use back-channels, integrate those channels into the workflow.

The goal is alignment between the easiest path and the correct path. When these diverge, practitioners follow the easy path—and the design fails.

Removing Friction Rather Than Adding Enforcement:

Low-adoption systems often trigger enforcement responses: mandatory fields, required acknowledgments, compliance audits, and performance metrics tied to usage.

Enforcement can increase compliance metrics while decreasing actual value. Practitioners enter data to satisfy requirements rather than to support their work. The system captures inputs without producing outcomes.

The alternative is friction reduction: make the official path easier than the workaround. If the system truly serves practitioner needs better than alternatives, adoption follows. If it doesn't, no amount of enforcement creates genuine adoption—only resentful compliance.


The Adoption Curve

Not all practitioners adopt at the same rate. Design must account for this variation.

The Standard Curve:

Early adopters (10-15%): Embrace new systems quickly, often before they're fully ready. They tolerate friction because they're attracted to novelty and improvement potential. Their feedback is valuable but not representative—they'll work around problems that would block others.

Mainstream (60-70%): Adopt when the system works reliably for their common cases. They need the easy path to be genuinely easy. Their adoption indicates design readiness.

Resisters (15-25%): Adopt last, if ever. Some resistance is irrational—change aversion, status quo preference, sunk cost in existing skills. But some resistance reflects legitimate concerns that others don't articulate.

Designing for the Middle:

The design target is the mainstream, not the extremes.

Designing for early adopters produces systems that work for the technically adventurous but frustrate everyone else. These designs generate initial excitement and subsequent disappointment.

Designing against resisters produces systems optimized for edge cases that make routine work harder. These designs placate the skeptics while annoying everyone else.

The mainstream has different needs: reliable core functionality, clear value proposition, minimal friction for common cases, graceful handling of exceptions. Nail these, and the curve takes care of itself—early adopters are already on board, and some resisters will follow the mainstream.

Listening to Resisters:

Though design targets the mainstream, resister feedback merits attention. Resisters often articulate problems that others feel but don't express:

  • "This takes longer than the old way" may reveal friction invisible in demo scenarios
  • "I can't trust the system's recommendations" may reveal accuracy issues others haven't noticed
  • "This doesn't fit how we actually work" may reveal design assumptions that don't hold

The discipline is distinguishing signal from noise: which resistance reflects design problems, and which reflects change aversion? The distinction matters because the responses are opposite—iterate the design, or stay the course with better communication.


Measuring Adoption Meaningfully

Standard adoption metrics—logins, transactions processed, features used—measure activity, not value. Better metrics reveal design quality.

Usage Metrics That Reveal Design Quality:

Voluntary usage: For non-mandatory features, what percentage of eligible users engage? High voluntary usage suggests genuine value. Low voluntary usage despite availability suggests features that don't serve user needs.

Full-path completion: Do users complete workflows, or do they abandon partway? Abandonment patterns reveal friction points—specific steps where the design fails.

Workaround frequency: How often do users employ alternatives to official systems? Tracking workarounds (not to punish, but to learn) reveals unmet needs.

Return rate: Do users who try the system continue using it? High trial with low retention suggests the value proposition isn't sustained through actual use.
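
If the system already emits basic usage events, these four measures need no special instrumentation. A sketch, assuming a simple event log with illustrative field names (user, completed, used_workaround, week):

```python
def adoption_metrics(events: list[dict], eligible_users: set[str]) -> dict:
    """Compute simple adoption indicators from a usage event log.

    Assumed event shape (illustrative):
    {"user": "u17", "completed": True, "used_workaround": False, "week": 12}
    """
    if not events:
        return {"voluntary_usage": 0.0, "full_path_completion": 0.0,
                "workaround_frequency": 0.0, "return_rate": 0.0}

    users_who_tried = {e["user"] for e in events}
    voluntary_usage = (len(users_who_tried) / len(eligible_users)
                       if eligible_users else 0.0)

    full_path_completion = sum(e["completed"] for e in events) / len(events)
    workaround_frequency = sum(e["used_workaround"] for e in events) / len(events)

    # Return rate: of users active in the first half of the period,
    # how many are still active in the second half?
    midpoint = (min(e["week"] for e in events) + max(e["week"] for e in events)) / 2
    early = {e["user"] for e in events if e["week"] <= midpoint}
    late = {e["user"] for e in events if e["week"] > midpoint}
    return_rate = len(early & late) / len(early) if early else 0.0

    return {"voluntary_usage": voluntary_usage,
            "full_path_completion": full_path_completion,
            "workaround_frequency": workaround_frequency,
            "return_rate": return_rate}
```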

Time-to-Competency:

How long does it take a new user to reach proficient performance?

Complex designs have long competency curves—weeks or months of reduced productivity before users can work effectively. Simple designs have short curves—days to reach competent performance.

Time-to-competency is a design metric. Long curves don't indicate inadequate training; they indicate excessive complexity. The design asks too much of users.

Practitioner Satisfaction vs. Compliance Rates:

Compliance measures whether practitioners use the system. Satisfaction measures whether using the system makes their work better.

High compliance with low satisfaction is a warning sign: practitioners are complying because they must, not because the system serves them. This pattern indicates enforcement success and design failure.

The goal is high compliance driven by high satisfaction—practitioners use the system because it genuinely helps them work.


When Low Adoption Is the Right Answer

Sometimes the workflow is wrong.

Design iteration assumes the workflow concept is correct and the implementation needs refinement. But sometimes the concept is flawed—the workflow solves the wrong problem, addresses imaginary needs, or creates more friction than it eliminates.

How to Distinguish Design Failure from Change Resistance:

Signal | Suggests Design Failure | Suggests Change Resistance
Workarounds | Workarounds recreate capability the system lacks | Workarounds replicate old habits without functional advantage
Feedback | Practitioners articulate specific unmet needs | Practitioners express vague preference for the old way
Early adopters | Early adopters struggle with same issues as mainstream | Early adopters succeed; mainstream struggles
Improvement attempts | Changes don't improve adoption | Changes improve adoption incrementally
Comparative behavior | Practitioners work harder to avoid system | Practitioners work harder initially but adapt

The Courage to Redesign:

When evidence indicates design failure rather than change resistance, the professional response is redesign—not more training, more enforcement, or more patience.

This requires courage. Redesign admits failure. It writes off investment. It delays promised outcomes. It may threaten careers of those who championed the original design.

But enforcing a failed design is worse. It consumes organizational energy. It damages practitioner trust. It creates compliance without value. The longer enforcement continues, the more expensive the eventual redesign.

Using Adoption Data to Iterate:

Whether the diagnosis is design failure or implementation refinement, adoption data guides response:

  • Workaround patterns reveal missing requirements
  • Abandonment points reveal friction locations
  • Satisfaction surveys reveal value perception gaps
  • Competency curves reveal complexity excess

Each data point suggests a design hypothesis. Iteration tests hypotheses against improved adoption.


Connection to Module 5

The workflow blueprint's adoption assumptions become testable in the prototype phase.

Module 4 produces a design with embedded predictions: practitioners will use this workflow because it serves their needs better than alternatives. The path is easier. The friction is lower. The value is clear.

Module 5 tests these predictions. Prototyping reveals whether the design's assumptions hold. Early practitioner interaction generates feedback before full implementation commits resources.

The blueprint isn't finished when it's designed. It's finished when it's validated—when practitioners have confirmed that the design serves their needs.

Adoption isn't something that happens after implementation. It's something that's designed in, tested through prototyping, and measured throughout operation. The blueprint is a hypothesis. Module 5 begins the experiment.



Module 4B: ORCHESTRATE — Practice

Module 4B: Practice

A systematic methodology for designing workflows that practitioners will actually use


Why This Module Exists

Module 4A established the theory: design for the person doing the work, not the person reviewing the work. The Lakewood case demonstrated how well-founded initiatives fail when workflows serve executive needs before practitioner needs. The principles are clear; the question is how to apply them.

This module provides the methodology to translate principles into designs.

The Workflow Blueprint is not a technical specification. It is a design document—a structured way to map current work, identify friction, select collaboration patterns, and create future-state workflows that practitioners recognize as improvement. Every methodology step in this module has been tested against the failure patterns catalogued in Module 4A: the executive dashboard trap, compliance theater, exception obsession, and the rest.


What You Will Learn

By the end of Module 4B, you will be able to:

  1. Map current-state workflows with practitioner input—capturing reality, not documentation
  2. Select appropriate workflow patterns for human-AI collaboration
  3. Design future-state workflows that reduce friction rather than shifting it
  4. Specify human-AI collaboration points with clarity about who decides and who executes
  5. Document workflows in blueprint format that developers and operators can use
  6. Validate designs with practitioners before committing development resources

The Practitioner's Challenge

Good designs look obvious in retrospect. The challenge is seeing practitioner experience from the inside before committing to a solution.

A systems analyst described the difficulty: "I've designed maybe a dozen workflow automation projects. The ones that failed all had something in common: I understood the process perfectly but didn't understand the work. I could draw a flowchart of how tasks moved through the system. I couldn't feel what it was like to do those tasks under pressure, with incomplete information, while handling three other things.

"The successful projects started differently. I sat with practitioners—not interviewing them, just watching. I noticed what made them sigh, what made them reach for workarounds, what they did automatically that the official process didn't account for. The design emerged from that observation, not from process documentation."

This module teaches that observation-first approach. The methodology prioritizes practitioner experience over system elegance. An adoptable design that captures 80% of the value beats an elegant design that practitioners avoid.


What You're Receiving as Input

Module 4B builds on work completed in Modules 2 and 3:

From Module 2 — Opportunity Audit:

  • Process observation notes from field assessment
  • Waste pattern analysis with root causes
  • Friction points identified and quantified
  • Understanding of workarounds and shadow systems

From Module 3 — ROI Model:

  • Baseline metrics for priority opportunity
  • Quantified value across Time, Throughput, and Focus lenses
  • Business case with success criteria
  • Assumption documentation

The R-01 Example:

Throughout Module 4B, we continue with the Returns Bible integration (R-01) from earlier modules. The opportunity has been assessed, valued, and approved:

Metric | Value
Annual Value | $99,916
Implementation Cost | $35,000
Payback Period | 4.2 months
ROI | 756%
Priority Rank | 1 of 5

R-01 becomes the worked example for every methodology step. You will see how assessment findings and ROI calculations transform into a workflow design that addresses the specific friction identified.


Field Note: The Design That Felt Like Help

A practitioner described the moment a workflow design succeeded:

"They had redesigned our returns process three times before. Each time, the new system was supposed to make things easier. Each time, it added steps—data entry, reason codes, supervisor approvals. The systems got more sophisticated and the work got harder.

"The fourth design was different. The team spent two days just watching us work. They asked questions like 'What do you wish you knew automatically?' and 'Where do you have to stop and look something up?' They didn't ask what features we wanted.

"The system they built felt invisible. I'd pull up a return, and the policy information was already there—I didn't have to search for it. If something was unusual, the system flagged it and suggested who could help. I never entered a reason code because the system inferred reasons from what I was already doing.

"I didn't realize how much it helped until someone asked me about the new system. I had to think about it—I'd stopped noticing it was there. That's when I knew the design had worked."


Module Structure

Module 4B follows the ROOTS framework:

  • R — REVEAL: This introduction
  • O — OBSERVE: The blueprint methodology overview
  • O — OPERATE: Six-step process for workflow design
    • Current-state mapping
    • Future-state design
    • Practitioner validation
    • Blueprint documentation
    • Transition preparation
  • T — TEST: Quality metrics for design evaluation
  • S — SHARE: Reflection prompts, peer exercises, and discussion questions

Supporting materials include:

  • Reading list with academic and practitioner sources
  • Slide deck outline for presentation
  • Assessment questions with model answers
  • Instructor notes for facilitation

The Deliverable

Module 4B produces the Redesigned Workflow Blueprint—the fourth artifact in the A.C.O.R.N. cycle.

A complete Workflow Blueprint includes:

  • Current-state workflow documentation (observed, not assumed)
  • Future-state workflow design with human-AI collaboration specification
  • Friction point mapping showing where and how value is captured
  • Adoption design elements addressing practitioner concerns
  • Technology requirements (tool-agnostic)
  • Success metrics aligned with Module 3 ROI model
  • Practitioner validation summary

This deliverable feeds Module 5: REALIZE, where the blueprint becomes a working prototype tested in real conditions.


Proceed to the blueprint methodology overview.


Module 4B: ORCHESTRATE — Practice

O — Observe

The Workflow Blueprint Methodology

The Workflow Blueprint is a design specification that bridges strategy and implementation. It translates the value identified in Module 3 into a concrete workflow that can be built, tested, and deployed.

This section overviews the complete methodology: what the blueprint produces, how long it takes, what inputs are required, and what quality standards apply.


What the Workflow Blueprint Produces

A complete blueprint contains six components:

1. Current-State Workflow Documentation

The workflow as it actually happens—not the documented process, but the observed reality including workarounds, shadow systems, and informal coordination. This documentation establishes the baseline against which improvement will be measured.

2. Future-State Workflow Design

The redesigned workflow with friction points addressed. This is not a vision document; it is a step-by-step specification of what will happen, who will act, and how human-AI collaboration will function.

3. Human-AI Collaboration Specification

Explicit definition of roles at each decision point: what the system does, what humans do, how override works, and how feedback improves the system over time. This specification draws on the workflow patterns from Module 4A.

4. Adoption Design Elements

Design choices that address practitioner concerns and increase likelihood of adoption. This includes simplicity decisions, invisible automation implementations, and explicit attention to what makes the workflow feel like help rather than surveillance.

5. Technology Requirements (Tool-Agnostic)

Functional requirements that specify what the system must do without specifying which product or platform does it. These requirements allow evaluation of build vs. buy vs. configure decisions in Module 5.

6. Success Metrics Aligned with ROI Model

Specific metrics that will indicate whether the design is working—drawn directly from Module 3 baseline measurements. These metrics connect the design to the value proposition that justified investment.


The Design Timeline

Workflow blueprint development typically requires 5-7 working days for a moderately complex opportunity:

Phase | Duration | Activities
Current-State Mapping | 1-2 days | Practitioner walkthroughs, observation sessions, workaround documentation
Pattern Selection & Design | 2-3 days | Friction analysis, pattern selection, future-state design, iteration
Practitioner Validation | 1 day | Validation sessions, feedback integration
Blueprint Documentation | 1 day | Final documentation, quality review

Timeline Factors:

  • Complexity: Multi-step workflows with many decision points take longer
  • Stakeholder availability: Practitioner time for mapping and validation is often the constraint
  • Integration scope: Workflows touching multiple systems require more design iteration
  • Prior assessment quality: Strong Module 2 work accelerates current-state mapping

The timeline assumes one priority opportunity. Organizations developing blueprints for multiple opportunities should sequence them rather than parallelize—lessons from early blueprints improve later ones.


Inputs Required

The blueprint builds on prior module work and requires new input:

From Module 2 — Opportunity Audit:

Input | Purpose in Blueprint
Process observation notes | Foundation for current-state mapping
Waste pattern analysis | Identifies friction to address in design
Workaround documentation | Reveals requirements hidden in informal systems
Shadow system inventory | Ensures design absorbs shadow system function

From Module 3 — ROI Model:

Input | Purpose in Blueprint
Baseline metrics | Success criteria for design evaluation
Value quantification | Prioritizes which friction to address
Business case | Justifies design investment
Assumption documentation | Design must not violate approved assumptions

New Inputs for Module 4:

Input | How to Obtain
Practitioner interviews | Structured conversations about current work and pain points
Technology inventory | Documentation of systems touched by the workflow
Constraint documentation | Organizational policies, compliance requirements, technical limitations
Stakeholder preferences | Input from managers, IT, compliance on design requirements

The Methodology Sequence

The workflow blueprint methodology follows six steps:

Step 1: Map Current-State Workflow

Document what actually happens today. Start with practitioner walkthrough, observe actual instances, capture divergence between described and observed, include informal systems. The output is a current-state workflow map that practitioners recognize as accurate.

Key question: Does this map reflect how work actually happens, including the parts no one talks about?

Step 2: Identify Friction Points

Analyze the current-state map for value leakage. Where does time disappear? Where do errors originate? Where does cognitive load concentrate? Which steps exist only because systems don't communicate? The output is a friction point inventory prioritized by value impact.

Key question: Which friction points, if eliminated, would capture the value identified in Module 3?

Step 3: Select Workflow Pattern

Choose the human-AI collaboration pattern that fits the work: Decision Support, Automation with Override, Preparation, Verification, or Learning. The pattern provides structure for the future-state design. Multiple patterns can combine for complex workflows.

Key question: What is the fundamental nature of human-AI collaboration in this workflow—who decides and who executes?

Step 4: Design Future-State Workflow

Create the redesigned workflow that addresses friction points using the selected pattern. Design for adoption: make the easy path the right path, make automation invisible, ensure the design feels like help. The output is a future-state workflow specification.

Key question: Would a practitioner choose to use this workflow even if it weren't required?

Step 5: Validate with Practitioners

Test the design with the people who will use it. Present the future state, explore scenarios, identify gaps. Iterate based on feedback. The output is a validated design with documented practitioner input.

Key question: Have practitioners seen this design and confirmed it would improve their work?

Step 6: Document the Blueprint

Assemble all components into the final blueprint document. Structure for multiple audiences: developers need technical specification, operations needs process documentation, leadership needs connection to business case. The output is the Workflow Blueprint deliverable.

Key question: Could someone build this system, train users, and measure success using only this document?


Quality Standard

A blueprint meets quality standard when:

Practitioners recognize the current state as accurate.

The current-state map should prompt reactions like "Yes, that's exactly what we do" and "I forgot we had to do that step." If practitioners don't recognize the map, it documents the wrong process.

Future state clearly addresses identified friction.

Every significant friction point from Step 2 should have a corresponding design element in Step 4. The connection should be explicit—"Friction: Bible lookup takes 14 minutes. Solution: System surfaces policy automatically."

Human-AI roles are explicitly specified.

For each step, the blueprint should answer: Who does this—human or system? If system, what does the human see? If human, what does the system provide? How does override work? There should be no ambiguous steps.

Adoption considerations are designed in, not added on.

Adoption isn't a training problem to solve later. Simplicity, invisibility, and help-not-surveillance should be evident in design choices—not mentioned in a "change management" appendix.

The design can be implemented in multiple tools.

The blueprint specifies what must happen, not how it's technically accomplished. A developer reading the blueprint should be able to implement it in their platform of choice without requesting additional design decisions.


The R-01 Blueprint

Throughout Module 4B, R-01 (Returns Bible integration) serves as the worked example. By the end of this module, you will have seen:

  • R-01 current-state workflow mapped with all workarounds
  • R-01 friction points identified and prioritized
  • Workflow pattern selected for R-01 (Preparation pattern)
  • R-01 future-state workflow designed
  • Practitioner validation of R-01 design
  • Complete R-01 blueprint document

The R-01 example demonstrates each methodology step at a scale appropriate for a moderately complex opportunity. Your own blueprints may be simpler or more complex, but the methodology applies either way.


Proceed to current-state workflow mapping.


Module 4B: ORCHESTRATE — Practice

O — Operate

Step 1: Map Current-State Workflow

You cannot design improvement without understanding what exists. Module 2's audit identified friction; this mapping shows flow. The map must reflect reality, including the workarounds practitioners don't mention in meetings.


Purpose of Current-State Mapping

The current-state map serves three functions:

1. Design Foundation

You can't redesign what you don't understand. The current-state map reveals the actual workflow—not the documented process, not the ideal process, but the work as it actually happens. Future-state design emerges from this reality.

2. Friction Localization

Module 3's ROI model quantified total value. Current-state mapping localizes that value—showing exactly where in the workflow time is lost, errors originate, and cognitive load concentrates. This localization guides design prioritization.

3. Validation Baseline

The current-state map becomes the baseline against which improvement is measured. When Module 5 tests the prototype, comparison requires knowing what "before" looked like in detail.


What to Capture

For each step in the workflow, document:

Element | Description | Why It Matters
Trigger | What initiates this step | Defines scope and starting conditions
Actor | Who performs this step | Identifies human-AI role assignment
Action | What specifically happens | Enables comparison with future state
Systems | Tools touched at this step | Reveals integration requirements
Decisions | Choice points and criteria | Identifies where judgment is required
Information | Data consumed and produced | Defines information architecture
Time | Duration and wait time | Quantifies improvement potential
Workarounds | Unofficial adaptations | Reveals hidden requirements
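
Teams that prefer structured notes over free-form documents can capture each observed step as a record mirroring the table above. A minimal sketch—nothing here is a required schema:

```python
from dataclasses import dataclass, field


@dataclass
class WorkflowStep:
    """One observed step in the current-state map."""
    name: str
    trigger: str
    actor: str                                   # who performs the step
    action: str                                  # what specifically happens
    systems: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    information_in: list[str] = field(default_factory=list)
    information_out: list[str] = field(default_factory=list)
    time_minutes: tuple[float, float] = (0.0, 0.0)   # observed range (min, max)
    workarounds: list[str] = field(default_factory=list)
```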

Mapping Methodology

Start with Practitioner Walkthrough

Ask a practitioner to describe a recent example—not the abstract process, but a specific instance. "Walk me through the return you handled this morning. What happened first?"

The walkthrough reveals sequence and logic. Note what the practitioner mentions automatically (important steps) and what they skip until prompted (assumed context).

Observe 3-5 Actual Instances

Walkthrough describes what practitioners think they do. Observation reveals what they actually do. The gap is significant.

During observation, note:

  • Steps the walkthrough didn't mention
  • Divergence from documented process
  • Physical artifacts (sticky notes, printouts, reference materials)
  • Communications outside the official channel
  • Moments of hesitation, frustration, or improvisation

Capture Divergence

Compare walkthrough description to observed reality. Where they diverge, the observation is correct. Common divergences:

  • Steps described as sequential actually overlap
  • "Automatic" system steps require manual intervention
  • Official process is skipped entirely for common cases
  • Workarounds are so habitual they weren't mentioned

Document the Informal System

Every workflow has shadow infrastructure—the spreadsheets, notes, tribal knowledge, and back-channel communications that make official systems tolerable. Module 4A's "Ignoring the Informal System" failure pattern warned against designing without this understanding.

For R-01, the informal system includes:

  • Patricia's Returns Bible (the physical document)
  • Mental models of which policies apply to which situations
  • Shortcuts veteran representatives have developed
  • Escalation patterns that bypass official channels

Validate with Multiple Practitioners

A single practitioner's perspective is incomplete. Validate the map with 2-3 others:

  • Does this match your experience?
  • What did I miss?
  • Is there anything you do differently?

Variation between practitioners is data—it reveals where the process isn't standardized and where individual adaptation has filled gaps.


The R-01 Current-State Workflow

Here is the complete current-state map for R-01 (Returns Bible lookup), developed through practitioner walkthrough and observation:

Trigger: Customer requests return (phone, email, or chat)


Step 1: Gather Return Information

  • Actor: Customer Service Representative
  • Action: Collect order number, item, reason for return
  • Systems: CRM (customer lookup), Order Management (order details)
  • Time: 2-3 minutes
  • Notes: Representatives have developed shortcut questions based on common return types

Step 2: Initial Assessment

  • Actor: Customer Service Representative
  • Action: Determine if return is straightforward or requires policy lookup
  • Systems: None (judgment call)
  • Time: 30 seconds
  • Decision: If return type is familiar and policy is known → Skip to Step 6
  • Notes: Experienced reps skip Bible lookup for ~40% of returns; new reps consult Bible for nearly everything

Step 3: Bible Retrieval

  • Actor: Customer Service Representative
  • Action: Locate and retrieve Returns Bible
  • Systems: Physical document (shared binder) OR digital copy (shared drive)
  • Time: 1-2 minutes
  • Workaround: Representatives often ask Patricia directly rather than searching the Bible
  • Notes: Physical copy frequently not in expected location; digital copy may be outdated

Step 4: Policy Search

  • Actor: Customer Service Representative
  • Action: Navigate Bible to find applicable policy
  • Systems: Returns Bible (300+ pages, organized by product category and return reason)
  • Time: 3-8 minutes depending on complexity
  • Decision: Multiple policies may apply; representative must determine precedence
  • Notes: Bible organization doesn't match how reps think about returns; cross-references are incomplete

Step 5: Policy Interpretation

  • Actor: Customer Service Representative
  • Action: Interpret policy language, apply to specific situation
  • Systems: Returns Bible, Order Management (for details)
  • Time: 2-5 minutes
  • Workaround: When policy is ambiguous, representatives consult Patricia or senior colleague
  • Notes: ~12% of Bible-dependent returns require escalation for interpretation

Step 6: Customer Communication

  • Actor: Customer Service Representative
  • Action: Explain return process and outcome to customer
  • Systems: CRM (communication logging), Phone/Chat/Email
  • Time: 2-4 minutes
  • Notes: Representatives often simplify policy language for customer clarity

Step 7: Return Processing

  • Actor: Customer Service Representative
  • Action: Initiate return in system, generate RMA if applicable
  • Systems: Order Management, Inventory (for restock decisions)
  • Time: 2-3 minutes
  • Notes: Some return types require supervisor approval before processing

Step 8: Documentation

  • Actor: Customer Service Representative
  • Action: Log return details and outcome in CRM
  • Systems: CRM
  • Time: 1-2 minutes
  • Workaround: Representatives often batch documentation at end of shift rather than in real-time

Total Time (Bible-dependent return): 14-28 minutes
Total Time (familiar return, no Bible lookup): 7-12 minutes


Workflow Diagram Description

The workflow follows this structure (suitable for later visualization):

[Customer Request]
    ↓
[1. Gather Information]
    ↓
[2. Initial Assessment]
    ↓ (familiar return)         ↓ (needs policy lookup)
    |                           |
    ↓                     [3. Bible Retrieval]
    |                           ↓
    |                     [4. Policy Search]
    |                           ↓
    |                     [5. Policy Interpretation]
    |                           ↓
    ←←←←←←←←←←←←←←←←←←←←←←←←←←←←
    ↓
[6. Customer Communication]
    ↓
[7. Return Processing]
    ↓
[8. Documentation]
    ↓
[Complete]

Step Table Summary

Step | Actor | System(s) | Time | Friction Level
1. Gather Information | Rep | CRM, Order Mgmt | 2-3 min | Low
2. Initial Assessment | Rep | None | 0.5 min | Low
3. Bible Retrieval | Rep | Physical/Digital Bible | 1-2 min | Medium
4. Policy Search | Rep | Returns Bible | 3-8 min | High
5. Policy Interpretation | Rep | Bible, Order Mgmt | 2-5 min | High
6. Customer Communication | Rep | CRM, Comm Channel | 2-4 min | Low
7. Return Processing | Rep | Order Mgmt, Inventory | 2-3 min | Low
8. Documentation | Rep | CRM | 1-2 min | Low
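
The path totals quoted earlier follow directly from these step times. A quick check, using the ranges from the table (the quoted 14-28 and 7-12 minute figures are these sums, rounded):

```python
# Observed time ranges (minutes) per step, taken from the table above.
step_minutes = {1: (2, 3), 2: (0.5, 0.5), 3: (1, 2), 4: (3, 8),
                5: (2, 5), 6: (2, 4), 7: (2, 3), 8: (1, 2)}

def path_total(steps: list[int]) -> tuple[float, float]:
    """Sum the low and high estimates for the steps a given path touches."""
    return (sum(step_minutes[s][0] for s in steps),
            sum(step_minutes[s][1] for s in steps))

print(path_total([1, 2, 3, 4, 5, 6, 7, 8]))   # Bible-dependent: (13.5, 27.5)
print(path_total([1, 2, 6, 7, 8]))            # familiar return:  (7.5, 12.5)
```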

Friction Point Identification

From the current-state map, identify where value leaks:

High-Friction Steps:

Step 4: Policy Search (3-8 minutes)

  • Bible organization doesn't match representative mental models
  • Cross-references are incomplete
  • Finding the right policy requires significant navigation
  • Time varies dramatically based on familiarity with Bible structure

Step 5: Policy Interpretation (2-5 minutes)

  • Policy language is often ambiguous
  • Multiple policies may apply to same situation
  • Requires judgment that new representatives lack
  • 12% escalation rate indicates decision difficulty

Medium-Friction Steps:

Step 3: Bible Retrieval (1-2 minutes)

  • Physical Bible frequently missing
  • Digital copy currency unknown
  • Time wasted locating resource before using it

Friction Concentration:

Steps 3-5 consume 6-15 minutes of the 14-28 minute total for Bible-dependent returns. This is the target zone for improvement—consistent with Module 3's value calculation, which attributed the majority of R-01 value to lookup and interpretation time.


Common Mistakes in Current-State Mapping

Mapping the Documented Process

The flowchart in the SOP isn't the workflow. The workflow is what practitioners actually do. If your map matches official documentation exactly, you haven't observed deeply enough.

Missing the Workarounds

Workarounds are so habitual that practitioners don't think of them as separate from the process. Ask specifically: "Is there anything you do that isn't in the official process?" Watch for moments when practitioners reach for non-standard resources.

Treating Practitioner Complaints as Resistance

When practitioners say "This step is annoying" or "I wish we didn't have to do this," they're providing design requirements. These aren't complaints to manage—they're friction points to address.

Rushing to Future State

The temptation to start designing solutions appears immediately. Resist it. Incomplete current-state mapping leads to future-state designs that solve the wrong problems or miss critical requirements hidden in informal systems.


Documentation Checklist

Before proceeding to future-state design, confirm:

  • Workflow has been described by practitioners and observed in action
  • All steps are documented with actor, system, time, and notes
  • Workarounds and informal systems are captured
  • Multiple practitioners have validated the map
  • Friction points are identified and localized
  • Time data aligns with Module 3 baseline metrics
  • The map reflects reality, even where reality is messy

Proceed to future-state workflow design.


Module 4B: ORCHESTRATE — Practice

O — Operate

Step 2: Design Future-State Workflow

Future-state design translates friction points into solutions. The design process applies Module 4A's principles, selects an appropriate workflow pattern, and creates a step-by-step specification that practitioners will recognize as improvement.


Design Principles Applied

The five principles from Module 4A constrain and guide every design decision:

1. Invisible Automation

If practitioners notice the system, the design has failed. The goal is reducing friction, not adding technology. Ask of each design element: Will practitioners experience this as help or as a new thing to manage?

2. Design for Adoption

The 80% solution that gets adopted beats the 100% solution that doesn't. Prioritize simplicity over comprehensiveness. Ask: Will practitioners choose to use this, or will they need enforcement?

3. Simplicity Imperative

Every step must earn its place. Complexity is the enemy of adoption. For each proposed step, ask: What happens if we remove this? If the answer is "not much," remove it.

4. Practitioner-Centered Design

Design for the person doing the work, not the person reviewing the work. When executive needs and practitioner needs conflict, practitioner needs win. Executive visibility emerges from practitioner actions.

5. Help, Not Surveillance

Automation should feel like assistance, not monitoring. Ask: Will this feel like a safety net or like Big Brother? Design choices that feel like surveillance will be resisted regardless of their objective value.

The Hierarchy:

When principles conflict, apply this order:

  1. Adoption (will they use it?)
  2. Simplicity (can they learn it quickly?)
  3. Completeness (does it handle all cases?)

A simple, adoptable design that handles 80% of cases is better than a comprehensive design that's too complex to adopt.


Pattern Selection for R-01

Current-state analysis identified high friction in Steps 3-5: Bible retrieval, policy search, and policy interpretation. These steps represent information-gathering work that delays the core task (helping the customer).

Pattern Analysis:

Pattern | Fit for R-01?
Decision Support | Partial — relevant for interpretation step
Automation with Override | Poor — returns aren't routine enough for full automation
Preparation | Strong — the core problem is assembling information
Verification | Poor — the workflow isn't about checking work
Learning | Additive — useful for improving over time

Selected Pattern: Preparation

The Preparation pattern fits R-01's friction profile:

  • The bottleneck is information gathering, not decision-making
  • Representatives have the judgment to make return decisions—they lack fast access to policy information
  • The system's role is to prepare context so humans can act quickly
  • Human authority over final decision remains intact

What Preparation Implies:

In Preparation-pattern workflows:

  • System assembles relevant information before human needs it
  • Human arrives at decision point with context already prepared
  • Decision remains with human; system accelerates decision-making
  • Feedback loop improves preparation quality over time

For R-01, this means (a code sketch follows this list):

  • System identifies applicable return policies when return details are entered
  • Representative sees relevant policy information without searching
  • Interpretation remains with representative; system surfaces relevant precedents
  • Unusual cases are flagged for human judgment
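The sketch below illustrates this Preparation shape in code. Everything here is illustrative rather than part of the R-01 specification: `PreparedContext`, `prepare_context`, and the `policy_engine` and `case_index` objects (and their `match` and `find_similar` methods) are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class PreparedContext:
    """What the representative sees at the decision point (illustrative structure)."""
    policies: list        # applicable policy summaries
    confidence: str       # "high" / "medium" / "low"
    similar_cases: list   # prior returns with outcomes, for reference
    is_exception: bool    # True when the case warrants extra human judgment

def prepare_context(return_attributes: dict, policy_engine, case_index) -> PreparedContext:
    """Preparation pattern: assemble context before the human needs it.

    The system gathers and organizes information; the return decision
    itself stays with the representative.
    """
    policies, confidence = policy_engine.match(return_attributes)    # hypothetical API
    similar = case_index.find_similar(return_attributes, limit=3)    # hypothetical API
    return PreparedContext(
        policies=policies,
        confidence=confidence,
        similar_cases=similar,
        is_exception=(confidence == "low" or not policies),
    )
```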

Future-State Design Process

Start from Friction Points

The current-state map identified three high/medium-friction steps:

  • Step 3: Bible Retrieval (1-2 min)
  • Step 4: Policy Search (3-8 min)
  • Step 5: Policy Interpretation (2-5 min)

For each friction point, ask: What would eliminate this?

Step 3 friction: Representatives waste time locating the Bible before using it. Elimination: Policy information appears automatically within existing workflow—no retrieval needed.

Step 4 friction: Bible organization doesn't match how representatives think about returns. Elimination: System presents relevant policies based on return attributes—no searching needed.

Step 5 friction: Policy language is ambiguous; multiple policies may apply. Elimination: System surfaces prior similar cases and recommended actions—interpretation is guided, not eliminated.

Design the Human Experience First

Before specifying what the system does, specify what the representative experiences:

  1. Representative pulls up return request
  2. Relevant policy information is already visible—no searching
  3. If the case is straightforward, representative proceeds immediately
  4. If the case has complexity, system shows similar prior cases
  5. Representative makes and communicates decision
  6. System captures decision for future learning

This human-centered sequence determines what the system must provide at each moment.

Add Technology to Serve the Experience

Now specify what enables that experience (a sketch follows this list):

  1. Integration with Order Management to identify return attributes automatically
  2. Policy engine that maps return attributes to applicable policies
  3. Case matching that surfaces similar prior returns and their outcomes
  4. Display of policy information within existing CRM interface (no new screens)
  5. Decision capture that feeds the learning loop
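The sketch below shows one way the first two items could be wired together: the lookup runs automatically as return details are entered, and the result appears inside the CRM the representative already uses. It builds on the `prepare_context` sketch above; the event shape and the `crm_panel` methods are assumptions, not a specified API.

```python
def on_return_details_entered(event: dict, policy_engine, case_index, crm_panel):
    """Hypothetical CRM hook: runs in the background as the representative types.

    No new screens: the prepared context is pushed into a panel that is
    already part of the representative's existing interface.
    """
    context = prepare_context(event["return_attributes"], policy_engine, case_index)
    crm_panel.show_policy_summary(context.policies, context.confidence)   # assumed method
    if context.is_exception:
        # Assumed routing helper; surfaces suggested contacts for unusual cases.
        contacts = policy_engine.suggest_contacts(event["return_attributes"])
        crm_panel.flag_exception(suggested_contacts=contacts)             # assumed method
```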

Test Each Choice Against Adoption

For each design choice, ask:

  • Does this make work easier? (If not, remove it)
  • Does this add steps? (If so, justify the addition)
  • Does this feel like help or surveillance? (If surveillance, redesign)
  • Would an experienced representative choose to use this? (If not, why not?)

The R-01 Future-State Workflow

Trigger: Customer requests return (phone, email, or chat)


Step 1: Gather Return Information (Revised)

  • Actor: Customer Service Representative
  • Action: Collect order number, item, reason for return
  • Systems: CRM (customer lookup), Order Management (order details), Policy Engine (automatic)
  • Time: 2-3 minutes (unchanged)
  • Change: As representative enters return details, Policy Engine identifies applicable policies in background

Step 2: Policy Review (Replaces Steps 2-5)

  • Actor: Customer Service Representative
  • Action: Review system-surfaced policy information
  • Systems: CRM (policy display integrated), Policy Engine
  • Time: 1-2 minutes
  • Human-AI Collaboration:
    • System provides: Applicable policy summary, confidence level, similar prior cases
    • Human provides: Final policy selection, contextual judgment, exception handling
    • Override: Representative can mark "system recommendation doesn't apply" with one click (no explanation required)
  • Notes: Policy information appears in existing CRM interface; no navigation to separate system

Step 3: Exception Handling (When needed)

  • Actor: Customer Service Representative + System
  • Action: Address unusual cases that don't match standard policies
  • Systems: Policy Engine (exception flagging), Escalation routing
  • Time: 2-5 minutes (only for ~15% of cases)
  • Human-AI Collaboration:
    • System provides: Flag that case is unusual, suggested contacts, relevant policy references
    • Human provides: Judgment call or escalation decision
  • Notes: Exception handling is routed, not documented—the goal is resolution, not audit trail

Step 4: Customer Communication (Was Step 6)

  • Actor: Customer Service Representative
  • Action: Explain return process and outcome to customer
  • Systems: CRM (communication logging)
  • Time: 2-4 minutes (unchanged)
  • Notes: Representative uses system-provided policy summary for consistency

Step 5: Return Processing (Was Step 7)

  • Actor: Customer Service Representative
  • Action: Initiate return in system
  • Systems: Order Management, Inventory
  • Time: 2-3 minutes (unchanged)
  • Change: Policy decision is logged automatically based on earlier selection; no separate approval step for standard returns

Step 6: Implicit Documentation (Replaces Step 8)

  • Actor: System (automatic)
  • Action: Log return details from workflow actions
  • Systems: CRM, Policy Engine
  • Time: 0 minutes (no representative action required)
  • Change: Documentation is derived from actions already taken—no separate data entry
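A minimal sketch of how documentation could be derived from actions the workflow already captured, so the representative enters nothing extra. The event types and field names are illustrative assumptions, not a defined schema.

```python
def build_return_record(workflow_events: list) -> dict:
    """Assemble the documentation record from events the workflow already produced."""
    record = {}
    for event in workflow_events:          # hypothetical event types below
        if event["type"] == "policy_selected":
            record["policy_id"] = event["policy_id"]
            record["override_used"] = event.get("override", False)
        elif event["type"] == "return_initiated":
            record["order_id"] = event["order_id"]
            record["resolution"] = event["resolution"]   # e.g., refund or exchange
        elif event["type"] == "exception_routed":
            record["escalated_to"] = event["routed_to"]
    return record  # logged automatically; no separate data entry step
```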

Total Time (Standard return): 9-14 minutes (vs. 14-28 current)
Total Time (Exception return): 14-19 minutes


Future-State Workflow Diagram

[Customer Request]
    ↓
[1. Gather Information]
    ↓ (Policy Engine runs automatically)
    ↓
[2. Policy Review]
    ↓ (standard case)        ↓ (exception flagged)
    |                        |
    ↓                  [3. Exception Handling]
    |                        ↓
    ←←←←←←←←←←←←←←←←←←←←←←←←←
    ↓
[4. Customer Communication]
    ↓
[5. Return Processing]
    ↓ (documentation automatic)
[Complete]

Comparison: Current vs. Future State

| Step | Current State | Future State | Change |
| --- | --- | --- | --- |
| Information Gathering | 2-3 min | 2-3 min | Policy Engine starts automatically |
| Assessment/Lookup | 6-15 min (Steps 2-5) | 1-2 min | System-surfaced policies replace Bible search |
| Exception Handling | Embedded in lookup | 2-5 min (when needed) | Explicit exception path for unusual cases |
| Customer Communication | 2-4 min | 2-4 min | No change |
| Return Processing | 2-3 min | 2-3 min | Policy decision auto-logged |
| Documentation | 1-2 min | 0 min | Implicit from workflow actions |
| Total (standard) | 14-28 min | 9-14 min | 5-14 min saved |

Human-AI Collaboration Specification

For each step requiring collaboration:

Step 2: Policy Review

| Role | AI Provides | Human Provides |
| --- | --- | --- |
| Policy Identification | Applicable policies based on return attributes | Confirmation or correction |
| Confidence Signal | High/medium/low confidence indicator | Judgment on whether to proceed |
| Similar Cases | 2-3 prior similar returns with outcomes | Relevance assessment |
| Override | One-click "doesn't apply" option | No explanation required |

Step 3: Exception Handling

| Role | AI Provides | Human Provides |
| --- | --- | --- |
| Exception Detection | Flag that case doesn't match patterns | Decision to handle or escalate |
| Routing Suggestion | Recommended person/team for help | Final routing decision |
| Policy References | Relevant sections for unusual situation | Interpretation |

Design for Adoption

What makes this feel like help:

  • Information appears without searching
  • No new screens or systems to navigate
  • Policy display is integrated into existing CRM
  • Override is one click, no explanation
  • Documentation happens automatically

Where practitioners might resist:

  • Distrust of system recommendations ("It doesn't know my customers")
  • Concern about being monitored through the policy log
  • Fear that system will reduce their expertise value

How design addresses resistance:

  • Recommendations are clearly labeled as suggestions, not requirements
  • Override is easy and not tracked for performance evaluation
  • System learns from representative expertise, not the reverse
  • Experienced representatives can proceed directly when confident

Minimal viable version:

  • Core: Policy surfacing for common return types
  • Deferred: Similar case matching
  • Deferred: Learning loop from representative decisions

The minimal version delivers the primary value (eliminating Bible search) without requiring the full system. Additional capabilities can be added after adoption is established.


Connecting to ROI Model

The design should capture the value calculated in Module 3:

| Module 3 Metric | How Design Addresses |
| --- | --- |
| Time: 14.2 min per Bible lookup | Steps 3-5 consolidated to 1-2 min policy review |
| Errors: 4.3% wrong policy | System surfaces correct policy; representative confirms |
| Escalations: 12% of Bible returns | Exception handling pathway reduces unnecessary escalations |
| Patricia dependency | Policy knowledge encoded in system, not person |
| Onboarding: 3 days Bible training | System-guided policy lookup reduces training requirement |

Design Template

Use this template for your own future-state designs:

FUTURE-STATE WORKFLOW: [Opportunity ID and Name]

Pattern Selected: [Decision Support / Automation with Override / Preparation / Verification / Learning]

Pattern Rationale: [Why this pattern fits the friction profile]

Friction Points Addressed:

| Friction | Current Time | Solution | Future Time |
| --- | --- | --- | --- |
| [Friction 1] | [X min] | [How addressed] | [Y min] |
| [Friction 2] | [X min] | [How addressed] | [Y min] |

Future-State Steps:

| Step | Actor | AI Role | Human Role | Time |
| --- | --- | --- | --- | --- |
| [Step 1] | [Who] | [What AI provides] | [What human does] | [Est.] |

Adoption Design:

  • What makes this feel like help: [Specific elements]
  • Potential resistance: [Anticipated concerns]
  • Design response: [How addressed]
  • Minimal viable version: [Core vs. deferred features]

ROI Alignment:

| Baseline Metric | Design Mechanism |
| --- | --- |
| [Metric 1] | [How design improves it] |

Proceed to practitioner validation.


Module 4B: ORCHESTRATE — Practice

O — Operate

Step 3: Validate with Practitioners

Designers have blind spots. The most elegant workflow design fails if it doesn't fit how work actually happens. Validation before building is cheaper than redesign after—and practitioner involvement increases adoption.


Why Validation Matters

Designers Miss What Practitioners See

A design that makes sense in documentation may not make sense in practice. Practitioners see edge cases designers didn't consider, workflow interactions that weren't mapped, and friction that seems minor in theory but compounds in reality.

The Lakewood case demonstrated this failure: the discharge system made sense to designers who understood healthcare operations abstractly. It didn't make sense to nurses who understood the specific context of each patient, each family, each physician relationship.

Validation Before Building Is Cheaper

Changes during design cost hours. Changes during development cost days. Changes after deployment cost weeks and trust. Every problem caught in validation is fixed at the cheapest possible point; every problem that slips through becomes more expensive at each later stage.

Practitioner Involvement Increases Adoption

Practitioners who helped shape the design are more likely to use it. They've seen their concerns addressed. They understand the rationale. They have ownership of the outcome.

Practitioners who receive a design they didn't influence are more likely to resist. They see something imposed. They don't understand the reasoning. They have no investment in success.


Who to Involve

Mix of Tenure Levels

  • New practitioners (< 1 year): See friction that veterans have stopped noticing
  • Experienced practitioners (1-5 years): Know the work well but haven't fully adapted to workarounds
  • Veterans (5+ years): Know edge cases, history, and why things are the way they are

A design validated only by veterans may miss friction that's become invisible to them. A design validated only by newcomers may miss complexity that experience reveals.

Include Skeptics

The temptation is to validate with friendly practitioners—people who are enthusiastic about improvement and likely to say positive things.

Resist this temptation. Skeptics see problems enthusiasts miss. Their objections, while uncomfortable, reveal design weaknesses that will otherwise surface during deployment when they're expensive to fix.

If a skeptic can't see how the design helps them, the design probably doesn't help them. Better to discover this in validation than in failed adoption.

People Who Will Actually Use It

Validate with practitioners who will use the system daily, not managers who will oversee it. Managers may approve designs that burden practitioners. Practitioners will identify burdens managers don't see.

For R-01, validation should include:

  • 2-3 customer service representatives with varied tenure
  • At least one representative who currently relies heavily on the Returns Bible
  • At least one representative who is skeptical of new systems

Validation Methodology

Walkthrough: Present Future-State, Get Reactions

Present the future-state workflow step by step. At each step, pause for reaction:

  • "Does this match how you'd want it to work?"
  • "What would you be thinking at this moment?"
  • "Is anything missing?"

Watch for:

  • Confusion (they don't understand what the step involves)
  • Hesitation (they have concerns they're not voicing)
  • Correction (they think you've described it wrong)
  • Enthusiasm (they see value you can build on)

Scenario Testing: "What Would Happen If..."

Walk through specific scenarios—both common cases and edge cases:

  • "A customer calls wanting to return a product they bought 18 months ago. What would happen in this workflow?"
  • "The system shows a policy you know is outdated. What would you do?"
  • "A customer is upset and you need to resolve quickly. Does this workflow help or slow you down?"

Scenarios reveal gaps that abstract walkthroughs miss. They force practitioners to mentally simulate using the system.

Edge Case Exploration: "What About When..."

Ask practitioners to generate edge cases:

  • "What situations would this not handle well?"
  • "What's the weirdest return case you've seen? How would this handle it?"
  • "What makes you reach for the Returns Bible most often? Would this help?"

Edge cases practitioners generate are more relevant than edge cases designers imagine. They come from real experience.

Comparison: "How Does This Compare to What You Do Now?"

Ask direct comparison questions:

  • "Would this make your work easier or harder?"
  • "Is this faster or slower than what you do now?"
  • "What would you miss about the current way?"
  • "What would you be glad to stop doing?"

Direct comparison surfaces value (or lack of value) that abstract evaluation misses.


Questions to Ask

Core Validation Questions:

| Question | What It Reveals |
| --- | --- |
| "Would this make your work easier or harder?" | Net value assessment |
| "What would you do if the system gave wrong guidance?" | Override design adequacy |
| "What situations wouldn't this handle well?" | Edge case gaps |
| "What would make you avoid using this?" | Adoption barriers |
| "What's missing?" | Requirements gaps |

Probing Questions:

| Question | What It Reveals |
| --- | --- |
| "Walk me through your first day using this." | Onboarding friction |
| "When would you ignore the system's suggestion?" | Trust calibration |
| "How would you explain this to a new colleague?" | Comprehensibility |
| "What would your supervisor think about this?" | Organizational dynamics |

Reading Validation Feedback

Enthusiasm Isn't the Goal

Validation isn't seeking approval—it's seeking honesty. Enthusiastic feedback that misses problems is worse than critical feedback that reveals them.

Watch for practitioners who say what they think you want to hear. These responses often include:

  • Generic praise ("This looks great!")
  • Quick agreement without engagement
  • Lack of questions or concerns
  • Body language that doesn't match words

Concerns Are Design Opportunities

Every concern is a design opportunity. The practitioner who says "This wouldn't work because..." is telling you something valuable.

Respond to concerns with curiosity, not defense:

  • "Tell me more about why that wouldn't work."
  • "What would need to be different?"
  • "Have you seen something similar fail before?"

Watch for Polite Agreement

Practitioners may be reluctant to criticize directly, especially to someone who has authority or who clearly invested effort. They express reservations through euphemism:

  • "That's interesting" often means "I have concerns"
  • "I'm sure you've thought of this, but..." precedes a real issue
  • "This might work for some people" means "not for me"
  • Long pauses before responding indicate internal conflict

Ask follow-up questions when you sense polite agreement:

  • "What's making you hesitate?"
  • "If you could change one thing about this, what would it be?"
  • "Who would have the hardest time with this?"

The R-01 Validation Session

Participants:

| Name | Role | Tenure | Perspective |
| --- | --- | --- | --- |
| Maria T. | CS Representative | 8 years | Heavy Bible user, informal expert |
| DeShawn W. | CS Representative | 2 years | Middle experience, tech-comfortable |
| Jennifer R. | CS Representative | 4 months | New, still learning policies |
| Alex P. | CS Team Lead | 5 years | Skeptical of system changes |

Key Feedback Received:

From Maria T.: "The system won't know about the exceptions. Half the time I'm in the Bible, I'm not looking up policy—I'm looking for that note Patricia wrote about why we don't apply the standard rule to these three product lines."

Design response: Added exception notes field in Policy Engine; begin capturing Patricia's knowledge in structured format.

From DeShawn W.: "I like that the policy shows up automatically. But what if I already know the policy? Will it slow me down for the easy cases?"

Design response: Confirmed that experienced representatives can proceed directly; policy display is available but not required to acknowledge.

From Jennifer R.: "This would have saved me so much time in training. I spent the first month just learning where things are in the Bible."

Design response: Confirmed onboarding benefit. Added to adoption design: new representatives start with the system from day one.

From Alex P.: "I've seen three systems that were supposed to make things easier. All of them added steps. How is this different?"

Design response: Reviewed the step-by-step comparison showing representative-facing steps reduced from 8 to 5 for a standard return. Showed that the documentation step is now implicit. Acknowledged his concern is valid and invited him to pilot testing.

Design Modifications Based on Feedback:

  1. Exception notes: Added structured capture of policy exceptions (not in original design)
  2. Expert bypass: Confirmed—no mandatory acknowledgment for experienced representatives
  3. Training integration: New reps trained on system first, Bible becomes reference only
  4. Pilot inclusion: Alex invited to prototype testing to surface remaining concerns

Unresolved Concerns:

| Concern | Status | Resolution Plan |
| --- | --- | --- |
| Policy Engine accuracy for unusual products | Acknowledged | Will test in prototype with edge cases |
| Speed of policy lookup vs. current mental shortcuts | Acknowledged | Will measure in prototype |
| Patricia's departure timeline vs. knowledge capture | Acknowledged | Escalate to project sponsor |

Iterating Based on Feedback

When to Modify the Design:

Modify when feedback reveals:

  • A requirement that wasn't captured
  • A friction point the design creates
  • An edge case that needs different handling
  • An adoption barrier that can be designed out

The exception notes addition to R-01 is an example: the design didn't account for informal knowledge beyond formal policy. Validation revealed the gap; the design was modified.

When to Note for Prototype Testing:

Note for prototype testing when:

  • The concern is valid but its magnitude is unknown
  • The design might address the concern but confirmation requires testing
  • The feedback suggests "try it and see"

Alex's concern about whether this creates new friction is prototype-appropriate: the design should help, but real usage will confirm.

When to Push Back:

Push back when feedback reflects:

  • Resistance to change rather than design problems
  • Individual preferences that conflict with broader needs
  • Misunderstanding that can be clarified

Push back gently. The goal is understanding, not winning the argument. Sometimes apparent resistance reveals a real issue; sometimes it's genuinely just preference.

Documenting Changes:

For each change, document:

  • What feedback prompted it
  • What changed in the design
  • What the expected impact is

This documentation creates a trail from validation to design decision, useful for explaining rationale to stakeholders and for future iterations.


Validation Sign-Off

Validation is complete when:

Practitioners understand what's proposed.

They can describe the workflow in their own words. They know what changes from current state. They understand their role in the new workflow.

Major concerns are addressed or acknowledged.

Every significant concern raised has been either resolved through design modification or explicitly acknowledged as a prototype testing question. No major concerns are simply ignored.

Willingness to participate in prototype testing.

At least some validation participants are willing to test the prototype. This indicates sufficient confidence that the design merits building.

Validation sign-off is not unanimous enthusiasm. It's informed consent: practitioners have seen the design, provided input, and are willing to try it.


Proceed to blueprint documentation.


Module 4B: ORCHESTRATE — Practice

O — Operate

Step 4: Document the Blueprint

The Workflow Blueprint is a design specification that bridges strategy and implementation. It documents what was designed, why, and how it should be built—structured for audiences ranging from developers who will implement it to executives who will sponsor it.


Purpose of the Blueprint Document

Specification for Module 5

The blueprint tells the implementation team what to build. Module 5 (REALIZE) will construct or configure a prototype based on this specification. A clear blueprint enables accurate implementation; an ambiguous blueprint invites interpretation that may not match intent.

Record of Design Decisions

The blueprint documents not just what was decided, but why. When questions arise during implementation—"Why does this step work this way?"—the blueprint provides rationale. This documentation prevents drift from design intent.

Communication Tool for Stakeholders

Different stakeholders need different views of the design. Developers need technical specification. Operations needs process documentation. Leadership needs business justification. The blueprint structure accommodates all three.

Reference for Future Iterations

Systems evolve. The blueprint captures baseline design so future changes can be evaluated against original intent. "We did it this way because..." prevents accidental undoing of deliberate choices.


Blueprint Structure

A complete Workflow Blueprint contains these sections:

1. Executive Summary

One-page overview suitable for leadership review:

  • What opportunity this addresses (from Module 3)
  • What the design accomplishes
  • Expected outcomes and timeline
  • Investment required and projected return

2. Current-State Workflow

Documentation from Step 1:

  • Process flow with steps, actors, and timing
  • Friction points identified
  • Informal systems and workarounds
  • Baseline metrics

3. Future-State Workflow

Documentation from Step 2:

  • Redesigned process flow
  • Changes from current state
  • Human-AI collaboration at each step
  • Projected timing improvement

4. Human-AI Collaboration Specification

Detailed specification of collaboration:

  • Pattern selected and rationale
  • Each decision point: what system provides, what human decides
  • Override mechanisms
  • Feedback loops for learning

5. Technology Requirements

Tool-agnostic specification:

  • Functional requirements (what system must do)
  • Integration requirements (what it connects to)
  • Performance requirements (speed, reliability)
  • Constraints (what system must not do)

6. Adoption Design

Elements that support adoption:

  • Simplicity choices and rationale
  • Invisible automation implementations
  • Resistance points and mitigations
  • Training implications

7. Success Metrics

From Module 3 ROI model:

  • Baseline measurements
  • Target improvements
  • Measurement methodology
  • Leading indicators

8. Appendix

Supporting details:

  • Detailed workflow diagrams
  • Validation session notes
  • Technical specifications
  • Risk and assumption documentation

Writing for the Audience

What Developers Need: Technical Specification

Developers implementing the design need:

  • Exact step-by-step workflow logic
  • Data flows and transformations
  • Integration points and data formats
  • Decision rules and exception handling
  • User interface requirements

Write with precision. Ambiguity in technical specification creates implementation variation.

What Operations Needs: Process Documentation

Operations teams managing the new workflow need:

  • Training requirements and materials
  • Support escalation procedures
  • Monitoring and exception handling
  • Relationship to other processes
  • Transition plan from current state

Write with practicality. Operations needs to know how to run this, not how to build it.

What Leadership Needs: Business Connection

Leadership approving resources needs:

  • Connection to approved business case
  • Expected outcomes and timeline
  • Risk acknowledgment and mitigation
  • Resource requirements
  • Decision points ahead

Write with directness. Leadership wants to know if this is on track and what they need to do.

Structuring for Different Reading Depths

The blueprint should support:

  • Skim reading: Executive summary conveys essence
  • Section reading: Each section is self-contained
  • Deep reading: Appendix provides complete detail

Use clear headings, summary boxes, and progressive disclosure. A reader should get value regardless of how deeply they read.


The R-01 Blueprint (Complete Example)


Workflow Blueprint: R-01 Returns Policy Integration

Executive Summary

Opportunity: Returns Bible Not in System (R-01)

Customer service representatives currently spend 14+ minutes per return consulting the physical Returns Bible—a 300-page policy document that is frequently unavailable, difficult to navigate, and dependent on one individual's expertise.

Design: The Returns Policy Integration workflow embeds policy information directly in the customer service workflow. The system automatically surfaces relevant policies based on return attributes, eliminating search time and reducing interpretation errors. Representative decision authority is unchanged; information access is transformed.

Expected Outcomes:

  • Time reduction: 14.2 minutes average Bible consultation → 1-2 minutes policy review
  • Error reduction: 4.3% incorrect policy application → <2% target
  • Patricia dependency: Eliminated through systematic knowledge capture
  • Representative satisfaction: Measured through post-implementation survey

Investment: $35,000 implementation (from Module 3 business case)
Projected Return: $99,916 annual value, 4.2-month payback, 756% ROI
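The payback and ROI figures above can be checked with simple arithmetic. The 756% ROI is consistent with a three-year value horizon, which is assumed here; the Module 3 business case defines the actual horizon.

```python
investment = 35_000      # implementation cost
annual_value = 99_916    # projected annual value

payback_months = investment / annual_value * 12
roi_three_year = (annual_value * 3 - investment) / investment

print(f"Payback: {payback_months:.1f} months")    # ~4.2 months
print(f"Three-year ROI: {roi_three_year:.0%}")    # ~756%
```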

Timeline: 6-8 weeks to pilot, 12 weeks to full deployment


Current-State Workflow Summary

| Step | Actor | Time | Friction |
| --- | --- | --- | --- |
| 1. Gather return info | Representative | 2-3 min | Low |
| 2. Initial assessment | Representative | 0.5 min | Low |
| 3. Bible retrieval | Representative | 1-2 min | Medium |
| 4. Policy search | Representative | 3-8 min | High |
| 5. Policy interpretation | Representative | 2-5 min | High |
| 6. Customer communication | Representative | 2-4 min | Low |
| 7. Return processing | Representative | 2-3 min | Low |
| 8. Documentation | Representative | 1-2 min | Low |
| Total | | 14-28 min | |

Full current-state documentation in Appendix A.


Future-State Workflow Summary

| Step | Actor | AI Role | Time |
| --- | --- | --- | --- |
| 1. Gather return info | Representative | Policy Engine identifies applicable policies | 2-3 min |
| 2. Policy review | Representative | System surfaces policy summary, similar cases | 1-2 min |
| 3. Exception handling | Representative + System | System flags unusual cases, suggests contacts | 2-5 min (15% of cases) |
| 4. Customer communication | Representative | Policy summary available for reference | 2-4 min |
| 5. Return processing | Representative | Decision logged automatically | 2-3 min |
| 6. Documentation | System | Derived from workflow actions | 0 min |
| Total (standard) | | | 9-14 min |

Full future-state specification in Appendix B.


Human-AI Collaboration Specification

Pattern: Preparation

The system prepares policy information; the representative acts on prepared context. Human decision authority is unchanged.

Step 2 Collaboration Detail:

| Element | Specification |
| --- | --- |
| System provides | Applicable policy summary (1-2 sentences), confidence indicator (high/medium/low), 2-3 similar prior cases with outcomes |
| Human provides | Final policy selection, contextual judgment, override when needed |
| Override mechanism | One-click "doesn't apply" option; no explanation required; not tracked for performance evaluation |
| Feedback loop | Override patterns reviewed monthly to improve policy matching accuracy |
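The feedback-loop row implies a periodic review of override patterns. A minimal sketch of that monthly aggregation, assuming a simple decision log that records the matched policy and whether the representative overrode it (the log format is an assumption).

```python
from collections import Counter

def override_rates_by_policy(decision_log: list) -> dict:
    """Group overrides by matched policy to find rules that need recalibration."""
    matched = Counter(d["policy_id"] for d in decision_log)
    overridden = Counter(d["policy_id"] for d in decision_log if d.get("override"))
    return {policy: overridden[policy] / matched[policy] for policy in matched}

# High override rates point at policy rules to review, not at individual
# representatives: override data is not used for performance evaluation.
```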

Step 3 (Exception) Collaboration Detail:

| Element | Specification |
| --- | --- |
| System provides | Flag that case is unusual, confidence level below threshold, suggested contacts for escalation, relevant policy sections for reference |
| Human provides | Decision to handle directly or escalate, judgment on unusual case |
| Escalation routing | System suggests based on case type; representative confirms or redirects |

Technology Requirements

Functional Requirements:

  1. Policy Engine Integration

    • Receive return attributes from Order Management system
    • Match attributes to applicable policy rules
    • Return policy summary, confidence level, and similar case references
    • Response time: <2 seconds
  2. CRM Integration

    • Display policy information within existing representative interface
    • No navigation to separate application
    • Policy display appears automatically when return details entered
    • Override captured through existing workflow actions
  3. Knowledge Capture

    • Structured storage for policy rules and exceptions
    • Interface for policy administrators to update rules
    • Version control for policy changes
    • Exception notes field for non-standard situations
  4. Similar Case Matching

    • Index of prior return decisions
    • Matching based on return attributes
    • Display of outcome and representative notes
    • Privacy: No customer PII in case display
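A sketch of attribute-overlap matching for requirement 4, with customer PII excluded from what is displayed. The attribute names and scoring here are assumptions; an implementation could equally rely on the Policy Engine's own matching.

```python
def find_similar_cases(return_attributes: dict, case_index: list, limit: int = 3) -> list:
    """Rank prior returns by shared attributes; return display-safe summaries only."""
    keys = ("product_category", "return_reason", "days_since_purchase_band", "condition")  # assumed keys

    def overlap(case: dict) -> int:
        return sum(1 for k in keys if case.get(k) == return_attributes.get(k))

    ranked = sorted(case_index, key=overlap, reverse=True)[:limit]
    # Only outcome, notes, and match score are displayed -- no customer PII.
    return [
        {"outcome": c["outcome"], "notes": c.get("rep_notes", ""), "match_score": overlap(c)}
        for c in ranked
    ]
```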

Integration Requirements:

| System | Integration Type | Data Exchange |
| --- | --- | --- |
| Order Management | Read | Return attributes, order history |
| CRM | Read/Write | Customer context, policy display, decision logging |
| Policy Engine | Read | Policy rules, confidence scores |
| Case Database | Read | Prior similar cases |

Performance Requirements:

  • Policy lookup response: <2 seconds
  • Similar case retrieval: <3 seconds
  • System availability: 99.5% during business hours
  • Concurrent users: Support 50+ simultaneous users
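During pilot testing, the response-time requirements can be verified with a thin timing wrapper around the lookup call. This is a monitoring sketch, not a specified component; the 2-second threshold comes from the requirement above.

```python
import time

def timed_lookup(lookup_fn, *args, threshold_seconds: float = 2.0, **kwargs):
    """Run a lookup and report whether it met the response-time requirement."""
    start = time.perf_counter()
    result = lookup_fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    if elapsed > threshold_seconds:
        print(f"WARN: lookup took {elapsed:.2f}s (requirement: <{threshold_seconds}s)")
    return result, elapsed
```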

Constraints:

  • No changes to existing Order Management data structures
  • No additional login or authentication for representatives
  • No mandatory data entry not present in current workflow
  • No performance metric tracking of individual representative override rates

Adoption Design

Simplicity Choices:

| Choice | Rationale |
| --- | --- |
| Policy display within CRM | No new application to learn |
| Optional acknowledgment | Experienced reps can proceed directly |
| One-click override | Low friction to correct system |
| Automatic documentation | Eliminates separate data entry step |

Resistance Points and Mitigations:

| Anticipated Resistance | Mitigation |
| --- | --- |
| "System won't understand exceptions" | Exception notes feature captures non-standard knowledge |
| "I already know the policies" | Expert bypass—policy display available but not required |
| "Another system to watch me" | Override not tracked for performance; design feels like help |
| "What if system is wrong?" | Easy override; representative decision is final |

Training Implications:

| Audience | Training Need | Duration |
| --- | --- | --- |
| Experienced reps | System orientation, override procedures | 30 minutes |
| New reps | System-first training, Bible as reference only | 4 hours (vs. 3 days current) |
| Team leads | Exception handling, feedback review | 1 hour |

Success Metrics

| Metric | Baseline | Target | Measurement Method |
| --- | --- | --- | --- |
| Time per Bible-dependent return | 14.2 min | <5 min | Time-motion sample (n=50) |
| Incorrect policy application | 4.3% | <2% | QA audit sample (n=100) |
| Supervisor escalation rate | 12% | <5% | System tracking |
| Patricia dependency queries | 15+/day | <3/day | Observation |
| Representative satisfaction | 3.2/5 | >4.0/5 | Survey |

Leading Indicators (first 30 days):

  • System usage rate (target: >80% of eligible returns)
  • Override rate (expected: 10-15%; higher indicates calibration needed)
  • Time-to-competency for new users (target: <1 day)
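The first two leading indicators can be computed directly from the decision log. A sketch, assuming each eligible return produces one log entry with flags for whether the system was used and whether the representative overrode the recommendation (both field names are assumptions).

```python
def leading_indicators(decision_log: list) -> dict:
    """Aggregate usage and override rates for the first 30 days (not per person)."""
    eligible = len(decision_log)
    used = sum(1 for d in decision_log if d.get("system_used"))
    overrides = sum(1 for d in decision_log if d.get("override"))
    return {
        "usage_rate": used / eligible if eligible else 0.0,     # target: > 0.80
        "override_rate": overrides / used if used else 0.0,     # expected: 0.10-0.15
    }
```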

Appendices

  • Appendix A: Full Current-State Workflow Documentation
  • Appendix B: Full Future-State Workflow Specification
  • Appendix C: Validation Session Notes
  • Appendix D: Risk and Assumption Register


Quality Checklist

Before finalizing the blueprint, confirm:

  • Executive summary is standalone and complete
  • Current state reflects observed reality, including workarounds
  • Future state addresses all identified friction points
  • Human-AI collaboration is specified at each decision point
  • Technology requirements are tool-agnostic (capabilities, not products)
  • Adoption design elements are concrete, not just mentioned
  • Success metrics align with Module 3 ROI model
  • A developer could build from this specification
  • An operations manager could train from this document
  • An executive could explain the investment from this summary

Blueprint Template

Use this structure for your own blueprints:

### Workflow Blueprint Template

#### Executive Summary
- Opportunity addressed
- Design summary
- Expected outcomes
- Investment and return
- Timeline

#### Current-State Workflow
- Step table with actor, time, friction
- Friction point summary
- Reference to full documentation

#### Future-State Workflow
- Step table with actor, AI role, time
- Changes from current state
- Reference to full specification

#### Human-AI Collaboration Specification
- Pattern selected and rationale
- Detailed collaboration for key steps
- Override mechanisms
- Feedback loops

#### Technology Requirements
- Functional requirements
- Integration requirements
- Performance requirements
- Constraints

#### Adoption Design
- Simplicity choices
- Resistance mitigations
- Training implications

#### Success Metrics
- Baseline measurements
- Targets
- Measurement methodology
- Leading indicators

#### Appendices
- Full current-state documentation
- Full future-state specification
- Validation notes
- Risks and assumptions

Proceed to transition preparation.


Module 4B: ORCHESTRATE — Practice

Transition: From ORCHESTRATE to REALIZE

The Workflow Blueprint is complete. The design has been validated with practitioners. The specification is ready for implementation.

Module 5 transforms this blueprint into a working prototype.


What Module 4 Accomplished

Designed the Workflow

Module 4 translated the value identified in Module 3 into a concrete workflow design. The R-01 opportunity—Returns Bible integration worth $99,916 annually—is now a step-by-step specification of how work will happen differently.

The design follows the Preparation pattern: the system prepares policy information; representatives act on prepared context. Human authority is unchanged; information access is transformed.

Validated with Practitioners

The design was reviewed by customer service representatives who will use it. Their concerns—exception handling, expert bypass, accuracy for unusual cases—shaped the final specification. Practitioners understand what's proposed and are willing to test it.

Documented for Implementation

The Workflow Blueprint specifies:

  • Current state (what happens now)
  • Future state (what will happen)
  • Human-AI collaboration (who does what)
  • Technology requirements (what to build)
  • Adoption design (how to ensure use)
  • Success metrics (how to measure)

This specification enables Module 5 to build without requiring additional design decisions.


What Module 5 Requires

Blueprint Is Architecture; Prototype Is Construction

The blueprint tells you what to build. Module 5 builds it—or configures it, or purchases it, depending on the build vs. buy vs. configure decision that emerges from the blueprint's technology requirements.

The prototype is not the final system. It's a learning vehicle: functional enough to test the design's assumptions, limited enough to change quickly when assumptions prove wrong.

Building the Actual System

Module 5 covers:

  • Technology selection (what platform implements the blueprint)
  • Prototype construction (minimum viable functionality)
  • Integration with existing systems
  • User interface implementation
  • Data architecture and migration

Testing in Real Conditions

The prototype will be tested by practitioners doing real work:

  • Does the policy information appear correctly?
  • Is the interface fast enough to help rather than slow down?
  • Do exceptions route appropriately?
  • Does the design feel like help or like burden?

Testing reveals what design predicted but couldn't prove.

Measuring Against Baseline

Module 5 measures prototype performance against Module 3 baselines:

  • Is policy lookup actually faster?
  • Are errors actually reduced?
  • Are escalations actually fewer?
  • Do practitioners actually prefer this to the current process?

These measurements validate the ROI model's assumptions—or reveal that the assumptions need adjustment.


The R-01 Handoff

What the Blueprint Specifies:

| Element | Status |
| --- | --- |
| Workflow steps | Fully specified |
| Human-AI collaboration | Detailed for each decision point |
| Technology requirements | Functional requirements documented |
| Integration points | Identified (Order Management, CRM, Policy Engine) |
| Success metrics | Baselined and targeted |
| Adoption design | Concrete elements specified |

What Remains to Be Decided:

| Decision | Will Be Made In |
| --- | --- |
| Implementation platform | Module 5 technology selection |
| Build vs. buy vs. configure | Module 5 technology selection |
| Specific UI layout | Module 5 prototype design |
| Data migration approach | Module 5 implementation |
| Pilot group selection | Module 5 testing plan |
| Rollout sequence | Module 5 deployment plan |

Success Criteria:

The R-01 implementation succeeds when:

| Metric | Baseline | Target |
| --- | --- | --- |
| Time per Bible-dependent return | 14.2 min | <5 min |
| Incorrect policy application | 4.3% | <2% |
| Supervisor escalation | 12% | <5% |
| System adoption rate | N/A | >80% |
| Representative satisfaction | 3.2/5 | >4.0/5 |

What Could Go Wrong:

| Risk | How We'll Know | Mitigation |
| --- | --- | --- |
| Policy Engine inaccuracy | High override rate (>25%) | Calibration sprint before rollout |
| Integration performance | Response time >2 seconds | Performance optimization or architecture revision |
| Practitioner rejection | Low adoption despite availability | Design iteration based on feedback |
| Exception handling gaps | New escalation patterns emerge | Exception pathway refinement |

Assumptions to Test in Prototype

The blueprint embeds predictions that can only be validated by building and testing:

Design Assumptions:

| Assumption | How Prototype Tests It |
| --- | --- |
| Policy can be automatically matched to return attributes | Policy Engine accuracy measurement |
| One-click override is sufficient for corrections | Override usage patterns and feedback |
| Similar cases are a useful reference | Practitioner feedback on case display |
| Automatic documentation captures what's needed | QA review of auto-generated records |

Adoption Assumptions:

| Assumption | How Prototype Tests It |
| --- | --- |
| CRM integration feels natural | Practitioner observation and feedback |
| Experienced reps will use optional guidance | Usage patterns by tenure |
| Design feels like help, not surveillance | Satisfaction survey and interviews |
| Training time reduces to hours, not days | New representative onboarding tracking |

Performance Assumptions:

| Assumption | How Prototype Tests It |
| --- | --- |
| Policy lookup in <2 seconds | System performance monitoring |
| Similar case retrieval in <3 seconds | System performance monitoring |
| System handles 50+ concurrent users | Load testing during pilot |

The Prototype Mindset

Not Building the Final System

The prototype is a learning tool, not a deployment. Its purpose is to test assumptions quickly and cheaply. Code quality, scalability, and polish matter less than learning speed.

A prototype that reveals the design is wrong has succeeded. A prototype that hides problems until production has failed.

Fast Iteration, Not Perfection

Prototype development follows short cycles:

  • Build minimum functionality
  • Test with practitioners
  • Learn what works and what doesn't
  • Iterate quickly
  • Repeat until design stabilizes

Speed beats completeness. The goal is validated learning, not comprehensive functionality.

Measuring Before/After

Every design assumption has a baseline measurement from Module 3. Prototype testing creates comparable measurements:

  • Same methodology
  • Same sample sizes
  • Same quality standards

The comparison reveals actual improvement (or its absence). Numbers that don't improve indicate design problems, assumption errors, or implementation issues.
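A sketch of the before/after comparison, assuming the prototype sample is collected with the same time-motion methodology and a comparable sample size to the Module 3 baseline. The field names and structure are placeholders.

```python
from statistics import mean

def compare_to_baseline(baseline_minutes: list, prototype_minutes: list) -> dict:
    """Compare prototype measurements against the baseline sample."""
    before, after = mean(baseline_minutes), mean(prototype_minutes)
    return {
        "baseline_mean_min": round(before, 1),
        "prototype_mean_min": round(after, 1),
        "improvement_pct": round((before - after) / before * 100, 1),
    }
```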

Permission to Fail and Redesign

The prototype may reveal that the design is wrong. This is valuable information, not failure.

When prototype testing reveals design problems:

  • Document what was learned
  • Revise the design
  • Test again

The alternative—proceeding with a flawed design—is more expensive. The prototype's purpose is to fail fast and cheap rather than fail slow and expensive.


Connection to Module 5

Module 5: REALIZE covers:

  1. Technology Selection — Evaluating build vs. buy vs. configure based on blueprint requirements
  2. Prototype Construction — Building minimum viable functionality for testing
  3. Integration Development — Connecting to existing systems per specification
  4. Pilot Testing — Real-world testing with practitioners and measurement against baselines
  5. Iteration — Refining design based on prototype learning
  6. Deployment Preparation — Transitioning from validated prototype to production system

The blueprint is the plan. Module 5 is where the plan meets reality.


End of Module 4B: ORCHESTRATE

The Redesigned Workflow Blueprint is complete.

Proceed to Module 5: REALIZE when ready to build.


Module 4B: ORCHESTRATE — Practice

T — Test

Measuring Workflow Design Quality

A blueprint can be complete and still be wrong. This section covers how to evaluate design quality before building—and how to interpret results after.


Validating the Blueprint

Before declaring the blueprint ready for Module 5, verify four quality gates:

1. Current-State Accuracy

Question: Do practitioners recognize this as their actual work?

The current-state documentation should prompt reactions like "Yes, that's exactly what we do" and "I forgot we had to do that step." If practitioners don't recognize the map, it documents the wrong process.

Validation method: Present current-state workflow to 2-3 practitioners who weren't involved in mapping. Ask them to identify discrepancies.

Pass criteria: No major steps missing or misrepresented. Minor variations are acceptable; fundamental misunderstanding is not.

2. Future-State Clarity

Question: Could someone build from this specification?

The future-state design should be precise enough that a developer unfamiliar with the context could implement it. Ambiguity in specification creates implementation variation that may not match intent.

Validation method: Have someone outside the design team read the future-state section and identify questions they'd need answered to build it.

Pass criteria: Questions are about implementation detail, not about what should happen. If questions are "how should this work?" rather than "how should I build this?", the design isn't clear enough.

3. Human-AI Role Specification

Question: Is it unambiguous who does what at each step?

Every step should clearly specify: Does the system do this, or does the human? If both, what does each contribute? How does override work?

Validation method: Walk through each step and ask "Who is responsible for this action?" If the answer requires interpretation, the specification is incomplete.

Pass criteria: No steps where human-AI responsibility is unclear or context-dependent without explicit guidance.

4. Adoption Considerations

Question: Are adoption barriers addressed in design, not deferred to training?

Adoption elements should be concrete design choices, not "change management will handle this" deferrals. If resistance points are acknowledged but not designed for, adoption is at risk.

Validation method: Review each identified resistance point. For each, identify the specific design element that addresses it.

Pass criteria: Every significant resistance point has a design response. "Train them better" is not a design response.


Design Quality Metrics

Practitioner Validation Score

How thoroughly did practitioner input shape the design?

| Level | Description |
| --- | --- |
| 4 (Excellent) | Multiple practitioners validated; all major concerns addressed in design |
| 3 (Proficient) | Practitioners validated; most concerns addressed; some deferred to prototype |
| 2 (Developing) | Limited validation; concerns noted but not fully addressed |
| 1 (Insufficient) | No practitioner validation or concerns dismissed |

Complexity Comparison

Is the future state simpler than the current state?

| Measure | Current | Future | Direction |
| --- | --- | --- | --- |
| Total steps | [count] | [count] | Should decrease |
| Decision points | [count] | [count] | Should decrease or clarify |
| Systems touched | [count] | [count] | Should decrease |
| Time (typical case) | [minutes] | [minutes] | Should decrease |

For R-01: Current state 8 steps, future state 5-6 steps. Current 14-28 minutes, future 9-14 minutes. Both trends positive.

Step Reduction Analysis

Which steps were eliminated, combined, or automated?

| Change Type | Steps | Example |
| --- | --- | --- |
| Eliminated | [list] | Bible retrieval (no longer needed) |
| Combined | [list] | Assessment + search → Policy review |
| Automated | [list] | Documentation (now derived from actions) |
| Unchanged | [list] | Customer communication |

More eliminated/automated steps suggest stronger design impact. Steps that can't be reduced may indicate design limits.

Decision Point Clarity

At each decision point, does the practitioner know what to do?

| Decision Point | Clear? | If No, What's Missing |
| --- | --- | --- |
| [Decision 1] | Yes/No | [Gap] |
| [Decision 2] | Yes/No | [Gap] |

Every "No" is a design gap to resolve.


Leading Indicators (Before Prototype)

These signals predict implementation success before building begins:

Practitioners Willing to Participate in Testing

If validation participants are willing to test the prototype, they have sufficient confidence in the design. Reluctance to participate signals unresolved concerns.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Pilot volunteers | Multiple eager | Some willing | None willing |

No Unresolved Major Concerns

Major concerns from validation should be addressed in design or explicitly acknowledged as prototype testing questions. Unresolved concerns that aren't acknowledged tend to surface during deployment.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Major concerns | All addressed or acknowledged | Some unaddressed | Many unaddressed |

Blueprint Passes "Could Someone Build This" Test

A developer should be able to implement from the blueprint without design decisions.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Developer review | Ready to build | Questions about design | Needs more design work |

Success Metrics Aligned with ROI Model

The metrics that will evaluate the prototype should match the metrics that justified the investment.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Metric alignment | All baseline metrics have corresponding targets | Most aligned | Metrics disconnected |

Lagging Indicators (After Prototype)

These metrics evaluate design quality once the prototype exists—preview for Module 5:

Adoption Rate vs. Design Assumptions

Did practitioners use the system at the rates the design assumed?

| Metric | Design Assumption | Prototype Result | Gap |
| --- | --- | --- | --- |
| Usage rate | [%] | [%] | Positive/negative |
| Voluntary vs. required | [description] | [actual] | |

Time Savings vs. Projected

Did the workflow actually save time?

| Metric | Baseline | Design Projection | Actual | Accuracy |
| --- | --- | --- | --- | --- |
| Time per task | [min] | [min] | [min] | [%] |

Error Rate vs. Projected

Did errors actually decrease?

| Metric | Baseline | Design Projection | Actual | Accuracy |
| --- | --- | --- | --- | --- |
| Error rate | [%] | [%] | [%] | [%] |

Practitioner Satisfaction

Do practitioners prefer the new workflow?

| Metric | Before | Target | After |
| --- | --- | --- | --- |
| Satisfaction (1-5) | [score] | [score] | [score] |
| Preference (old vs. new) | N/A | New preferred | [actual] |

Red Flags

These signals indicate design problems requiring attention:

Practitioners Won't Validate

If practitioners decline to participate in validation or provide only superficial feedback, something is wrong. Possible causes:

  • Distrust of the design process
  • Fear of reprisal for criticism
  • Prior bad experiences with similar initiatives
  • Design so disconnected from work that feedback seems pointless

Response: Investigate the underlying cause before proceeding.

Too Many Exceptions in Design

If the exception handling pathway dominates the design, the "routine" case may not be as routine as assumed. Exception-heavy designs are complexity-heavy designs.

Response: Re-examine current-state data. Is the exception rate accurate? If yes, the design may need to accept that complexity is inherent to the work.

Complexity Increased Rather Than Decreased

If the future state has more steps, more decisions, or more time than the current state, the design isn't improving the work—it's adding a new layer.

Response: Return to design principles. What was added that doesn't serve practitioners? What can be removed?

Success Metrics Don't Connect to Business Case

If the metrics that will evaluate the prototype don't relate to the metrics that justified investment, success can't be demonstrated even if achieved.

Response: Reconcile metrics. Either adjust blueprint metrics to match business case, or acknowledge that the design addresses different value than originally proposed.


The Design Feedback Loop

Design quality improves through iteration. Module 4's blueprint is a hypothesis; Module 5's prototype tests it.

Prototype Results Inform Design Iteration

What prototype testing reveals:

  • Design assumptions that proved accurate
  • Design assumptions that proved wrong
  • Unexpected friction in the new workflow
  • Unexpected benefits not anticipated

Each finding informs design refinement. The cycle continues until design stabilizes.

Tracking Design Assumption Accuracy

Over multiple projects, track which design assumptions tend to be accurate and which tend to miss:

| Assumption Type | Projects | Accuracy Rate | Pattern |
| --- | --- | --- | --- |
| Time savings | [n] | [%] | [trend] |
| Adoption rate | [n] | [%] | [trend] |
| Error reduction | [n] | [%] | [trend] |

Patterns inform future assumptions. If adoption assumptions consistently miss by 20%, future designs should account for that bias.

Building Organizational Design Capability

Each design project builds capability:

  • Pattern recognition improves
  • Practitioner relationship deepens
  • Estimation accuracy increases
  • Failure patterns become recognizable earlier

The goal is not just a working system but an organization that designs working systems reliably.


Proceed to consolidation exercises.


Module 4B: ORCHESTRATE — Practice

S — Share

Consolidation Exercises

Learning solidifies through reflection, application, and teaching. This section provides exercises for individual reflection, peer engagement, and external teaching.


Reflection Prompts

Complete these individually. Write responses before discussing with others—the act of writing clarifies thinking.

Prompt 1: Help vs. Surveillance

Think of a system you use regularly at work or in life—software, a process, an automated service.

When does it feel like help? Describe a specific moment when the system made something easier without adding burden.

When does it feel like surveillance? Describe a specific moment when the system felt like it was watching or judging rather than assisting.

What's the design difference between those moments?


Prompt 2: Executive-Centered Design

Recall a system or process that seemed designed for management visibility rather than practitioner effectiveness.

What did management want to see? What dashboards, reports, or metrics did the design produce?

What did practitioners need to do their work? How did those needs differ from what the design provided?

What was the adoption experience? Did practitioners embrace, tolerate, or work around the system?


Prompt 3: Your Workarounds

Identify a workaround you currently use—a shadow system, an unofficial shortcut, a process deviation that makes official systems tolerable.

Why does this workaround exist? What does the official system fail to provide?

What would the official system need to change for the workaround to become unnecessary?

What does this workaround tell you about hidden requirements in your work?


Prompt 4: Invisible vs. Intrusive Automation

Think about automation you've experienced—autocomplete, spell-check, recommendation engines, automated routing, scheduling assistants.

When has automation felt invisible? You used it without thinking about it.

When has automation felt intrusive? You noticed it, had to manage it, or worked against it.

What design patterns distinguish the two? Consider timing, accuracy, override ease, and presentation.


Prompt 5: Your Organization's Tendency

Reflect on your organization's approach to system design.

Does your organization tend to design for users or design for visibility? What's the default instinct when new systems are proposed?

Where do dashboards and reports come in the design discussion—early (driving requirements) or late (derived from workflow)?

What would need to change for practitioner-centered design to become the default?


Peer Exercise: Design Critique

Format: Pairs, 45 minutes total

Preparation: Each participant should have their workflow blueprint (or equivalent design document) ready for review.

Exercise Structure:

Phase 1: First Presentation (15 minutes)

Partner A presents their workflow blueprint:

  • 3 minutes: Overview of current-state friction
  • 3 minutes: Future-state design and pattern selection
  • 4 minutes: Human-AI collaboration specification
  • 5 minutes: Partner B asks clarifying questions

Phase 2: First Critique (5 minutes)

Partner B reviews using the design principles checklist:

| Principle | Question | Assessment |
| --- | --- | --- |
| Invisible Automation | Will practitioners notice the system, or just notice easier work? | |
| Adoption Focus | Would practitioners choose this over workarounds? | |
| Simplicity | Is every step necessary? What could be removed? | |
| Practitioner-Centered | Who benefits most from this design—users or observers? | |
| Help, Not Surveillance | Does this feel like assistance or monitoring? | |

Share one strength and one concern.

Phase 3: Role Reversal (15 minutes)

Partner B presents; Partner A reviews.

Phase 4: Debrief Discussion (10 minutes)

Both partners discuss:

  • What feedback surprised you?
  • What would you change based on critique?
  • What design patterns do you see across both blueprints?
  • Where is practitioner-centered design hardest to achieve?

Teach-Back Assignment

Learning deepens when you teach others. This assignment applies Module 4 concepts outside the course context.

Assignment:

  1. Find someone outside this course—a colleague, friend, or family member who uses workplace systems.

  2. Explain the concept of "invisible automation" in accessible terms:

    • The best technology disappears into the work
    • When you notice the system, something has already gone wrong
    • The goal is making work feel easier, not making technology visible
  3. Help them identify a system in their work that feels like surveillance:

    • What data does it capture?
    • Who reviews that data?
    • How does it affect their behavior?
  4. Discuss what would make it feel like help instead:

    • What would the system need to do differently?
    • What data capture could be eliminated?
    • What visibility serves the worker vs. serves observers?
  5. Reflect on the conversation:

    • What made the concept clear to them?
    • What was confusing or required more explanation?
    • What examples did they generate that illustrate the principle?
    • How did teaching deepen your own understanding?

Deliverable: Brief written reflection (1 page) on the teach-back experience and insights gained.


Discussion Questions

For group discussion or individual reflection:

Question 1: The Visibility Tension

Management needs visibility into operations. Practitioners need simplicity in their work. These needs often conflict—visibility typically requires data capture that adds burden.

How can workflow design serve both? Is the conflict inherent, or is it a design failure? What would a system look like that provided visibility without adding practitioner burden?

Consider the Lakewood case: the redesigned system achieved visibility through inference from actions rather than explicit documentation.


Question 2: Legitimate Resistance

Module 4 argues that practitioner resistance is often design feedback rather than change aversion. But sometimes resistance genuinely is just preference for the familiar.

How do you distinguish legitimate design feedback from change resistance? What signals indicate "this design is flawed" versus "this person doesn't like change"?

What's the risk of misdiagnosis in each direction?


Question 3: Advocating for Practitioners

You understand practitioner-centered design, but executives want dashboards. The budget holder wants visibility. The compliance team wants audit trails.

How do you advocate for practitioner-centered design when decision-makers have different priorities? What arguments resonate? What demonstrations help?

How do you frame "design for practitioners" as serving executive interests, not opposing them?


Question 4: The Ethics of Invisible Automation

Invisible automation is presented as a design ideal—practitioners shouldn't notice the system. But there's an ethical dimension: should people know when they're being assisted by AI or automated systems?

When is invisibility appropriate? When should automation be transparent? Does "invisible" conflict with "informed consent" in workplace systems?

What design choices preserve the benefits of invisibility while respecting practitioner awareness?


Question 5: Trust and Design

Module 4 argues that practitioners need to trust the system for adoption to succeed. Trust is earned through design—through accuracy, reliability, and consistent helpfulness.

But trust can also be lost through single failures. A system that's helpful 99% of the time may be distrusted because of the 1% where it failed dramatically.

How do you design for trust? How do you recover trust after failure? What's the role of transparency—showing how the system works—in building trust?


Reflection Summary

After completing Module 4, consider:

What principle from this module will most change your approach to workflow design?

Write one sentence capturing your key takeaway.

What design failure pattern are you most at risk of committing?

Identify your personal tendency—the trap you're most likely to fall into.

What will you do differently in your next workflow design project?

Name one specific action you'll take based on Module 4 learning.


End of Module 4B: ORCHESTRATE

Supporting materials follow.