Module 4

ORCHESTRATE — Designing Human-AI Collaboration

Creating systems where people lead and machines follow


Module 3 taught you to prove value with numbers. This module teaches you to design solutions people will actually use.

The difference between a business case and a working system is design: designing for the person doing the work, not the person reviewing it. That distinction sounds simple. It is violated constantly.

The System Everyone Hated

Carmen Vasquez, chief nursing officer at Lakewood Regional Medical Center, had done everything right. Four months observing how nurses, case managers, and social workers coordinated patient discharges. Shadow systems documented: a breakroom whiteboard tracking pending discharges, a shared spreadsheet compensating for the electronic health record's blind spots, sticky notes bypassing official channels. Her business case was airtight: eight people synchronizing information across six systems, 4.3-hour average discharge time, $1.2 million annually in blocked capacity. Full funding approved.

The system they built looked beautiful from the top. Executive dashboards showed every patient's discharge status in real time. Color-coded indicators flagged delays. Automated alerts fired when patients exceeded target windows. The COO watched the dashboard populate and said, "Finally, we'll be able to see what's actually happening." Eighteen months later, the system sat mostly unused. Nurses had built workarounds to avoid it. The breakroom whiteboard had been officially removed, then unofficially replaced behind a supply closet door. Average discharge time had increased to 5.1 hours.

The problem: every piece of information on that dashboard had to be entered by someone. Nurses estimated 12 to 15 additional minutes of documentation per discharge. Documentation that helped zero patients. It existed to populate dashboards and audit trails. Worse, the system tracked who completed which task, when, how long it took. Practitioners experienced this as surveillance, not support. Decision-making slowed as staff deferred judgment to avoid being questioned. A system built to accelerate discharges had introduced decision paralysis.

The breakthrough came when Maria Santos, the discharge coordinator with twenty-two years of experience, pulled Carmen into a supply closet and showed her the hidden whiteboard. "This is how we actually coordinate," Maria said. The whiteboard was ugly. It generated zero reports. But it showed practitioners what they needed to do their jobs. The dashboard showed executives what they needed to review those jobs. Two different things.

Carmen's summary stuck: "We designed a system to watch work happen. We should have designed a system to help work happen."

The Anchor Principle

Design for the person doing the work, not the person reviewing the work. The best automation is invisible to the people it serves. Practitioners should notice that their work is easier, that information appears when needed, that errors get caught before they cascade. They should never notice screens to navigate, data to enter, workflows to follow.

The test: ask practitioners what technology they use. If they describe systems and interactions, the automation is visible and probably burdensome. If they describe tasks and outcomes, it has become infrastructure.

The Lakewood redesign proved this. The team rebuilt the system around a different question: what do practitioners need to do their work better? The system pulled data from actions already being taken instead of requiring separate entry. Free-text notes replaced rigid dropdown menus. The whiteboard logic was digitized. Six months later, discharge time dropped to 3.8 hours. Documentation burden fell 40%. The executive dashboard still existed. But it was generated from work that was happening, not work that was being documented.

Five Workflow Design Patterns

Every human-AI collaboration follows one of five foundational patterns:

  1. Decision Support: The system recommends; the human decides. Used for judgment calls where context matters.
  2. Automation with Override: The system handles routine cases; humans handle exceptions. Used for high-volume processes with predictable rules.
  3. Preparation: The system assembles context; the human acts on prepared information. Used for research-heavy tasks where gathering information crowds out thinking.
  4. Verification: The human does the work; the system checks for errors. Used for quality control and compliance.
  5. Learning: Humans teach the system through feedback; the system improves over time. A capability layer that can be added to any pattern.
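
The "who decides / who executes" split in each pattern can be made explicit in a design document, or even sketched in code. A minimal illustration (the type and field names are hypothetical, not part of any framework named in this module):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CollaborationPattern:
    name: str
    decides: str   # who makes the final call on a typical case
    executes: str  # who carries out the work
    note: str

PATTERNS = [
    CollaborationPattern("decision_support", "human", "human",
                         "system recommends; human decides"),
    CollaborationPattern("automation_with_override", "system", "system",
                         "humans decide only the exceptions"),
    CollaborationPattern("preparation", "human", "human",
                         "system assembles context before the human acts"),
    CollaborationPattern("verification", "human", "human",
                         "system checks the finished work for errors"),
    CollaborationPattern("learning", "varies", "varies",
                         "feedback layer that can sit on any base pattern"),
]

# In most patterns the human keeps the final call; only
# automation-with-override lets the system decide routine cases.
human_decides = [p.name for p in PATTERNS if p.decides == "human"]
print(human_decides)  # → ['decision_support', 'preparation', 'verification']
```

Writing the split down this explicitly is useful precisely because it forces the question the Lakewood team skipped: for each collaboration point, who actually decides?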

Selecting the right pattern is the first design decision. Implementing it without falling into the surveillance trap is everything after.

Your Deliverable

This module produces a Workflow Blueprint for your highest-ROI opportunity (identified in Module 3). The blueprint documents your current-state workflow as it actually operates, designs a future-state workflow using the appropriate collaboration pattern, and specifies every human-AI collaboration point with clarity about who decides and who executes.

The design principles matter because they determine whether practitioners adopt your system or circumvent it. Carmen Vasquez had a perfect business case. She still lost eighteen months because the design served the wrong audience. Your blueprint is built to avoid that failure from the start.

Module 4A: ORCHESTRATE — Theory

R — Reveal

Case Study: The System Everyone Hated

The discharge planning initiative at Lakewood Regional Medical Center had everything going for it.

Carmen Vasquez, the chief nursing officer, had spent four months conducting a rigorous assessment of the discharge process. She had observed how nurses, case managers, and social workers actually coordinated patient transitions. She had documented the shadow systems: the whiteboard in the breakroom where charge nurses tracked pending discharges, the shared spreadsheet that case managers used because the electronic health record could not show cross-functional status, the sticky notes on computer monitors with direct phone numbers that bypassed the official communication channels.

Her Opportunity Portfolio had identified the central friction: discharge coordination required eight different people to synchronize around information that existed in six different systems. The average patient discharge took 4.3 hours from physician order to actual departure. Patients spent that time waiting in beds needed for incoming admissions. Nurses spent it making follow-up calls instead of providing care. The hospital paid for it in blocked capacity and delayed revenue.

The business case was compelling. Carmen had established baselines through direct observation. She had calculated value across all three lenses: $1.2 million annually in capacity recovery, 340 nursing hours per week returned to patient care, and elimination of the single point of failure represented by the legendary discharge coordinator, Maria Santos, who had been doing this job for twenty-two years and whose retirement loomed eighteen months away.

The executive team approved full funding. The project had visible sponsorship from the CMO and CFO. Implementation began with enthusiasm and adequate resources.

Eighteen months later, the discharge coordination system sat mostly unused. Nurses had developed workarounds to avoid it. Case managers logged the minimum data required and then reverted to their spreadsheet. The whiteboard in the breakroom had been officially removed, then unofficially replaced with a new one behind a supply closet door.

Average discharge time had increased to 5.1 hours.


What the Executives Saw

The system looked beautiful from the top.

The executive dashboard showed every patient's discharge status in real time. Color-coded indicators flagged delays. Automated alerts notified administrators when patients exceeded target discharge windows. Reports generated automatically, showing trends by unit, physician, and day of week. The compliance team could pull audit trails instantly. Quality metrics were visible at a glance.

During the vendor demonstration, the COO had watched the dashboard populate and said, "Finally, we'll be able to see what's actually happening." The CFO had noted the reporting capabilities and observed that manual report compilation, currently consuming two FTEs, could be eliminated.

The system delivered exactly what it promised: visibility. Leadership could see every discharge in progress, every task completed or pending, every bottleneck and delay.

The problem was what the system required to produce that visibility.


What the Practitioners Experienced

Nurse manager Sarah Chen remembered the moment she knew the system would fail.

It was 7:15 AM on the third day after go-live. She had just received shift handoff from the night charge nurse, a conversation that used to take eight minutes and covered fourteen patients with pending or possible discharges. Under the new system, that handoff was supposed to be unnecessary. The system would show everything.

Except it didn't. The system showed data. It didn't show context.

Patient in 412: the system showed "discharge order pending" with a yellow indicator. What it didn't show was that the patient's daughter was driving in from three hours away and the family had requested a 2 PM target, which the day shift had already coordinated informally with pharmacy and transport. The system flagged 412 as a delay risk. Sarah knew 412 was actually ahead of schedule.

Patient in 408: the system showed "discharge complete, awaiting transport." Green indicator. What it could not show was that transport had been called forty minutes ago for a patient who was confused and combative, that the transport aide had returned the patient to the unit after he became agitated in the elevator, and that psych consult was now involved. The system showed a success. Sarah had a crisis.

Patient in 415: the system showed nothing at all. The discharge order had been entered by a covering physician who did not know the patient, had been immediately questioned by the case manager, and was pending attending review. On the old whiteboard, this would have been noted with a question mark. In the new system, it was invisible until the formal discharge pathway was initiated.

"The system shows what happened," Sarah said during the post-implementation review. "It shows nothing about what's happening right now. And it shows even less about what's about to happen."


The Burden of Visibility

The core problem revealed itself within the first week: every piece of information that appeared on the executive dashboard had to be entered by someone.

The previous workflow had evolved over years to minimize documentation during the discharge process itself. The whiteboard required a nurse to write a room number and a one-word status. The case manager spreadsheet auto-populated from the EHR and required perhaps two minutes of updates per patient. Maria Santos kept most of the coordination in her head, making calls and adjustments in real time without stopping to document each interaction.

The new system required comprehensive documentation at every step. Task completion had to be logged. Status changes had to be recorded. Delays had to be explained with reason codes selected from a dropdown menu that never quite matched reality. Every communication about a discharge was supposed to occur through the system so it would appear in the audit trail.

Nurses estimated they spent an additional 12-15 minutes per discharge on documentation. Documentation that added no value to patient care. It existed purely to populate dashboards and reports.

"I'm not taking care of patients," one nurse said during a focus group. "I'm feeding the beast."


The Surveillance Problem

The deeper issue emerged more slowly.

The system tracked everything: who completed which task, when, how long it took. The data was intended for process improvement, but practitioners experienced it differently.

Charge nurses noticed that their delay explanations were being reviewed in weekly management meetings. A nurse who documented "awaiting family arrival" was questioned about why the family hadn't been given an earlier window. A case manager who logged "physician order delayed" was asked to explain which physician and why.

The system had been designed to produce accountability. Practitioners experienced it as surveillance.

"I used to make judgment calls all day," said Diane Adeyemi, a case manager with fifteen years of experience. "Now I'm afraid to make any call that I might have to defend later. So I wait for someone else to make them and then document that I was waiting."

The result was exactly backward: a system designed to accelerate discharges had introduced decision paralysis. Practitioners who previously exercised judgment now deferred to avoid documentation of their reasoning.


The Workaround Economy

Within six weeks, the informal systems had reconstituted themselves.

The whiteboard returned. Relocated, unofficial, and more valuable than ever because it captured what the official system could not. Nurses developed a parallel communication channel through the hospital's internal messaging system, using coded language to coordinate without creating documentable records. Case managers began calling each other directly rather than updating the system, then batch-entering data at the end of their shifts to satisfy compliance requirements.

Maria Santos, whose knowledge was supposed to be captured by the system, became more essential than ever. She was the only person who could translate between what the system showed and what was actually happening. Her retirement, now twelve months away, had become an organizational emergency.

The system's adoption metrics looked reasonable: 78% task completion rate, 82% status accuracy, average documentation compliance above threshold. But these numbers measured data entry, not value. Practitioners were feeding the system enough to avoid scrutiny while doing their real work elsewhere.

The shadow systems had merely been driven underground.


The Moment of Clarity

The breakthrough came from an unexpected source.

Maria Santos cornered Carmen Vasquez in the hallway one Tuesday afternoon. Maria had been notably silent during the implementation. Cooperative but never enthusiastic. Compliant but never engaged. Carmen had attributed this to resistance to change.

"Can I show you something?" Maria asked.

She led Carmen to the breakroom and pulled open the supply closet door. There was the whiteboard, covered with room numbers, names, arrows, and a notation system that made sense only to people who had learned it through years of use.

"This is how we actually coordinate," Maria said. "This is what the system was supposed to replace."

Carmen looked at the whiteboard, then at Maria. "Why didn't you tell us this wouldn't work?"

"I did. During requirements gathering, I explained how we actually discharge patients. I explained the judgment calls, the family coordination, the physician variability, the transport logistics. I explained that most of what I do is anticipate problems before they become problems."

"And?"

"And they said the system would handle all of that. They said I was describing a workaround that should not exist. They said the new system would give me 'structured workflows' so I would not have to keep everything in my head."

Maria paused. "They were partially right. Keeping everything in my head is unsustainable. But they misunderstood what 'everything' meant. They thought I was tracking tasks. I'm tracking relationships, timing, family dynamics, physician preferences, and a hundred variables that do not fit in dropdown menus."

Carmen stared at the whiteboard. "So the system..."

"The system was designed for you. The executives. It shows you what you want to see: status, metrics, compliance. It was designed for observation. We need coordination. We need communication. We need anticipation."

Maria pointed at the whiteboard. "This is ugly. A mess. It generates zero reports or dashboards. But it shows us what we need to know to do our jobs. The new system shows you what you need to know to review our jobs. Those are two different things."


The Redesign

Over the following three months, Carmen led a fundamental redesign of the discharge coordination system. The vendor had delivered exactly what was specified; the specification had been wrong.

The redesign started with a different question: What do practitioners need to do their work better?

The answers reshaped everything:

Visibility became passive. The system pulled data from existing documentation rather than requiring separate entry. Task completion was inferred from actions already being recorded: medication reconciliation, transport requests, equipment orders. Practitioners no longer fed the dashboard. The dashboard assembled itself.
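
Passive visibility of this kind is essentially derived state: the dashboard is computed from events the workflow already produces, rather than fed by separate data entry. A minimal sketch of the idea (event names and statuses are hypothetical, not Lakewood's actual schema):

```python
# Infer a patient's discharge status from events already being
# recorded elsewhere in the workflow (hypothetical event names).
# Milestones are ordered from furthest along to earliest.
DISCHARGE_MILESTONES = [
    ("transport_departed",      "departed"),
    ("transport_requested",     "awaiting transport"),
    ("medication_reconciled",   "meds reconciled"),
    ("discharge_order_entered", "order pending"),
]

def infer_status(events):
    """Return the furthest milestone reached; no manual status entry."""
    seen = {e["type"] for e in events}
    for event_type, status in DISCHARGE_MILESTONES:
        if event_type in seen:
            return status
    return "no discharge activity"

events_412 = [
    {"type": "discharge_order_entered", "at": "09:10"},
    {"type": "medication_reconciled",   "at": "10:45"},
]
print(infer_status(events_412))  # → meds reconciled
```

The practitioner never touches this code path: reconciling medications or requesting transport is the work itself, and the status is a side effect of doing it.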

Status became contextual. Instead of rigid dropdown menus, the system allowed free-text notes visible only to the care team. Patient in 412 could show "family en route, 2 PM target confirmed," context that mattered to coordinators but required no executive review.

Communication happened inside the workflow. The messaging system was integrated directly, allowing practitioners to coordinate without switching applications or creating separate documentation. The audit trail existed, but it captured natural communication rather than requiring structured data entry.

Exception handling replaced exception documenting. When a discharge fell outside normal parameters, the system offered decision support: suggesting contacts, surfacing similar past cases, prompting relevant questions. It guided instead of interrogating.

The whiteboard logic was digitized. Maria worked with the development team to translate her mental model into a visual interface that showed relationships and timing alongside tasks and status. The result looked exactly like an electronic whiteboard and nothing like the original dashboard.

Six months after redesign, average discharge time had dropped to 3.8 hours, better than the original target. Documentation burden had decreased by 40% from the failed implementation. The executive dashboard still existed, still showed status and metrics. But it was generated from work that was happening rather than work that was being documented.

Maria Santos retired on schedule. The knowledge that lived in her head had finally been captured. The capture happened through a workflow that made sense to the people who used it.


The Lesson That Cost Eighteen Months

The discharge coordination system had failed because it was designed for the wrong audience.

The original system was designed to answer executive questions: Where are we in the discharge process? Who is responsible for delays? What does our performance look like?

The redesigned system was designed to answer practitioner questions: What do I need to do next? Who do I need to talk to? What's about to become a problem?

Both are legitimate questions. But the first set can only be answered if the second set is answered first. A system that makes practitioners' work harder will never produce the visibility executives want. If it appears to, the visibility is an illusion built on workarounds and batch data entry and checkbox compliance.

Carmen framed the lesson in a way that stuck with her team: "We designed a system to watch work happen. We should have designed a system to help work happen. The watching would have taken care of itself."

The technology worked fine. The design failed. And the design failed because it started with the wrong audience.

You design for the person doing the work. The person reviewing the work gets their view as a byproduct.

Get that order wrong, and no amount of training, change management, or compliance pressure will save you.


Module 4A: ORCHESTRATE — Theory

O — Observe

Core Principles

The Lakewood Regional case illustrates a principle that applies across every workflow design project: systems designed to watch work will never improve work. You design for the practitioner first. Everything else follows.

This module's anchor principle:

Design for the person doing the work, not the person reviewing the work.

This principle sounds obvious. It is consistently violated. The violation is rarely intentional. It emerges from the reasonable instinct to create visibility, ensure accountability, and measure progress. But these are observer needs, and when observer needs drive design, practitioners experience burden.

The Lakewood discharge system did exactly what it was designed to do: produce dashboards, generate reports, enable oversight. It failed because no one asked what the nurses and case managers needed to do their jobs better. The design served the wrong audience.


The Invisible Automation Principle

The best automation is invisible to the people it serves.

When practitioners notice a system, something has already gone wrong. They should notice that their work is easier, that information appears when needed, that errors are caught before they cascade. They should not notice screens to navigate, data to enter, workflows to follow.

The Visibility Test:

Ask practitioners: "What technology are you using?"

If they can name specific systems and describe their interactions with them, the automation is visible, and probably burdensome. If they describe their work in terms of tasks and outcomes rather than tools and interfaces, the automation has become infrastructure.

Consider the difference:

Visible automation: "I log into the discharge system, update the patient status, enter the reason code, notify the downstream team through the message center, and then check back in thirty minutes to see if they've acknowledged."

Invisible automation: "I update the chart, and everyone who needs to know gets notified. If something's going to be a problem, the system flags it before it becomes one."

The same underlying technology can produce either experience depending on design.

When Visibility Becomes Burden:

Carmen Vasquez's original discharge system added 12-15 minutes of documentation per patient. None of this documentation helped practitioners coordinate better. It existed to populate dashboards and audit trails. The information was valuable to administrators; the documentation burden fell on nurses.

Invisible automation would have captured the same information from actions already being taken: medication reconciliation, transport requests, equipment orders. The dashboard would exist, but practitioners would simply do their work, and the system would observe.

The Paradox of Invisible Value:

The invisibility principle creates a communication challenge. How do you demonstrate value from something no one notices?

The answer is in outcomes. Practitioners notice that discharges are smoother, that information appears when needed, that problems get flagged before they escalate. The system's value is measured in the work it enables.


Design for Adoption, Not Perfection

Elegance without adoption is waste.

The 80% solution that gets adopted beats the 100% solution that sits unused. Adoption is a design requirement, a constraint as real as any technical specification.

The Adoption Hierarchy:

  1. Useful: Does the design solve a real problem practitioners have?
  2. Usable: Can practitioners accomplish their goals without friction?
  3. Findable: Can practitioners discover what they need when they need it?
  4. Tolerable: Does the design avoid creating new burdens?
  5. Adoptable: Will practitioners actually use it in their real work?

Each level depends on the levels below. A perfectly useful design that creates intolerable burden will not be adopted. A usable design that doesn't solve a real problem will be abandoned.

Building for Real Humans:

Real humans take shortcuts. They skip optional fields. They batch-enter data at end of day. They communicate through back channels when official channels are slow. They develop workarounds for edge cases the system cannot handle.

Designs that treat this behavior as compliance failure will fail. Designs that anticipate this behavior and work with it will succeed.

The Lakewood nurses developed workaround systems within six weeks. This was adaptation to design failure. The workarounds represented requirements that the official system failed to meet. Reading workarounds as design feedback, rather than discipline problems, would have surfaced the issues months earlier.

The Perfection Trap:

Complex designs fail more often than simple ones. Every additional feature is a potential point of friction. Every edge case handled in the system is complexity practitioners must navigate.

The discipline is ruthless prioritization: Which features are essential? Which can wait? Which should never exist?

A system that handles 80% of cases smoothly and requires human intervention for 20% is better than a system that handles 95% of cases with added complexity for everyone. The cost of the 80% solution's exceptions is lower than the cost of the 95% solution's universal friction.
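
With illustrative numbers (invented for this sketch, not drawn from the case), the trade-off can be made concrete:

```python
cases = 1000

# 80% solution: a simple path for most cases, manual handling of exceptions.
simple_minutes = 2      # per-case burden on the simple path
exception_minutes = 20  # per-case burden of manual exception handling
cost_80 = 800 * simple_minutes + 200 * exception_minutes

# 95% solution: more cases automated, but every case pays for the
# added complexity that broader coverage requires.
complex_minutes = 6
cost_95 = 950 * complex_minutes + 50 * exception_minutes

print(cost_80, cost_95)  # → 5600 6700
```

Under these assumptions, the 80% solution costs less in total even though it handles far fewer cases automatically, because the universal friction of the complex path is paid by everyone, on every case.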


The Simplicity Imperative

Every added step must earn its place. Complexity is the enemy of adoption.

The "One More Field" Problem:

Systems accumulate friction through reasonable requests.

Someone needs a new data point. It's just one more field. Practitioners can fill it in while they're already in the form. The marginal burden is small.

Multiply this by years of operation, and you have forms with forty fields, workflows with eighteen steps, processes that take twenty minutes for what used to take five.

Each addition was justified. The aggregate is unbearable. And removing fields is harder than adding them because every field has a stakeholder who needs that data.

The simplicity imperative requires a different approach: fields must justify their existence against the friction they create, and that justification must be renewed regularly. "We've always captured that" is not justification. "Someone might need it" is not justification. "What specific decision does this enable, and is that decision worth the burden?" is the right question.

Complexity Compounds:

Complexity in one area creates complexity elsewhere.

A workflow with eighteen steps requires training materials. It requires exception handling for each step. It requires audit processes to verify compliance. It requires maintenance as business rules change. It requires support resources when practitioners get confused.

A workflow with six steps requires less of all of these. The gap widens over time as the complex system accumulates technical debt, workarounds, and institutional frustration.

Elegant Solutions Survive Contact with Reality:

Complex solutions break under pressure. When volume spikes, when staff is short, when exceptions multiply, complex workflows degrade first. Practitioners skip steps, batch work, take shortcuts. The system's design assumes conditions that evaporate under stress.

Simple workflows bend without breaking. Fewer steps means fewer opportunities for degradation. Clear logic means easier recovery when something goes wrong. Simplicity is a form of resilience.


Practitioner-Centered Design

Design for the person doing the work, not the person reviewing the work.

This principle bears restating because it is violated so frequently and so unconsciously. Executive stakeholders fund projects. Executive stakeholders approve designs. Executive stakeholders evaluate success. Their needs naturally shape decisions unless the design process deliberately resists this gravity.

What Executives Want vs. What Practitioners Need:

Executives Want              Practitioners Need
Status visibility            Context for decisions
Performance metrics          Tools to perform
Audit trails                 Smooth workflows
Compliance documentation     Error prevention
Exception reports            Exception handling
Trend analysis               Current information

Both columns contain legitimate needs. The error is prioritizing the left column in design and expecting the right column to follow. It never does.

A system designed for executive needs requires practitioners to document their work for observation. A system designed for practitioner needs produces executive visibility as a byproduct of work already being done.

The Surveillance Trap:

When practitioners experience a system as monitoring rather than helping, behavior changes in ways designers never intended.

At Lakewood, practitioners began avoiding documentation that might require justification. Decision-making slowed as staff deferred rather than risked being questioned. The audit trail became an accountability threat rather than a quality tool.

Surveillance produces defensive behavior: covering tracks, avoiding documentation, deferring decisions. These are rational responses to a perceived threat. The system became an adversary rather than a tool.

Serving Both Audiences:

Practitioner-centered design serves executive needs by sequencing them correctly.

First: What do practitioners need to do their work better? Design for that.

Second: What visibility do executives need? Derive it from practitioner actions without adding burden.

The Lakewood redesign followed this sequence. Practitioners got contextual status displays, integrated communication, and decision support. Executives got their dashboard, populated from practitioner actions rather than separate documentation.

Both audiences were served. The order of priority made the difference.

Interactive Exercise

Two Views of the Same Floor

The executive dashboard view: 4.2h average discharge, 94% compliance, 2 delays.

Patient 412: Discharge order pending
Patient 408: Discharge complete
Patient 305: Awaiting transport
Patient 219: In progress
Patient 601: On track

Five patients. Two delays. Everything else on track. This is the view that approved the system.


Help, Not Surveillance

Automation should feel like assistance, not monitoring.

The same functionality can feel like either, depending on design. The difference is in who the system serves and how practitioners experience its presence.

Assistance feels like:

  • Information appearing when needed
  • Problems being flagged before they escalate
  • Routine work being handled automatically
  • Context being assembled for complex decisions
  • Communication being routed to the right people

Surveillance feels like:

  • Data entry required for observation
  • Actions being tracked for review
  • Exceptions requiring justification
  • Performance being measured for comparison
  • Delays being documented for accountability

Note that assistance and surveillance can involve identical underlying data. The difference is in purpose and presentation. An alert that says "Patient 412 may need pharmacy follow-up" feels like help. An alert that says "Discharge delayed in your unit, please document reason" feels like surveillance.
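
The same alert record can be rendered either way; the difference is purely in purpose and phrasing. A sketch of the contrast (all field names and message wording are hypothetical):

```python
# One alert record, two framings.
alert = {
    "patient": "412",
    "suggested_action": "pharmacy follow-up",
    "minutes_over_target": 35,
}

def as_assistance(a):
    # Oriented toward the next useful action, addressed to the care team.
    return f"Patient {a['patient']} may need {a['suggested_action']}."

def as_surveillance(a):
    # Oriented toward accountability: demands an explanation of a delay.
    return (f"Discharge for patient {a['patient']} is "
            f"{a['minutes_over_target']} min past target. "
            "Please document the reason.")

print(as_assistance(alert))   # → Patient 412 may need pharmacy follow-up.
print(as_surveillance(alert))
```

Identical data, opposite experiences: the first message helps the practitioner act; the second asks the practitioner to defend.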

Trust as Design Requirement:

Practitioners need to trust that the system is on their side. This trust is earned through design, not assertion.

Systems earn trust by:

  • Reducing burden consistently
  • Providing accurate information
  • Flagging real problems (not generating false alerts)
  • Supporting decisions rather than second-guessing them
  • Protecting practitioners from error rather than documenting errors for review

Systems lose trust by:

  • Adding work without clear benefit
  • Providing unreliable information
  • Generating alert fatigue through false positives
  • Creating accountability exposure
  • Being used for performance evaluation without consent

Trust, once lost, is difficult to rebuild. Practitioners who have experienced surveillance will interpret even helpful features as monitoring. Design must earn trust from the first interaction and maintain it consistently.


The Principle in Practice

The Lakewood case demonstrates these principles in action, first through violation, then through correction.

The original system violated every principle:

  • Invisible automation: The system was visible and intrusive, requiring 12-15 minutes of documentation per discharge
  • Design for adoption: The 78% compliance rate masked workaround systems that did the real work
  • Simplicity imperative: Reason codes, status updates, and message threading added complexity that served dashboards, never practitioners
  • Practitioner-centered design: The system answered executive questions while making practitioner work harder
  • Help over surveillance: Staff experienced the system as monitoring, creating defensive documentation and decision paralysis

The redesigned system embodied each principle:

  • Invisible automation: Data captured from actions already being taken; practitioners simply worked
  • Design for adoption: The whiteboard logic was digitized, honoring how practitioners actually worked
  • Simplicity imperative: Free-text context replaced structured reason codes; integration replaced separate documentation
  • Practitioner-centered design: The design started with practitioner needs; executive visibility derived from practitioner actions
  • Help over surveillance: Decision support replaced exception reporting; the system flagged problems and guided resolution

The technology was largely the same. The design philosophy was opposite. The outcomes were transformative.

Design for the person doing the work. The person reviewing the work gets their view as a byproduct.


Module 4A: ORCHESTRATE — Theory

O — Observe

Workflow Patterns for Human-AI Collaboration

Every workflow design problem has been solved before, usually multiple times, in different contexts, by practitioners who discovered what works through trial and error. These solutions cluster into recognizable patterns.

A workflow pattern is a reusable template for how humans and intelligent systems collaborate. Each pattern defines who decides, who acts, and how information flows between them. Selecting the right pattern is the first design decision; implementing it well is everything that follows.

This section introduces five foundational patterns. Most workflows are either a single pattern or a combination of two or three.


Pattern 1: Decision Support

The Logic: AI provides recommendation; human decides.

In decision support workflows, the system assembles information, analyzes options, and suggests action. The human reviews the recommendation, applies judgment, and makes the final call. The system augments human capability while human authority remains intact.

When to Use:

  • Judgment calls where context matters
  • Exceptions that require human interpretation
  • Customer-facing decisions where accountability is personal
  • Situations where multiple valid options exist
  • High-stakes choices that warrant deliberation

Design Considerations:

The central challenge is presenting recommendations without creating compliance pressure. If practitioners feel they must justify deviations from system recommendations, the pattern becomes automation in disguise. Practitioners rubber-stamp suggestions to avoid documentation burden.

Good decision support designs:

  • Present recommendations as one option, not the option
  • Show the reasoning behind recommendations so humans can evaluate
  • Make disagreement easy: one click, no explanation required
  • Track when humans override and why (optional), but keep this voluntary
  • Learn from human decisions over time without penalizing deviation

Example Application:

A bank's credit decision workflow presents loan applications with a system recommendation: approve, decline, or refer for review. The system shows its reasoning: credit score in this range, debt-to-income ratio at this level, similar applications had this outcome rate.

The loan officer reviews, applies contextual knowledge (the applicant's employer just announced expansion; the debt is from medical emergency, now resolved), and decides. The system records the decision. Over time, patterns in human override contribute to model refinement. The human decision is final and requires no justification.
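The override ergonomics described above can be sketched in a few lines. This is an illustrative sketch, not any vendor's API; the names (`Recommendation`, `record_decision`) and the field values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # the suggested call, e.g. "approve"
    reasons: list        # rationale shown to the human so they can evaluate it

def record_decision(rec: Recommendation, human_action: str, note: str = "") -> dict:
    """One call whether the human agrees or not. The note is voluntary;
    disagreement never requires a justification form."""
    return {
        "final": human_action,                      # the human decision is final
        "recommended": rec.action,
        "overridden": human_action != rec.action,   # tracked for model refinement only
        "note": note,
    }

rec = Recommendation("decline", ["DTI above threshold", "similar cases: 62% adverse"])
# The officer knows the debt was a resolved medical emergency and approves.
decision = record_decision(rec, "approve")
```

Note that accepting and overriding are the same single call: the path of disagreement is no longer than the path of agreement.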



Pattern 2: Automation with Override

The Logic: AI handles routine cases; human handles exceptions.

In automation with override, the system processes the common cases autonomously while flagging exceptions for human attention. The human's role shifts from processing everything to handling what the system cannot: edge cases, ambiguities, and situations that require judgment.

When to Use:

  • High-volume processes with predictable rules
  • Situations where most cases are routine but some require judgment
  • Workflows where speed matters for the routine and accuracy matters for exceptions
  • Processes where human time is better spent on complex cases

Design Considerations:

The critical design decision is the override mechanism. If overriding automation is difficult (buried in menus, requiring documentation, subject to review), practitioners will accept bad automated decisions rather than fight the system. The path of least resistance must be correction.

Good automation with override designs:

  • Make override as easy as acceptance: one click, one step
  • Show what the automation did and why before asking for approval
  • Allow batch override when patterns of error emerge
  • Treat high override frequency as a calibration signal, never a performance problem
  • Feed overrides back to improve automation logic

Example Application:

An insurance claims workflow auto-adjudicates routine claims: those within coverage limits, matching standard diagnosis codes, from verified providers. These are paid without human review.

Complex claims (those with unusual codes, high dollar amounts, or provider flags) route to adjusters. The adjuster sees what the system would have done and can accept, modify, or reject. Modification is simple: change the amount, add a note, process. No form, no justification, no workflow.

Over time, the system learns from adjuster modifications. A diagnosis code that consistently gets modified has its auto-adjudication rule adjusted. The automation improves; the adjuster's time focuses on genuinely complex cases.
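A minimal sketch of the routing and calibration logic described above. The rules, thresholds, and field names (`route_claim`, `override_rate`, `STANDARD_CODES`) are all invented for illustration, not real adjudication logic:

```python
STANDARD_CODES = {"J10", "M54", "Z00"}   # hypothetical diagnosis codes

def route_claim(claim: dict, limit: float = 5000.0) -> str:
    """Auto-adjudicate routine claims; everything else goes to an adjuster."""
    routine = (
        claim["amount"] <= limit
        and claim["provider_verified"]
        and claim["code"] in STANDARD_CODES
    )
    return "auto_pay" if routine else "adjuster_queue"

def override_rate(decisions: list) -> float:
    """High override frequency is a calibration signal for the automation,
    never a performance metric for the adjuster."""
    if not decisions:
        return 0.0
    overrides = sum(1 for d in decisions if d["adjuster_action"] != d["system_action"])
    return overrides / len(decisions)
```

The second function is the feedback half of the pattern: a rising override rate points at an auto-adjudication rule that needs adjusting, not at an adjuster who needs coaching.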


Pattern 3: Preparation

The Logic: AI assembles context; human acts on prepared information.

In preparation workflows, the system's role is research and synthesis: gathering information from multiple sources, organizing it for human consumption, and surfacing what's relevant to the task at hand. The human arrives at a decision point with context already assembled, reducing cognitive load and improving decision quality.

When to Use:

  • Research-heavy tasks where information is scattered
  • Complex decisions requiring multi-source synthesis
  • Situations where time spent gathering information crowds out time spent thinking
  • Workflows where practitioners are expert decision-makers but inefficient researchers

Design Considerations:

The preparation pattern requires understanding what practitioners need to know, and equally important, what they can safely ignore. Over-preparation is as problematic as under-preparation. A system that surfaces everything surfaces nothing.

Good preparation designs:

  • Present information in priority order, not chronological or alphabetical
  • Surface the unusual; flag what's different about this case
  • Allow drill-down for detail without requiring it
  • Adapt to practitioner preferences over time
  • Make the preparation editable so practitioners can add context the system missed

Example Application:

Before a physician sees a patient, the system prepares a clinical summary: relevant history, recent lab trends, current medications, outstanding orders, and flags for potential interactions or concerns. The physician reviews for thirty seconds rather than searching for five minutes.

Critically, the preparation is curated, highlighting what's changed since last visit, what's abnormal in recent results, what's relevant to today's chief complaint. The physician can click into any area for detail but doesn't wade through information that doesn't matter for this encounter.
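The curation rule (priority order, not chronological) can be made concrete. A sketch with hypothetical item keys (`abnormal`, `changed`); real clinical prioritization would be far richer:

```python
def prepare_summary(items: list) -> list:
    """Order by priority: abnormal results first, then anything changed
    since the last visit, routine items last."""
    def priority(item):
        # False sorts before True, so abnormal/changed items come first.
        return (not item.get("abnormal", False), not item.get("changed", False))
    return sorted(items, key=priority)
```

Everything stays available for drill-down; the sort only decides what the practitioner sees first.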

R-01 Application:

The Returns Bible integration from earlier modules maps primarily to the Preparation pattern. The system's role is to prepare return policy information: surfacing the relevant policy, showing prior similar cases, flagging exceptions. The customer service representative reviews that preparation, makes the decision, and handles the customer. The system prepares; the human acts.


Pattern 4: Verification

The Logic: Human initiates; AI checks for errors or omissions.

In verification workflows, the human performs the work and the system reviews it, catching what humans miss: errors, inconsistencies, compliance gaps, forgotten steps.

When to Use:

  • Quality control for human-performed work
  • Compliance checking before submission
  • Risk identification in complex processes
  • Error detection in high-stakes decisions

Design Considerations:

Verification workflows walk a fine line between help and surveillance. When done well, they feel like a safety net, a second set of eyes that catches errors before they become problems. When done poorly, they feel like automated criticism of human judgment.

Good verification designs:

  • Verify before submission; catch errors while they're still correctable
  • Flag issues specifically and actionably: "Section 3 is missing required disclosure," never "errors detected"
  • Distinguish between errors (must fix) and warnings (should review)
  • Guard against alert fatigue; if everything is flagged, nothing is flagged
  • Focus on prevention, never blame; the purpose is catching errors, not documenting them for review

Example Application:

A legal document system checks contracts before sending for signature. It verifies that all required clauses are present, that dates are consistent, that party names match throughout, and that negotiated terms are within authorized limits.

The attorney reviews flagged issues, corrects genuine errors, and clears false positives. The system learns which flags the attorney consistently overrides and adjusts its sensitivity. Over time, the verification becomes more precise, catching real issues while ignoring non-issues.
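The error/warning distinction is simple to encode. A sketch assuming hypothetical document fields; the specific checks are invented for illustration:

```python
def verify(document: dict) -> dict:
    """Check before submission, while errors are still correctable.
    Findings are specific and actionable, split into errors (must fix)
    and warnings (should review)."""
    errors, warnings = [], []
    if "disclosure" not in document.get("sections", []):
        errors.append("Section 3 is missing required disclosure")
    if document.get("term_months", 0) > 60:
        warnings.append("Term exceeds 60 months: confirm authorization level")
    return {"errors": errors, "warnings": warnings, "blocking": bool(errors)}
```

Only errors block; warnings inform. Collapsing the two categories is how alert fatigue starts.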


Pattern 5: Learning

The Logic: Human teaches AI through feedback; AI improves over time.

In learning workflows, the system's performance improves through human input. This is less a separate workflow category than a capability layer that applies to other patterns. Any pattern can incorporate learning to adapt to local context and evolve with changing requirements.

When to Use:

  • Processes with tacit knowledge that's hard to specify upfront
  • Situations where rules evolve based on experience
  • Contexts where local variation matters
  • Workflows where initial automation can't capture all relevant factors

Design Considerations:

Learning requires feedback, and feedback requires effort. The design challenge is capturing meaningful input without adding burden. The worst outcome is a learning system that doesn't learn because practitioners skip the feedback mechanisms.

Good learning designs:

  • Capture feedback as a byproduct of natural workflow, not a separate step
  • Learn from what practitioners do, not just what they say
  • Distinguish between "the system was wrong" (training data) and "this case is unusual" (exception)
  • Show practitioners how their feedback improved the system; close the loop
  • Allow local adaptation without requiring central model retraining

Example Application:

A content moderation system flags potentially problematic posts for human review. Moderators review and decide: remove, keep, or escalate.

Each decision is training data. Posts that moderators consistently keep despite system flags suggest over-sensitivity. Posts that moderators consistently remove despite system approval suggest under-sensitivity. The model adapts, becoming more aligned with human judgment over time.

Critically, the adaptation is visible. Moderators see "You've helped improve accuracy by 12% this quarter," feedback on their feedback that motivates continued engagement with the learning loop.
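The calibration loop can be sketched as a threshold nudge driven by decisions moderators already make, with no separate feedback step. Field names and the step size are illustrative:

```python
def adjust_threshold(threshold: float, reviews: list, step: float = 0.01) -> float:
    """Nudge the flagging threshold from ordinary moderation decisions.
    Feedback is a byproduct of the work, not an extra task."""
    for r in reviews:
        if r["flagged"] and r["action"] == "keep":
            threshold += step        # over-sensitive: flag less
        elif not r["flagged"] and r["action"] == "remove":
            threshold -= step        # under-sensitive: flag more
    return round(min(max(threshold, 0.0), 1.0), 4)
```

A production system would adapt the model itself rather than a single scalar, but the shape is the same: disagreement between flag and decision is the training signal.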


Selecting the Right Pattern

Pattern selection starts with understanding the work:

If the work requires...                       Consider...
Human judgment on system-prepared options     Decision Support
Handling volume with exceptions               Automation with Override
Research before action                        Preparation
Quality assurance on human work               Verification
Continuous improvement from experience        Learning (added to any pattern)

Decision Framework:

  1. Who knows best? If human judgment is essential, use Decision Support. If system rules cover most cases, use Automation with Override.

  2. Where is the burden? If gathering information is the burden, use Preparation. If checking work is the burden, use Verification.

  3. What improves over time? If the process should adapt, add Learning to whatever pattern fits.

  4. What's the cost of errors? High-cost errors favor human-primary patterns (Decision Support, Verification). Low-cost, high-volume contexts favor automation-primary patterns (Automation with Override).
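The first three questions can be encoded directly. This sketch just makes the selection logic explicit; the inputs are judgment calls, not measured quantities, and the function name is hypothetical:

```python
def select_patterns(human_judgment_essential: bool,
                    burden: str,       # "gathering", "checking", or neither
                    adaptive: bool) -> list:
    """Map the decision-framework questions to candidate patterns."""
    patterns = []
    # 1. Who knows best?
    patterns.append("Decision Support" if human_judgment_essential
                    else "Automation with Override")
    # 2. Where is the burden?
    if burden == "gathering":
        patterns.append("Preparation")
    elif burden == "checking":
        patterns.append("Verification")
    # 3. What improves over time?
    if adaptive:
        patterns.append("Learning")
    return patterns
```

The output is a candidate combination, not a verdict; the fourth question (cost of errors) still weights the choice between human-primary and automation-primary patterns.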

The R-01 Pattern:

The Returns Bible integration (R-01) uses the Preparation pattern primarily:

  • System prepares: Surfaces relevant return policy, shows prior similar cases, flags exceptions
  • Human acts: Representative reviews preparation, makes decision, handles customer
  • Outcome: Reduced search time, consistent policy application, decision authority remains with human

A Learning component could be added: when representatives override system-surfaced policy (marking "this case was different because..."), those exceptions feed back to improve future preparation.


Combining Patterns

Complex workflows often combine patterns at different stages:

Sequential Combination:

Preparation → Decision Support → Verification

A loan underwriting workflow might: (1) prepare by assembling applicant information, (2) support the decision by recommending approval/denial with rationale, and (3) verify the final package before submission.

Parallel Combination:

A healthcare workflow might run Preparation (assembling patient context) and Verification (checking for drug interactions) simultaneously, both completing before the physician acts.

Nested Combination:

The main workflow follows one pattern; specific steps within it follow another. A customer service workflow might follow Decision Support overall, but each decision point involves Preparation of relevant information.


Pattern Anti-Patterns

Each pattern has common misapplications:

Decision Support misused as rubber-stamp automation:

When deviation requires justification, decision support becomes compliance pressure. The human's "choice" is illusory.

Automation with Override misused as exception documentation:

When overrides require forms and explanations, practitioners accept bad automation rather than fight the system. Error correction becomes burden.

Preparation misused as information overload:

When preparation surfaces everything, nothing is surfaced. Practitioners drown in data rather than acting on insight.

Verification misused as surveillance:

When verification documents human error for review rather than catching error for correction, it becomes a threat rather than a tool.

Learning misused as training burden:

When learning requires explicit feedback on every transaction, it adds friction without corresponding improvement.


Selecting patterns is the first design decision. The second is implementing them without falling into these traps. The following sections address design failures and implementation methodology.



Design Failures: How Workflow Designs Go Wrong

Good intentions produce bad workflows with remarkable consistency. The failures follow patterns, recognizable shapes that repeat across industries, organizations, and technology generations. Learning to see these patterns is the first step toward avoiding them.

This section catalogs seven common failure modes. Each one seemed reasonable to someone at the time. Each one produces predictable dysfunction.


1. The Executive Dashboard Trap

The Pattern:

Design begins with a question: "What do we want to see on the dashboard?"

The answer shapes everything that follows. Workflows are designed to produce data points. Processes are structured around metrics. Features are added to enable reporting.

The dashboard looks beautiful. It shows status, trends, exceptions, performance. Executives can finally see what's happening.

What they cannot see: the burden created to produce that visibility.

How It Manifests:

At Lakewood Regional, the discharge coordination system required practitioners to document status changes, log communications, and enter reason codes, all to populate a dashboard that executives reviewed weekly. The 12-15 minutes per discharge was the cost of visibility.

The dashboard showed what executives wanted: discharge status by unit, delays by category, performance trends by shift. It could never show the workarounds that practitioners developed to minimize documentation burden. The parallel whiteboard. The back-channel communications. The batch data entry at shift end.

The Tell:

You're in the executive dashboard trap when:

  • Design discussions focus on "what do we want to see" before "what do practitioners need"
  • Features are justified by reporting value rather than workflow improvement
  • Data entry exists primarily to create records, not to support decisions
  • Practitioners spend significant time documenting work rather than doing work

The Escape:

Ask a different first question: "What data would practitioners capture naturally if we removed all reporting requirements?"

Design for that. Then derive executive visibility from practitioner actions without adding burden. The dashboard becomes a view into work, not a destination that work must reach.
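Deriving the dashboard from work rather than from data entry looks like this in miniature: status comes from events the workflow already emits. A sketch with illustrative field names:

```python
def dashboard_view(events: list) -> dict:
    """Current status per patient, derived from events the workflow already
    produces (orders placed, meds dispensed, transport booked), not from
    separate status entry."""
    status = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        status[e["patient"]] = e["status"]    # latest event wins
    return status
```

No practitioner types anything for the dashboard's sake; the view is a read-only projection of actions already taken.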


2. The Compliance Theater Pattern

The Pattern:

Workflows designed to prove work was done rather than to help do work.

The system accumulates checkboxes, approvals, attestations, and documentation steps. They exist to create evidence, never to improve outcomes. If something goes wrong, the organization can demonstrate that process was followed.

Compliance theater optimizes for defensibility rather than effectiveness.

How It Manifests:

A pharmaceutical company's quality system requires 47 signatures to release a batch of medication. Each signature attests that a step was completed correctly. In theory, this creates accountability. In practice, signers are attesting to work they didn't observe, in areas they don't understand, at a scale that makes verification impossible.

The signatures distribute blame. When something goes wrong, the investigation follows the signature chain looking for who failed to catch the problem. The actual root cause, whether process design, equipment limitation, or training gap, is obscured by focus on documentation.

The Tell:

You're in compliance theater when:

  • Documentation steps outnumber work steps
  • Practitioners describe processes in terms of what to sign, not what to do
  • The same information is documented in multiple places "for the record"
  • Exception handling requires more documentation than routine processing
  • Audit preparation is a major operational burden

The Escape:

Distinguish between compliance requirements and compliance assumptions. What does regulation actually require? Often less than organizations assume. Regulatory frameworks typically require that controls exist and work. Documenting every transaction from every angle is usually organizational habit, not regulatory mandate.

Build compliance into workflow design rather than on top of it. A well-designed process creates compliance evidence as a byproduct of doing the work, not as a separate documentation layer.
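Compliance-as-byproduct can be sketched as a decorator that records evidence from the action itself. The names (`audited`, `release_batch`, `AUDIT_LOG`) are hypothetical, and a real system would write to durable storage:

```python
import functools
from datetime import datetime, timezone

AUDIT_LOG = []   # evidence accumulates as a side effect of doing the work

def audited(fn):
    """Record who did what, when, from the action itself rather than a
    separate attestation step."""
    @functools.wraps(fn)
    def wrapper(user, *args, **kwargs):
        result = fn(user, *args, **kwargs)
        AUDIT_LOG.append({"user": user, "action": fn.__name__,
                          "at": datetime.now(timezone.utc).isoformat()})
        return result
    return wrapper

@audited
def release_batch(user: str, batch_id: str) -> str:
    # Stand-in for the real work; the audit record is created automatically.
    return f"released {batch_id}"
```

The signer attests by acting: the record proves the step happened because the step could not happen without producing the record.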


3. The Exception Obsession

The Pattern:

Designing the entire workflow around edge cases.

Someone raises a scenario: "What if the customer wants to return an item they bought three years ago?" The workflow is modified to handle it. Another scenario: "What if the approval authority is on vacation?" More modification. Repeat until the 10% of exceptions drive the experience for the 90% of routine cases.

How It Manifests:

A procurement system was designed to handle complex, multi-department purchases with competing budget authorities. Every purchase, including $50 office supplies, flows through the same approval matrix, stakeholder notification, and documentation requirements.

The designers were solving real problems. Large purchases genuinely needed cross-functional coordination. But by applying the same solution to all purchases, they transformed routine transactions into bureaucratic exercises. Employees began hoarding office supplies to avoid the procurement system, or using personal cards and expensing later: workarounds that created different problems.

The Tell:

You're in exception obsession when:

  • Simple tasks require multiple steps "in case" of complexity
  • Practitioners ask "why do I need to do this?" and the answer is an edge case they've never encountered
  • The same workflow handles radically different transaction types
  • Process documentation is longer than anyone reads because it covers every possibility

The Escape:

Design two paths: a fast path for the 90% and an exception path for the 10%.

The fast path should be ruthlessly simple: minimum steps, minimum fields, minimum documentation. Exceptions route to a different flow with appropriate complexity.

The discipline is resisting the urge to merge paths "for consistency." Consistency that makes routine work harder is a liability.


4. The "They'll Get Used to It" Fallacy

The Pattern:

Assuming training solves design problems.

The workflow is clunky, the interface is confusing, and the steps don't match how work actually happens. But practitioners will adapt. They'll learn the system. Initial complaints will fade. Training investment will smooth the transition.

Sometimes this is true. More often, practitioners adapt by building workarounds that circumvent the design, creating parallel systems that eventually become the real workflow.

How It Manifests:

At Lakewood, the discharge system's complexity was dismissed as a training problem. Nurses would learn the workflow. Case managers would internalize the status codes. Resistance was change management, not design feedback.

Six months later, adoption metrics showed 78% compliance, respectable by most standards. But compliance meant data entry, never value creation. Practitioners entered minimum required information, then coordinated through their whiteboard and back-channels. They had "gotten used to" the system by reducing their interaction with it to the minimum necessary to avoid scrutiny.

The Tell:

You're in the fallacy when:

  • Launch plans allocate more time to training than to design iteration
  • Post-launch feedback is categorized as "needs more training"
  • Adoption metrics measure usage rather than value
  • "Power users" are defined by ability to navigate complexity rather than by outcomes achieved
  • Workarounds emerge within weeks of launch and persist indefinitely

The Escape:

Treat workarounds as design feedback, not discipline problems.

If practitioners find ways around the system, the system is failing them. The productive question: "What is the workaround telling us about unmet needs?"

Design should iterate until the official path is easier than the workaround. If that target proves unreachable, the design is wrong.


5. The Feature Accumulation Problem

The Pattern:

Workflows gain complexity over time through accumulation of reasonable requests.

No single feature breaks the system. Each addition is justified by a real need. But accumulated friction compounds until the workflow is significantly harder than when it started, and no one can point to the moment it happened.

How It Manifests:

A customer onboarding workflow launched with seven fields and a 3-minute completion time. Over two years:

  • Legal added a terms-of-service acknowledgment
  • Marketing added opt-in checkboxes for three communication channels
  • Compliance added identity verification questions
  • Product added feature preference selections for personalization
  • Support added emergency contact fields
  • Analytics added source tracking parameters

Each addition was approved independently. Each served a legitimate purpose. The workflow now has 34 fields and takes 12 minutes. Abandonment rates have tripled. No one owns the aggregate experience.

The Tell:

You're accumulating features when:

  • No one can explain when a field was added or why
  • Field removal requires multi-stakeholder negotiation
  • "Required" fields include data that's never used
  • Completion rates have degraded gradually without clear cause
  • New-hire onboarding includes learning workarounds for unnecessary steps

The Escape:

Implement a friction budget: every workflow has a complexity allocation. Adding a new step requires removing an existing one, or making a business case for budget expansion.

Conduct regular field audits: for each captured data point, identify who uses it, for what decision, and what happens if it's not available. Fields that can't answer these questions are candidates for removal.

Assign workflow owners responsible for aggregate experience, not just individual features.
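The field audit can be expressed as a one-line filter. A sketch with hypothetical keys (`used_by`, `decision`): any field that cannot name a user and a decision it informs is a removal candidate:

```python
def audit_fields(fields: list) -> list:
    """Return names of fields that cannot name both a user and a decision
    they inform; these are candidates for removal."""
    return [f["name"] for f in fields
            if not (f.get("used_by") and f.get("decision"))]
```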


6. The Automation Island

The Pattern:

Automating one step without considering the workflow it sits within.

The automated step works perfectly. It's faster, more accurate, more consistent. But it creates new handoff friction with adjacent steps, new format requirements for upstream processes, new interpretation challenges for downstream consumers.

The island of automation is surrounded by seas of new manual work.

How It Manifests:

A company automated invoice processing with impressive results: invoices were scanned, data was extracted, and entries were created in the accounting system in minutes rather than days.

But the automation was an island:

  • Upstream, vendors had to submit invoices in specific formats, creating friction that offset buyer efficiency gains
  • Downstream, extracted data required validation against purchase orders, which were still managed manually
  • Laterally, the automated entries didn't match the format expected by the month-end reconciliation process, requiring manual translation

The invoice processing step was faster. The end-to-end invoice lifecycle was barely improved because time was redistributed rather than eliminated.

How It Manifests at Lakewood:

The original discharge system automated status tracking beautifully. Every status change was logged, timestamped, and attributed. But the automation was an island. Upstream, practitioners had to enter data the system could not capture itself. Downstream, the status data failed to integrate with transport scheduling, pharmacy dispensing, or patient education: the adjacent processes that actually required coordination.

The Tell:

You're building automation islands when:

  • Automation metrics show improvement, but end-to-end metrics don't
  • Manual steps appear immediately before and after automated steps
  • Format conversion or data translation is required at integration points
  • Different parts of the workflow use different systems that don't communicate
  • "Handoff" appears frequently in process descriptions

The Escape:

Map the end-to-end workflow before automating any step. Identify integration points. Design automation that receives inputs naturally from upstream and produces outputs usable downstream without conversion.

Sometimes the right answer is not to automate a step in isolation but to wait until adjacent steps can be addressed together.


7. Ignoring the Informal System

The Pattern:

Designing without understanding existing workarounds.

Every organization has shadow systems. The spreadsheets, the sticky notes, the tribal knowledge, the back-channel communications that make official systems tolerable. These informal systems exist because formal systems fail to meet practitioner needs.

Ignoring them means ignoring requirements. Destroying them means destroying functionality.

How It Manifests:

At Lakewood, Maria Santos had spent twenty-two years building an informal coordination system: relationships with physicians, pattern recognition for discharge complications, shortcuts for common scenarios, workarounds for system limitations. This knowledge lived in her head and expressed itself through the whiteboard, the phone calls, the hallway conversations.

The new system was designed to replace this informal infrastructure with formal process. It succeeded in destroying the whiteboard. It failed to capture what the whiteboard represented: contextual, relational, adaptive coordination that couldn't be reduced to status codes and reason menus.

The Returns Bible as Informal System:

The Returns Bible from earlier modules is itself an informal system, a workaround that developed because formal systems failed to provide needed information. A design that replaces the Returns Bible without understanding why it emerged will repeat the dysfunction that created it.

Good design honors what practitioners have built. The informal system represents accumulated learning about what the work actually requires. Ignoring it discards organizational knowledge; honoring it accelerates design.

The Tell:

You're ignoring informal systems when:

  • Requirements gathering focuses on official process documentation
  • Key practitioners haven't been observed doing actual work
  • Shadow systems are described as "compliance problems" rather than requirements
  • Design assumes information lives where official records say it should
  • Launch plans include "retiring" unofficial tools without replacing their function

The Escape:

Map informal systems with the same rigor as formal ones. What spreadsheets exist? What tribal knowledge is essential? What workarounds have become standard practice?

Then design to absorb their function, not merely replace their form. The new system should be easier than the workaround. If it falls short, the workaround will persist, or be driven underground.


Recognizing Patterns in Your Own Design

These failure modes are easier to recognize in others' work than in your own. A few diagnostic questions:

For Executive Dashboard Trap:

  • Who is the primary beneficiary of this workflow? The person doing the work or the person reviewing it?
  • What percentage of steps exist to create visibility versus to accomplish the task?

For Compliance Theater:

  • What would happen if we removed half the documentation? What real risk would emerge?
  • Could an auditor distinguish between documented compliance and actual compliance?

For Exception Obsession:

  • What percentage of transactions use the full workflow complexity?
  • What would the workflow look like if we designed only for the common case?

For "They'll Get Used to It":

  • Are we solving resistance with training or iteration?
  • What workarounds have emerged, and what do they tell us?

For Feature Accumulation:

  • When was the last time we removed a step or field?
  • Who owns the aggregate user experience?

For Automation Island:

  • What manual work exists immediately upstream and downstream?
  • Do end-to-end metrics improve, or just step metrics?

For Ignoring Informal Systems:

  • What shadow systems exist, and what function do they serve?
  • Have we observed work, or only interviewed about it?

These questions surface failure modes early enough to correct course.


Module 4A: ORCHESTRATE — Theory

O — Observe

Technology Agnosticism: Why This Course Doesn't Teach Platforms

This course does not teach specific AI platforms, automation tools, or software systems. This is deliberate.

Technology changes. Principles endure. A workflow designed well can be implemented in multiple tools. A workflow designed poorly will fail regardless of tool sophistication. The value is in the design, not the implementation.


The Approach Matters More Than the Platform

Consider two organizations implementing the same capability: automated document processing.

Organization A selected a leading AI platform based on vendor demonstrations. Implementation followed the platform's recommended workflow. Training covered the platform's features. Success was measured in platform adoption metrics.

Organization B started differently. They mapped how documents actually flowed through their organization: who touched them, what decisions were made, what information was extracted, where friction existed. They designed a future-state workflow on paper: what happens at each step, who decides, what information flows where. Only then did they evaluate platforms against their design.

Organization A's platform worked well. Documents were processed. AI capabilities were impressive. But the workflow replicated existing dysfunction with better technology. The fundamental design questions (who decides, when, with what information) were never examined.

Organization B's implementation was messier. Their design requirements didn't perfectly match any platform. They had to configure, customize, and in some places compromise. But the end result addressed their actual workflow needs, not their platform's preferred workflow.

Two years later, Organization A was considering a platform switch, a major undertaking with significant cost and disruption. Organization B had already migrated to a different platform with minimal friction: their design documentation specified what they needed, and the new platform met those specifications.

The lesson: Platform selection is a downstream decision. Design comes first.


The Tool Selection Trap

The most common failure pattern in workflow automation starts with a question: "Which AI should we use?"

This question seems practical. Budget decisions require vendor selection. Timelines require technology commitments. Stakeholders want to see demos.

But the question puts technology before design. And when technology comes first, the design follows the tool's assumptions rather than the organization's needs.

Vendor-Driven Design:

Vendors demonstrate capabilities. Impressive capabilities. The AI can read documents, extract data, make recommendations, automate decisions. Demo scenarios show transformation.

But demo scenarios are selected to highlight platform strengths. Real workflows have different shapes: edge cases the platform handles awkwardly, integration requirements the demo architecture never anticipated, practitioner needs outside the platform's value proposition.

When design follows vendor demonstration, the workflow is shaped by platform capabilities rather than organizational requirements. The platform works; the workflow doesn't.

The RFP That Should Have Been a Design Session:

Organizations issue RFPs specifying technology requirements: AI classification accuracy above X%, processing speed of Y documents per hour, integration with Z systems. Vendors respond. Selection committees compare.

The problem: these requirements describe technology capabilities, not workflow outcomes. An AI with 98% classification accuracy still needs a workflow design for the 2% it misses. Processing speed means nothing if the workflow creates downstream bottlenecks. System integration is an implementation detail, not a design specification.

The RFP process assumes the organization knows what it needs at a technology level. Usually, it knows what it needs at a workflow level and needs help translating that into technology requirements.

The solution: Design the workflow first. Document what happens at each step, who decides, what information is needed, what outcomes matter. Then translate that design into technology requirements. Then evaluate vendors against those requirements.


Design First, Then Select

The workflow blueprint is tool-agnostic.

A blueprint specifies:

  • What happens at each step
  • Who (human or system) performs each action
  • What information flows between steps
  • What decisions are made and by whom
  • What exceptions exist and how they're handled
  • What outcomes indicate success

A blueprint does not specify:

  • Which platform executes the workflow
  • What API calls are made
  • Which data model stores information
  • What user interface presents options

This separation is valuable because:

Designs survive technology changes. The Lakewood discharge workflow will still need coordination, status visibility, and exception handling regardless of what platform implements it. A design document focused on these needs remains valid through technology migrations.

Evaluation becomes objective. With a design in hand, platform evaluation is straightforward: can this tool implement this design? What compromises are required? Which tool requires the fewest compromises? These are answerable questions.

Organizational knowledge is preserved. The design represents understanding of how work should happen. This understanding belongs to the organization, not to a vendor relationship. When platforms change, the design persists.

What the Blueprint Must Specify:

  • Workflow structure: Steps, sequences, branches, exceptions
  • Decision points: Who decides, with what information, under what criteria
  • Information requirements: What data is needed at each step, from what sources
  • Human-AI collaboration pattern: Which pattern applies (Decision Support, Automation with Override, Preparation, Verification, Learning)
  • Success criteria: What outcomes indicate the workflow is working
  • Adoption requirements: What must be true for practitioners to use the workflow

What the Blueprint Leaves Open:

  • Platform selection
  • Technical architecture
  • Implementation sequence
  • Vendor relationships
  • Specific feature usage
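Because the blueprint is tool-agnostic, it can be captured in plain structured form. A minimal sketch in Python (the class and field names are illustrative assumptions, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    description: str      # what is being decided
    decider: str          # "human" or "system"
    inputs: list          # information required to decide
    criteria: str         # decision rule or guideline

@dataclass
class Step:
    name: str
    actor: str            # who (human or system) performs the action
    information_in: list  # data needed at this step
    information_out: list # data produced for downstream steps
    exceptions: list = field(default_factory=list)

@dataclass
class Blueprint:
    steps: list
    decision_points: list
    collaboration_pattern: str   # e.g. "Decision Support", "Preparation"
    success_criteria: list
    adoption_requirements: list
    # Deliberately absent: platform, API calls, data model, UI.
    # Those are implementation choices made later, against this design.
```

Note what is absent: no platform names, no API calls, no storage model. Those decisions are evaluated afterward, against this structure.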

Build vs. Buy vs. Configure

With a design in hand, implementation options clarify.

Configure:

Most workflows can be implemented by configuring existing systems. ERP workflows, CRM automation, document management rules: these are configuration exercises within tools the organization already owns.

Configuration is appropriate when:

  • Existing platforms support the required workflow pattern
  • Integration requirements align with platform capabilities
  • The workflow doesn't require AI capabilities beyond platform offerings
  • Speed of implementation matters more than custom optimization

Buy:

Specialized tools exist for many workflow categories. If the design reveals requirements that existing platforms cannot meet, purchasing a purpose-built tool may be appropriate.

Buying is appropriate when:

  • The workflow is common enough that mature solutions exist
  • Configuration of existing platforms would require extensive customization
  • Vendor maintenance is preferable to internal development
  • The workflow category is outside organizational core competency

Build:

Custom development is necessary when no existing tool meets design requirements and the workflow is central enough to justify investment.

Building is appropriate when:

  • The workflow represents competitive differentiation
  • Integration requirements are complex and organization-specific
  • The design reveals requirements that no existing platform addresses
  • Long-term flexibility is more valuable than speed of implementation

Decision Framework:

  • Does the existing platform support the pattern? Yes → Configure. No → consider Purchase or Build.
  • Do mature solutions exist? Yes → evaluate Purchase. No → consider Build.
  • Is this a core competency? No → Purchase. Yes → consider Build.
  • Does speed matter most? Yes → Configure. Sometimes → Purchase. No → Build remains viable.

The framework guides initial direction, not final decision. Detailed evaluation follows from the design specification.
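The framework's first-pass logic can be sketched as a function. This is a hedged illustration of the heuristic, not a decision procedure; the boolean inputs and return labels are assumptions:

```python
def initial_direction(platform_supports_pattern: bool,
                      mature_solutions_exist: bool,
                      is_core_competency: bool,
                      speed_matters_most: bool) -> str:
    """First-pass direction from the configure/buy/build framework.

    A sketch of the framework's guidance only; detailed evaluation
    against the design specification still follows.
    """
    if platform_supports_pattern:
        return "configure"   # existing tools can express the design
    if mature_solutions_exist and not is_core_competency:
        return "buy"         # let a vendor maintain a common workflow
    if is_core_competency and not speed_matters_most:
        return "build"       # differentiation justifies custom work
    # No clean fit: default to buying if mature tools exist,
    # building otherwise, and revisit after detailed evaluation.
    return "buy" if mature_solutions_exist else "build"
```

A platform that already supports the pattern short-circuits everything else, mirroring the framework's first question.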


Future-Proofing Through Abstraction

Designs that depend on specific platform features are fragile. Designs that specify needs abstractly survive platform changes.

Platform-Dependent Design:

"The system uses Vendor X's AI classification API to route documents, storing results in Vendor Y's database with notifications through Vendor Z's messaging system."

This design is bound to three vendors. Changing any one requires rework. The design is a technical specification masquerading as workflow documentation.

Platform-Independent Design:

"Documents are classified by category and routed to appropriate handlers. Classification decisions are stored for audit. Handlers are notified when documents require attention."

This design could be implemented with any capable platform. Vendors can be evaluated against these requirements. Changing platforms means re-implementing the same design, not redesigning the workflow.
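The difference can be made concrete with an abstraction boundary. In the sketch below (all names hypothetical), the workflow logic depends only on an abstract classification capability; only the small adapter class would change in a platform migration:

```python
from typing import Protocol

class DocumentClassifier(Protocol):
    """Abstract capability: any vendor that can classify documents fits."""
    def classify(self, document: str) -> str: ...

def route_document(document: str, classifier: DocumentClassifier,
                   handlers: dict, audit_log: list) -> str:
    """The platform-independent design: classify, store the decision
    for audit, and route to a handler. No vendor API appears here."""
    category = classifier.classify(document)
    audit_log.append({"document": document, "category": category})
    # Unrecognized categories route to an exception queue for human review.
    return handlers.get(category, "exception-queue")

class KeywordClassifier:
    """Stand-in vendor adapter (hypothetical, for illustration).
    Swapping platforms means replacing only this adapter."""
    def classify(self, document: str) -> str:
        return "invoice" if "invoice" in document.lower() else "other"
```

The routing logic, audit requirement, and exception path are the design; the adapter is the vendor relationship.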

What to Document for Migration:

  • Workflow logic: decision rules, routing criteria, exception handling
  • Information requirements: what data is needed, in what format, from what sources
  • Integration points: where the workflow connects to other systems (abstractly)
  • Performance requirements: speed, volume, accuracy thresholds
  • Success metrics: what outcomes indicate the workflow works

With this documentation, an organization can:

  • Evaluate new platforms against documented requirements
  • Implement the same design in different technology
  • Preserve organizational learning through technology transitions
  • Avoid vendor lock-in at the design level

The R-01 Example

The Returns Bible integration (R-01) from earlier modules illustrates technology-agnostic design.

What R-01 Requires Functionally:

  1. When a customer service representative handles a return, relevant policy information should be surfaced automatically
  2. The system should identify the return type, customer history, and applicable policies
  3. Representatives should see recommended actions without searching
  4. Exceptions should be flagged for human judgment
  5. Decisions should be captured for learning and audit

Multiple Implementation Paths:

ERP Configuration: Many ERP systems support this through custom fields, business rules, and workflow configuration. Policy logic is encoded in the ERP's rule engine. Representatives see policy recommendations in their existing interface.

Standalone Tool: A purpose-built returns management system could provide this capability with specialized features for return processing, policy management, and analytics.

Custom Build: An integration layer could connect the existing customer service system to a policy database, with custom logic for surfacing recommendations. This provides maximum flexibility at higher development cost.

AI-Enhanced Approach: Any of the above could be enhanced with AI for policy interpretation, exception prediction, or learning from representative decisions.

The Design Is the Same:

Regardless of implementation path, the workflow is identical:

  • Customer initiates return
  • System prepares policy information (Preparation pattern)
  • Representative reviews and decides (Decision Support pattern)
  • Decision is executed and captured
  • Exceptions route to supervisors

The technology differs; the design remains constant. R-01's value is captured in the design. Implementation is a separate decision made against that design.
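The constant design can be sketched independently of any platform. A minimal illustration (function and field names are assumptions; `policy_lookup` stands in for whichever system surfaces policy information):

```python
def handle_return(return_request: dict, policy_lookup, log: list) -> dict:
    """R-01 workflow sketch: the same steps regardless of platform."""
    # 1. Customer initiates return (the request arrives).
    # 2. Preparation pattern: the system surfaces policy information
    #    automatically, so the representative never searches.
    policy = policy_lookup(return_request["return_type"])
    # 3. Decision Support pattern: the representative decides;
    #    the system only flags exceptions for supervisor routing.
    is_exception = policy is None or return_request.get("flagged", False)
    decision = {
        "route_to": "supervisor" if is_exception else "representative",
        "recommended_policy": policy,
    }
    # 4-5. The decision is executed and captured for learning and audit.
    log.append({"request": return_request, "decision": decision})
    return decision
```

Backing `policy_lookup` with an ERP rule engine, a standalone returns tool, or a custom integration layer changes the implementation, not the design.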


Why This Course Is Tool-Agnostic

This course teaches:

  • How to assess current-state workflows
  • How to calculate value and build business cases
  • How to design future-state workflows
  • How to prototype and test
  • How to implement and measure

None of these require specific technology knowledge. All of them produce artifacts that guide technology decisions without being bound to them.

Practitioners who complete this course will be able to:

  • Evaluate any platform against their design requirements
  • Implement their designs in whatever technology their organization uses
  • Migrate designs across platforms when circumstances change
  • Distinguish vendor claims from organizational needs

Platform-specific training has its place, but it comes after design. This course provides the design capability that makes technology decisions intelligible.



Adoption as Design Outcome

Adoption is a design problem.

If practitioners avoid the system, the design failed. This is the first truth about adoption, and it is consistently denied.

Organizations explain low adoption as resistance to change, inadequate training, cultural barriers, or insufficient executive sponsorship. These explanations locate the problem in people rather than design. They lead to interventions (more training, more communication, more pressure) that address symptoms while ignoring causes.

The design-centered view is simpler and more actionable: if practitioners find workarounds, the official system isn't serving them. The workaround is the feedback. The response is iteration, not enforcement.


Adoption Is a Design Metric

A system that practitioners don't use is a failed system, regardless of its technical capability.

This seems obvious but has radical implications. It means:

User adoption is a design specification, not a post-launch hope.

The blueprint must specify adoption requirements: What must be true for practitioners to use this workflow? What friction is acceptable? What competing alternatives must be displaced?

If these questions go unanswered during design, they'll be answered during implementation, usually by practitioners voting with their behavior.

Low adoption is design feedback, not user failure.

When adoption lags, the instinct is to push harder: more training, more reminders, more accountability. These interventions assume the design is correct and the users are wrong.

The alternative view: low adoption reveals design gaps. The design promised something practitioners don't experience. The value proposition isn't landing. The friction exceeds the benefit.

This reframe transforms low adoption from a problem to solve (push users) into information to use (improve design).

"Won't adopt" vs. "Can't adopt":

Resistance has two sources:

Won't adopt: The practitioner can use the system but chooses not to. This may be rational (the system makes their work harder) or irrational (change aversion, status quo bias). Design improvements address the former; change management addresses the latter.

Can't adopt: The practitioner lacks something required: skills, time, resources, access, clarity about how the system fits their work. These are design or implementation failures.

Most "resistance" is can't masquerading as won't. The practitioner appears resistant when actually they're blocked by friction the designer didn't anticipate.


The Workaround Signal

Workarounds are the most valuable design feedback available. They reveal what the official system doesn't provide.

Workarounds as Requirements:

When a nurse creates a whiteboard to track discharge status, she is expressing a requirement. The requirement: contextual, visual, flexible status tracking that the official system lacks.

When a sales representative maintains a personal spreadsheet alongside the CRM, she is compensating for information the CRM fails to capture or present usefully.

When a warehouse worker annotates pick tickets with handwritten notes, he is adding context the system lacks.

Each workaround is a requirement. The question is whether the designer will read it.

The Returns Bible Was a Workaround:

The Returns Bible from earlier modules is itself a workaround. It emerged because official systems failed to provide return policy information in a usable form. Someone, likely "Patricia," compiled the knowledge into a document because no other source met practitioner needs.

Understanding the Returns Bible as workaround reframes the R-01 opportunity. The goal is to absorb its function into a system that serves the same need better. A design that ignores why the Returns Bible emerged will repeat the dysfunction that created it.

Reading Workarounds:

For each workaround discovered during assessment, ask:

  • What need does this serve that official systems don't?
  • What information does this provide that practitioners can't get elsewhere?
  • What friction does this eliminate that official processes create?
  • What would happen if this workaround disappeared?

The answers are requirements. The workaround is a prototype solution built by practitioners who understand the work better than the system designers did.


Designing for Real Behavior

Humans take shortcuts. They skip optional fields. They batch work. They find easier paths.

Designs that fight this behavior fail. Designs that accommodate it succeed.

How People Actually Work:

Official processes describe how work should happen. Actual work happens differently:

  • Steps are skipped when they seem unnecessary
  • Information is entered at shift end, not in real-time
  • Fields marked "optional" are never completed
  • Communications happen through convenient channels, not official ones
  • Exceptions are handled through judgment, not documented procedures

These adaptations represent efficiency. Practitioners discover what actually matters through experience and shed what does not.

Building for Shortcuts:

The design question is direct: "How do we make the shortcut the right path?"

If practitioners will batch data entry, design for batch entry. If practitioners will skip optional fields, make required fields rare and meaningful. If practitioners will use back-channels, integrate those channels into the workflow.

The goal is alignment between the easiest path and the correct path. When these diverge, practitioners follow the easy path, and the design fails.

Removing Friction Rather Than Adding Enforcement:

Low-adoption systems often trigger enforcement responses: mandatory fields, required acknowledgments, audit of compliance, performance metrics tied to usage.

Enforcement can increase compliance metrics while decreasing actual value. Practitioners enter data to satisfy requirements rather than to support their work. The system captures inputs without producing outcomes.

The alternative is friction reduction: make the official path easier than the workaround. If the system truly serves practitioner needs better than alternatives, adoption follows. Otherwise, enforcement creates only resentful compliance.


The Adoption Curve

Not all practitioners adopt at the same rate. Design must account for this variation.

The Standard Curve:

Early adopters (10-15%): Embrace new systems quickly, often before they're fully ready. They tolerate friction because they're attracted to novelty and improvement potential. Their feedback is valuable but unrepresentative. They'll work around problems that would block others.

Mainstream (60-70%): Adopt when the system works reliably for their common cases. They need the easy path to be genuinely easy. Their adoption indicates design readiness.

Resisters (15-25%): Adopt last, if ever. Some resistance is irrational: change aversion, status quo preference, sunk cost in existing skills. But some resistance reflects legitimate concerns that others fail to articulate.

Designing for the Middle:

The design target is the mainstream, not the extremes.

Designing for early adopters produces systems that work for the technically adventurous but frustrate everyone else. These designs generate initial excitement and subsequent disappointment.

Designing against resisters produces systems optimized for edge cases that make routine work harder. These designs placate the skeptics while annoying everyone else.

The mainstream has different needs: reliable core functionality, clear value proposition, minimal friction for common cases, graceful handling of exceptions. Nail these, and the curve takes care of itself. Early adopters are already on board, and some resisters will follow the mainstream.

Listening to Resisters:

Though design targets the mainstream, resister feedback merits attention. Resisters often articulate problems that others feel but leave unspoken:

  • "This takes longer than the old way" may reveal friction invisible in demo scenarios
  • "I can't trust the system's recommendations" may reveal accuracy issues others haven't noticed
  • "This doesn't fit how we actually work" may reveal design assumptions that don't hold

The discipline is distinguishing signal from noise: which resistance reflects design problems, and which reflects change aversion? The distinction matters because the responses are opposite. Iterate the design, or stay the course with better communication.


Measuring Adoption Meaningfully

Standard adoption metrics (logins, transactions processed, features used) measure activity, never value. Better metrics reveal design quality.

Usage Metrics That Reveal Design Quality:

Voluntary usage: For non-mandatory features, what percentage of eligible users engage? High voluntary usage suggests genuine value. Low voluntary usage despite availability suggests features that don't serve user needs.

Full-path completion: Do users complete workflows, or do they abandon partway? Abandonment patterns reveal friction points, specific steps where the design fails.

Workaround frequency: How often do users employ alternatives to official systems? Tracking workarounds (not to punish, but to learn) reveals unmet needs.

Return rate: Do users who try the system continue using it? High trial with low retention suggests the value proposition isn't sustained through actual use.
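Three of these metrics can be computed from a simple event log. A sketch, assuming each event is a dict with `user` and `action` keys (an invented schema, for illustration):

```python
def adoption_metrics(events: list) -> dict:
    """Derive design-quality signals from workflow usage events.
    Actions are assumed to be "start", "complete", or "workaround"."""
    starts = [e["user"] for e in events if e["action"] == "start"]
    completes = [e["user"] for e in events if e["action"] == "complete"]
    workarounds = [e for e in events if e["action"] == "workaround"]
    return {
        # Full-path completion: of workflows begun, how many finish?
        "completion_rate": len(completes) / len(starts) if starts else 0.0,
        # Return rate: users who started the workflow more than once.
        "returning_users": sum(1 for u in set(starts) if starts.count(u) > 1),
        # Workaround frequency: alternatives used per workflow started.
        "workarounds_per_start": len(workarounds) / len(starts) if starts else 0.0,
    }
```

A low completion rate or a high workaround rate points at specific friction, which is more actionable than a raw login count.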

Time-to-Competency:

How long does it take a new user to reach proficient performance?

Complex designs have long competency curves: weeks or months of reduced productivity before users can work effectively. Simple designs have short curves: days to reach competent performance.

Time-to-competency is a design metric. Long curves indicate excessive complexity. The design asks too much of users.

Practitioner Satisfaction vs. Compliance Rates:

Compliance measures whether practitioners use the system. Satisfaction measures whether using the system makes their work better.

High compliance with low satisfaction is a warning sign: practitioners are complying because they must, not because the system serves them. This pattern indicates enforcement success and design failure.

The goal is high compliance driven by high satisfaction. Practitioners use the system because it genuinely helps them work.


When Low Adoption Is the Right Answer

Sometimes the workflow is wrong.

Design iteration assumes the workflow concept is correct and the implementation needs refinement. But sometimes the concept is flawed. The workflow solves the wrong problem, addresses imaginary needs, or creates more friction than it eliminates.

How to Distinguish Design Failure from Change Resistance:

  • Workarounds. Design failure: workarounds recreate capability the system lacks. Change resistance: workarounds replicate old habits without functional advantage.
  • Feedback. Design failure: practitioners articulate specific unmet needs. Change resistance: practitioners express vague preference for the old way.
  • Early adopters. Design failure: early adopters struggle with the same issues as the mainstream. Change resistance: early adopters succeed while the mainstream struggles.
  • Improvement attempts. Design failure: changes don't improve adoption. Change resistance: changes improve adoption incrementally.
  • Comparative behavior. Design failure: practitioners work harder to avoid the system. Change resistance: practitioners work harder initially but adapt.

The Courage to Redesign:

When evidence indicates design failure rather than change resistance, the professional response is redesign.

This requires courage. Redesign admits failure. It writes off investment. It delays promised outcomes. It may threaten careers of those who championed the original design.

But enforcing a failed design is worse. It consumes organizational energy. It damages practitioner trust. It creates compliance without value. The longer enforcement continues, the more expensive the eventual redesign.

Using Adoption Data to Iterate:

Whether the diagnosis is design failure or implementation refinement, adoption data guides response:

  • Workaround patterns reveal missing requirements
  • Abandonment points reveal friction locations
  • Satisfaction surveys reveal value perception gaps
  • Competency curves reveal complexity excess

Each data point suggests a design hypothesis. Iteration tests hypotheses against improved adoption.


Connection to Module 5

The workflow blueprint's adoption assumptions become testable in the prototype phase.

Module 4 produces a design with embedded predictions: practitioners will use this workflow because it serves their needs better than alternatives. The path is easier. The friction is lower. The value is clear.

Module 5 tests these predictions. Prototyping reveals whether the design's assumptions hold. Early practitioner interaction generates feedback before full implementation commits resources.

The blueprint is finished when it's validated, when practitioners have confirmed that the design serves their needs.

Adoption is designed in, tested through prototyping, and measured throughout operation. The blueprint is a hypothesis. Module 5 begins the experiment.



Module 4B: ORCHESTRATE — Practice


A systematic methodology for designing workflows that practitioners will actually use


Why This Module Exists

Module 4A established the theory: design for the person doing the work, not the person reviewing the work. The Lakewood case demonstrated how well-founded initiatives fail when workflows serve executive needs before practitioner needs. The principles are clear; the question is how to apply them.

This module provides the methodology to translate principles into designs.

The Workflow Blueprint is a design document, a structured way to map current work, identify friction, select collaboration patterns, and create future-state workflows that practitioners recognize as improvement. Every methodology step in this module has been tested against the failure patterns catalogued in Module 4A: the executive dashboard trap, compliance theater, exception obsession, and the rest.


What You Will Learn

By the end of Module 4B, you will be able to:

  1. Map current-state workflows with practitioner input, capturing reality rather than documentation
  2. Select appropriate workflow patterns for human-AI collaboration
  3. Design future-state workflows that reduce friction rather than shifting it
  4. Specify human-AI collaboration points with clarity about who decides and who executes
  5. Document workflows in blueprint format that developers and operators can use
  6. Validate designs with practitioners before committing development resources

The Practitioner's Challenge

Good designs look obvious in retrospect. The challenge is seeing practitioner experience from the inside before committing to a solution.

A systems analyst described the difficulty: "I've designed maybe a dozen workflow automation projects. The ones that failed all had something in common: I understood the process perfectly but didn't understand the work. I could draw a flowchart of how tasks moved through the system. I couldn't feel what it was like to do those tasks under pressure, with incomplete information, while handling three other things.

"The successful projects started differently. I sat with practitioners, just watching. I noticed what made them sigh, what made them reach for workarounds, what they did automatically that the official process missed. The design emerged from that observation, from watching the actual work."

This module teaches that observation-first approach. The methodology prioritizes practitioner experience over system elegance. An adoptable design that captures 80% of the value beats an elegant design that practitioners avoid.


What You're Receiving as Input

Module 4B builds on work completed in Modules 2 and 3:

From Module 2, Opportunity Audit:

  • Process observation notes from field assessment
  • Waste pattern analysis with root causes
  • Friction points identified and quantified
  • Understanding of workarounds and shadow systems

From Module 3, ROI Model:

  • Baseline metrics for priority opportunity
  • Quantified value across Time, Throughput, and Focus lenses
  • Business case with success criteria
  • Assumption documentation

The R-01 Example:

Throughout Module 4B, we continue with the Returns Bible integration (R-01) from earlier modules. The opportunity has been assessed, valued, and approved:

  • Annual Value: $97,516
  • Implementation Cost: $35,000
  • Payback Period: 4.2 months
  • ROI (three-year): 736%
  • Priority Rank: 1 of 5

R-01 becomes the worked example for every methodology step. You will see how assessment findings and ROI calculations transform into a workflow design that addresses the specific friction identified.


Field Note: The Design That Felt Like Help

A practitioner described the moment a workflow design succeeded:

"They had redesigned our returns process three times before. Each time, the new system was supposed to make things easier. Each time, it added steps: data entry, reason codes, supervisor approvals. The systems got more sophisticated and the work got harder.

"The fourth design was different. The team spent two days just watching us work. They asked questions like 'What do you wish you knew automatically?' and 'Where do you have to stop and look something up?' They didn't ask what features we wanted.

"The system they built felt invisible. I'd pull up a return, and the policy information was already there. No searching. If something was unusual, the system flagged it and suggested who could help. I never entered a reason code because the system inferred reasons from what I was already doing.

"I had no idea how much it helped until someone asked me about the new system. I had to think about it. I'd stopped noticing it was there. That's when I knew the design had worked."


Module Structure

Module 4B follows the ROOTS framework:

  • R, REVEAL: This introduction
  • O, OBSERVE: The blueprint methodology overview
  • O, OPERATE: Six-step process for workflow design
    • Current-state mapping
    • Future-state design
    • Practitioner validation
    • Blueprint documentation
    • Transition preparation
  • T, TEST: Quality metrics for design evaluation
  • S, SHARE: Reflection prompts, peer exercises, and discussion questions

Supporting materials include:

  • Reading list with academic and practitioner sources
  • Slide deck outline for presentation
  • Assessment questions with model answers
  • Instructor notes for facilitation

The Deliverable

Module 4B produces the Redesigned Workflow Blueprint, the fourth artifact in the A.C.O.R.N. cycle.

A complete Workflow Blueprint includes:

  • Current-state workflow documentation (observed, not assumed)
  • Future-state workflow design with human-AI collaboration specification
  • Friction point mapping showing where and how value is captured
  • Adoption design elements addressing practitioner concerns
  • Technology requirements (tool-agnostic)
  • Success metrics aligned with Module 3 ROI model
  • Practitioner validation summary

This deliverable feeds Module 5: REALIZE, where the blueprint becomes a working prototype tested in real conditions.



Module 4B: ORCHESTRATE — Practice

O — Observe

The Workflow Blueprint Methodology

The Workflow Blueprint is a design specification that bridges strategy and implementation. It translates the value identified in Module 3 into a concrete workflow that can be built, tested, and deployed.

This section overviews the complete methodology: what the blueprint produces, how long it takes, what inputs are required, and what quality standards apply.


What the Workflow Blueprint Produces

A complete blueprint contains six components:

1. Current-State Workflow Documentation

The workflow as it actually happens: the observed reality including workarounds, shadow systems, and informal coordination. This documentation establishes the baseline against which improvement will be measured.

2. Future-State Workflow Design

The redesigned workflow with friction points addressed. This is not a vision document; it is a step-by-step specification of what will happen, who will act, and how human-AI collaboration will function.

3. Human-AI Collaboration Specification

Explicit definition of roles at each decision point: what the system does, what humans do, how override works, and how feedback improves the system over time. This specification draws on the workflow patterns from Module 4A.

4. Adoption Design Elements

Design choices that address practitioner concerns and increase likelihood of adoption. This includes simplicity decisions, invisible automation implementations, and explicit attention to what makes the workflow feel like help rather than surveillance.

5. Technology Requirements (Tool-Agnostic)

Functional requirements that specify what the system must do without specifying which product or platform does it. These requirements allow evaluation of build vs. buy vs. configure decisions in Module 5.

6. Success Metrics Aligned with ROI Model

Specific metrics that will indicate whether the design is working, drawn directly from Module 3 baseline measurements. These metrics connect the design to the value proposition that justified investment.


The Design Timeline

Workflow blueprint development typically requires 5-7 working days for a moderately complex opportunity:

Phase | Duration | Activities
Current-State Mapping | 1-2 days | Practitioner walkthroughs, observation sessions, workaround documentation
Pattern Selection & Design | 2-3 days | Friction analysis, pattern selection, future-state design, iteration
Practitioner Validation | 1 day | Validation sessions, feedback integration
Blueprint Documentation | 1 day | Final documentation, quality review

Timeline Factors:

  • Complexity: Multi-step workflows with many decision points take longer
  • Stakeholder availability: Practitioner time for mapping and validation is often the constraint
  • Integration scope: Workflows touching multiple systems require more design iteration
  • Prior assessment quality: Strong Module 2 work accelerates current-state mapping

The timeline assumes one priority opportunity. Organizations developing blueprints for multiple opportunities should sequence them rather than parallelize. Lessons from early blueprints improve later ones.


Inputs Required

The blueprint builds on prior module work and requires new input:

From Module 2, Opportunity Audit:

Input | Purpose in Blueprint
Process observation notes | Foundation for current-state mapping
Waste pattern analysis | Identifies friction to address in design
Workaround documentation | Reveals requirements hidden in informal systems
Shadow system inventory | Ensures design absorbs shadow system function

From Module 3, ROI Model:

Input | Purpose in Blueprint
Baseline metrics | Success criteria for design evaluation
Value quantification | Prioritizes which friction to address
Business case | Justifies design investment
Assumption documentation | Design must not violate approved assumptions

New Inputs for Module 4:

Input | How to Obtain
Practitioner interviews | Structured conversations about current work and pain points
Technology inventory | Documentation of systems touched by the workflow
Constraint documentation | Organizational policies, compliance requirements, technical limitations
Stakeholder preferences | Input from managers, IT, compliance on design requirements

The Methodology Sequence

The workflow blueprint methodology follows six steps:

Step 1: Map Current-State Workflow

Document what actually happens today. Start with practitioner walkthrough, observe actual instances, capture divergence between described and observed, include informal systems. The output is a current-state workflow map that practitioners recognize as accurate.

Key question: Does this map reflect how work actually happens, including the parts no one talks about?

Step 2: Identify Friction Points

Analyze the current-state map for value leakage. Where does time disappear? Where do errors originate? Where does cognitive load concentrate? Which steps exist only because systems don't communicate? The output is a friction point inventory prioritized by value impact.

Key question: Which friction points, if eliminated, would capture the value identified in Module 3?

Step 3: Select Workflow Pattern

Choose the human-AI collaboration pattern that fits the work: Decision Support, Automation with Override, Preparation, Verification, or Learning. The pattern provides structure for the future-state design. Multiple patterns can combine for complex workflows.

Key question: What is the fundamental nature of human-AI collaboration in this workflow? Who decides and who executes?

Step 4: Design Future-State Workflow

Create the redesigned workflow that addresses friction points using the selected pattern. Design for adoption: make the easy path the right path, make automation invisible, ensure the design feels like help. The output is a future-state workflow specification.

Key question: Would a practitioner choose to use this workflow even if it weren't required?

Step 5: Validate with Practitioners

Test the design with the people who will use it. Present the future state, explore scenarios, identify gaps. Iterate based on feedback. The output is a validated design with documented practitioner input.

Key question: Have practitioners seen this design and confirmed it would improve their work?

Step 6: Document the Blueprint

Assemble all components into the final blueprint document. Structure for multiple audiences: developers need technical specification, operations needs process documentation, leadership needs connection to business case. The output is the Workflow Blueprint deliverable.

Key question: Could someone build this system, train users, and measure success using only this document?


Quality Standard

A blueprint meets quality standard when:

Practitioners recognize the current state as accurate.

The current-state map should prompt reactions like "Yes, that's exactly what we do" and "I forgot we had to do that step." If practitioners don't recognize the map, it documents the wrong process.

Future state clearly addresses identified friction.

Every significant friction point from Step 2 should have a corresponding design element in Step 4. The connection should be explicit: "Friction: Bible lookup takes 14 minutes. Solution: System surfaces policy automatically."

Human-AI roles are explicitly specified.

For each step, the blueprint should answer: Who does this, human or system? If system, what does the human see? If human, what does the system provide? How does override work? There should be no ambiguous steps.

Adoption considerations are designed in, not added on.

Adoption is a design problem, never a training problem to solve later. Simplicity, invisibility, and help-over-surveillance should be evident in design choices, woven into the blueprint itself.

The design can be implemented in multiple tools.

The blueprint specifies what must happen, not how it's technically accomplished. A developer reading the blueprint should be able to implement it in their platform of choice without requesting additional design decisions.


The R-01 Blueprint

Throughout Module 4B, R-01 (Returns Bible integration) serves as the worked example. By the end of this module, you will have seen:

  • R-01 current-state workflow mapped with all workarounds
  • R-01 friction points identified and prioritized
  • Workflow pattern selected for R-01 (Preparation pattern)
  • R-01 future-state workflow designed
  • Practitioner validation of R-01 design
  • Complete R-01 blueprint document

The R-01 example demonstrates each methodology step at a scale appropriate for a moderately complex opportunity. Your own blueprints may be simpler or more complex, but the methodology applies.



Module 4B: ORCHESTRATE — Practice

O — Operate

Step 1: Map Current-State Workflow

You cannot design improvement without understanding what exists. Module 2's audit identified friction; this mapping shows flow. The map must reflect reality, including the workarounds practitioners don't mention in meetings.


Purpose of Current-State Mapping

The current-state map serves three functions:

1. Design Foundation

You cannot redesign what you do not understand. The current-state map reveals the actual workflow: the work as it actually happens, including the parts that never appear in documentation. Future-state design emerges from this reality.

2. Friction Localization

Module 3's ROI model quantified total value. Current-state mapping localizes that value, showing exactly where in the workflow time is lost, errors originate, and cognitive load concentrates. This localization guides design prioritization.

3. Validation Baseline

The current-state map becomes the baseline against which improvement is measured. When Module 5 tests the prototype, comparison requires knowing what "before" looked like in detail.


What to Capture

For each step in the workflow, document:

Element | Description | Why It Matters
Trigger | What initiates this step | Defines scope and starting conditions
Actor | Who performs this step | Identifies human-AI role assignment
Action | What specifically happens | Enables comparison with future state
Systems | Tools touched at this step | Reveals integration requirements
Decisions | Choice points and criteria | Identifies where judgment is required
Information | Data consumed and produced | Defines information architecture
Time | Duration and wait time | Quantifies improvement potential
Workarounds | Unofficial adaptations | Reveals hidden requirements
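For teams that keep their maps in code or structured files, the elements above translate naturally into a record structure. A minimal Python sketch; the field names and example values are illustrative, drawn from the R-01 map, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One step in a current-state workflow map (hypothetical field names)."""
    name: str
    trigger: str                    # what initiates this step
    actor: str                      # who performs it
    action: str                     # what specifically happens
    systems: list[str]              # tools touched at this step
    decisions: list[str]            # choice points and criteria
    information: str                # data consumed and produced
    time_min: tuple[float, float]   # (best case, worst case) minutes
    workarounds: list[str] = field(default_factory=list)

# R-01 Step 4 as recorded in the current-state map
policy_search = WorkflowStep(
    name="Policy Search",
    trigger="Return requires policy lookup",
    actor="Customer Service Representative",
    action="Navigate Bible to find applicable policy",
    systems=["Returns Bible"],
    decisions=["Determine precedence when multiple policies apply"],
    information="Return attributes in; applicable policy out",
    time_min=(3, 8),
)

print(policy_search.time_min)  # → (3, 8)
```

A structure like this keeps every mapped step comparable and makes the later current-vs-future comparison mechanical rather than anecdotal.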

Mapping Methodology

Start with Practitioner Walkthrough

Ask a practitioner to describe a recent specific instance. "Walk me through the return you handled this morning. What happened first?"

The walkthrough reveals sequence and logic. Note what the practitioner mentions automatically (important steps) and what they skip until prompted (assumed context).

Observe 3-5 Actual Instances

Walkthrough describes what practitioners think they do. Observation reveals what they actually do. The gap is significant.

During observation, note:

  • Steps the walkthrough didn't mention
  • Divergence from documented process
  • Physical artifacts (sticky notes, printouts, reference materials)
  • Communications outside the official channel
  • Moments of hesitation, frustration, or improvisation

Capture Divergence

Compare walkthrough description to observed reality. Where they diverge, the observation is correct. Common divergences:

  • Steps described as sequential actually overlap
  • "Automatic" system steps require manual intervention
  • Official process is skipped entirely for common cases
  • Workarounds are so habitual they weren't mentioned

Document the Informal System

Every workflow has shadow infrastructure: the spreadsheets, notes, tribal knowledge, and back-channel communications that make official systems tolerable. Module 4A's "Ignoring the Informal System" failure pattern warned against designing without this understanding.

For R-01, the informal system includes:

  • Patricia's Returns Bible (the physical document)
  • Mental models of which policies apply to which situations
  • Shortcuts veteran representatives have developed
  • Escalation patterns that bypass official channels

Validate with Multiple Practitioners

A single practitioner's perspective is incomplete. Validate the map with 2-3 others:

  • Does this match your experience?
  • What did I miss?
  • Is there anything you do differently?

Variation between practitioners is data. It reveals where the process lacks standardization and where individual adaptation has filled gaps.


The R-01 Current-State Workflow

Here is the complete current-state map for R-01 (Returns Bible lookup), developed through practitioner walkthrough and observation:

Trigger: Customer requests return (phone, email, or chat)


Step 1: Gather Return Information

  • Actor: Customer Service Representative
  • Action: Collect order number, item, reason for return
  • Systems: CRM (customer lookup), Order Management (order details)
  • Time: 2-3 minutes
  • Notes: Representatives have developed shortcut questions based on common return types

Step 2: Initial Assessment

  • Actor: Customer Service Representative
  • Action: Determine if return is straightforward or requires policy lookup
  • Systems: None (judgment call)
  • Time: 30 seconds
  • Decision: If return type is familiar and policy is known → Skip to Step 6
  • Notes: Experienced reps skip Bible lookup for ~40% of returns; new reps consult Bible for nearly everything

Step 3: Bible Retrieval

  • Actor: Customer Service Representative
  • Action: Locate and retrieve Returns Bible
  • Systems: Physical document (shared binder) OR digital copy (shared drive)
  • Time: 1-2 minutes
  • Workaround: Representatives often ask Patricia directly rather than searching the Bible
  • Notes: Physical copy frequently not in expected location; digital copy may be outdated

Step 4: Policy Search

  • Actor: Customer Service Representative
  • Action: Navigate Bible to find applicable policy
  • Systems: Returns Bible (300+ pages, organized by product category and return reason)
  • Time: 3-8 minutes depending on complexity
  • Decision: Multiple policies may apply; representative must determine precedence
  • Notes: Bible organization doesn't match how reps think about returns; cross-references are incomplete

Step 5: Policy Interpretation

  • Actor: Customer Service Representative
  • Action: Interpret policy language, apply to specific situation
  • Systems: Returns Bible, Order Management (for details)
  • Time: 2-5 minutes
  • Workaround: When policy is ambiguous, representatives consult Patricia or senior colleague
  • Notes: ~12% of Bible-dependent returns require escalation for interpretation

Step 6: Customer Communication

  • Actor: Customer Service Representative
  • Action: Explain return process and outcome to customer
  • Systems: CRM (communication logging), Phone/Chat/Email
  • Time: 2-4 minutes
  • Notes: Representatives often simplify policy language for customer clarity

Step 7: Return Processing

  • Actor: Customer Service Representative
  • Action: Initiate return in system, generate RMA if applicable
  • Systems: Order Management, Inventory (for restock decisions)
  • Time: 2-3 minutes
  • Notes: Some return types require supervisor approval before processing

Step 8: Documentation

  • Actor: Customer Service Representative
  • Action: Log return details and outcome in CRM
  • Systems: CRM
  • Time: 1-2 minutes
  • Workaround: Representatives often batch documentation at end of shift rather than in real-time

Total Time (Bible-dependent return): 14-28 minutes
Total Time (familiar return, no Bible lookup): 7-12 minutes


Workflow Diagram Description

The workflow follows this structure (suitable for later visualization):

[Customer Request]
    ↓
[1. Gather Information]
    ↓
[2. Initial Assessment]
    ↓ (familiar return)         ↓ (needs policy lookup)
    |                           |
    ↓                     [3. Bible Retrieval]
    |                           ↓
    |                     [4. Policy Search]
    |                           ↓
    |                     [5. Policy Interpretation]
    |                           ↓
    ←←←←←←←←←←←←←←←←←←←←←←←←←←←←
    ↓
[6. Customer Communication]
    ↓
[7. Return Processing]
    ↓
[8. Documentation]
    ↓
[Complete]

Step Table Summary

Step | Actor | System(s) | Time | Friction Level
1. Gather Information | Rep | CRM, Order Mgmt | 2-3 min | Low
2. Initial Assessment | Rep | None | 0.5 min | Low
3. Bible Retrieval | Rep | Physical/Digital Bible | 1-2 min | Medium
4. Policy Search | Rep | Returns Bible | 3-8 min | High
5. Policy Interpretation | Rep | Bible, Order Mgmt | 2-5 min | High
6. Customer Communication | Rep | CRM, Comm Channel | 2-4 min | Low
7. Return Processing | Rep | Order Mgmt, Inventory | 2-3 min | Low
8. Documentation | Rep | CRM | 1-2 min | Low

Friction Point Identification

From the current-state map, identify where value leaks:

High-Friction Steps:

Step 4: Policy Search (3-8 minutes)

  • Bible organization doesn't match representative mental models
  • Cross-references are incomplete
  • Finding the right policy requires significant navigation
  • Time varies dramatically based on familiarity with Bible structure

Step 5: Policy Interpretation (2-5 minutes)

  • Policy language is often ambiguous
  • Multiple policies may apply to same situation
  • Requires judgment that new representatives lack
  • 12% escalation rate indicates decision difficulty

Medium-Friction Steps:

Step 3: Bible Retrieval (1-2 minutes)

  • Physical Bible frequently missing
  • Digital copy currency unknown
  • Time wasted locating resource before using it

Friction Concentration:

Steps 3-5 consume 6-15 minutes of the 14-28 minute total for Bible-dependent returns. This is the target zone for improvement, consistent with Module 3's value calculation, which attributed the majority of R-01 value to lookup and interpretation time.
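The concentration claim can be checked directly against the per-step ranges. A quick tally in Python (the 13.5-27.5 minute sum rounds to the quoted 14-28):

```python
# Per-step time ranges (minutes) from the R-01 current-state map
steps = {
    "1. Gather Information": (2, 3),
    "2. Initial Assessment": (0.5, 0.5),
    "3. Bible Retrieval": (1, 2),
    "4. Policy Search": (3, 8),
    "5. Policy Interpretation": (2, 5),
    "6. Customer Communication": (2, 4),
    "7. Return Processing": (2, 3),
    "8. Documentation": (1, 2),
}

friction_zone = ["3. Bible Retrieval", "4. Policy Search", "5. Policy Interpretation"]

lo = sum(steps[s][0] for s in friction_zone)          # best case for Steps 3-5
hi = sum(steps[s][1] for s in friction_zone)          # worst case for Steps 3-5
total_lo = sum(best for best, _ in steps.values())    # best case, whole workflow
total_hi = sum(worst for _, worst in steps.values())  # worst case, whole workflow

print(f"Steps 3-5: {lo:g}-{hi:g} min of {total_lo:g}-{total_hi:g} min total")
```

Steps 3-5 account for roughly half the total even in the best case, which is why they are the target zone.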


Common Mistakes in Current-State Mapping

Mapping the Documented Process

The flowchart in the SOP isn't the workflow. The workflow is what practitioners actually do. If your map matches official documentation exactly, you haven't observed deeply enough.

Missing the Workarounds

Workarounds are so habitual that practitioners don't think of them as separate from the process. Ask specifically: "Is there anything you do that isn't in the official process?" Watch for moments when practitioners reach for non-standard resources.

Treating Practitioner Complaints as Resistance

When practitioners say "This step is annoying" or "I wish we could skip this," they're providing design requirements. These are friction points to address.

Rushing to Future State

The temptation to start designing solutions appears immediately. Resist it. Incomplete current-state mapping leads to future-state designs that solve the wrong problems or miss critical requirements hidden in informal systems.


Documentation Checklist

Before proceeding to future-state design, confirm:

  • Workflow has been described by practitioners and observed in action
  • All steps are documented with actor, system, time, and notes
  • Workarounds and informal systems are captured
  • Multiple practitioners have validated the map
  • Friction points are identified and localized
  • Time data aligns with Module 3 baseline metrics
  • The map reflects reality, even where reality is messy


Module 4B: ORCHESTRATE — Practice

O — Operate

Step 2: Design Future-State Workflow

Future-state design translates friction points into solutions. The design process applies Module 4A's principles, selects an appropriate workflow pattern, and creates a step-by-step specification that practitioners will recognize as improvement.


Design Principles Applied

The five principles from Module 4A constrain and guide every design decision:

1. Invisible Automation

If practitioners notice the system, the design has failed. The goal is reducing friction, not adding technology. Ask of each design element: Will practitioners experience this as help or as a new thing to manage?

2. Design for Adoption

The 80% solution that gets adopted beats the 100% solution that sits unused. Prioritize simplicity over comprehensiveness. Ask: Will practitioners choose to use this, or will they need enforcement?

3. Simplicity Imperative

Every step must earn its place. Complexity is the enemy of adoption. For each proposed step, ask: What happens if we remove this? If the answer is "not much," remove it.

4. Practitioner-Centered Design

Design for the person doing the work, not the person reviewing the work. When executive needs and practitioner needs conflict, practitioner needs win. Executive visibility emerges from practitioner actions.

5. Help, Not Surveillance

Automation should feel like assistance, not monitoring. Ask: Will this feel like a safety net or like Big Brother? Design choices that feel like surveillance will be resisted regardless of their objective value.

The Hierarchy:

When principles conflict, apply this order:

  1. Adoption (will they use it?)
  2. Simplicity (can they learn it quickly?)
  3. Completeness (does it handle all cases?)

A simple, adoptable design that handles 80% of cases is better than a comprehensive design that's too complex to adopt.


Pattern Selection for R-01

Current-state analysis identified high friction in Steps 3-5: Bible retrieval, policy search, and policy interpretation. These steps represent information-gathering work that delays the core task (helping the customer).

Pattern Analysis:

Pattern | Fit for R-01?
Decision Support | Partial: relevant for interpretation step
Automation with Override | Poor: returns lack routine predictability for full automation
Preparation | Strong: the core problem is assembling information
Verification | Poor: the workflow centers on decisions, not checking work
Learning | Additive: useful for improving over time

Selected Pattern: Preparation

The Preparation pattern fits R-01's friction profile:

  • The bottleneck is information gathering, not decision-making
  • Representatives have the judgment to make return decisions; they lack fast access to policy information
  • The system's role is to prepare context so humans can act quickly
  • Human authority over final decision remains intact

What Preparation Implies:

In Preparation-pattern workflows:

  • System assembles relevant information before human needs it
  • Human arrives at decision point with context already prepared
  • Decision remains with human; system accelerates decision-making
  • Feedback loop improves preparation quality over time

For R-01, this means:

  • System identifies applicable return policies when return details are entered
  • Representative sees relevant policy information without searching
  • Interpretation remains with representative; system surfaces relevant precedents
  • Unusual cases are flagged for human judgment

Future-State Design Process

Start from Friction Points

The current-state map identified three high/medium-friction steps:

  • Step 3: Bible Retrieval (1-2 min)
  • Step 4: Policy Search (3-8 min)
  • Step 5: Policy Interpretation (2-5 min)

For each friction point, ask: What would eliminate this?

Step 3 friction: Representatives waste time locating the Bible before using it. Elimination: Policy information appears automatically within existing workflow. No retrieval needed.

Step 4 friction: Bible organization mismatches how representatives think about returns. Elimination: System presents relevant policies based on return attributes. No searching needed.

Step 5 friction: Policy language is ambiguous; multiple policies may apply. Elimination: System surfaces prior similar cases and recommended actions. Interpretation is guided, never eliminated.

Design the Human Experience First

Before specifying what the system does, specify what the representative experiences:

  1. Representative pulls up return request
  2. Relevant policy information is already visible, no searching
  3. If the case is straightforward, representative proceeds immediately
  4. If the case has complexity, system shows similar prior cases
  5. Representative makes and communicates decision
  6. System captures decision for future learning

This human-centered sequence determines what the system must provide at each moment.

Add Technology to Serve the Experience

Now specify what enables that experience:

  1. Integration with Order Management to identify return attributes automatically
  2. Policy engine that maps return attributes to applicable policies
  3. Case matching that surfaces similar prior returns and their outcomes
  4. Display of policy information within existing CRM interface (no new screens)
  5. Decision capture that feeds the learning loop
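To make the second item concrete, a policy engine of this kind can start as a simple rule-matching function. The rules, attribute names, and confidence scheme below are hypothetical illustrations, not part of the R-01 specification:

```python
# Hypothetical policy rules: each maps matching return attributes to a policy.
POLICY_RULES = [
    {"category": "electronics", "reason": "defective",
     "policy": "Full refund within 90 days; RMA required"},
    {"category": "electronics", "reason": "changed_mind",
     "policy": "Refund within 30 days if unopened; restocking fee otherwise"},
    {"category": "apparel", "reason": "wrong_size",
     "policy": "Free exchange within 60 days"},
]

def surface_policies(return_attrs: dict) -> tuple[list[str], str]:
    """Return (applicable policies, confidence) for a set of return attributes.

    Confidence is 'high' for exactly one match, 'medium' for several,
    and 'low' for none, mirroring the blueprint's high/medium/low signal;
    a 'low' result is the exception flag routed to human judgment.
    """
    matches = [
        rule["policy"] for rule in POLICY_RULES
        if rule["category"] == return_attrs.get("category")
        and rule["reason"] == return_attrs.get("reason")
    ]
    if len(matches) == 1:
        return matches, "high"
    if matches:
        return matches, "medium"
    return [], "low"  # unusual case: flag for human judgment

policies, confidence = surface_policies(
    {"category": "electronics", "reason": "defective"}
)
```

A real engine would match on more attributes and handle precedence, but the shape stays the same: attributes in, candidate policies and a confidence signal out, with the representative always able to override.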

Test Each Choice Against Adoption

For each design choice, ask:

  • Does this make work easier? (If not, remove it)
  • Does this add steps? (If so, justify the addition)
  • Does this feel like help or surveillance? (If surveillance, redesign)
  • Would an experienced representative choose to use this? (If not, why not?)

The R-01 Future-State Workflow

Trigger: Customer requests return (phone, email, or chat)


Step 1: Gather Return Information (Revised)

  • Actor: Customer Service Representative
  • Action: Collect order number, item, reason for return
  • Systems: CRM (customer lookup), Order Management (order details), Policy Engine (automatic)
  • Time: 2-3 minutes (unchanged)
  • Change: As representative enters return details, Policy Engine identifies applicable policies in background

Step 2: Policy Review (Replaces Steps 2-5)

  • Actor: Customer Service Representative
  • Action: Review system-surfaced policy information
  • Systems: CRM (policy display integrated), Policy Engine
  • Time: 1-2 minutes
  • Human-AI Collaboration:
    • System provides: Applicable policy summary, confidence level, similar prior cases
    • Human provides: Final policy selection, contextual judgment, exception handling
    • Override: Representative can mark "different policy applies" with one click (no explanation required)
  • Notes: Policy information appears in existing CRM interface; no navigation to separate system

Step 3: Exception Handling (When needed)

  • Actor: Customer Service Representative + System
  • Action: Address unusual cases outside standard policies
  • Systems: Policy Engine (exception flagging), Escalation routing
  • Time: 2-5 minutes (only for ~15% of cases)
  • Human-AI Collaboration:
    • System provides: Flag that case is unusual, suggested contacts, relevant policy references
    • Human provides: Judgment call or escalation decision
  • Notes: Exceptions are routed toward resolution, not merely recorded in documentation

Step 4: Customer Communication (Was Step 6)

  • Actor: Customer Service Representative
  • Action: Explain return process and outcome to customer
  • Systems: CRM (communication logging)
  • Time: 2-4 minutes (unchanged)
  • Notes: Representative uses system-provided policy summary for consistency

Step 5: Return Processing (Was Step 7)

  • Actor: Customer Service Representative
  • Action: Initiate return in system
  • Systems: Order Management, Inventory
  • Time: 2-3 minutes (unchanged)
  • Change: Policy decision is logged automatically based on earlier selection; no separate approval step for standard returns

Step 6: Implicit Documentation (Replaces Step 8)

  • Actor: System (automatic)
  • Action: Log return details from workflow actions
  • Systems: CRM, Policy Engine
  • Time: 0 minutes (no representative action required)
  • Change: Documentation is derived from actions already taken; no separate data entry

Total Time (Standard return): 9-14 minutes (vs. 14-28 current)
Total Time (Exception return): 14-19 minutes
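Step 6's implicit documentation, deriving the log record from actions the representative already took, can be sketched as a fold over workflow events. The event names and fields here are hypothetical:

```python
# Hypothetical event stream: actions the representative already performed,
# captured by the systems they touched during the return.
events = [
    {"action": "return_opened", "order": "A-1042", "item": "Headphones"},
    {"action": "policy_selected", "policy": "Full refund within 90 days"},
    {"action": "rma_generated", "rma": "RMA-7731"},
]

def derive_documentation(events: list[dict]) -> dict:
    """Fold workflow events into a CRM log record (Step 6, automatic)."""
    record: dict = {}
    for event in events:
        # Merge every field except the event label into the record
        record.update({k: v for k, v in event.items() if k != "action"})
        # Keep an ordered trail of what happened, for audit purposes
        record.setdefault("steps_completed", []).append(event["action"])
    return record

log_entry = derive_documentation(events)
```

The design point is that the record is a byproduct: no field in it required a data-entry step, which is what removes the 1-2 minutes of Step 8.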


Future-State Workflow Diagram

[Customer Request]
    ↓
[1. Gather Information]
    ↓ (Policy Engine runs automatically)
    ↓
[2. Policy Review]
    ↓ (standard case)        ↓ (exception flagged)
    |                        |
    ↓                  [3. Exception Handling]
    |                        ↓
    ←←←←←←←←←←←←←←←←←←←←←←←←←
    ↓
[4. Customer Communication]
    ↓
[5. Return Processing]
    ↓ (documentation automatic)
[Complete]

Comparison: Current vs. Future State

Step | Current State | Future State | Change
Information Gathering | 2-3 min | 2-3 min | Policy Engine starts automatically
Assessment/Lookup | 6-15 min (Steps 2-5) | 1-2 min | System-surfaced policies replace Bible search
Exception Handling | Embedded in lookup | 2-5 min (when needed) | Explicit exception path for unusual cases
Customer Communication | 2-4 min | 2-4 min | No change
Return Processing | 2-3 min | 2-3 min | Policy decision auto-logged
Documentation | 1-2 min | 0 min | Implicit from workflow actions
Total (standard) | 14-28 min | 9-14 min | 5-14 min saved

Human-AI Collaboration Specification

For each step requiring collaboration:

Step 2: Policy Review

Role | AI Provides | Human Provides
Policy Identification | Applicable policies based on return attributes | Confirmation or correction
Confidence Signal | High/medium/low confidence indicator | Judgment on whether to proceed
Similar Cases | 2-3 prior similar returns with outcomes | Relevance assessment
Override | One-click "different policy applies" option | No explanation required
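The Similar Cases row can likewise be sketched as attribute-overlap ranking over prior returns. The case store and scoring rule here are hypothetical; a production system would use richer matching:

```python
# Hypothetical prior-case store: past returns with their outcomes.
PRIOR_CASES = [
    {"category": "electronics", "reason": "defective", "days_since_purchase": 12,
     "outcome": "Full refund, RMA issued"},
    {"category": "electronics", "reason": "defective", "days_since_purchase": 95,
     "outcome": "Escalated: outside 90-day window"},
    {"category": "apparel", "reason": "wrong_size", "days_since_purchase": 5,
     "outcome": "Free exchange"},
]

def similar_cases(current: dict, k: int = 3) -> list[dict]:
    """Rank prior cases by attribute overlap with the current return."""
    def score(case: dict) -> int:
        # One point per shared attribute value
        return sum(
            1 for attr in ("category", "reason")
            if case.get(attr) == current.get(attr)
        )
    ranked = sorted(PRIOR_CASES, key=score, reverse=True)
    # Only surface cases with at least some overlap; cap at k
    return [case for case in ranked if score(case) > 0][:k]

matches = similar_cases({"category": "electronics", "reason": "defective"})
```

The human role from the table is preserved: the system only surfaces candidates, and the representative assesses their relevance.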

Step 3: Exception Handling

Role | AI Provides | Human Provides
Exception Detection | Flag that case falls outside patterns | Decision to handle or escalate
Routing Suggestion | Recommended person/team for help | Final routing decision
Policy References | Relevant sections for unusual situation | Interpretation

Design for Adoption

What makes this feel like help:

  • Information appears without searching
  • No new screens or systems to navigate
  • Policy display is integrated into existing CRM
  • Override is one click, no explanation
  • Documentation happens automatically

Where practitioners might resist:

  • Distrust of system recommendations ("It cannot know my customers")
  • Concern about being monitored through the policy log
  • Fear that system will reduce their expertise value

How design addresses resistance:

  • Recommendations are clearly labeled as suggestions, not requirements
  • Override is easy and not tracked for performance evaluation
  • System learns from representative expertise, not the reverse
  • Experienced representatives can proceed directly when confident

Minimal viable version:

  • Core: Policy surfacing for common return types
  • Deferred: Similar case matching
  • Deferred: Learning loop from representative decisions

The minimal version delivers the primary value (eliminating Bible search) without requiring the full system. Additional capabilities can be added after adoption is established.


Connecting to ROI Model

The design should capture the value calculated in Module 3:

Module 3 Metric | How Design Addresses
Time: 14.2 min per Bible lookup | Steps 3-5 consolidated to 1-2 min policy review
Errors: 4.3% wrong policy | System surfaces correct policy; representative confirms
Escalations: 12% of Bible returns | Exception handling pathway reduces unnecessary escalations
Patricia dependency | Policy knowledge encoded in system, not person
Onboarding: 3 days Bible training | System-guided policy lookup reduces training requirement

Design Template

Use this template for your own future-state designs:

FUTURE-STATE WORKFLOW: [Opportunity ID and Name]

Pattern Selected: [Decision Support / Automation with Override / Preparation / Verification / Learning]

Pattern Rationale: [Why this pattern fits the friction profile]

Friction Points Addressed:

Friction | Current Time | Solution | Future Time
[Friction 1] | [X min] | [How addressed] | [Y min]
[Friction 2] | [X min] | [How addressed] | [Y min]

Future-State Steps:

Step | Actor | AI Role | Human Role | Time
[Step 1] | [Who] | [What AI provides] | [What human does] | [Est.]

Adoption Design:

  • What makes this feel like help: [Specific elements]
  • Potential resistance: [Anticipated concerns]
  • Design response: [How addressed]
  • Minimal viable version: [Core vs. deferred features]

ROI Alignment:

| Baseline Metric | Design Mechanism |
| --- | --- |
| [Metric 1] | [How design improves it] |


Module 4B: ORCHESTRATE — Practice

O — Operate

Step 3: Validate with Practitioners

Designers have blind spots. The most elegant workflow design fails if it misses how work actually happens. Validation before building is cheaper than redesign after, and practitioner involvement increases adoption.


Why Validation Matters

Designers Miss What Practitioners See

A design that makes sense in documentation may not make sense in practice. Practitioners see edge cases designers didn't consider, workflow interactions that weren't mapped, and friction that seems minor in theory but compounds in reality.

The Lakewood case demonstrated this failure: the discharge system made sense to designers who understood healthcare operations abstractly. It missed the reality of nurses who understood the specific context of each patient, each family, each physician relationship.

Validation Before Building Is Cheaper

Changes during design cost hours. Changes during development cost days. Changes after deployment cost weeks and trust. Every issue a validation session catches is an issue fixed at the cheapest possible point.

Practitioner Involvement Increases Adoption

Practitioners who helped shape the design are more likely to use it. They've seen their concerns addressed. They understand the rationale. They have ownership of the outcome.

Practitioners who receive a design they had no hand in shaping are more likely to resist. They see something imposed. They lack context for the reasoning. They have no investment in success.


Who to Involve

Mix of Tenure Levels

  • New practitioners (< 1 year): See friction that veterans have stopped noticing
  • Experienced practitioners (1-5 years): Know the work well but haven't fully adapted to workarounds
  • Veterans (5+ years): Know edge cases, history, and why things are the way they are

A design validated only by veterans may miss friction that's become invisible to them. A design validated only by newcomers may miss complexity that experience reveals.

Include Skeptics

The temptation is to validate with friendly practitioners, people who are enthusiastic about improvement and likely to say positive things.

Resist this temptation. Skeptics see problems enthusiasts miss. Their objections, while uncomfortable, reveal design weaknesses that will otherwise surface during deployment when they're expensive to fix.

If a skeptic cannot see how the design helps them, the design probably fails to help them. Better to discover this in validation than in failed adoption.

People Who Will Actually Use It

Validate with practitioners who will use the system daily. Managers may approve designs that burden practitioners. Practitioners identify burdens managers never see.

For R-01, validation should include:

  • 2-3 customer service representatives with varied tenure
  • At least one representative who currently relies heavily on the Returns Bible
  • At least one representative who is skeptical of new systems

Validation Methodology

Walkthrough: Present Future-State, Get Reactions

Present the future-state workflow step by step. At each step, pause for reaction:

  • "Does this match how you'd want it to work?"
  • "What would you be thinking at this moment?"
  • "Is anything missing?"

Watch for:

  • Confusion (they don't understand what the step involves)
  • Hesitation (they have concerns they're not voicing)
  • Correction (they think you've described it wrong)
  • Enthusiasm (they see value you can build on)

Scenario Testing: "What Would Happen If..."

Walk through specific scenarios, both common cases and edge cases:

  • "A customer calls wanting to return a product they bought 18 months ago. What would happen in this workflow?"
  • "The system shows a policy you know is outdated. What would you do?"
  • "A customer is upset and you need to resolve quickly. Does this workflow help or slow you down?"

Scenarios reveal gaps that abstract walkthroughs miss. They force practitioners to mentally simulate using the system.

Edge Case Exploration: "What About When..."

Ask practitioners to generate edge cases:

  • "What situations would this not handle well?"
  • "What's the weirdest return case you've seen? How would this handle it?"
  • "What makes you reach for the Returns Bible most often? Would this help?"

Edge cases practitioners generate are more relevant than edge cases designers imagine. They come from real experience.

Comparison: "How Does This Compare to What You Do Now?"

Ask direct comparison questions:

  • "Would this make your work easier or harder?"
  • "Is this faster or slower than what you do now?"
  • "What would you miss about the current way?"
  • "What would you be glad to stop doing?"

Direct comparison surfaces value (or lack of value) that abstract evaluation misses.


Questions to Ask

Core Validation Questions:

| Question | What It Reveals |
| --- | --- |
| "Would this make your work easier or harder?" | Net value assessment |
| "What would you do if the system gave wrong guidance?" | Override design adequacy |
| "What situations wouldn't this handle well?" | Edge case gaps |
| "What would make you avoid using this?" | Adoption barriers |
| "What's missing?" | Requirements gaps |

Probing Questions:

| Question | What It Reveals |
| --- | --- |
| "Walk me through your first day using this." | Onboarding friction |
| "When would you ignore the system's suggestion?" | Trust calibration |
| "How would you explain this to a new colleague?" | Comprehensibility |
| "What would your supervisor think about this?" | Organizational dynamics |

Reading Validation Feedback

Enthusiasm Isn't the Goal

Validation seeks honesty. Enthusiastic feedback that misses problems is worse than critical feedback that reveals them.

Watch for practitioners who say what they think you want to hear. These responses often include:

  • Generic praise ("This looks great!")
  • Quick agreement without engagement
  • Lack of questions or concerns
  • Body language that doesn't match words

Concerns Are Design Opportunities

Every concern is a design opportunity. The practitioner who says "This wouldn't work because..." is telling you something valuable.

Respond to concerns with curiosity, not defense:

  • "Tell me more about why that wouldn't work."
  • "What would need to be different?"
  • "Have you seen something similar fail before?"

Watch for Polite Agreement

Practitioners may be reluctant to criticize directly, especially to someone who has authority or who clearly invested effort. They express reservations through euphemism:

  • "That's interesting" often means "I have concerns"
  • "I'm sure you've thought of this, but..." precedes a real issue
  • "This might work for some people" means "not for me"
  • Long pauses before responding indicate internal conflict

Ask follow-up questions when you sense polite agreement:

  • "What's making you hesitate?"
  • "If you could change one thing about this, what would it be?"
  • "Who would have the hardest time with this?"

The R-01 Validation Session

Participants:

| Name | Role | Tenure | Perspective |
| --- | --- | --- | --- |
| Maria T. | CS Representative | 8 years | Heavy Bible user, informal expert |
| DeShawn W. | CS Representative | 2 years | Middle experience, tech-comfortable |
| Jennifer R. | CS Representative | 4 months | New, still learning policies |
| Alex P. | CS Team Lead | 5 years | Skeptical of system changes |

Key Feedback Received:

From Maria T.: "The system won't know about the exceptions. Half the time I'm in the Bible, I'm looking for that note Patricia wrote about why we skip the standard rule for these three product lines."

Design response: Added exception notes field in Policy Engine; begin capturing Patricia's knowledge in structured format.
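Making exception notes structured, rather than free text buried in the Bible, means attaching them to the policy record itself so they surface automatically. The data model below is a sketch under assumed field names and example values, not the Policy Engine's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ExceptionNote:
    """Structured capture of an informal exception (the kind Patricia wrote in margins)."""
    product_lines: list[str]   # which product lines the exception covers
    rule_overridden: str       # the standard rule being skipped
    rationale: str             # why the exception exists
    author: str                # whose knowledge this encodes

@dataclass
class PolicyRecord:
    policy_id: str
    return_type: str
    rule_text: str
    exceptions: list[ExceptionNote] = field(default_factory=list)

    def exceptions_for(self, product_line: str) -> list[ExceptionNote]:
        """Surface any exception notes relevant to a given product line."""
        return [e for e in self.exceptions if product_line in e.product_lines]

# Example record; every value here is hypothetical.
patricia_note = ExceptionNote(
    product_lines=["Line A", "Line B", "Line C"],
    rule_overridden="Standard 90-day return window",
    rationale="Supplier warranty covers these lines directly",
    author="Patricia",
)
r01_policy = PolicyRecord(
    policy_id="RET-014",
    return_type="defective item",
    rule_text="Refund within 90 days with proof of purchase",
    exceptions=[patricia_note],
)
```

Because the note is keyed to product lines, the system can show it only when relevant, which is exactly the behavior Maria described doing manually.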

From DeShawn W.: "I like that the policy shows up automatically. But what if I already know the policy? Will it slow me down for the easy cases?"

Design response: Confirmed that experienced representatives can proceed directly; policy display is available but not required to acknowledge.

From Jennifer R.: "This would have saved me so much time in training. I spent the first month just learning where things are in the Bible."

Design response: Confirmed onboarding benefit. Added to adoption design: new representatives start with system from day one.

From Alex P.: "I've seen three systems that were supposed to make things easier. All of them added steps. How is this different?"

Design response: Reviewed a step-by-step comparison showing total steps reduced from 8 to 5, and showed that the documentation step is now implicit. Acknowledged that his concern is valid and invited him to pilot testing.

Design Modifications Based on Feedback:

  1. Exception notes: Added structured capture of policy exceptions (not in original design)
  2. Expert bypass: Confirmed, no mandatory acknowledgment for experienced representatives
  3. Training integration: New reps trained on system first, Bible becomes reference only
  4. Pilot inclusion: Alex invited to prototype testing to surface remaining concerns

Unresolved Concerns:

| Concern | Status | Resolution Plan |
| --- | --- | --- |
| Policy Engine accuracy for unusual products | Acknowledged | Will test in prototype with edge cases |
| Speed of policy lookup vs. current mental shortcuts | Acknowledged | Will measure in prototype |
| Patricia's departure timeline vs. knowledge capture | Acknowledged | Escalate to project sponsor |

Iterating Based on Feedback

When to Modify the Design:

Modify when feedback reveals:

  • A requirement that wasn't captured
  • A friction point the design creates
  • An edge case that needs different handling
  • An adoption barrier that can be designed out

The exception notes addition to R-01 is an example: the design didn't account for informal knowledge beyond formal policy. Validation revealed the gap; the design was modified.

When to Note for Prototype Testing:

Note for prototype testing when:

  • The concern is valid but its magnitude is unknown
  • The design might address the concern but confirmation requires testing
  • The feedback suggests "try it and see"

Alex's concern about whether this creates new friction is prototype-appropriate: the design should help, but real usage will confirm.

When to Push Back:

Push back when feedback reflects:

  • Resistance to change rather than design problems
  • Individual preferences that conflict with broader needs
  • Misunderstanding that can be clarified

Push back gently. The goal is understanding, not winning the argument. Sometimes apparent resistance reveals a real issue; sometimes it's genuinely just preference.

Documenting Changes:

For each change, document:

  • What feedback prompted it
  • What changed in the design
  • What the expected impact is

This documentation creates a trail from validation to design decision, useful for explaining rationale to stakeholders and for future iterations.
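The trail described above can be kept as a simple structured log rather than scattered meeting notes. This is one possible shape for such a record; the field names are illustrative, and the example entry paraphrases the Maria T. feedback from the validation session.

```python
from dataclasses import dataclass

@dataclass
class DesignChange:
    """One entry in the validation-to-design decision trail."""
    feedback: str         # what feedback prompted the change
    change: str           # what changed in the design
    expected_impact: str  # what the change is expected to accomplish
    source: str           # who raised the feedback

# Example entry, paraphrasing the R-01 validation session.
changes = [
    DesignChange(
        feedback="System won't know about informal policy exceptions",
        change="Added structured exception-notes field to Policy Engine",
        expected_impact="Informal knowledge captured before Patricia departs",
        source="Maria T.",
    ),
]
```

Keeping `source` on each entry makes it easy to close the loop with the practitioner whose feedback drove the change.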


Validation Sign-Off

Validation is complete when:

Practitioners understand what's proposed.

They can describe the workflow in their own words. They know what changes from current state. They understand their role in the new workflow.

Major concerns are addressed or acknowledged.

Every significant concern raised has been either resolved through design modification or explicitly acknowledged as a prototype testing question. No major concerns are simply ignored.

Willingness to participate in prototype testing.

At least some validation participants are willing to test the prototype. This indicates sufficient confidence that the design merits building.

Validation sign-off is not unanimous enthusiasm. It's informed consent: practitioners have seen the design, provided input, and are willing to try it.



Module 4B: ORCHESTRATE — Practice

O — Operate

Step 4: Document the Blueprint

The Workflow Blueprint is a design specification that bridges strategy and implementation. It documents what was designed, why, and how it should be built, structured for audiences ranging from developers who will implement it to executives who will sponsor it.


Purpose of the Blueprint Document

Specification for Module 5

The blueprint tells the implementation team what to build. Module 5 (REALIZE) will construct or configure a prototype based on this specification. A clear blueprint enables accurate implementation; an ambiguous blueprint invites interpretation that may not match intent.

Record of Design Decisions

The blueprint documents what was decided and why. When questions arise during implementation ("Why does this step work this way?"), the blueprint provides rationale. This documentation prevents drift from design intent.

Communication Tool for Stakeholders

Different stakeholders need different views of the design. Developers need technical specification. Operations needs process documentation. Leadership needs business justification. The blueprint structure accommodates all three.

Reference for Future Iterations

Systems evolve. The blueprint captures baseline design so future changes can be evaluated against original intent. "We did it this way because..." prevents accidental undoing of deliberate choices.


Blueprint Structure

A complete Workflow Blueprint contains these sections:

1. Executive Summary

One-page overview suitable for leadership review:

  • What opportunity this addresses (from Module 3)
  • What the design accomplishes
  • Expected outcomes and timeline
  • Investment required and projected return

2. Current-State Workflow

Documentation from Step 1:

  • Process flow with steps, actors, and timing
  • Friction points identified
  • Informal systems and workarounds
  • Baseline metrics

3. Future-State Workflow

Documentation from Step 2:

  • Redesigned process flow
  • Changes from current state
  • Human-AI collaboration at each step
  • Projected timing improvement

4. Human-AI Collaboration Specification

Detailed specification of collaboration:

  • Pattern selected and rationale
  • Each decision point: what system provides, what human decides
  • Override mechanisms
  • Feedback loops for learning

5. Technology Requirements

Tool-agnostic specification:

  • Functional requirements (what system must do)
  • Integration requirements (what it connects to)
  • Performance requirements (speed, reliability)
  • Constraints (what system must not do)

6. Adoption Design

Elements that support adoption:

  • Simplicity choices and rationale
  • Invisible automation implementations
  • Resistance points and mitigations
  • Training implications

7. Success Metrics

From Module 3 ROI model:

  • Baseline measurements
  • Target improvements
  • Measurement methodology
  • Leading indicators

8. Appendix

Supporting details:

  • Detailed workflow diagrams
  • Validation session notes
  • Technical specifications
  • Risk and assumption documentation

Writing for the Audience

What Developers Need: Technical Specification

Developers implementing the design need:

  • Exact step-by-step workflow logic
  • Data flows and transformations
  • Integration points and data formats
  • Decision rules and exception handling
  • User interface requirements

Write with precision. Ambiguity in technical specification creates implementation variation.

What Operations Needs: Process Documentation

Operations teams managing the new workflow need:

  • Training requirements and materials
  • Support escalation procedures
  • Monitoring and exception handling
  • Relationship to other processes
  • Transition plan from current state

Write with practicality. Operations needs to know how to run this, not how to build it.

What Leadership Needs: Business Connection

Leadership approving resources needs:

  • Connection to approved business case
  • Expected outcomes and timeline
  • Risk acknowledgment and mitigation
  • Resource requirements
  • Decision points ahead

Write with directness. Leadership wants to know if this is on track and what they need to do.

Structuring for Different Reading Depths

The blueprint should support:

  • Skim reading: Executive summary conveys essence
  • Section reading: Each section is self-contained
  • Deep reading: Appendix provides complete detail

Use clear headings, summary boxes, and progressive disclosure. A reader should get value regardless of how deeply they read.

Module 4B: ORCHESTRATE — Practice

T — Test

Measuring Workflow Design Quality

A blueprint can be complete and still be wrong. This section covers how to evaluate design quality before building, and how to interpret results after.


Validating the Blueprint

Before declaring the blueprint ready for Module 5, verify four quality gates:

1. Current-State Accuracy

Question: Do practitioners recognize this as their actual work?

The current-state documentation should prompt reactions like "Yes, that's exactly what we do" and "I forgot we had to do that step." If practitioners don't recognize the map, it documents the wrong process.

Validation method: Present current-state workflow to 2-3 practitioners who weren't involved in mapping. Ask them to identify discrepancies.

Pass criteria: No major steps missing or misrepresented. Minor variations are acceptable; fundamental misunderstanding is not.

2. Future-State Clarity

Question: Could someone build from this specification?

The future-state design should be precise enough that a developer unfamiliar with the context could implement it. Ambiguity in specification creates implementation variation that may not match intent.

Validation method: Have someone outside the design team read the future-state section and identify questions they'd need answered to build it.

Pass criteria: Questions are about implementation detail, not about what should happen. If questions are "how should this work?" rather than "how should I build this?", the design isn't clear enough.

3. Human-AI Role Specification

Question: Is it unambiguous who does what at each step?

Every step should clearly specify: Does the system do this, or does the human? If both, what does each contribute? How does override work?

Validation method: Walk through each step and ask "Who is responsible for this action?" If the answer requires interpretation, the specification is incomplete.

Pass criteria: No steps where human-AI responsibility is unclear or context-dependent without explicit guidance.

4. Adoption Considerations

Question: Are adoption barriers addressed in design, not deferred to training?

Adoption elements should be concrete design choices, not "change management will handle this" deferrals. If resistance points are acknowledged but not designed for, adoption is at risk.

Validation method: Review each identified resistance point. For each, identify the specific design element that addresses it.

Pass criteria: Every significant resistance point has a design response. "Train them better" is not a design response.


Design Quality Metrics

Practitioner Validation Score

How thoroughly did practitioner input shape the design?

| Level | Description |
| --- | --- |
| 4 (Excellent) | Multiple practitioners validated; all major concerns addressed in design |
| 3 (Proficient) | Practitioners validated; most concerns addressed; some deferred to prototype |
| 2 (Developing) | Limited validation; concerns noted but not fully addressed |
| 1 (Insufficient) | No practitioner validation or concerns dismissed |

Complexity Comparison

Is the future state simpler than the current state?

| Measure | Current | Future | Direction |
| --- | --- | --- | --- |
| Total steps | [count] | [count] | Should decrease |
| Decision points | [count] | [count] | Should decrease or clarify |
| Systems touched | [count] | [count] | Should decrease |
| Time (typical case) | [minutes] | [minutes] | Should decrease |

For R-01: Current state 8 steps, future state 5-6 steps. Current 14-28 minutes, future 9-14 minutes. Both trends positive.
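The comparison can be reduced to a single percentage per measure, using the R-01 figures quoted above (taking the conservative end of each range where ranges are given).

```python
# Complexity comparison for R-01, using the figures quoted above.
def pct_reduction(current: float, future: float) -> float:
    """Percentage reduction from current to future (positive = improvement)."""
    return (current - future) / current * 100

steps = pct_reduction(8, 6)          # 8 steps down to 6 (conservative end of 5-6)
time_typical = pct_reduction(14, 9)  # low end of each time range
time_complex = pct_reduction(28, 14) # high end of each time range
```

A negative result from `pct_reduction` for any measure is the "complexity increased" red flag discussed later in this section.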

Step Reduction Analysis

Which steps were eliminated, combined, or automated?

| Change Type | Steps | Example |
| --- | --- | --- |
| Eliminated | [list] | Bible retrieval (no longer needed) |
| Combined | [list] | Assessment + search → Policy review |
| Automated | [list] | Documentation (now derived from actions) |
| Unchanged | [list] | Customer communication |

More eliminated/automated steps suggest stronger design impact. Steps that can't be reduced may indicate design limits.

Decision Point Clarity

At each decision point, does the practitioner know what to do?

| Decision Point | Clear? | If No, What's Missing |
| --- | --- | --- |
| [Decision 1] | Yes/No | [Gap] |
| [Decision 2] | Yes/No | [Gap] |

Every "No" is a design gap to resolve.


Leading Indicators (Before Prototype)

These signals predict implementation success before building begins:

Practitioners Willing to Participate in Testing

If validation participants are willing to test the prototype, they have sufficient confidence in the design. Reluctance to participate signals unresolved concerns.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Pilot volunteers | Multiple eager | Some willing | None willing |

No Unresolved Major Concerns

Major concerns from validation should be addressed in design or explicitly acknowledged as prototype testing questions. Unresolved concerns that aren't acknowledged tend to surface during deployment.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Major concerns | All addressed or acknowledged | Some unaddressed | Many unaddressed |

Blueprint Passes "Could Someone Build This" Test

A developer should be able to implement from the blueprint without design decisions.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Developer review | Ready to build | Questions about design | Needs more design work |

Success Metrics Aligned with ROI Model

The metrics that will evaluate the prototype should match the metrics that justified the investment.

| Indicator | Green | Yellow | Red |
| --- | --- | --- | --- |
| Metric alignment | All baseline metrics have corresponding targets | Most aligned | Metrics disconnected |

Lagging Indicators (After Prototype)

These metrics evaluate design quality once the prototype exists; they preview what Module 5 will measure:

Adoption Rate vs. Design Assumptions

Did practitioners use the system at the rates the design assumed?

| Metric | Design Assumption | Prototype Result | Gap |
| --- | --- | --- | --- |
| Usage rate | [%] | [%] | Positive/negative |
| Voluntary vs. required | [description] | [actual] | |

Time Savings vs. Projected

Did the workflow actually save time?

| Metric | Baseline | Design Projection | Actual | Accuracy |
| --- | --- | --- | --- | --- |
| Time per task | [min] | [min] | [min] | [%] |

Error Rate vs. Projected

Did errors actually decrease?

| Metric | Baseline | Design Projection | Actual | Accuracy |
| --- | --- | --- | --- | --- |
| Error rate | [%] | [%] | [%] | [%] |

Practitioner Satisfaction

Do practitioners prefer the new workflow?

| Metric | Before | Target | After |
| --- | --- | --- | --- |
| Satisfaction (1-5) | [score] | [score] | [score] |
| Preference (old vs. new) | N/A | New preferred | [actual] |

Red Flags

These signals indicate design problems requiring attention:

Practitioners Won't Validate

If practitioners decline to participate in validation or provide only superficial feedback, something is wrong. Possible causes:

  • Distrust of the design process
  • Fear of reprisal for criticism
  • Prior bad experiences with similar initiatives
  • Design so disconnected from work that feedback seems pointless

Response: Investigate the underlying cause before proceeding.

Too Many Exceptions in Design

If the exception handling pathway dominates the design, the "routine" case may not be as routine as assumed. Exception-heavy designs are complexity-heavy designs.

Response: Re-examine current-state data. Is the exception rate accurate? If yes, the design may need to accept that complexity is inherent to the work.

Complexity Increased Rather Than Decreased

If the future state has more steps, more decisions, or more time than the current state, the design is adding a new layer rather than improving the work.

Response: Return to design principles. What was added that doesn't serve practitioners? What can be removed?

Success Metrics Don't Connect to Business Case

If the metrics that will evaluate the prototype don't relate to the metrics that justified investment, success can't be demonstrated even if achieved.

Response: Reconcile metrics. Either adjust blueprint metrics to match business case, or acknowledge that the design addresses different value than originally proposed.


The Design Feedback Loop

Design quality improves through iteration. Module 4's blueprint is a hypothesis; Module 5's prototype tests it.

Prototype Results Inform Design Iteration

What prototype testing reveals:

  • Design assumptions that proved accurate
  • Design assumptions that proved wrong
  • Unexpected friction in the new workflow
  • Unexpected benefits not anticipated

Each finding informs design refinement. The cycle continues until design stabilizes.

Tracking Design Assumption Accuracy

Over multiple projects, track which design assumptions tend to be accurate and which tend to miss:

| Assumption Type | Projects | Accuracy Rate | Pattern |
| --- | --- | --- | --- |
| Time savings | [n] | [%] | [trend] |
| Adoption rate | [n] | [%] | [trend] |
| Error reduction | [n] | [%] | [trend] |

Patterns inform future assumptions. If adoption assumptions consistently miss by 20%, future designs should account for that bias.
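Accounting for a consistent bias can be as simple as scaling new projections by the historical ratio of actual to projected results. The historical figures below are invented for illustration; only the method is the point.

```python
# Sketch: derive a bias correction from past projection accuracy.
# The (projected, actual) adoption percentages below are invented examples.
history = [
    (80, 62),
    (75, 60),
    (90, 74),
]

# Mean ratio of actual to projected across past projects.
bias = sum(actual / projected for projected, actual in history) / len(history)

def calibrated(projection: float) -> float:
    """Scale a new projection by the historical actual/projected ratio."""
    return projection * bias
```

With these example figures the ratio is roughly 0.8, so a new 85% adoption projection would be read as closer to 68% until the organization's estimates improve.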

Building Organizational Design Capability

Each design project builds capability:

  • Pattern recognition improves
  • Practitioner relationship deepens
  • Estimation accuracy increases
  • Failure patterns become recognizable earlier

The goal is not just a working system but an organization that designs working systems reliably.