Tuesday, October 14, 2025

The Outcome Clock: Aligning to Metrics That Matter

 

 

Dwayne Stroman, Leaning Agile, Wm. Frank Dea, USAA

 

Introduction: Why Outcome Metrics Matter

Too often, teams and leaders alike find themselves buried in productivity metrics that don’t truly tell the story of progress. Story points burned, tasks completed, and hours logged might look good on a report, but they don’t necessarily mean we’re moving in the right direction. Outcome metrics shift the focus from how much we do to what difference it makes. As Eric Ries says, “If we cannot measure progress, we cannot learn.”

Clock Metrics

To help every level of an organization align its measurement strategy with purpose and impact, I want to introduce the Clock Metrics approach: a practical model that helps teams, managers, and senior leaders track outcomes at the right altitude and pace.

In the context of Clock Metrics, a model is a practical framework that clarifies how different organizational levels can measure outcomes at the right altitude (strategic vs. tactical) and pace (daily to quarterly). It helps people see how their work connects to broader goals and enables better alignment across roles and time horizons.

 

Important Notes: Clock Metrics is about understanding the scope and rhythm of delivery, not about timing or pressuring teams. Each "hand" represents a different level of visibility—from immediate team activity to longer-term strategic progress. Just as in a real clock, movement at one level influences the others.

This model is not meant to measure or manage team performance. Instead, it's a tool to improve the delivery system by helping us focus on the most valuable work, align outcomes across time horizons, and increase the accuracy and effectiveness of what we deliver. When we misuse it to enforce artificial timelines or force the iron triangle of Time, Scope, and Resources, we undermine learning, quality, and flow.

For purposes of this article, we use the example of an employee technology support organization that provides employee onboarding, tools, and technology while maintaining operational security.

The Objective: Stop Measuring Activity and Start Measuring Impact

The Clock Model for Outcome Alignment

Visualize the analog clock as an overlay to your outcome metrics strategy. Each hand on the clock represents a different level of visibility and scope within the organization. Just as a clock provides a complete picture of time through its hands, each moving at different speeds, your outcome metrics provide a complete picture of progress by collecting data across feedback loops of varying scope.

·         The Second hand tracks rapid, fast-feedback indicators.

·         The Minute hand reflects short- to mid-term directional movement.

·         The Hour hand connects to long-range, strategic goals.

·         The Date display shows progress against key initiatives, funding, and investment timelines.

The purpose of this model is to help you identify the cascading levels of indicators needed to properly measure progress, enabling timely feedback for planning and adjustment at every level.
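
As a minimal sketch (the cadences, scopes, and example indicators below are illustrative assumptions drawn from this article, not prescriptions), the model can be written down as a simple structure that pairs each hand with its scope and feedback cadence:

    # Illustrative only: the Clock Metrics model as data. Cadences and scopes
    # are assumptions for this example organization, not prescriptions.
    CLOCK_METRICS = {
        "second_hand": {
            "scope": "single team",
            "feedback_loop_days": (1, 3),      # fast feedback, ideally within a sprint
            "examples": ["manual steps in Process A", "time to resolve a repetitive request"],
        },
        "minute_hand": {
            "scope": "multiple teams, typically one ART",
            "feedback_loop_days": (14, 90),    # visible at least once per PI
            "examples": ["new employee onboarding time"],
        },
        "hour_hand": {
            "scope": "enterprise OKRs / value streams",
            "feedback_loop_days": (90, 365),   # lagging, reviewed quarterly or yearly
            "examples": ["self-service tool adoption"],
        },
        "date_display": {
            "scope": "portfolio and investment decisions",
            "feedback_loop_days": (180, 540),  # strategic funding horizons
            "examples": ["total cost-to-serve"],
        },
    }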

 

 

Date Display – Guiding Strategic Investment

Purpose

The Date display moves the slowest in the Clock Metrics model, yet it delivers the most strategic form of feedback. It reflects portfolio-level insights and budgetary signals, used to assess whether the organization’s big bets and long-horizon investments are generating meaningful returns, and it supports decisions about where to flow capacity and capital next.

Example

•  Indicator: A reduction in total cost-to-serve through enterprise automation.

  •  Purpose: Determine if long-term initiatives are delivering impact aligned with strategic intent.
  •  Method: Informed by synthesis across time horizons, drawing on Hour hand outcomes (portfolio objectives) and business-wide metrics.

 

These indicators help leaders decide what not to do, ensuring focus and commitment are preserved for initiatives that continue to create value.

“You can’t manage what you can’t measure. But you also can’t measure what you don’t define.” — W. Edwards Deming

 

Strategic metrics like these rarely map directly to a specific sprint, feature, or iteration—but they are essential for navigating trade-offs, reallocating resources, and adjusting strategic priorities.

 

The Hour Hand: Strategic Alignment and Business Outcomes (Lagging Indicators)

The Hour hand moves gradually, marking the outcomes of coordinated effort across time and teams. It reflects lagging indicators, such as improvements in productivity, quality, or adoption, that signal whether the system is moving in the right direction.  These metrics don’t fluctuate daily, but over time they reveal whether our actions are compounding toward larger goals.

•  Indicator: A 30% increase in employee self-service tool adoption over the fiscal year.

  •  Purpose: Shows that short-term changes are cascading into meaningful results.

  •  Connection: Aligned with enterprise OKRs, supported by team-level work but evaluated at a higher system level.

When short cycles are effective, we expect the hour hand to move with consistency and clarity, giving leaders confidence in their long-term strategies.

These indicators help senior leaders understand whether investments and change efforts are paying off. They often require alignment across multiple value streams and benefit from a clear chain of influence back to team-level actions.

“Lagging indicators are essential for feedback, but they arrive too late to prevent waste. That’s why we must connect them to faster cycles.” — Paraphrased from Donald G. Reinertsen

 

The Second Hand: Near-Term Team Outcomes (Leading Indicators)

Purpose

The Second hand represents short-term, actionable signals that teams can observe, respond to, and improve upon quickly. These are leading indicators: the first signs that something is working (or not).

Most, if not all, Second hand metrics are short-term, small in scope, and typically limited to one team. The feedback loop delay (i.e., the time it takes to observe a change in the metric) should ideally be no more than a sprint, with a preferable range of 1–3 days. This fast feedback loop allows teams to make changes or adjustments in execution before significant effort is invested.

 

Examples

•  Indicator: Number of Manual Steps for a User in Process A

  •  Purpose: Shows the incremental improvement as work is delivered towards the corresponding Minute hand metrics.

  •  Connection: Does not confirm a user experience improvement on its own, but it is a good indicator that one will follow.

•  Indicator: Time to Resolve a Repetitive Request

  •  “After the automation story, average handling time for password resets dropped by 40%.”

  •  While there may be other factors in the cycle time of a repetitive request, this change is a good indication that it should decrease.

•  Indicator: Reduction in Support Tickets Related to Password Reset

  •  “Following the 2 UI change stories, daily how-to questions dropped from 12 to 4.”

  •  While still not confirmation, we can assume we are even closer to a clear win for the user.

Summary

These Second hand metrics help teams make decisions quickly and show whether each change is nudging us toward better outcomes over time. But without context, they risk becoming local optimizations. Because these are near-term indicators, we have to understand how multiple increments of these leading indicators (Second hand ticks) should add up to a significant improvement (a Minute hand tick).

“There is nothing so useless as doing efficiently that which should not be done at all.” — Peter Drucker

 

The Minute Hand: Finding Adjustments

Purpose

Minute hand metrics are where we start to see measurable user, customer, and business improvements. These indicators show the accumulation of progress from the Second hand metrics, but they typically span a broader scope across multiple teams and Sprints. Minute hand metrics aggregate multiple Second hand signals to show patterns that emerge over one or more iterations.
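
A minimal sketch of that aggregation, assuming (hypothetically) that each team records a Second hand measurement every iteration, for example an onboarding cycle time in hours:

    # Illustrative only: roll per-team Second hand samples up into a Minute hand
    # trend across an ART. Team names and numbers are hypothetical.
    from statistics import mean

    second_hand_samples = {          # team -> onboarding cycle time (hours) per iteration
        "team_a": [40, 36, 33],
        "team_b": [42, 41, 37],
        "team_c": [39, 35, 34],
    }

    # Minute hand view: the ART-level average per iteration and its overall trend
    per_iteration = zip(*second_hand_samples.values())
    minute_hand_trend = [round(mean(values), 1) for values in per_iteration]
    improvement = (minute_hand_trend[0] - minute_hand_trend[-1]) / minute_hand_trend[0]

    print(minute_hand_trend)                             # [40.3, 37.3, 34.7]
    print(f"{improvement:.0%} reduction across the PI")  # 14% reduction across the PI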

 

Scope/Duration

While a Minute hand metric can occasionally reflect the efforts of a single team, it more commonly represents the collective impact of multiple teams working together to move the needle.  The feedback loops for these metrics are typically external, meaning the data primarily comes from how end users or stakeholders consume the delivered value.

While Minute hand metrics can cross ARTs, they are typically contained within one ART. Changes in these metrics should be visible at least once, if not multiple times, per PI.

Examples

•  Indicator: New employee onboarding time

  •  “With 3 teams working to reduce the clicks and steps needed to establish a new employee’s digital footprint, we have seen the cycle time drop by 10%.”

  •  We are starting to see a real, positive effect on the employee onboarding experience.

•  Indicator: Hardware acquisition request cycle time

  •  “After aggregating the request and validation process improvements, we see a 5% reduction in time to deliver.”

  •  Other areas contribute to this request cycle time, but improving the input processing has already made a positive impact.

Summary

These indicators often act as bridges between team-level experimentation and strategic movement. They are ideal for mid-level managers and business owners tracking directional improvements.  They are also the best place to find mid-plan or PI adjustments to better achieve the intended outcomes.

It’s critical to ensure that Minute hand metrics remain grounded in customer and employee experience, not just delivery stats. That is the connection back to the Hour hand.

The Flow of Time: Connecting the Hands

The beauty of a clock is not just in its hands, but in how they move together.

A team reducing onboarding time (second hand) helps improve the overall employee experience (minute hand), which contributes to increased tool adoption (hour hand), justifying the strategic investment (date display).

Outcome indicators should align across these layers, not just roll up. That means each team and leader understands:

·         Which metrics they directly influence

·         How those metrics relate to broader goals

·         What feedback loops exist to validate the connection

 

Avoiding Common Traps

·         Vanity metrics: Avoid measuring what looks good but says little (e.g., number of tickets closed).

·         Disconnection: Don’t treat metrics in isolation; they should build toward enterprise outcomes.

·         Lag without lead: Hour-hand outcomes are important, but without leading indicators, you can’t influence them in time.

·         Aggregation & Adjustments: Combining data is not the same as showing causality or purpose.

Getting Started: Practical Tips for Any Level

1.      Start with one OKR (Date/Hour) and work all the way down to a few Team (Second hand) indicators.

2.      Ask teams: what early signals show we’re making a difference?

3.      For each Key Result, identify at least one Minute hand and one Second hand metric.

4.      Look for a vanity metric to replace with an outcome metric this quarter.

5.      Ensure feedback loops exist across all levels; data without dialogue won’t drive action.

6.      Use narrative summaries alongside dashboards to explain the “why” behind the “what.”

Conclusion: Making Time Work for You

Everyone in the organization - from teams to mid-level managers to senior leaders - plays a role in measuring what matters. A healthy metric system tells a story: a story of change, of learning, and of value delivered.

Just like the hands of a clock, these signals don’t compete. They move together to keep time with progress. When you align them with purpose, you don’t just measure faster. You measure smarter.

As Nicole Forsgren writes in Accelerate, “You can’t improve what you don’t understand.”

Use the Clock model to build out metrics that help you understand, and improve, what really matters. The end-to-end example that follows applies the model in a different domain: cross-selling insurance products to existing mobile banking customers.


🗓️ Date Display (Strategic OKR)

Objective:
Increase cross-sell of insurance products to existing mobile banking customers to drive customer lifetime value and deepen financial relationships.

Key Result (used as Hour hand target):
Achieve a 15% increase in bundled banking + insurance product adoption by the end of the fiscal year.

 

🕛 Hour Hand (Lagging Outcome Indicator – Reviewed Quarterly)

Metric:
% of active mobile banking users with at least one insurance product bundled

·         Target: 15% by end of year

·         Current: 8%

·         Cadence: Reviewed quarterly to assess progress toward OKR

·         Purpose: Reflects real, long-term behavioral change and business value

This is the “proof” that our system of delivery and engagement is working over time.
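
To make the gap concrete (the user count here is hypothetical): with 2,000,000 active mobile banking users, moving from 8% to 15% bundled adoption means roughly 140,000 additional bundled relationships, or about 35,000 per quarter if progress were even. That volume is what the Minute and Second hand metrics below have to generate.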

 

🕐 Minute Hand (Responsive Program/ART Metrics – Updated Every PI)

Metric Examples (2–3 month feedback loops):

1.      Conversion rate on in-app insurance offer journeys
E.g., % of users clicking “Get a Quote” and completing the process

2.      Drop-off rate at key steps in the insurance quote funnel
E.g., % of users who abandon after seeing pricing or required personal info

3.      Cycle time to release new cross-sell feature (e.g., "smart insurance recommendations")
E.g., story to production time for a new personalized upsell UI

·         Cadence: Tracked and inspected every PI (minimum) or every iteration (preferable)

·         Purpose: Detect friction, validate experiments, adapt strategy

“If we can reduce the time between cause and effect, we can make better decisions.” – Don Reinertsen

 

⏱️ Second Hand (Team-level Fast Feedback – Updated Daily to Weekly)

Metric Examples (2–3 day response loops):

1.      Click-through rate (CTR) on insurance promo banners in the app
e.g., “Tap to see how you can save on life insurance”

2.      Bug/defect reports or user friction detected from newly released quote journey
Collected via in-app feedback or crash analytics

3.      Stories completed per team with cross-functional dependencies resolved
Helps visualize flow and remove blockers in real-time

4.      Daily Mobile App Store Reviews mentioning “insurance” keyword
Used to surface emergent insights on user sentiment

·         Cadence: Monitored daily or within 2–3 days

·         Purpose: Micro-adjustments to UI, messaging, and feature toggles

At this level, we're shaping delivery precision and learning fast.

 

Putting It Together (Chain of Influence)

·         Second Hand data (e.g., low CTR on a quote CTA) leads to an A/B experiment.

·         That experiment improves Minute Hand conversion rates in the funnel.

·         Increased conversions accumulate, moving the Hour Hand closer to the Key Result.

·         Achieving that Key Result fulfills the strategic intent of the Date Display OKR.

 

Wednesday, February 19, 2025

Flow Metric 7: Flow Turbulence


Lean Principle 3: Make value flow without interruptions.

SAFe® Lean Agile Principle 6 – Make Value Flow without Interruptions

In today’s fast-paced delivery environments, businesses that thrive prioritize the flow of value, ensuring work flows smoothly through the value stream without unnecessary interruptions. (For purposes of this paper, I use “flow” to designate the flow of a Feature through either (a) the ART Kanban or (b) the ART’s delivery pipeline.) While the existing Scaled Agile Framework Flow Metrics such as Flow Velocity, Flow Time, and Flow Load provide valuable insights, this paper introduces a seventh flow metric: Flow Turbulence.

Flow Turbulence measures disruptions caused by low Percent Complete and Accurate (%C&A) rates across the value stream. %C&A has been used as First Pass Yield in Lean for the past 60 years, and in the world of engineering and DevOps since at least 2011 (Humble & Farley).

If overlooked, Flow Turbulence can ripple through upstream and downstream processes, creating delays, rework, and dependencies that can impact the entire system’s ability to deliver value. Let’s explore this metric and the new Scaled Agile Flow Accelerator to address it: Design the System for Flow.


Understanding Flow Turbulence: The Turtle Problem

Imagine a turtle swimming upstream in a flowing river. As it pushes forward, its movements create ripples that extend all the way to the shoreline, disturbing the water’s natural flow. These ripples represent the turbulence caused by low %C&A at any step of the value stream. Just as the turtle’s ripples can disrupt an entire ecosystem, defects and incomplete work disrupt the smooth progression of work in an enterprise flow system.

 


The goal of Flow Turbulence is to measure and reduce these disruptions by understanding how %C&A performance at various points causes rework, bottlenecks, and delays.

What is Percent Complete and Accurate (%C&A)?

%C&A assesses whether outputs of a step in the value stream are delivered in a state that is both:

  • Complete: The task or item is fully finished with all necessary information and components.  Everything is there to successfully act on this value when pulled to the next step.
  • Accurate: The output meets the quality standards required for the next stage without requiring corrections. For each step in a value stream, %C&A tells us how often work flows successfully through that step, i.e., with no defects and no rework.
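
A simple, hypothetical illustration: if a design step hands off 50 items in a PI and 10 of them come back for clarification or rework, that step’s %C&A is 40 / 50 = 80%. Because the rolled %C&A of a value stream is the product of each step’s %C&A, three consecutive steps at 80% mean only about 0.8 × 0.8 × 0.8 ≈ 51% of the work flows end to end without rework, which is how locally tolerable quality gaps compound into system-wide turbulence.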

When %C&A is low, two major disruptions occur:

  1. Upstream Disruptions: Work with errors or missing details must be reinjected into the flow for rework, creating bottlenecks and delays.
  2. Downstream Disruptions: Tasks passed along without being complete and accurate lead to missed dependencies, delays, and quality issues in later stages.

%C&A has its origins in Lean Thinking and Lean Six Sigma, with key examples from Toyota, which emphasized reducing defects and rework through built-in quality (jidoka).


How Flow Turbulence Affects the Value Stream

Upstream Turbulence

When an incomplete feature or deliverable requires rework, it clogs the upstream workflow. Teams must pause their current tasks to fix errors, causing delays and disrupting their planned capacity.

Example: A product design team delivers specifications that are incomplete, forcing downstream developers to pause, wait for clarification, or make assumptions. As the issue flows back upstream for resolution, it creates a backlog of unfinished work.

Downstream Turbulence

When low-quality outputs are passed downstream, they create cascading effects that disrupt schedules, cause dependency failures, and increase delivery risks.

Example: A partially tested software component causes bugs to propagate downstream to testing and deployment stages, where it requires costly remediation and impacts delivery timelines.

Both types of turbulence result in delays, inefficiencies, and reduced predictability, emphasizing the need to optimize %C&A.


Flow Accelerator: Design the System for Flow

To address Flow Turbulence, organizations must proactively design systems and value streams that prioritize quality at every step. This concept builds on Lean principles, similar to the andon cord approach used in manufacturing to immediately flag quality issues.

Key Practices for Designing the System for Quality:

  1. Integrate Quality Checks in the Flow: Embed checkpoints within the workflow to validate that the work is complete and accurate before advancing to the next stage. This ensures defects are caught early, avoiding downstream disruptions (a small sketch of such a gate follows this list).
  2. Implement the Equivalent of an Andon Cord: Just like in manufacturing, where a worker can pull the andon cord to stop production when a defect is detected, teams should have the authority and mechanisms to pause the flow and address issues before they escalate. SAFe Principle 9: Decentralize Decision Making is vital to success in this step.
  3. Introduce a 'Pause State' for Reflection: Build a systematic approach for pausing and reflecting at key intervals within the value stream. During these pauses, teams assess %C&A, identify the root causes of turbulence, and implement improvements. SAFe cadenced events such as the Inspect and Adapt Workshop and Team Retrospectives are great examples of incorporating a 'Pause' state.
  4. Optimize Feedback Loops: Rapid feedback from upstream and downstream teams ensures that issues can be corrected before they cause significant disruptions. Encourage experimentation and continuous improvement through feedback cycles. Remember that feedback is already present; we just need to improve our 'receptors,' our ability to gather and use it.
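
As a minimal, hypothetical sketch of practices 1 and 2 (the field names and checks below are assumptions for illustration, not a prescribed workflow or tool API), a simple pull policy can refuse to advance work that is not yet complete and accurate:

    # Illustrative only: an andon-style pull gate. Field names and checks are
    # assumptions; align them with your own Definition of Done.
    def can_pull_to_next_step(item: dict) -> bool:
        """Return True only when the item is complete and accurate for the next step."""
        return all([
            item.get("acceptance_criteria_met", False),   # complete: nothing missing
            item.get("tests_green", False),               # accurate: all tests run green
            item.get("open_defects", 0) == 0,             # no known rework waiting
        ])

    feature = {"acceptance_criteria_met": True, "tests_green": False, "open_defects": 0}
    if not can_pull_to_next_step(feature):
        print("Pull the andon cord: stop and fix before the work moves forward.")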

Improving %C&A with Systematic Quality Practices

  • Cross-Functional Alignment: Ensure that all stakeholders have a shared understanding of what “complete and accurate” means for each stage of the value stream.
  • Automated Testing and Validation: Use continuous integration, continuous testing, and automation of those tests to verify that work meets quality standards at each step.
  • Root Cause Analysis: Regularly conduct retrospectives to identify why %C&A is low and implement improvements.
  • Have a clear agreement on Definition of Ready and Definition of Done: Encourage ongoing discussions on DoR and DoD and continue to optimize these components.

Measuring Flow Turbulence

To quantify Flow Turbulence, organizations can track:

  • The percentage of stories, tasks or features requiring rework due to low %C&A.
  • The frequency and delay time of disruptions reported at downstream stages.
  • The cumulative delay caused by rework and defects.
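
A minimal sketch of how these three measures might be computed from work item records (the field names and data are assumptions for illustration, not a SAFe or tooling standard):

    # Illustrative only: compute simple Flow Turbulence indicators from work items.
    work_items = [
        {"id": "F-101", "reworked": False, "rework_delay_days": 0},
        {"id": "F-102", "reworked": True,  "rework_delay_days": 4},
        {"id": "F-103", "reworked": True,  "rework_delay_days": 2},
        {"id": "F-104", "reworked": False, "rework_delay_days": 0},
    ]

    total = len(work_items)
    reworked = [w for w in work_items if w["reworked"]]

    rework_rate = len(reworked) / total                               # share of items needing rework
    disruption_count = len(reworked)                                  # frequency of disruptions
    cumulative_delay = sum(w["rework_delay_days"] for w in reworked)  # total delay caused by rework

    print(f"Rework rate: {rework_rate:.0%}, disruptions: {disruption_count}, "
          f"cumulative delay: {cumulative_delay} days")
    # Rework rate: 50%, disruptions: 2, cumulative delay: 6 days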

Visualizations such as cumulative flow diagrams (CFD) or scatter plots of rework incidents can help identify where turbulence originates and its impact on delivery timelines.


Benefits of Managing Flow Turbulence

  • Faster Delivery: By reducing rework and dependencies, teams can deliver value more predictably and quickly.
  • Improved Quality: Higher %C&A ensures downstream teams receive high-quality inputs, leading to fewer defects and better outcomes, preventing the ‘good work on top of bad’ problem.
  • Enhanced Collaboration: Teams communicate and collaborate better when there are clear quality standards and feedback mechanisms.

Conclusion

Flow Turbulence, driven by low %C&A, is a critical indicator of disruptions within a value stream. Addressing this turbulence through the proposed Flow Accelerator, Design the System for Flow, ensures that organizations can reduce rework, improve flow efficiency, and enhance overall business agility. Just as the ripples from a turtle swimming upstream can disrupt an entire riverbank, unchecked turbulence can have cascading effects on delivery outcomes.

By building quality into every step of the value stream and embracing reflective pauses and rapid feedback, organizations can swim upstream smoothly, delivering high-quality outcomes without the turbulence.

 

Note: First Pass Yield measures quality at each step, and Six Sigma extends this by measuring the quality of handoffs between stages. FPY = (number of good units + number of acceptable units) / total units entering the process × 100%.

Andon Cord - means "STOP". The work needs to be fixed before it moves forward. In a CDP this might translate into "all tests run green" before the work gets pulled into the next step.

Why has “Agile” become a negative buzzword?

 

Why do many companies struggle to achieve success with Agile implementations?



The answer is simple: many companies are using Agile for the wrong purpose and expecting results it was never designed to deliver. Agile, at its core, was created for team-level agility, not for transforming entire enterprises. It’s like trying to use antibiotics to cure cancer: antibiotics are powerful when used correctly, but they aren’t a cure-all.

 

So, what’s missing?

 

The answer is Lean.

 


Why Lean Complements Agile

Lean isn’t just about relentless improvement; it’s a mindset and philosophy focused on value creation, reducing waste, and continuously improving workflows. Lean is first and foremost a way of thinking that leads to a different way of doing. With five core principles and numerous patterns and practices (e.g., 5S, Kaizen, and flow efficiency), Lean provides the enterprise-level structure that Agile lacks.

  • Example: While Agile helps teams deliver working software incrementally, Lean helps ensure that work flows efficiently through value streams, reducing delays, dependencies and handoffs.

The Real Power Lies in Combining Lean and Agile

I am in no way discounting Agile’s values, principles, practices, and mindset. However, we have been asking too much of a standalone Agile approach: it is incredibly powerful when properly focused, and incredibly disappointing when spread too thin, trying to solve problems it was never designed for. By combining Agile’s team-level collaboration with Lean’s focus on enterprise flow and value delivery, organizations can solve problems more effectively.

  • Agile’s Role: Improve team collaboration, delivery, and iteration.
  • Lean’s Role: Understand and optimize value streams, reduce waste, and instill relentless improvement.

Adding the bedrock of lean thinking, principles, and practices to focus on understanding value streams, waste reduction, minimizing dependencies and handoffs, and instilling a relentless improvement mindset is vital to success with an agile approach.  Without Lean, Agile often leads to frustration because organizations attempt to apply team-level solutions to enterprise-level challenges.


Agile + Lean in Scaled Agile (SAFe)

As Dean Leffingwell, creator of SAFe®, correctly said: “Agile for the teams and Lean for the enterprise.” The Scaled Agile Framework is built on this principle, yet many implementations fail because they overemphasize Agile while neglecting Lean.

What happens when Lean is missing:

  • Teams deliver incrementally, but enterprise-wide bottlenecks (e.g., dependencies, handoffs, or unclear value streams) slow progress.
  • Improvements are localized, but systemic inefficiencies remain hidden.
  • Agile is expected to “cure” everything, leading to disappointment.

The Antibiotic Analogy: Applying the Right Treatment

Antibiotics are a key component of today’s medical field.  However, they cannot cure everything.  Agile is like a powerful antibiotic, effective at solving specific issues like team collaboration and incremental delivery. But when applied broadly to systemic organizational issues, it fails to deliver results, just as antibiotics can’t treat cancer. Lean acts as the enterprise-level treatment, addressing large-scale inefficiencies by improving the overall flow of value.


Practical Steps for Success:

  • Assess your current SAFe® or Agile implementation: Are you applying Agile principles where Lean practices are required?
  • Map and optimize your value streams: Use Lean principles to understand where delays, waste, and bottlenecks occur.
  • Combine Agile’s team delivery with Lean’s flow efficiency: This ensures local improvements translate into enterprise-level outcomes.
  • Focus on relentless improvement: Lean instills a relentless improvement mindset across the organization.

Conclusion:
Agile, when used correctly, is incredibly powerful at the team level. But without the foundational support of Lean principles, organizations will continue to struggle with scalability and efficiency. Lean and Agile are two sides of the same coin—embrace both to unlock their full potential.


 
