Tuesday, October 14, 2025

The Outcome Clock: Aligning to Metrics That Matter

 

 

Dwayne Stroman, Leaning Agile, Wm. Frank Dea, USAA

 

Introduction: Why Outcome Metrics Matter

Too often, teams and leaders alike find themselves buried in productivity metrics that don’t truly tell the story of progress. Story points burned, tasks completed, and hours logged might look good on a report, but they don’t necessarily mean we’re moving in the right direction. Outcome metrics shift the focus from how much we do to what difference it makes. As Eric Ries says, “If we cannot measure progress, we cannot learn.”

Clock Metrics

To help organizations of all levels align their measurement strategies with purpose and impact, I want to introduce the Clock Metrics approach: a practical model that helps teams, managers, and senior leaders track outcomes at the right altitude and pace. 

In the context of Clock Metrics, a model is a practical framework that clarifies how different organizational levels can measure outcomes at the right altitude (strategic vs. tactical) and pace (daily to quarterly). It helps people see how their work connects to broader goals and enables better alignment across roles and time horizons.

 

Important Notes: Clock Metrics is about understanding the scope and rhythm of delivery, not about timing or pressuring teams. Each "hand" represents a different level of visibility—from immediate team activity to longer-term strategic progress. Just as in a real clock, movement at one level influences the others.

This model is not meant to measure or manage team performance. Instead, it's a tool to improve the delivery system by helping us focus on the most valuable work, align outcomes across time horizons, and increase the accuracy and effectiveness of what we deliver. When we misuse it to enforce artificial timelines or force the iron triangle of Time, Scope, and Resources, we undermine learning, quality, and flow.

For the purposes of this article, we use the example of an employee technology support organization that provides employee onboarding, tools, and technology while maintaining operational security.

The Objective: Stop Measuring Activity and Start Measuring Impact

The Clock Model for Outcome Alignment

Visualize the analog clock as an overlay to your outcome metrics strategy. Each hand on the clock represents a different level of visibility and scope within the organization. Just as a clock provides a complete picture of time through its hands, each moving at different speeds, your outcome metrics provide a complete picture of progress by collecting data across feedback loops of varying scope.

·         The Second hand tracks rapid, fast-feedback indicators.

·         The Minute hand reflects short- to mid-term directional movement.

·         The Hour hand connects to long-range, strategic goals.

·         The Date display shows progress against key initiatives, funding, and investment timelines.

The purpose of this model is to provide a basis for finding the different levels of indicators needed to properly measure progress.  The goal is to equip you with the ability to identify the various levels of cascading indicators, enabling real-time feedback for planning and adjustments.

 

 

Date Display – Guiding Strategic Investment

Purpose

The Date display moves the slowest in the Clock Metrics model, yet it delivers the most strategic form of feedback. It reflects portfolio-level insights and budgetary signals, used to assess whether the organization’s big bets and long-horizon investments are generating meaningful returns and to support decisions about where to flow capacity and capital next.

Example

v  Indicator: A reduction in total cost-to-serve through enterprise automation.

Ø  Purpose: Determine if long-term initiatives are delivering impact aligned with strategic intent.

Ø  Method: Informed by synthesis across time horizons—hour-hand outcomes (portfolio objectives) and business-wide metrics.

 

These indicators help leaders decide what not to do, ensuring focus and commitment are preserved for initiatives that continue to create value.

“You can’t manage what you can’t measure. But you also can’t measure what you don’t define.” — often attributed to W. Edwards Deming

 

Strategic metrics like these rarely map directly to a specific sprint, feature, or iteration—but they are essential for navigating trade-offs, reallocating resources, and adjusting strategic priorities.

 

The Hour Hand: Strategic Alignment and Business Outcomes (Lagging Indicators)

The Hour hand moves gradually, marking the outcomes of coordinated effort across time and teams. It reflects lagging indicators, such as improvements in productivity, quality, or adoption, that signal whether the system is moving in the right direction.  These metrics don’t fluctuate daily, but over time they reveal whether our actions are compounding toward larger goals.

v  Indicator: A 30% increase in employee self-service tool adoption over the fiscal year.

Ø   Purpose: Shows that short-term changes are cascading into meaningful results.

Ø   Connection: Aligned with enterprise OKRs, supported by team-level work but evaluated at a higher system level.

When short cycles are effective, we expect the hour hand to move with consistency and clarity, giving leaders confidence in their long-term strategies.

These indicators help senior leaders understand whether investments and change efforts are paying off. They often require alignment across multiple value streams and benefit from a clear chain of influence back to team-level actions.

“Lagging indicators are essential for feedback, but they arrive too late to prevent waste. That’s why we must connect them to faster cycles.” — Paraphrased from Donald G. Reinertsen

 

The Second Hand: Near-Term Team Outcomes (Leading Indicators)

Purpose

The Second hand represents short-term, actionable signals that teams can observe, respond to, and improve upon quickly. These are leading indicators: the first signs that something is working (or not).

Most, if not all, Second hand metrics are short-term, small in scope, and typically limited to one team. The feedback loop delay (i.e., the time it takes to observe a change in the metric) should ideally be no more than a sprint, with a preferable range of 1–3 days. This fast feedback loop allows teams to make changes or adjustments in execution before significant effort is invested.

 

Examples

v  Indicator: Number of Manual Steps for a User in Process A

Ø  Purpose: Shows the incremental improvement as work is delivered towards the corresponding Minute hand metrics.

Ø  Connection: Does not confirm a user experience improvement on its own, but it is a strong signal that one will follow.

v  Indicator: Time to Resolve a Repetitive Request

Ø  “After the automation story, average handling time for password resets dropped by 40%.”

Ø  While there may be other factors in the cycle time of a repetitive request, this change gives us a good indicator that it should decrease.

v  Indicator:  Reduction in Support Tickets related to Password Reset

Ø  “Following the 2 UI change stories, daily how-to questions dropped from 12 to 4.”

Ø  While still not confirmation, we can assume we are even closer to a clear win for the user.

Summary

These Second hand metrics help teams decide quickly whether each change is nudging us toward better outcomes over time. But without context, they risk becoming local optimizations. Because these are near-term indicators, we have to understand how multiple increments of these leading indicators (Second hand ticks) should add up to a significant improvement (a Minute hand tick).

“There is nothing so useless as doing efficiently that which should not be done at all.” — Peter Drucker

 

The Minute Hand: Finding Adjustments

Purpose

Minute hand metrics are where we start to see measurable user/customer/business improvements.  These indicators show the accumulation of progress from the second hand metrics, but typically include a broader scope across multiple teams and Sprints. Minute hand metrics aggregate multiple Second hand signals to show patterns that emerge over one or more iterations. 

 

Scope/Duration

While a Minute hand metric can occasionally reflect the efforts of a single team, it more commonly represents the collective impact of multiple teams working together to move the needle.  The feedback loops for these metrics are typically external, meaning the data primarily comes from how end users or stakeholders consume the delivered value.

While Minute hand metrics can cross ARTs (Agile Release Trains), they are typically contained within one ART. Changes in these metrics should be seen at least once, if not multiple times, per PI (Program Increment).

Examples

v  Indicator: New employee onboarding time

Ø  “With 3 teams working towards reduction in clicks and steps to establish their digital footprint, we have seen the cycle time drop by 10%.”

Ø  Starting to see real positive effect on the employee onboarding experience.

v  Indicator: Hardware acquisition request cycle time

Ø  “After aggregating the request and validation process improvements, we see a 5% reduction in time to deliver.”

Ø  Other areas contribute to this request cycle time, but improving the input processing has already made a positive impact.

Summary

These indicators often act as bridges between team-level experimentation and strategic movement. They are ideal for mid-level managers and business owners tracking directional improvements.  They are also the best place to find mid-plan or PI adjustments to better achieve the intended outcomes.

It’s critical to ensure that Minute hand metrics remain grounded in customer and employee experience, not just delivery stats. That grounding is what connects them upward to the Hour hand outcomes described earlier.

The Flow of Time: Connecting the Hands

The beauty of a clock is not just in its hands, but in how they move together.

A team reducing manual steps in a process (Second hand) helps shorten employee onboarding time (Minute hand), which contributes to increased tool adoption (Hour hand), justifying the strategic investment (Date display).

Outcome indicators should align across these layers, not just roll up. That means each team and leader understands:

·         Which metrics they directly influence

·         How those metrics relate to broader goals

·         What feedback loops exist to validate the connection
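One way to make those three questions concrete is to model the hands as a cascading chain of indicators, so any team can trace the metrics it directly influences up to the strategic level. The sketch below is illustrative only: the class names and example wiring are assumptions drawn from this article's employee-technology scenario, not an implementation the article prescribes.

```python
from dataclasses import dataclass, field

# Minimal sketch: each Indicator knows its clock "hand", its feedback-loop
# cadence, and which slower indicators it influences one level up.

@dataclass
class Indicator:
    name: str
    hand: str                       # "second", "minute", "hour", or "date"
    cadence_days: int               # feedback-loop delay for this indicator
    influences: list = field(default_factory=list)  # next hand(s) up

def chain_of_influence(indicator):
    """Walk upward from a fast indicator to the strategic metrics it feeds."""
    chain = [f"{indicator.hand}: {indicator.name}"]
    for parent in indicator.influences:
        chain.extend(chain_of_influence(parent))
    return chain

# Example wiring, following the article's employee-technology scenario.
cost_to_serve = Indicator("Total cost-to-serve", "date", 90)
tool_adoption = Indicator("Self-service tool adoption", "hour", 90, [cost_to_serve])
onboarding_time = Indicator("New employee onboarding time", "minute", 14, [tool_adoption])
manual_steps = Indicator("Manual steps in Process A", "second", 2, [onboarding_time])

for line in chain_of_influence(manual_steps):
    print(line)
```

Walking the chain for the team-level metric prints each hand in turn, from "second: Manual steps in Process A" up to "date: Total cost-to-serve", which is exactly the alignment conversation the three bullets above ask each team and leader to have.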

 

Avoiding Common Traps

·         Vanity metrics: Avoid measuring what looks good but says little (e.g., number of tickets closed).

·         Disconnection: Don’t treat metrics in isolation; they should build toward enterprise outcomes.

·         Lag without lead: Hour-hand outcomes are important, but without leading indicators, you can’t influence them in time.

·         Aggregation without causality: Combining data is not the same as showing causality or purpose.

Getting Started: Practical Tips for Any Level

1.      Start with one OKR (Date/Hour) and work all the way down to a few Team (Second hand) indicators.

2.      Ask teams: what early signals show we’re making a difference?

3.      For each Key Result, identify at least one Minute hand and one Second hand metric.

4.      Look for a vanity metric to replace with an outcome metric this quarter.

5.      Ensure feedback loops exist across all levels—data without dialogue won’t drive action.

6.      Use narrative summaries alongside dashboards to explain the “why” behind the “what.”

Conclusion: Making Time Work for You

Everyone in the organization - from teams to mid-level managers to senior leaders - plays a role in measuring what matters. A healthy metric system tells a story: a story of change, of learning, and of value delivered.

Just like the hands of a clock, these signals don’t compete. They move together to keep time with progress. When you align them with purpose, you don’t just measure faster. You measure smarter.

As Nicole Forsgren writes in Accelerate, “You can’t improve what you don’t understand.”

Use the Clock model to build out metrics that help you understand—and improve—what really matters.


Worked Example: From Date Display to Second Hand

To see the full chain in a different domain, consider a bank cross-selling insurance products through its mobile app.

🗓️ Date Display (Strategic OKR)

Objective:
Increase cross-sell of insurance products to existing mobile banking customers to drive customer lifetime value and deepen financial relationships.

Key Result (used as Hour hand target):
Achieve a 15% increase in bundled banking + insurance product adoption by the end of the fiscal year.

 

🕛 Hour Hand (Lagging Outcome Indicator – Reviewed Quarterly)

Metric:
% of active mobile banking users with at least one insurance product bundled

·         Target: 15% by end of year

·         Current: 8%

·         Cadence: Reviewed quarterly to assess progress toward OKR

·         Purpose: Reflects real, long-term behavioral change and business value

This is the “proof” that our system of delivery and engagement is working over time.

 

🕐 Minute Hand (Responsive Program/ART Metrics – Updated Every PI)

Metric Examples (2–3 month feedback loops):

1.      Conversion rate on in-app insurance offer journeys
E.g., % of users clicking “Get a Quote” and completing the process

2.      Drop-off rate at key steps in the insurance quote funnel
E.g., % of users who abandon after seeing pricing or required personal info

3.      Cycle time to release new cross-sell feature (e.g., "smart insurance recommendations")
E.g., story to production time for a new personalized upsell UI

·         Cadence: Tracked and inspected every PI (minimum) or every iteration (preferable)

·         Purpose: Detect friction, validate experiments, adapt strategy

“If we can reduce the time between cause and effect, we can make better decisions.” – Don Reinertsen

 

⏱️ Second Hand (Team-level Fast Feedback – Updated Daily to Weekly)

Metric Examples (2–3 day response loops):

1.      Click-through rate (CTR) on insurance promo banners in the app
e.g., “Tap to see how you can save on life insurance”

2.      Bug/defect reports or user friction detected from newly released quote journey
Collected via in-app feedback or crash analytics

3.      Stories completed per team with cross-functional dependencies resolved
Helps visualize flow and remove blockers in real-time

4.      Daily Mobile App Store Reviews mentioning “insurance” keyword
Used to surface emergent insights on user sentiment

·         Cadence: Monitored daily or within 2–3 days

·         Purpose: Micro-adjustments to UI, messaging, and feature toggles

At this level, we're shaping delivery precision and learning fast.

 

Putting It Together (Chain of Influence)

·         Second Hand data (e.g., low CTR on a quote CTA) leads to an A/B experiment.

·         That experiment improves Minute Hand conversion rates in the funnel.

·         Increased conversions accumulate, moving the Hour Hand closer to the Key Result.

·         Achieving that Key Result fulfills the strategic intent of the Date Display OKR.
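The arithmetic behind that chain can be sketched in a few lines. The figures below reuse the 8% baseline and 15% target from the example; the per-PI conversion lifts are hypothetical, and the function name is illustrative rather than anything the article defines.

```python
# Illustrative sketch: accumulating Minute hand conversion lifts move the
# Hour hand metric toward the Date display Key Result.

BASELINE_ADOPTION = 0.08   # current bundled-product adoption (Hour hand)
TARGET_ADOPTION = 0.15     # Key Result for the fiscal year (Date display)

def hour_hand_progress(current_adoption):
    """Fraction of the Key Result achieved so far, clamped to [0, 1]."""
    span = TARGET_ADOPTION - BASELINE_ADOPTION
    return max(0.0, min(1.0, (current_adoption - BASELINE_ADOPTION) / span))

# Each PI, funnel improvements (Minute hand) add a little bundled adoption.
adoption = BASELINE_ADOPTION
for pi_conversion_lift in [0.015, 0.02, 0.01]:   # hypothetical per-PI lifts
    adoption += pi_conversion_lift
    print(f"adoption={adoption:.3f}, progress={hour_hand_progress(adoption):.0%}")
```

Reviewing this progress fraction each quarter is the Hour hand cadence; the per-PI lifts feeding it are where Minute hand inspection happens.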

 
