The short answer
UX metrics are tools for understanding experience, not substitutes for the experience itself. When teams optimise primarily for a specific metric — completion rate, time-on-task, click count — the journey tends to reshape itself around that number. The result can be a service that scores well on its chosen measure while quietly degrading the confidence, trust, and clarity the metric was intended to reflect. The metric becomes the goal, and the experience suffers.
How metrics become design constraints
Digital teams rarely optimise at random. They optimise what they decide matters, and what matters is largely determined by what gets measured. This is not a flaw in how teams work; it is a rational response to the need to demonstrate impact and make defensible design decisions.
The problem emerges when a specific metric becomes the primary lens through which a journey is evaluated. At that point, the journey stops being shaped by user needs and starts being shaped by what the metric rewards.
Fewer clicks sounds like better usability. Faster completion sounds like efficiency. Higher completion rate sounds like success. None of these interpretations are wrong in isolation, but each of them is incomplete. Pursued without context, each one creates its own specific blind spots.
What happens when each common metric becomes the goal
When click count is the primary metric, steps that users find genuinely useful — review screens, confirmation dialogs, progress summaries — get removed because they add clicks. The journey becomes shorter, but also less legible, less forgiving, and less trustworthy for users who are not following the ideal path.
When time-on-task is the primary metric, anything that slows users down becomes suspect. Deliberate pauses before commitment, opportunities to review complex information, and time taken to read outcome messages all look like inefficiency. But these are often the moments where the service is doing its most important work — giving users the information they need to proceed confidently. Removing them produces faster journeys that generate more errors and more support contacts.
When completion rate is the primary metric, teams optimise to get users through. This can produce the right outcome when the completion path is genuinely clear and well-designed. It can also produce journeys where users are pushed through to completion before they are ready, so that problems only surface after the journey ends.
In each case, the service can improve on its chosen measure while simultaneously degrading the experience as a whole. The metric and the reality diverge, and because the metric is what gets reported, the degradation remains invisible until it resurfaces in other metrics: increased support volume, error rates, or user complaints.
The behaviours that standard metrics don’t capture
The dimensions of experience that matter most to user confidence are also the hardest to measure through standard performance metrics:
- Whether the user understood what happened at each step
- Whether the user felt confident about the information they committed
- Whether the user trusted the outcome they received
- Whether the user would be able to return, correct, or recover if needed
None of these appear in a completion rate. None of them appear in a click count. They do appear, eventually, in support contact volume, in repeat attempts, in repeat visits to check an outcome, and in the qualitative data from user research. Unfortunately, by the time they’re visible at that level, the metric being optimised has long since declared the journey a success.
This gap between what is easy to count and what actually determines experience quality is not a reason to abandon metrics. It is a reason to use them with deliberate awareness of what they do and don’t capture.
Better questions to ask before optimising a metric
Before asking ‘how do we improve this metric?’, it’s worth asking a set of more contextual questions:
- Where does clarity matter more than speed? Are there points in this journey where understanding is more important than moving through quickly?
- Where does confidence matter more than compression? Are there moments where users need time to deliberate or review before committing?
- What behaviour are we unintentionally encouraging? If users are completing quickly but contacting support afterward, what does the metric miss?
- What would this metric look like if users were confused but compliant? High completion on a poorly designed journey is possible. What else would we expect to see?
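The last question can be made concrete. A journey can show high completion while post-completion support contacts rise — completion rate alone will never reveal that. Below is a minimal sketch of the check, assuming per-journey records with hypothetical field names (`completed`, `support_contact_after`); adapt to whatever your analytics actually log:

```python
# Sketch: detect "confused but compliant" journeys — high completion
# rate paired with a high post-completion support-contact rate.
# Field names and thresholds are illustrative assumptions, not a
# standard analytics schema.

def journey_health(journeys, support_threshold=0.10):
    """Return completion rate, support-contact rate, and a warning flag."""
    completed = [j for j in journeys if j["completed"]]
    completion_rate = len(completed) / len(journeys)
    # Support contacts raised *after* a completed journey are the
    # signal that completion rate alone hides.
    contact_rate = (
        sum(1 for j in completed if j["support_contact_after"]) / len(completed)
        if completed else 0.0
    )
    flagged = completion_rate >= 0.90 and contact_rate >= support_threshold
    return {
        "completion_rate": round(completion_rate, 3),
        "support_contact_rate": round(contact_rate, 3),
        "confused_but_compliant": flagged,
    }

# Nine of ten journeys complete, but two completers contact support.
journeys = (
    [{"completed": True, "support_contact_after": True}] * 2
    + [{"completed": True, "support_contact_after": False}] * 7
    + [{"completed": False, "support_contact_after": False}]
)
print(journey_health(journeys))
# → 90% completion looks healthy, but the flag fires.
```

The point of the sketch is not the specific thresholds — it is that the warning condition requires two signals to be read together before the journey is declared a success.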
These questions don’t invalidate the metric; they contextualise it. They force a conversation about what the number actually means and what it might be hiding.
Measuring what isn’t easy to count
Some of the most valuable signals for journey quality are qualitative or indirect: task completion with minimal error, return visit rates, support contact volume per journey type, time to recovery after an error, and qualitative feedback from research sessions.
These aren’t replacements for quantitative metrics; they complement them. A journey that performs well on both its primary metric and its qualitative indicators is more reliably well-designed than one that performs well only on the number.
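One way to operationalise this pairing is to report the primary metric alongside its complementary signals rather than in isolation. A minimal sketch follows — the metric names, thresholds, and the divergence rule are invented for illustration, not a standard UX formula:

```python
# Sketch: report a primary metric together with complementary signals,
# and flag when they disagree. All names and thresholds here are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class JourneyReport:
    completion_rate: float        # primary quantitative metric (0-1)
    error_free_rate: float        # share of completions with no errors (0-1)
    return_to_verify_rate: float  # users revisiting only to re-check an outcome (0-1)
    support_contacts_per_100: float

    def diverges(self) -> bool:
        """True when the primary metric looks healthy but the
        complementary signals suggest confusion underneath."""
        looks_healthy = self.completion_rate >= 0.85
        signals_trouble = (
            self.error_free_rate < 0.75
            or self.return_to_verify_rate > 0.20
            or self.support_contacts_per_100 > 8
        )
        return looks_healthy and signals_trouble

report = JourneyReport(
    completion_rate=0.92,        # the number that gets reported
    error_free_rate=0.68,        # but a third of completions hit errors
    return_to_verify_rate=0.25,  # and a quarter come back to double-check
    support_contacts_per_100=11.0,
)
print(report.diverges())  # the headline metric and the signals disagree
```

A report object like this makes the divergence a first-class output of measurement, rather than something discovered later in support queues.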
The goal is a measurement approach that reflects behavioural reality: not just whether users got through, but whether they understood what happened, felt confident in what they committed, and trusted the service they used.
Frequently asked questions
Why do UX metrics sometimes make experiences worse?
UX metrics describe aspects of experience, such as speed, completion, and interaction count, but they don’t fully capture the confidence, clarity, and trust that make an experience genuinely good. When teams optimise directly for a metric without considering what it misses, the journey can improve on the measure while degrading the reality of the experience.
What is Goodhart’s Law and how does it apply to UX?
Goodhart’s Law states that when a measure becomes a target, it ceases to be a good measure. In UX, this means that optimising directly for a metric like completion rate or time-on-task can reshape the journey in ways that improve the number while degrading the experience. The metric stops reflecting what it was intended to measure because the design is now optimised to satisfy the measure rather than the underlying user need.
What should you measure in UX besides completion rate?
Useful complements to completion rate include: error rate per journey, support contact volume per task type, time-to-recovery after an error, qualitative confidence indicators from research sessions, and return visit rate for tasks users often repeat to verify an outcome. These signals collectively describe whether users are completing with confidence or completing despite confusion.
How do you know if a UX metric is distorting your design decisions?
Signs that a metric may be distorting design include: design changes that improve the metric but increase support contact, user research that contradicts what the metric suggests, journeys that score well on speed but poorly on user confidence in research, and teams that feel pressure to remove clarity-supporting features because they add steps or time. When the metric and qualitative evidence diverge, the metric deserves scrutiny before it shapes any further decisions.