Why Click Count Is a Poor UX Metric

The short answer

Click count is a poor proxy for usability because it measures interaction quantity, not interaction quality. A journey with fewer clicks is not automatically a better journey. Users tolerate and often prefer additional steps when each step is clear, purposeful, and expected. Removing clicks that serve a user need in the name of efficiency typically produces a journey that is shorter but less trustworthy. The ‘3-click rule’ specifically has no empirical basis and has been formally debunked by usability research.

The origin and persistence of click-count thinking

The idea that fewer clicks equals better UX is intuitive enough to have persisted long after the evidence against it became clear. The ‘3-click rule’ — the notion that users should be able to find any information within three clicks of the homepage — emerged in the early 2000s as a practical heuristic. It was simple, measurable, and easy to apply in design reviews.

The problem is that it was never empirically validated. Nielsen Norman Group’s research found no evidence that users are more likely to abandon a task after three clicks than after any other number. Users’ satisfaction and task completion are not determined by how many clicks they made; they are determined by whether each step made sense and whether they felt confident about the outcome.

The 3-click rule has been formally challenged and largely abandoned in UX practice. But click-counting as a design instinct has proved harder to dislodge. The question ‘how do we reduce clicks?’ still appears regularly in design reviews, stakeholder conversations, and product briefs.

What click count actually measures and what it misses

Click count measures the number of interactions a user takes to complete a task. What it doesn’t measure is anything about the quality of those interactions: whether they were clear, whether they were expected, and whether the user felt confident after each one.

A journey with three clicks that leaves the user uncertain at each step is a worse experience than a journey with six clicks where each step is clear and each outcome is explicit. The three-click journey will score better on the metric yet produce more abandonment, more support contacts, and more repeat attempts in practice.

The metric also creates a structural bias against the design patterns that most reliably support user confidence. Review screens, confirmation steps, and summary pages all add clicks. Under click-count optimisation pressure, they are often the first things to be removed. Yet they are among the most valuable things a service can provide to users at high-commitment moments.

What a useful interaction metric looks like

The click-count metric is trying to answer a legitimate question: is this journey as efficient as it should be? The problem is the method, not the goal of understanding efficiency. More useful ways to evaluate journey efficiency include:

  • Task completion with minimal error rate: not just whether users finished, but how many errors they made along the way
  • Time-to-completion with comprehension: not just how fast, but whether users understood what happened
  • Recovery rate after an error: how successfully users resolved problems when they arose
  • Qualitative confidence indicators from research: whether users described the journey as clear, predictable, and trustworthy

These are harder to count than clicks and they require more than a dashboard to evaluate, but they describe the actual experience rather than an indirect substitute for it.
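As a concrete illustration, here is a minimal sketch of how the quantitative measures above might be computed from a session event log. The JourneyEvent shape, event kinds, and log format are hypothetical assumptions for the example, not a real analytics schema; the qualitative measures (comprehension, confidence) still have to come from research sessions rather than from instrumentation.

```python
from dataclasses import dataclass

# Hypothetical event record for one user journey. Field names and
# event kinds are illustrative assumptions, not a real analytics schema.
@dataclass
class JourneyEvent:
    timestamp: float  # seconds since the journey started
    kind: str         # one of "step", "error", "recovery", "complete"

def journey_metrics(events: list[JourneyEvent]) -> dict:
    """Summarise one journey by the measures above, not by raw click count."""
    errors = sum(1 for e in events if e.kind == "error")
    recoveries = sum(1 for e in events if e.kind == "recovery")
    completed = any(e.kind == "complete" for e in events)
    duration = events[-1].timestamp - events[0].timestamp if events else 0.0
    return {
        "completed": completed,
        "error_count": errors,
        # Recovery rate: how often users resolved the problems they hit.
        "recovery_rate": recoveries / errors if errors else None,
        "time_to_completion_s": duration if completed else None,
    }

# A six-step journey with one error that the user recovered from.
log = [
    JourneyEvent(0.0, "step"), JourneyEvent(4.2, "step"),
    JourneyEvent(9.8, "error"), JourneyEvent(14.1, "recovery"),
    JourneyEvent(20.5, "step"), JourneyEvent(26.0, "complete"),
]
print(journey_metrics(log))
# {'completed': True, 'error_count': 1, 'recovery_rate': 1.0,
#  'time_to_completion_s': 26.0}
```

Note that this journey would look bad under pure click-count optimisation (six interactions, one error) while the summary shows what actually matters: the user finished, and recovered from the one problem they hit.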

The right question to ask instead

When a journey feels like it has too many steps, the question is not ‘how do we reduce the number of clicks?’ The more useful question to ask is:

Is each step in this journey earning its place? Does it give the user something they need — information, confirmation, orientation, or the opportunity to review — that would be lost if the step were removed?

Steps that earn their place should stay. Steps that compensate for ambiguity elsewhere rather than resolving it should be fixed at source. The distinction requires understanding what each step is doing for the user, which click count doesn’t capture.
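To make that audit concrete, the sketch below tags each step with what it provides and flags steps that exist only to paper over ambiguity elsewhere. The Step structure and purpose categories are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

# Hypothetical audit record for one step in a journey. The purpose
# categories mirror the ones named above; the structure is illustrative.
@dataclass
class Step:
    name: str
    provides: set[str]  # e.g. {"information", "confirmation", "orientation", "review"}
    compensates_for_ambiguity: bool = False

def audit(journey: list[Step]) -> None:
    """Apply the 'does each step earn its place?' question to a journey."""
    for step in journey:
        if step.compensates_for_ambiguity:
            print(f"{step.name}: fix the ambiguity at source, then remove this step")
        elif not step.provides:
            print(f"{step.name}: gives the user nothing they need; candidate for removal")
        else:
            print(f"{step.name}: earns its place ({', '.join(sorted(step.provides))})")

audit([
    Step("Delivery address", provides={"information"}),
    Step("Order summary", provides={"review", "confirmation"}),
    Step("Are you sure?", provides=set(), compensates_for_ambiguity=True),
])
```

The point of structuring the audit this way is that removal is never the first move: a step flagged as compensating for ambiguity points to a fix upstream, not to a click to delete.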

Frequently asked questions

Is the 3-click rule valid in UX design?

No. The 3-click rule has no empirical basis and has been formally challenged by usability research. Nielsen Norman Group found no evidence that users abandon tasks more frequently after three clicks than after any other number. User satisfaction and task success are determined by the clarity and predictability of each step, not by how many steps there are.

Why is click count a poor measure of usability?

Click count measures the quantity of interactions, not their quality. A journey with few clicks can be deeply confusing and untrustworthy if each step is ambiguous. A journey with more clicks can be highly efficient if each step is clear and each outcome is explicit. Click count also creates a structural bias against the design patterns that most reliably support user confidence, such as review screens, confirmations, and summaries.

What should you measure instead of click count in UX?

More useful metrics include task completion with error rate, time-to-completion with qualitative comprehension indicators, recovery rate after errors, and direct confidence ratings from usability research. These measure whether users understood what happened, felt confident about their actions, and were able to correct mistakes, none of which click count captures.

How do you reduce unnecessary steps in a user journey without removing useful ones?

Evaluate each step by asking what it provides to the user. Does it give them information, confirmation, orientation, or an opportunity to review? Steps that provide something users need at that moment earn their place. Steps that exist only to compensate for unclear design upstream should be fixed at the source, where the clarity is missing. The goal is not simply fewer steps; it is ensuring that every step is purposeful.