Introduction
User experience (UX) teams have many types of data at their disposal to ascertain the quality of a digital product’s user experience. Traditionally, these sources have focused on direct customer feedback gathered through methods such as interviews and usability studies, as well as surveys[1] and in-product feedback mechanisms. Beyond surveys, however, it can be time-consuming to create a recurring channel of in-depth UX insights through these traditional research methods because each study takes time to conduct, analyze, and report on.
Product managers rely on metrics that require little effort to gather and report on to give them a sense of business health. These metrics—conversion rate, renewal rate, average order value, and so on—speak to the overall quality of the business, but they cannot typically pinpoint specific user experience issues.
UX teams can benefit from metrics that are specific to user experience to augment their traditional customer feedback channels. Usage metrics—data captured through product instrumentation as people visit a website, use a web application or SaaS product, or interact with an app—can allow teams to infer user experience issues and understand what customers are doing within a product with little effort after the initial setup.
Usage metrics, for example, can identify a place within a product where customers frequently access online help, suggesting that this aspect of the product is problematic for customers. Usage metrics can thus help teams monitor a product’s user experience and identify when and where issues may be occurring.
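To make “product instrumentation” concrete, the sketch below shows one way a single usage event might be represented. This is a minimal, purely illustrative example in Python: the make_usage_event helper and its field names are hypothetical, and commercial analytics tools define their own event schemas and collection APIs.

```python
from datetime import datetime, timezone

def make_usage_event(user_id: str, event_name: str, page: str, **properties) -> dict:
    """Assemble one usage event; a real pipeline would queue or send this to an analytics service."""
    return {
        "user_id": user_id,                            # anonymized or pseudonymous identifier
        "event": event_name,                           # e.g., "opened_help", "clicked_export"
        "page": page,                                  # where in the product the event occurred
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "properties": dict(properties),                # free-form context (device, browser, etc.)
    }

# Example: record that a customer opened online help from the checkout page.
event = make_usage_event("user-123", "opened_help", "/checkout",
                         device="mobile", browser="Safari")
print(event)
```

Aggregating many records like this one, captured automatically as customers use the product, is what produces the usage metrics discussed in the rest of this article.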
In this article, we describe some metrics user experience teams can gather to begin to monitor their products’ user experience in an automated way. This article builds on the work of Digital Telepathy/Google Ventures (2016) and Pavliscak (2014) but focuses less on process and methods and more on specific metrics that may be of value to UX teams.
Caveat emptor
The promise of usage metrics is offset by a large limitation: Telemetric data is strictly behavioral and does not illuminate user intent, expectations, or satisfaction (see “The Signal Problem” in Pavliscak, 2014). For measures like those, surveys, interviews, and the like remain essential. Behavior alone, as UX researchers know, can be easy to misinterpret. One must be careful to know the limits of the usage metrics collected to draw the proper conclusions. For example, you may notice customers abandoning an eCommerce site on the shopping cart page—that may be because the cart page is poorly designed, or it may be due to the “sticker shock” of high product prices.
Thus, usage metrics should not be considered in isolation; instead, they should be considered the starting point for additional research (or A/B tests) or a means of triangulating insights gleaned from surveys, usability studies, heuristic analyses, and so forth. Those traditional research methods remain the richest source of customer insights. Usage metrics should remain a complement to those research methods, not a replacement.
Another caution: It is easy to fixate on one or several metrics that may not be representative of product quality or customer intent, creating blind spots. For example, an airline product team might monitor conversion rates of its mobile app, whereas flight check-ins may be a more meaningful metric for that product. Tracking conversion rates might lead that team to believe its app is performing well when, in fact, it is not performing well for customers who are attempting to use the app to check in. Aligning metrics to product and user goals is critical.
Sample UX usage metrics
Below is a list of sample usage metrics that can help illuminate user experience issues and identify design opportunities in eCommerce sites, web applications, and apps. These metrics are meant to answer questions such as these:
- Where might UX issues exist in the product?
- To what extent are customers able to complete the core, critical tasks the product supports?
- How is the product’s user experience changing over time?
- How do customers navigate within our product? What are they doing with our product?
Note that metrics should be based on or be aligned with specific product goals—just because you can measure something doesn’t mean it matters to you or your business. The sample metrics below are meant to help teams think through what UX metrics may be useful to them based on their unique product and business goals; in fact, these metrics themselves can provide the basis for measurable goals/Key Performance Indicators (KPIs). (See Digital Telepathy, 2016, for more insight into how to associate goals with metrics.) This list is broad and includes metrics that are not universally useful; it seems likely that teams would choose from this list a subset of the most meaningful metrics for their products. It is better to have a few carefully chosen metrics that motivate action than a large number of metrics that are not acted on.
Lastly, all of these metrics could be tracked over time and reviewed alongside business-oriented metrics (e.g., conversion rate, renewal rate, revenue), technology-oriented metrics (e.g., high-priority tickets, team velocity), and qualitative customer data gleaned from surveys and the like. (A brief code sketch after the table illustrates how a few of these metrics might be derived from raw event data.)
Sample UX metrics
Topic | Metric | Research question(s) | What this may indicate |
Theme: Interaction behavior | |||
Feature use | % of customers interacting with each key, critical feature or page, sorted by frequency | How much use do key, critical features/pages receive as a proportion of all traffic? | Understanding how much use certain features get in a product or website can reveal what areas a UX team should focus on. |
Time to first interaction | Average or median time until customers first interact with frequent, critical functions, listed by duration | How long does it take people to interact with key, critical features/pages for the first time? | This metric can indicate discoverability issues with common tasks/actions, confusion about what a certain page or feature is for, or general information overload. |
Filtering and sorting | Filters/Sorts used by customers, sorted by frequency | What are the most typical ways customers filter or sort information in our displays? | Filtering or sorting can reveal customer preference as to what information should be displayed or how that information should be ordered. |
Abandonments or “fallout” | Abandonment (or “fallout”) by page/step/feature, sorted by frequency | Which pages, steps, or features drive site or product abandonment? | Atypical abandonment rates on a page or feature in a site or tool may indicate a problematic feature, or at least a potential area for UX optimization efforts. |
Device/browser behaviors | Conversions/abandonments, sorted by device/browser type | What are customer abandonment and/or conversion rates by device type and browser? Are there any anomalies? | Reviewing conversions/abandonments and other types of success/failure rates by browser and/or device types can reveal where issues might exist for customers with specific technology configurations, highlighting QA issues. |
Navigational patterns | Heat maps and/or fallout charts of click streams by page/feature | What is the navigational behavior of customers? Where do customers tend to click on a given page and how do they proceed through important workflows? | Heat maps, fallout charts and other visualizations of click stream data can reveal unexpected customer behavior in frequent, critical task flows. They can also reveal where valuable content should or should not go. |
Navigation depth | Scrolling depth on key pages/features | How deep into a page or feature do customers tend to scroll? | Scrolling depth can reveal places where customers tend to get stuck, or reveal where valuable content should or should not go. |
“Back” and “undo” functions | Frequency of back button or undo button use, sorted by feature/page | Where do customers tend to use their browser “back” buttons or use the product’s “undo” functionality most often? | Back button or undo button use can reveal areas of the product or site where customers frequently make mistakes, or it can suggest confusing navigation. |
Repeated actions | List of actions customers tend to repeat more than once during a session, sorted by frequency | What actions do customers repeat within a session? | Repeated actions within the same session can indicate mistakes or confusion. For example, repetitively adding or removing the same thing, repetitive navigation between tabs, or adding something new and canceling several times in a row can all indicate customer confusion. Similarly, repeated use of help features is a red flag. |
Theme: Task support | |||
Task completion/success | Rate of completion/success, sorted by task | To what extent are customers completing a key task (or tasks) in the product? For example, if a key task of a product is the ability to export reports, the rate of successful exports could be a tracked metric. | Measuring the success rates of specific tasks a product supports can identify user experience issues and allow teams to monitor UX quality over time. |
Time-on-task[2] | Average or median of durations to complete frequent/critical tasks, sorted by duration | How long does it take customers to complete frequent and critical tasks? | Generally speaking, UX teams want customers to be able to complete frequent and critical tasks quickly in their systems. |
Theme: Customer support/help | |||
Customer support (e.g., visits to a “Contact Us” page or call center number) drivers | Visits to customer support pages/features, sorted by the page/step/feature the visit was generated from | Which pages, steps, or features are driving visits to customer support (e.g., Contact Us)? | Atypical rates for accessing customer support features/agents from a particular page may indicate a particularly confusing feature. |
Traffic to online help/knowledge base articles | Visits to help/knowledge base articles, sorted by originating page/feature | Which pages, steps, or features drive visits to online help pages? | Atypical rates of visits to online help from a given page may indicate a particularly confusing page or procedure on that page. |
Traffic on online help/knowledge base articles | Traffic on online help pages, sorted by topic | Within the online support system, which articles get the most traffic? | The online help pages that customers access most frequently suggest which areas of the product or feature are causing customer confusion. |
Contact center support drivers | Call volume by topic/feature | Which topics/features does the customer support team address most frequently, particularly topics that should be self-service for customers? | “Instructional” contacts at a call center (those that could be self-service yet require a customer to call to resolve) suggest processes that may be difficult to complete or find online. |
Search terms | Top 25-50 in-product search terms, sorted by frequency | What are the top search terms in our product/on our site? | Search behavior can identify discoverability and usability issues; a large number of customers searching for “Export CSV,” for example, may reveal discoverability or usability issues with that feature. |
Theme: Engagement | |||
Active users | Average daily, weekly, and/or monthly users | How much traffic does the product receive, and is it increasing or decreasing over time? | Usage trends can help teams determine if a product is getting better/more popular or not. |
Overall product stickiness | Revisit frequency listed by feature use and/or customer profile | To what extent do people revisit our site/application? | Repeat visits are a measure of customer engagement, particularly if prior visits resulted in conversions or completed processes. |
Session duration | Average or median session durations compared to a prior time period | How long do customers spend in the product per day, week, or month? | Session duration can help indicate how much customers rely on a product and how vital it is to them. For example, if customers tend to spend a long time on a particular site or in a particular product, that may be a sign that they find the service important or beneficial (or, conversely, that the feature is hard to use and is thus time-consuming). |
Feature duration | Time spent using key, critical features, sorted by duration | How long do customers spend on key, critical features? | This metric can be the result of many things, and thus, it is one of the most challenging to interpret. This metric can reveal areas of a product that are hard to use, but it may also reveal areas of the product that are most important to customers. |
Sign-ups | The number of people signing up for a service compared to a prior time period | What is the rate of people signing up for the product or service? | This metric can reveal how loyal a customer base is, and tracking it over time can reveal the effectiveness of efforts to create more customer loyalty. |
Sign-ins | The number of people signing in to the product or service compared to a prior time period | What is the rate of people signing in to the product or service? | Related to product stickiness, this metric can reveal trends in engagement. |
Theme: Voice of customer | |||
Themes | Word cloud of themes of customer feedback about product or feature from various channels (social media, customer care channels, etc.) | What are customers saying most frequently about the product or service in social media, customer care, or elsewhere? | Word clouds can reveal a number of things about a product based on what is frequently said: they can identify features that are causing pain/delight, they can identify intention, they can identify feature gaps, and so on. |
Customer sentiment | Sentiment of customer feedback about product or feature from various channels (social media, customer care, etc.), tracked over time | What’s the overall sentiment of feedback received about the product or feature? | Sentiment can convey the relative satisfaction customers have with a product and can be helpful in understanding the overall experience people have with a product. |
Theme: Client-side technology | |||
Display sizes | Viewport sizes and/or screen resolution, sorted by % | What are the typical viewport sizes and/or screen resolutions for people accessing our site/features? | Screen sizes can help design and product teams understand what types of displays to optimize for, and the minimum widths and heights the design team should support. |
Browsers and devices | Client browsers and devices accessing the site/features, sorted by % of traffic | What are the typical browsers and devices accessing our site/features? | Knowing what types of browsers and devices are accessing a product can help design and product teams understand where to concentrate their efforts. It can also reveal gaps in product experience and quality issues for certain combinations of client-side technology. |
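To make a few of the metrics in the table above concrete, here is a minimal sketch of how they might be derived from a raw event log using Python and pandas. The sample DataFrame, its column names (user_id, event, timestamp), and the event names are hypothetical, and the stickiness calculation uses the common ratio of daily to monthly active users as one possible operationalization; teams would substitute their own instrumentation data and definitions.

```python
import pandas as pd

# Hypothetical event log: one row per instrumented interaction.
events = pd.DataFrame({
    "user_id": ["a", "a", "b", "b", "b", "c", "c"],
    "event":   ["view_cart", "checkout", "view_cart", "opened_help",
                "checkout", "view_cart", "view_cart"],
    "timestamp": pd.to_datetime([
        "2024-01-02 10:00", "2024-01-02 10:05",
        "2024-01-02 11:00", "2024-01-02 11:02", "2024-01-09 09:30",
        "2024-01-03 15:00", "2024-01-20 16:00",
    ]),
})

total_users = events["user_id"].nunique()

# Feature use: share of customers who used each feature at least once.
feature_use = (events.groupby("event")["user_id"].nunique() / total_users
               ).sort_values(ascending=False)

# Abandonment ("fallout"): share of users lost between consecutive funnel steps.
funnel = ["view_cart", "checkout"]
reached = pd.Series({step: events.loc[events["event"] == step, "user_id"].nunique()
                     for step in funnel})
dropoff = 1 - reached / reached.shift(1)   # NaN for the first step, drop-off rate after that

# Stickiness: average daily active users divided by average monthly active users.
daily_active = events.groupby(events["timestamp"].dt.date)["user_id"].nunique()
monthly_active = events.groupby(events["timestamp"].dt.to_period("M"))["user_id"].nunique()
stickiness = daily_active.mean() / monthly_active.mean()

print(feature_use, dropoff, round(stickiness, 2), sep="\n\n")
```

Computed over a rolling window (for example, weekly), the same aggregations can be tracked over time and reviewed alongside the business- and technology-oriented metrics mentioned earlier.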
Advice for getting started
For many UX teams and practitioners, using metrics such as these will be new. For those organizations and practitioners, we offer some advice for getting started.
Start small
If you start your program by instrumenting everything, stakeholders may get lost in the data, making it difficult for the team to stay focused and resolve issues. Additionally, without a few successes to demonstrate the value of instrumenting products, it will be hard to justify the effort of setting up the program and purchasing or building product instrumentation capabilities. Therefore, it is often most effective to start your metrics program small. Further, with a small program, it is easier to educate stakeholders on how to most effectively interpret and use usage metrics. That way, you can demonstrate the value of measuring specific metrics in a controlled environment and make sure they are not taken out of context or used without complementary qualitative data (e.g., data collected from interviews or usability studies).
Metrics are addictive: Well-chosen metrics will whet people’s appetite for more. Start small and use successes to grow the program and mature your organization’s data acumen.
Have a case study ready
If there is any resistance to your metrics program, find a case study of a similar product or service that describes the value of its metrics program and how it has improved products, services, and processes. Or, if possible, capture one meaningful metric for your product and use it to demonstrate the value of metrics and make the case for a more robust program.
Start with visualizing the end results
Help stakeholders understand what investments in gathering and publishing usage metrics will get them by visualizing the end result or report they can expect. This will not only help everyone get on the same page about what data should be gathered and why, but it will also help your engineering team implement the right solution.
Create alignment
It is vital that the metrics you collect support product goals, and of course product goals should be shared amongst team members. It is therefore crucial to align with other stakeholders (product managers, engineering leads, sales staff) on the most important UX metrics to consider for a particular product. If the UX team works to improve metrics that do not support the larger goals of the enterprise, those metrics are unhelpful. A meeting to discuss which UX metrics are most meaningful for the organization can generate alignment and support for the initiative.
Conclusion
Usage metrics can illuminate customer behavior within a product and serve as a starting point for identifying user experience issues, as well as a way to gauge product health.
Again, usage metrics are only one tool in the user experience research toolkit because they cannot identify causes and cannot illuminate end user intent or satisfaction. Surveys, interviews, and usability studies will always be essential for helping capture those aspects of the user experience and should form the core of any substantial UX program.
That said, the promise of usage metrics is that they provide an automated stream of data that can point UX and product teams in the right direction when issues arise and complement the standard research UX teams conduct on customer behavior, needs, and experience.
Acknowledgement
The authors thank Liz Aderhold, a UX professional at Alaska Airlines, for her help reviewing a draft of this article.
Additional Reading
Digital Telepathy and Google Ventures. 2016. “How to Choose the Right UX Metrics for Your Product.” Web. Accessed 19 Dec. 2016.
Pavliscak, Pamela. 2014. “Choosing the Right Metrics for User Experience.” UXmatters. Web. Accessed 19 Dec. 2016.
[1] Disclosure: The authors work for a company that offers tools for research, including a survey tool.
[2] Note: Time-on-task can be difficult to interpret because long durations can be positive or negative depending on the intent of the product/feature and how customers feel about the durations. For example, customers on a social media platform may spend significant time in the product, and this time spent might indicate positive engagement; on the other hand, long durations on a payment page could indicate the page is overly complicated.