Report Review: Nielsen/Norman Group’s Usability Return on Investment

Written by: Peter Merholz and Scott Hirsch
In the business world, user experience endeavors are typically seen as a cost—a line item expense to be minimized to the greatest extent possible while still remaining competitive. User experience practitioners are, in part, to blame for this. We’ve been so focused on developing methods, processes, and solutions that we haven’t bothered to help businesses measure, and thereby understand, our financial worth.

We thought our worth was self-evident. Of course you’ll sell more products if they’re more usable! Or you’ll decrease costs because of heightened productivity! Exactly how much will you profit? I don’t know, but don’t you want to build the best product you can anyway?

As our field has matured, and as the economy’s continued belt-tightening means striking out line items associated with costs, we’re realizing we need to prove our economic value. For consultants and agencies, this proof is necessary to sell services. Inside the corporation, employees have to show their contribution for fear of being let go. And all around there exists a strong push to increase our stake in the game: to use our experience and methods not simply to make a product better, but to determine what to make in the first place.

The key strategy is to get businesses to recognize that user experience is not simply a cost of doing business, but an investment–that with appropriate expenditure, you can expect a financial return. Proving a return can be remarkably hard–tying user experience metrics (e.g., reduced error rates, increased success rates) to key financial metrics (e.g., increased sales, improved retention) requires access to data most of us simply don’t have. So, we look to others to help make our case, if not specifically for us, for our industry.

Nielsen Norman Group’s Usability Return on Investment
This has led to a number of essays, articles, and books on proving the value of user experience. Into this fray steps the Nielsen Norman Group’s (NN/g) report, Usability Return on Investment. The group was famously quoted in New Scientist for saying:

“Why do we have so many unusable things when we know how to make them usable? I think it has to do with the fact that the usability advocates don’t understand business. Until they understand it and how products get made, we will have little progress.”

Unfortunately, the NN/g report does not seem to follow this advice. Although it does make a reasonable anecdotal case for investing in usability, the report’s methodology is so fundamentally flawed that any financial analyst worth her salt would immediately question its findings. Very simply, the authors do not make a strong business case for usability—a requirement for passing muster with the accountants and senior managers who have ultimate accountability for profit and loss in a business.

The report is split into three sections: Cost of Usability; Benefits of Usability; and Case Studies. In “Cost of Usability,” the authors report on a survey conducted with attendees of the Nielsen Norman Group User Experience World Tour, where they found that the “best practice” for the usability portion of a web design budget is 10 percent. In the heart of the report, “Benefits of Usability,” usability metrics are divided into four classes (Sales/conversion rate, Traffic/visitor count, User Performance, and Feature Use), and analysis of the case studies reveals an average improvement of 135 percent in those metric classes. The remaining 80 or so pages are devoted to the 35 case studies, showing before-and-after states of key metrics and how usability methods helped achieve improvements.

Questionable sampling
Alert readers of this review were no doubt scratching their heads at a phrase in the last paragraph: “… survey conducted with attendees.” Perhaps the gravest sin of this report is the extremely questionable sampling that went into both the measuring of costs and the findings of benefits. The authors acknowledge this when measuring costs:

Thus, there is an inherent selection bias that has excluded companies that do not care much for usability, because such companies would not be likely to invest the conference fee and the time for their staff to attend. This selection bias is acceptable for the purposes of the present analysis, which aims at estimating usability budgets for companies that do have a commitment to usability.

By couching this in “best practices,” the authors imply that all that matters is that the reader understand trends within companies whose efforts could qualify as “best practices.” And a key identifier of such a company is attending the Nielsen Norman Group User Experience World Tour. Um, okay.

For the case studies demonstrating benefits, it’s worth quoting the methodology for their collection:

Some of the case studies were collected from the literature or our personal contacts, but most came from two calls for case studies that were posted on Jakob Nielsen’s website, useit.com, in 2001 and 2002. Considering how widely the call was read, it is remarkable how relatively few metrics we were able to collect. Apparently, the vast majority of projects either don’t collect usability metrics at all or are unwilling to share them with the public, even when promised anonymity.

Simply posting a call suggests a remarkable laziness, considering what this report is trying to accomplish. No one can be expected to voluntarily submit a failing case study, so of course the findings show nothing but positive improvements from usability. In order to truly understand the benefits of usability, the researchers would need to perform a little legwork in finding a range of activity. Fact is, the usability design community can learn as much from its mistakes as from its successes—an analysis of cases where usability improvements did not necessarily contribute to financial success, or better, an acknowledgement of cases where the financial success was difficult to attribute, would have provided an equally valuable (and perhaps more credible) report for real practitioners in the field. While the individual case studies are basically valid, this sampling approach renders any aggregate findings and observed trends meaningless.

Case studies
About 75 percent of this report (83 of the 111 pages) addresses the 35 self-selected case studies individually. For each case you are told what NN/g identified as the important metrics to be measured before and after the usability project, and are given some background, the problem that was faced, the solutions arrived at (illustrated with before-and-after screenshots), and the ROI impact.

One revealing detail is how the report refers to the improvement of a metric as an “ROI measurement,” yet never discusses what the new solution costs to develop. Yes, sales might have improved by 100 percent, but without understanding the costs needed to realize that improvement, you cannot actually state a “return.”
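To see why this matters, here is a minimal sketch with invented figures (none of the numbers come from the report): a headline metric improvement cannot be converted into a return until the cost side is known.

```python
# Minimal ROI sketch; every figure below is hypothetical, not taken from the NN/g report.

def roi(gain: float, cost: float) -> float:
    """Classic return on investment: (gain - cost) / cost."""
    return (gain - cost) / cost

# Suppose sales rose from $200,000 to $400,000 over the measurement period
# (a "100 percent improvement" in the report's terms)...
incremental_gross_profit = (400_000 - 200_000) * 0.30  # assume a 30% gross margin
redesign_cost = 150_000                                 # assume total cost of the usability work

print(f"ROI = {roi(incremental_gross_profit, redesign_cost):.0%}")
# -> ROI = -60%: a 100 percent sales lift can still be a negative return
#    if margins are thin and the redesign was expensive.
```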

A number of cases are quite solid—readers will most likely find it quite clear that usable design methods had a direct impact on the key financial metrics for Performance Bikes, Broadmoor, eBags, macys.com, Junior’s Restaurant, and Deerfield.com, and perhaps a few others. However, for the bulk of cases, the link the authors make between usability and financial returns is questionable or even non-existent. Here are a few examples:

No accounting for cannibalization.
In ADC’s case study, sales increased dramatically, but the case does not account for the likelihood that many of these purchases would otherwise have been made over the phone. To understand the actual impact, you would need to tease out the percentage of sales actually created online from the percentage that was merely captured from other, more expensive channels. It would also be nice to have some estimate of the cost savings that moving sales online makes possible.
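A back-of-the-envelope sketch, with every number invented for illustration, shows the kind of decomposition the ADC case would need:

```python
# Hypothetical decomposition of an online sales lift into genuinely new revenue
# versus revenue cannibalized from the phone channel. All figures are invented.

online_sales_lift = 500_000      # assumed increase in online sales after the redesign
cannibalization_rate = 0.60      # assumed share that would have been phoned in anyway
average_order_value = 250.00
cost_per_phone_order = 8.00      # assumed handling cost of a phone order
cost_per_web_order = 1.50        # assumed handling cost of a web order

truly_new_revenue = online_sales_lift * (1 - cannibalization_rate)
shifted_orders = (online_sales_lift * cannibalization_rate) / average_order_value
channel_cost_savings = shifted_orders * (cost_per_phone_order - cost_per_web_order)

print(f"Revenue genuinely created online: ${truly_new_revenue:,.0f}")
print(f"Savings from orders shifted off the phone: ${channel_cost_savings:,.0f}")
```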

Not enough detail was provided in the case study.
For the Anonymous Electric Company, it is not clear why the improved customer survey is important to the company or to the customer. It appears to have something to do with energy conservation, but the case study does not provide enough detail, and as a result, no returns data can be attributed to energy conservation (which is arguably a financial return for the company, the customer, and society as a whole).

No accounting for other mitigating factors.
In the case of opentable.com, the company was in the process of going national at the time of their re-launch. The “number of reservations made” metric does not screen for natural expansion, which is why financial analysts evaluate retail chains, such as Gap, based on “same store sales” rather than “total sales.” By the same logic, a better metric for opentable.com would be “reservations per restaurant.”

Dynamic Graphics totally changed their brand and product offering at the same time as the UX re-launch. Similarly, Omni Hotels vastly changed their visual design. The NN/g report awards “usability” as the sole contribution to these improved metrics, though other factors undoubtedly had an impact.

Vesey’s Seeds’ previous site was plagued by technical problems like slow or unsuccessful page downloads. How much of their metrics improvement was simply from technical improvements?

No clear link to financial returns.
Despite being a government agency, the Ministry of Finance, Israel, must have some idea of the monetary benefits of having a usable website (reduced phone calls, etc.). The case study makes no attempt to link changes in user behavior to return on investment and simply reports a traffic analysis.

Poor baseline data.
Any case study showing infinite improvement is an example of poor baseline data. You cannot ascribe infinite improvement just because the feature did not exist or the data was not collected before the design change.

What you get for $122
The fundamental question when considering this report, and the driving reason for this review, is “What, exactly, am I getting for my $122 (or $248 for the site license)? What can I do with this report?”

This report seems to be directed at usability practitioners, to support their efforts in increasing their budgets. Presumably, usability practitioners will, in turn, show this to management. They will tell management that current “best practice” is to devote 10 percent of a project’s budget to usability efforts. They will also tell management that, “on average,” usability provides measurable improvements of around 135 percent.

Unfortunately, unless management simply focuses on the executive summary and doesn’t actually read the report, this approach may backfire for practitioners. It is likely that a manager with any real or intuitive sense of hypothesis testing, financial benchmarking, and calculating ROI will be skeptical of the report’s validity because of the weak methodology, specious accounting, and sampling bias issues already discussed.

To its credit, some truly valuable takeaways from the report are the usability metrics: both the four classes (Sales, Traffic, User Performance, and Feature Use) and the specific metrics used in the individual cases. These metrics are a great starting point for practitioners to begin capturing baseline data and developing hypotheses for how these metrics are linked to financial performance. After doing this legwork, practitioners can begin the task of demonstrating the economic value of usability investments. However, these metrics are only a starting point—the report hints at linkages between usability metrics and financial returns without providing any real detailed analysis of how this was done in the individual cases or offering any guidelines for addressing this challenge at your business.

I hear some folks wonder, “But what about the 83 pages of case studies? There must be good stuff in there!” Sadly, this is not the case. The bulk of this report is simply not useful, because the cases are too wedded to particular contexts. The focus of each case study is the improvement made, which is utterly meaningless to the reader. So what if, as in the case of Deerfield, the team “[r]emoved the breadcrumb from the first page in the site, where it served no practical function,” or “[a]dded support information to the homepage.” Yes, it’s interesting that through usability methodology, they increased product downloads by 134 percent. But it’s not really interesting how they did it, unless the report authors think that you, too, can improve your metrics by doing what they did. Nor is it interesting to see screenshots demonstrating this.

The case studies’ primary function seems to be to pad the report to 111 pages, which is much more likely to warrant a $122 payment than, say, 40 pages.

You can get more with less
The intended audience for this report will be better served by Aaron Marcus’ “Return on Investment for Usable User-Centered Design: Examples and Statistics” [PDF], an essay that combines both literature review and some cogent, simple analysis. And, as that direct link suggests, it’s free.

The essay directly refers to 42 articles addressing different aspects of the financial impact of usability. If nothing else, it would serve as a valuable bibliography on this topic. To his credit, Aaron goes further, breaking down the metrics into three classes, each with subclasses:

Development: Reduce Costs
- Save development costs
- Save development time
- Reduce maintenance costs
- Save redesign costs

Sales: Increase Revenue
- Increase transactions/purchases
- Increase product sales
- Increase traffic
- Retain customers
- Attract more customers
- Increase market share

Use: Improve Effectiveness
- Increase success rate
- Reduce user error
- Increase productivity
- Increase user satisfaction
- Increase job satisfaction
- Increase ease of use
- Increase ease of learning
- Increase trust in systems
- Decrease support costs
- Reduce training costs

This framework helps make sense of the metrics miasma, and readers can begin to understand which metrics they can affect and how to interpret their value.

Where to go from here?
While there have been efforts to underscore the value of usability, the state of the practice is immature. What follows are suggestions that combine lessons from the Nielsen Norman Group report and Marcus’ essay, along with some observations made in working with Adaptive Path clients to better ascribe financial results to user experience design.

Create a cross-functional team. Academic and professional literature on product development and design has shown again and again that cross-functional teams improve the design process. Even if it’s an informal ad hoc committee, the insights of marketers, accountants, and senior managers can really help designers attach user needs and usability interventions to business goals and financial metrics.

Example: Usability Design Managers alone may not have access to (or even be aware of) important marketing and financial data that can help them to better measure the impacts of their work.

Collect good baseline data. Meaningful evaluations of design improvements are best shown by before/after snapshots of site performance. However, not all performance metrics are equally telling. By decomposing aggregate data (e.g., sales) into meaningful components closely linked to usability (e.g., conversion rate, sales per page view), designers can gain a clearer understanding of the before and after snapshots.

Example: Total sales is most likely not a meaningful measure of usability improvements to an online shopping cart because many other factors influence total sales—a better metric might be reduction in abandoned carts or a reduction in errors. Contribution to sales (rather than total sales) can then be more realistically calculated from improvements in these usability metrics.
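A minimal sketch of that decomposition, using invented traffic and order counts, might look like this:

```python
# Hypothetical before/after snapshot: decompose "sales" into metrics a design
# team can actually move, such as conversion rate and cart abandonment.

def conversion_rate(orders: int, visits: int) -> float:
    return orders / visits

def abandonment_rate(carts_started: int, orders: int) -> float:
    return (carts_started - orders) / carts_started

before = {"visits": 120_000, "carts_started": 9_000, "orders": 3_600}  # assumed baseline
after = {"visits": 125_000, "carts_started": 9_400, "orders": 5_200}   # assumed post-redesign

for label, d in (("before", before), ("after", after)):
    print(f"{label}: conversion {conversion_rate(d['orders'], d['visits']):.1%}, "
          f"abandonment {abandonment_rate(d['carts_started'], d['orders']):.1%}")
```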

Isolate the expected impacts of usability improvements. Often usability improvements are accompanied by larger strategic changes in the brand position, marketing, and site technology. It is best to attempt to isolate usability improvements from these other changes.

Example: A design improvement that occurs during a time of natural expansion for the business will require more finesse to accurately measure the contribution that user experience design made to that growth. For instance, increased total transactions loses its meaning if the number of products or vendors has also greatly increased—a measure such as transactions per product or per vendor will help tease out design improvements that are independent of natural growth.
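Here is a small sketch of that normalization; the transaction and vendor counts are invented:

```python
# Hypothetical normalization: separate the design's contribution from growth that
# comes simply from the business expanding. All counts are invented.

before_transactions, before_vendors = 40_000, 200
after_transactions, after_vendors = 90_000, 380   # expansion: many more vendors signed on

raw_growth = after_transactions / before_transactions - 1
per_vendor_growth = (after_transactions / after_vendors) / (before_transactions / before_vendors) - 1

print(f"Raw transaction growth: {raw_growth:.0%}")                  # ~125%, mostly expansion
print(f"Transactions-per-vendor growth: {per_vendor_growth:.0%}")   # ~18%, closer to the design's share
```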

Use hypothesis testing. Similarly, the linkage between design performance improvements and financial returns may not occur as expected. It can be helpful to brainstorm a list of possible metrics and returns, and analyze each of these individually to determine which best captures the improvements made through user experience design. Of particular interest are what we call “indicator” metrics, whose movement is correlated to more direct financial metrics.

Example: In the Nielsen Norman Group report, the Deerfield.com team figured out that they could directly impact the number of product downloads. They also knew that, separately, product downloads tracked to product sales. So by increasing downloads, they could increase sales.
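A minimal sketch of validating such an indicator metric, using invented monthly figures (the report does not publish Deerfield.com’s underlying data):

```python
# Check that an "indicator" metric the team can influence (product downloads)
# actually tracks the financial metric it stands in for (product sales).
# All monthly figures below are invented for illustration.

from statistics import correlation  # Python 3.10+

monthly_downloads = [1200, 1350, 1500, 1480, 1700, 1900, 2300, 2800]
monthly_sales = [30_000, 33_500, 37_000, 36_200, 41_000, 46_500, 55_000, 67_000]

r = correlation(monthly_downloads, monthly_sales)
print(f"Pearson r between downloads and sales: {r:.2f}")
# A strong, stable correlation is what earns a usability metric its place as a
# proxy for revenue; a weak one means the hypothesis needs rework.
```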

Make user experience people specifically accountable. Too often, the people performing web design are not held accountable. Their endeavors are seen simply as a cost of doing business. We’ve seen talented user experience people used as a kind of free internal consulting, spinning wheels on half-baked projects because their efforts are not believed to have truly remunerative value. User experience workers must seek accountability for the metrics we’ve been discussing.

Example: Don’t tie an entire team to the responsibility of a single aggregate metric (such as sales). This will only engender frustration because employees will feel as if their individual contribution is futile toward this grand larger goal. Make specific groups or individuals responsible for metrics over which they have direct influence, perhaps beginning with some of the metrics from Aaron Marcus’ paper. This will be cumbersome at first, but will prove immensely valuable once underway.

Celebrate success and revisit the process. To institutionalize lessons learned in any design process, it helps to share successes with members of the cross-functional team and within the business as a whole.

Example: Many firms post internal white papers to the corporate intranet to share success and recognize valuable contributions to the business. This is also a great way to maintain an “institutional memory” of projects that have succeeded as the champions of the project move on to different endeavors.

Where this can lead
User experience practitioners have long known in their guts that their efforts truly add value when developing products or systems. And we’ve long been frustrated to see our abilities relegated to the tactical end of the product development process, where we’re given poorly considered ideas and told to make them into useful, usable, and desirable products. By concretely demonstrating our impact on the success of our work, we will find ourselves involved earlier in the process, helping determine what will be made, not just how to make it.


Peter Merholz
Peter Merholz is a founding partner of Adaptive Path, which provides user experience leadership for all manner of organizations. He is an experienced information architect, writer, speaker, and leader in the field of user-experience design. Clients include PeopleSoft, Cathay Pacific, and Intuit, and he’s spoken at the ASIS IA Summits, SXSW, and DUX 2003 conferences. When he ought to be working, he’s writing on his personal site, http://peterme.com/.

Scott Hirsch
A recent graduate of Berkeley’s Haas School of Business, Scott Hirsch is passionate about web design, product development processes, and creative uses for technology. Using his MBA powers for good instead of evil, his goal is to connect user experience design efforts with financial returns through analysis of business strategy and managerial accounting techniques. He is currently working on projects in San Francisco with Adaptive Path, WellsFargo.com, and the Haas School of Business. He has also presented on business analysis at the DUX 2003 conference.