Report Review: Nielsen/Norman Group’s Usability Return on Investment

Posted by Peter Merholz and Scott Hirsch
In the business world, user experience endeavors are typically seen as a cost—a line item expense to be minimized to the greatest extent possible while still remaining competitive. User experience practitioners are, in part, to blame for this. We’ve been so focused on developing methods, processes, and solutions that we haven’t bothered to help businesses measure, and thereby understand, our financial worth.

We thought our worth was self-evident. Of course you’ll sell more products if they’re more usable! Or you’ll decrease costs because of heightened productivity! Exactly how much will you profit? I don’t know, but don’t you want to build the best product you can anyway?

As our field has matured, and as the economy’s continued belt-tightening means striking out line items associated with costs, we’re realizing we need to prove our economic value. For consultants and agencies, this proof is necessary to sell services. Inside the corporation, employees have to show their contribution for fear of being let go. And all around there exists a strong push to increase our stake in the game: to use our experience and methods not simply to make a product better, but to determine what to make in the first place.

The key strategy is to get businesses to recognize that user experience is not simply a cost of doing business, but an investment—that with appropriate expenditure, you can expect a financial return. Proving a return can be remarkably hard—tying user experience metrics (e.g., reduced error rates, increased success rates) to key financial metrics (e.g., increased sales, improved retention) requires access to data most of us simply don’t have. So, we look to others to help make our case, if not specifically for us, for our industry.

Nielsen Norman Group’s Usability Return on Investment
This has led to a number of essays, articles, and books on proving the value of user experience. Into this fray steps the Nielsen Norman Group’s (NN/g) report, Usability Return on Investment. NN/g was famously quoted in New Scientist as saying:

“Why do we have so many unusable things when we know how to make them usable? I think it has to do with the fact that the usability advocates don’t understand business. Until they understand it and how products get made, we will have little progress.”

Unfortunately, the NN/g report does not seem to follow this advice. Although it does make a reasonable anecdotal case for investing in usability, the report’s methodology is so fundamentally flawed that any financial analyst worth her salt would immediately question its findings. Very simply, the authors do not make a strong business case for usability—a requirement for passing muster with the accountants and senior managers who have ultimate accountability for profit and loss in a business.

The report is split into three sections: Cost of Usability; Benefits of Usability; and Case Studies. In “Cost of Usability,” the authors report on a survey conducted with attendees of the Nielsen Norman Group User Experience World Tour, where they found that the “best practice” for the usability portion of a web design budget is 10 percent. In the heart of the report, “Benefits of Usability,” usability metrics are divided into four classes (Sales/conversion rate, Traffic/visitor count, User Performance, and Feature Use), and analysis of the case studies reveals an average improvement of 135 percent in those metric classes. The remaining 80 or so pages are devoted to the 35 case studies, showing before-and-after states of key metrics and how usability methods helped achieve improvements.

Questionable sampling
Alert readers of this review were no doubt scratching their heads at a phrase in the last paragraph: “survey conducted with attendees.” Perhaps the gravest sin of this report is the extremely questionable sampling that went into both the measuring of costs and the findings of benefits. The authors acknowledge this when measuring costs:

Thus, there is an inherent selection bias that has excluded companies that do not care much for usability, because such companies would not be likely to invest the conference fee and the time for their staff to attend. This selection bias is acceptable for the purposes of the present analysis, which aims at estimating usability budgets for companies that do have a commitment to usability.

By couching this in “best practices,” the authors imply that all that matters is that the reader understand trends within companies whose efforts could qualify as “best practices.” And a key identifier of such a company is attending the Nielsen Norman Group User Experience World Tour. Um, okay.

For the case studies demonstrating benefits, it’s worth quoting the methodology for their collection:

Some of the case studies were collected from the literature or our personal contacts, but most came from two calls for case studies that were posted on Jakob Nielsen’s website, useit.com, in 2001 and 2002. Considering how widely the call was read, it is remarkable how relatively few metrics we were able to collect. Apparently, the vast majority of projects either don’t collect usability metrics at all or are unwilling to share them with the public, even when promised anonymity.

Simply posting a call suggests a remarkable laziness, considering what this report is trying to accomplish. No one can be expected to voluntarily submit a failing case study, so of course the findings show nothing but positive improvements from usability. In order to truly understand the benefits of usability, it’s necessary for the researchers to actually perform a little legwork in finding a range of activity. Fact is, the usability design community can learn as much from its mistakes as from its successes—an analysis of cases where usability improvements did not necessarily contribute to financial success, or better, an acknowledgement of cases where the financial success was difficult to attribute, would have provided an equally valuable (and perhaps more credible) report for real practitioners in the field. While the individual case studies are basically valid, this sampling approach renders any aggregate findings and observed trends meaningless.

Case studies
About 75 percent of this report (83 of the 111 pages) addresses the 35 self-selected case studies individually. For each case you are told what NN/g identified as the important metrics to be measured before and after the usability project, and are given some background, the problem that was faced, the solutions arrived at (illustrated with before-and-after screenshots), and the ROI impact.

One revealing detail is how the report refers to the improvement of a metric as an “ROI measurement,” yet never discusses what the new solution costs to develop. Yes, sales might have improved by 100 percent, but without understanding the costs needed to realize that improvement, you cannot actually state a “return.”
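The arithmetic gap is easy to see: a return nets out the cost of achieving the gain, so the same headline improvement can correspond to wildly different returns. The sketch below uses entirely hypothetical figures, purely to illustrate the calculation the report omits:

```python
# Hypothetical figures, for illustration only: the NN/g report supplies the
# metric lift but not the redesign cost, so no true return can be computed.
baseline_sales = 100_000.0   # monthly sales before the redesign ($)
improved_sales = 200_000.0   # after the redesign: a "100% improvement"
redesign_cost = 80_000.0     # what the usability project cost to execute ($)

gain = improved_sales - baseline_sales        # incremental benefit
roi = (gain - redesign_cost) / redesign_cost  # return on investment

print(f"Improvement: {gain / baseline_sales:.0%}")  # 100%
print(f"ROI:         {roi:.0%}")                    # 25%
```

The same 100 percent sales improvement could just as easily represent a negative return if the redesign had cost more than the gain it produced.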

A number of cases are quite solid—readers will most likely find it quite clear that usable design methods had a direct impact on the key financial metrics for Performance Bikes, Broadmoor, eBags, macys.com, Junior’s Restaurant, and Deerfield.com, and perhaps a few others. However, for the bulk of cases, the link the authors make between usability and financial returns was questionable or even non-existent. Here are a few examples:

No accounting for cannibalization.
In ADC’s case study, sales increased dramatically, but the case implies that these purchases would have otherwise been made on the phone. To understand actual impact, you would need to tease out the percentage of sales actually created online from the percentage that was captured from other, more expensive channels. It would also be nice to have some estimate of the cost savings that placing sales online makes possible.

Not enough detail was provided in the case study.
For the Anonymous Electric Company, it is not clear why the improved customer survey is important to the company or to the customer. It appears to have something to do with energy conservation, but the case study does not provide enough detail, and as a result, no returns data can be attributed to energy conservation (which is arguably a financial return for the company, the customer, and society as a whole).

No accounting for other mitigating factors.
In the case of opentable.com, the company was in the process of going national at the time of their re-launch. The “number of reservations made” metric does not screen for natural expansion, which is why financial analysts evaluate retail chains, such as Gap, based on “same store sales” rather than “total sales.” By the same logic, a better metric for opentable.com would be “reservations per restaurant.”

Dynamic Graphics totally changed their brand and product offering at the same time as the UX re-launch. Similarly, Omni Hotels vastly changed their visual design. The NN/g report awards “usability” as the sole contribution to these improved metrics, though other factors undoubtedly had an impact.

Vesey’s Seeds’ previous site was plagued by technical problems like slow or unsuccessful page downloads. How much of their metrics improvement was simply from technical improvements?
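The same-store logic above is simple to apply. With invented figures for illustration, a raw reservation total can double while per-restaurant performance stays completely flat:

```python
# Invented figures: total reservations double, but so does the number of
# restaurants on the service, so the per-restaurant metric is unchanged.
before = {"reservations": 10_000, "restaurants": 200}
after = {"reservations": 20_000, "restaurants": 400}

per_restaurant_before = before["reservations"] / before["restaurants"]  # 50.0
per_restaurant_after = after["reservations"] / after["restaurants"]     # 50.0

total_growth = after["reservations"] / before["reservations"] - 1     # +100%
normalized_growth = per_restaurant_after / per_restaurant_before - 1  # 0%

print(f"Total growth:      {total_growth:+.0%}")
print(f"Normalized growth: {normalized_growth:+.0%}")
```

In this toy scenario, the “100 percent improvement” in reservations is entirely an artifact of expansion; the normalized metric shows the design change contributed nothing.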

No clear link to financial returns.
Despite being a government agency, the Ministry of Finance, Israel, must have some idea of the monetary benefits of having a usable website (reduced phone calls, etc.). The case study makes no attempt to link changes in user behavior to return on investment and simply reports a traffic analysis.

Poor baseline data.
Any case study showing infinite improvement is an example of poor baseline data. You cannot ascribe infinite improvement just because the feature did not exist or the data was not collected before the design change.
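The trap is plain division by zero: percent improvement divides the gain by the baseline, so a zero or missing baseline yields an undefined (“infinite”) figure, not evidence of success. A defensive helper, written here for illustration:

```python
from typing import Optional

def percent_improvement(before: float, after: float) -> Optional[float]:
    """Return fractional improvement, or None when no valid baseline exists."""
    if before <= 0:
        return None  # feature didn't exist or wasn't measured: no defensible claim
    return (after - before) / before

assert percent_improvement(0, 500) is None    # the "infinite improvement" trap
assert percent_improvement(400, 500) == 0.25  # a defensible 25 percent figure
```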

What you get for $122
The fundamental question when considering this report, and the driving reason for this review, is “What, exactly, am I getting for my $122 (or $248 for the site license)? What can I do with this report?”

This report seems to be directed at usability practitioners, to support their efforts in increasing their budgets. Presumably, usability practitioners will, in turn, show this to management. They will tell management that current “best practice” is to devote 10 percent of a project’s budget to usability efforts. They will also tell management that, “on average,” usability provides measurable improvements of around 135 percent.

Unfortunately, unless management simply focuses on the executive summary and doesn’t actually read the report, this approach may backfire for practitioners. It is likely that a manager with any real or intuitive sense of hypothesis testing, financial benchmarking, and calculating ROI will be skeptical of the report’s validity because of the weak methodology, specious accounting, and sampling bias issues already discussed.

To its credit, some truly valuable takeaways from the report are the usability metrics—both the four classes (Sales, Traffic, User Performance, and Feature Use) and the specific metrics utilized in the individual cases. These metrics are a great starting point for practitioners to begin capturing baseline data and developing hypotheses for how these metrics are linked to financial performance. After doing this legwork, practitioners can begin the task of demonstrating the economic value of usability investments. However, these metrics are only a starting point—the report hints at linkages between usability metrics and financial returns without providing any real detailed analysis of how this was done in the individual cases or offering any guidelines for addressing this challenge at your business.

I hear some folks wonder, “But what about the 83 pages of case studies? There must be good stuff in there!” Sadly, this is not the case. The bulk of this report is simply not useful, because the cases are too wedded to particular contexts. The focus of each case study is the improvement made, which is utterly meaningless to the reader. So what if, as in the case of Deerfield, the team “[r]emoved the breadcrumb from the first page in the site, where it served no practical function,” or “[a]dded support information to the homepage.” Yes, it’s interesting that through usability methodology, they increased product downloads by 134 percent. But it’s not really interesting how they did it, unless the report authors think that you, too, can improve your metrics by doing what they did. Nor is it interesting to see screenshots demonstrating this.

The case studies’ primary function seems to be to pad the report to 111 pages, which is much more likely to warrant a $122 payment than, say, 40 pages.

You can get more with less
The intended audience for this report will be better served by Aaron Marcus’ “Return on Investment for Usable User-Centered Design: Examples and Statistics” [PDF], an essay that combines both literature review and some cogent, simple analysis. And, as that direct link suggests, it’s free.

The essay directly refers to 42 articles addressing different aspects of the financial impact of usability. If nothing else, it would serve as a valuable bibliography on this topic. To his credit, Aaron goes further, breaking down the metrics into three classes, each with subclasses:

Development: Reduce Costs
- Save development costs
- Save development time
- Reduce maintenance costs
- Save redesign costs

Sales: Increase Revenue
- Increase transactions/purchases
- Increase product sales
- Increase traffic
- Retain customers
- Attract more customers
- Increase market share

Use: Improve Effectiveness
- Increase success rate
- Reduce user error
- Increase productivity
- Increase user satisfaction
- Increase job satisfaction
- Increase ease of use
- Increase ease of learning
- Increase trust in systems
- Decrease support costs
- Reduce training costs

This framework helps make sense of the metrics miasma, and readers can begin to understand which metrics they can affect and how to interpret their value.

Where to go from here?
While there have been efforts to underscore the value of usability, the practice of doing so is immature. Below are suggestions that combine lessons from the Nielsen Norman Group report and Marcus’ essay, along with some observations made in working with Adaptive Path clients, to better ascribe financial results to user experience design.

Create a cross-functional team. Academic and professional literature on product development and design has shown again and again that cross-functional teams improve the design process. Even if it’s an informal ad hoc committee, the insights of marketers, accountants, and senior managers can really help designers attach user needs and usability interventions to business goals and financial metrics.

Example: Usability Design Managers alone may not have access to (or even be aware of) important marketing and financial data that can help them to better measure the impacts of their work.

Collect good baseline data. Meaningful evaluations of design improvements are best shown by before/after snapshots of site performance. However, not all performance metrics are strong. By decomposing aggregate data (e.g., sales) into meaningful components closely linked to usability (e.g., conversion, sales per page view), designers can gain clearer understanding of the before and after snapshots.

Example: Total sales is most likely not a meaningful measure of usability improvements to an online shopping cart because many other factors influence total sales—a better metric might be reduction in abandoned carts or a reduction in errors. Contribution to sales (rather than total sales) can then be more realistically calculated from improvements in these usability metrics.
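One common way to do this decomposition is sales = visitors × conversion rate × average order value; holding the non-usability factors constant isolates the conversion effect. The figures below are hypothetical:

```python
# Hypothetical decomposition: sales = visitors x conversion rate x average
# order value. Traffic and order size are held constant so that the change
# in sales is attributable to the conversion rate, the component most
# plausibly tied to the usability work.
visitors = 50_000    # monthly visitors (held constant)
avg_order = 80.0     # average order value in $ (held constant)
conv_before = 0.020  # checkout conversion rate before the redesign
conv_after = 0.025   # checkout conversion rate after the redesign

sales_before = visitors * conv_before * avg_order  # 80,000.0
sales_after = visitors * conv_after * avg_order    # 100,000.0
contribution = sales_after - sales_before          # attributable to conversion

print(f"Contribution to sales: ${contribution:,.0f}")
```

If traffic or order size also changed over the same period, the attribution gets harder, which is exactly why the aggregate number alone proves so little.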

Isolate the expected impacts of usability improvements. Often usability improvements are accompanied by larger strategic changes in the brand position, marketing, and site technology. It is best to attempt to isolate usability improvements from these other changes.

Example: A design improvement that occurs during a time of natural expansion for the business will require more finesse to accurately measure the contribution that user experience design made to that growth. For instance, increased total transactions loses its meaning if the number of products or vendors has also greatly increased—a measure such as transactions per product or per vendor will help tease out design improvements that are independent of natural growth.

Use hypothesis testing. Similarly, the linkage between design performance improvements and financial returns may not occur as expected. It can be helpful to brainstorm a list of possible metrics and returns, and analyze each of these individually to determine which best captures the improvements made through user experience design. Of particular interest are what we call “indicator” metrics, whose movement is correlated to more direct financial metrics.

Example: In the Nielsen Norman Group report, the Deerfield.com team figured out that they could directly impact the number of product downloads. They also knew that, separately, product downloads tracked to product sales. So by increasing downloads, they could increase sales.

Make user experience people specifically accountable. Too often, the people performing web design are not held accountable. Their endeavors are seen simply as a cost of doing business. We’ve seen talented user experience people used as a kind of free internal consulting, spinning wheels on half-baked projects because their efforts are not believed to have truly remunerative value. User experience workers must seek accountability for the metrics we’ve been discussing.

Example: Don’t tie an entire team to the responsibility of a single aggregate metric (such as sales). This will only engender frustration because employees will feel as if their individual contribution is futile toward this grand larger goal. Make specific groups or individuals responsible for metrics over which they have direct influence, perhaps beginning with some of the metrics from Aaron Marcus’ paper. This will be cumbersome at first, but will prove immensely valuable once underway.

Celebrate success and revisit the process. To institutionalize lessons learned in any design process, it helps to share successes with members of the cross-functional team and within the business as a whole.

Example: Many firms post internal white papers to the corporate intranet to share success and recognize valuable contributions to the business. This is also a great way to maintain an “institutional memory” of projects that have succeeded as the champions of the project move on to different endeavors.

Where this can lead
User experience practitioners have long known in their guts that their efforts truly add value when developing products or systems. And we’ve been long frustrated to see our abilities relegated to the tactical end of the product development process, where we’re given poorly considered ideas and told to make them into useful, usable, and desirable products. By concretely demonstrating our impact on the success of our works, we will find ourselves involved earlier in the process, helping determine what will be made, not just how to make it.


Peter Merholz is a founding partner of Adaptive Path, which provides user experience leadership for all manner of organizations. He is an experienced information architect, writer, speaker, and leader in the field of user-experience design. Clients include PeopleSoft, Cathay Pacific, and Intuit, and he’s spoken at the ASIS IA Summits, SXSW, and DUX2003 conferences. When he ought to be working, he’s writing on his personal site, http://peterme.com/.

Scott Hirsch
A recent graduate of Berkeley’s Haas School of Business, Scott Hirsch is passionate about web design, product development processes, and creative uses for technology. Using his MBA powers for good instead of evil, his goal is to connect user experience design efforts with financial returns through analysis of business strategy and managerial accounting techniques. He is currently working on projects in San Francisco with Adaptive Path, WellsFargo.com, and the Haas School of Business. He has also presented on business analysis at the DUX 2003 conference.

23 comments

  1. Completely agree with you. As a small usability company, I paid for the Nielsen report to be able to extract some key research and actual return ratios for my clients. Nielsen did not come through, and the research was not at all what I expected and could not be used. However, thanks for the “Return on Investment for Usable User-Centered Design: Examples and Statistics” [PDF], as it is perfect in giving the key ratios that I required. Thanks

  3. My jaw was dropping when I heard Nielsen make some of these ROI claims at the DC world tour a year or so ago. I’m a math major, and although there weren’t written slides that showed his math, they just sounded bizarre. I’d hoped that I just misheard. Apparently not!

  4. Great article.

    My impression from this review is not so much the (scandalous!) idea that NN/g practices slipshod methodology… what I have learned is that measuring usability ROI is quite often nearly impossible. In particular the “mitigating factors” section pulled the rug out from under almost every ROI calculation/model I’ve ever heard of (there are few usability changes that are not accompanied by simultaneous business, design, technological, and/or marketing changes). This article leaves me even more skeptical about *any* consultant that claims that usability can be statistically tied to revenue… wearing what Jesse James Garrett called “the lab coat”.

    ROI for usability may be no easier to prove than, say, ROI for any other “production quality” choice in any other industry, from the strength of the threads used to sew a t-shirt together to the amount of sawdust added to a fast-food hamburger patty.

    Is the fudging of ROI metrics really that uncommon in the usability world? Or is it, as I believe, far and away the norm?

    Or, for that matter, is crappy ROI methodology really that uncommon in other industries? ROI may be a business concept that is abused universally, not just by usability pundits.

    My firm sells ourselves as ‘experienced and smart’, not necessarily as ‘impartial and scientific’. We are user experience designers, not research scientists. This article only confirms my suspicion that most of what we do in this industry is difficult to measure with a large degree of scientific rigor. There is always a huge space where innovative and clever people fill in the gaps.

    I’m not saying that it is *always* impossible to measure usability ROI, but I think this article is suggesting that it is an immensely more difficult task than many of us imagine — and that it is, indeed, often totally impossible. ROI numbers analysis can *inform* but will never replace smart business and design sensibilities.

    The article’s opening statements, which I think are meant to be facetious, end up seeming to provide the best existing rationale for investment in usability:
    “Of course you’ll sell more products if they’re more usable! Or you’ll decrease costs because of heightened productivity! Exactly how much will you profit? I don’t know, but don’t you want to build the best product you can anyway?”

    (I had my doubts about this article before I read it, seeing that one of its authors is a partner at a firm that competes with NN/g. But after reading it these doubts were dispelled.)

    (I also am chuckling at the idea that there may be tons and tons of similar “evidence” DISproving NN/g’s thesis: where a ton of money was spent on usability and yet sales remained flat, or even decreased dramatically.)

    -Cf

  5. Based on Peter Merholz’s and Scott Hirsch’s comments, they seem to have expected the report to strategically break out all of the usability recommendations that led to ROI. This sounds great… but to vividly break down and report on the number of sites that were featured would be too cost prohibitive and a research nightmare to justify writing a report of that magnitude. If they were expecting that amount of information, they need to look for a book. —Tell us when you find it… 😉

    I think the goal of Nielsen/Norman Group’s report was to make a general assessment that would inspire the reader to go and interact with the available sites. Merholz’s and Hirsch’s review reads as though they got caught up in having paid for such a light report. I had to ask myself, “Did they forget to actually go and interact with the available featured web sites?”

    To be fair, I recognize other influencers were taken for granted. What about brand experience? Did usability recommendations add to brand perception? In some of the case studies, you can connect it. Having roots in software engineering, the Nielsen Norman Group has a tendency of distancing itself from “how” functionality can heighten brand awareness (versus user experience). So, yes… you could argue that brand experience was slightly discounted by the Nielsen Norman Group report.

    Anyway, usability is about user interaction/perception, and it seems Merholz and Hirsch were reacting to some of the limitations of a flat report. To their credit, I agree with their points concerning the report’s case study selection and measurement process. Further, I think that a more tactical approach with fewer sites would have allowed more design context to be established. With this context, I think several of Merholz’s and Hirsch’s issues would have been addressed.

    So, I have to ponder how both authors would answer: tactically (2 to 3 focused sites) or strategically (10 to 15 broad-vision sites), how would you address a report on usability and ROI?

  7. Great article. I think this is one of the single biggest challenges facing the IA/UX field in the quest for broad acceptance and growth. As an MBA who has led several UX projects and has a good awareness of the field, I have been consistently concerned with the lack of effort given in trying to set ROI metrics for usability projects during the early stages of this field. As a lobbyist of good IA/UX within the business community, it is crucial to have hard numbers that can be delivered. If managers are given a true positive ROI proposition, they would be hard pressed to NOT approve the project. If IA/UX professionals can get to this point of justification, concerns about growth within the community will be a thing of the past.

  8. Copy-editing: Every instance of “before and after” plus a noun has mangled hyphenation (never seen space-emdash used before). “the case infers that these purchases would have otherwise been made”: Humans infer; the case might suggest or imply.

    There are others, of course.

  9. Hey Joe-

    If this is your way of offering to be a (volunteer) copyeditor on the (all-volunteer) staff of this (no-fee) publication, we’ll be glad to have the extra pair of hands. Just say the word and we’ll add you to the roster.

  10. As an interactive architect with both academic (MBA) and practical (senior management) experience in business, I am leery about making a case for usability based on ROI-type calculations (benefits minus costs). Cost accounting is a very complex endeavor—decisions made along the way on how to treat and allocate costs can create a lot of variation between how one accountant would do it vs. another. Attributing revenue increases to particular causation factors is also tricky business—it is too easy to mistake correlation for causation. I think you could spend a lot of time quantifying ROI for a given set of site improvements and ultimately come up with an ROI figure that would not endure accounting scrutiny. I believe, by the way, that the cost-savings and revenue-enhancing benefits of usability engineering are real. I just think there are some inherent pitfalls in the quantification approach if your goal is to argue the value usability has to an organization. I would love to see a thorough business case study on usability ROI, one that details the accounting methodology. I’ve not seen anything like this. I believe that the criticisms this article directs at the NN/g study could also be applied to the figures presented in the Aaron Marcus whitepaper.

    In answer to Christopher Fahey’s question about the abuse of ROI in other contexts, this also happens with vendors of Customer Management Systems (CMS). The ROI claims for these systems often don’t pass accounting scrutiny.

    I suspect that the ROI argument for usability itself does not have a positive ROI (in terms of its ultimate effectiveness). In Seattle we are fortunate to have an artistically acclaimed opera company that is also consistently on-budget (even this year, when many arts organizations are struggling with deficits). Over the years, it has built a diverse audience base (including many folks in their 20s and 30s) in part through its education and outreach programs. Opera education programs are costly and have almost zero associated revenue. How does Seattle Opera justify a program with no short-term return? They have a clear, strategic vision about audience enjoyment and are willing to invest long-term in that vision. Maybe that’s the type of thinking that needs to be present in an organization before usability becomes institutionalized: no amount of quantitative data will convince an organization of the value of usability if that organization isn’t strategically focused on providing a high-quality customer experience.

  11. Good points, Heidi, I was wondering how to say what you said in your last paragraph.

    IAs take it for granted that a positive ROI for usability actually exists, or that high-quality products are always a good business idea. But this assumption may not even be true. There are countless businesses in the world for whom deliberately providing low-quality products and services is a critical ingredient of their business model. I am thinking of cheap fast food, discount big-box stores, generic or knockoff consumer products, free web hosts, Kurt Russell movies, etc.

    We IAs who are committed to building high-quality products have to recognize that companies who specialize in low-quality products may never have any justifiable use for a strong investment in our skills.

  12. Interesting that this review appears in a place where the primary audience consists of UX/IA folks.

    The methodologies used by NNG and the statistics presented can of course be questioned and can also make a skeptic out of all of us. NNG has taken a lot of “data” and presented it in a way that exemplifies the points they want to make. Then they charge people for it. I’m certainly not interested in sussing the credibility of that data or in the methods used to not only gather it but display it.

    Obviously, the findings of the NNG report have caused some hurt feelings among the UX crowd. But just as obviously, the NNG report isn’t the only thing there is to read about ROI.

    Defending the profession of “user experience research” requires first that you take a look at your own practices, research methods, and business abilities—not to “defend” but rather to support your profession. Are you doing a good job of your job? Perhaps you will choose to get snarled up in the provability of your research, findings, and recommendations; perhaps you will spend a lot of time defending your recommendations to a client and proving their worth. Or perhaps you will spend time “reviewing” an NNG report that was published seven months ago. We pick our battles.

    Peter says on his PeterMe blog that this review is “one of the best things I’ve ever (co-)written”. Why? Because you believe you were successful in debunking an NNG report to a well-oiled audience? So, what’s the return on that investment?

  13. Analogy Time!

    On one hand we have usability designer/practitioners, who come from a variety of backgrounds, who endorse a variety of yet-unproven methods, and who seem to gain notoriety by making bold, brash, sweeping pronouncements of a usability:ROI correlation that appears to explain maybe ~10% of the ROI variability.

    “10%” …hmm, where have I heard that before…

    On the other hand, we have advertisers and marketeers, who invented the jewel, “Everybody knows that only 10% of advertising works, but nobody knows which 10%.”

    Are these fields merging? Are there lessons to be learned from the dark side? The ad-men always seem to have the ROI spiel down pat.

  14. Elizabeth McLachlan brings up some interesting points. Of course we should all acknowledge that in this field, Messrs. Nielsen and Norman are the 800 lb. gorillas of usability. Like Microsoft and other high-visibility “industry” leaders, it is natural for us, as enthusiasts, to pile on the criticism. It is natural enthusiast behavior. Nielsen and Norman are to be commended for moving the field forward and for providing the raw material for interesting and productive discussion. Still, it is somewhat irresistible to play David to their Goliath, and hopefully they take some comfort in the fact that if they weren’t so successful and influential, they would not be the target for so much criticism. But, like J-Lo and Ben, visibility invites scrutiny.

    That being said, Elizabeth wrote, “The methodologies used by NNG and the statistics presented can of course be questioned and can also make a skeptic out of all of us.” Not only can they be questioned, they SHOULD be questioned. Repeatedly. Thoroughly. Deeply. To do anything less is to abdicate our responsibility as professionals. Boxes and Arrows is the closest thing I have seen to a peer-reviewed journal in this field, and while obviously the same peer-review rigor is not applied here, the fact that criticism is part of the mix is not only valid, it is vital, especially for leaders in the practice. NN group, as professionals, *profess* their point of view, and it is up to us either to accept or to challenge their assertions. That is the basis of critical thinking and the foundation of our practice.

    Though it is important to examine one’s own methods, especially to justify what is essentially a “soft” metric, reviewing and critiquing methods in a constructive way can only help us in our quest to find the “Philosopher’s Stone” of user experience practice: ROI. Research and assertions that cannot stand up to scrutiny foster the idea that, like perpetual motion, measuring ROI is a preposterous endeavor—an idea that will erode our credibility and perceived value in the marketplace.

    She goes on to write, “Then they charge people for it [the report]. I’m certainly not interested in sussing the credibility of that data or in the methods used to not only gather it but display it.” I would respectfully disagree. The data is the support for their assertions. If it is not sound, it calls into question the validity of the assertions. And it is one thing to get faulty research for free, but NN group charges for it, and they charge a lot. It is not too much to expect that a $500 report contain valid data, sound methodology, and valid conclusions.

    Peter et al, have done both our profession and potential clients a service by taking a stand that simply having a reputation is not a license to publish questionable research. It also makes a statement to the world that we as a profession value our standards enough to ensure the products of our work stand up to scrutiny by peers. It ultimately earns us (and Boxes and Arrows) more credibility as a community of professionals, with standards and practices.

    And that to me is a pretty good ROI.

  15. Bravo Eric! Wish I had said all that.

    I’m very disappointed at the direct and indirect ad hominem arguments against Peter and Scott. That’s not critical thinking; it’s just the opposite – ignoring the evidence and diverting the discussion in other directions.

    I feel NNG can produce high-quality information, helpful to all. Sadly, it won’t happen unless more people like Peter and Scott professionally and critically assess reports and articles such as this. As long as low-quality reports and articles are tolerated, we’ll keep getting more of them and the entire field will be worse off because of it.

  16. A couple of things strike me about this piece:

    1. Delving into the granularity of ROI for specific UE deliverables is a good idea. If you don’t measure it, you can’t improve it. On the other hand, it may not always be necessary or practical to get to the level of detail described here. Your company may not care about the minutiae of ROI, or not care to support your drilling down that much, or may just want to build usable products because they’ve been burned enough in the past to know better. If this stuff is necessary for survival, then do it. If not, then screw it. Focus your efforts on building better product instead, or revamping the company website, or improving the corporate identity, or working on business strategy, or doing detailed observational studies to gain better insight into customer behavior, or benchmarking usability improvement metrics between releases, etc.

    2. It’s often enough to discuss ROI anecdotally to make your point in business (how many other disciplines are guilty?). Of course having a set of detailed, real-world case studies on ROI would be great–but wouldn’t necessarily ensure anything. If your CEO doesn’t buy your approach, you’re screwed–regardless of how you spin it. This is true for all disciplines & players in the business arena.

    3. IMO Messrs. Hirsch & Merholz are too focused on ROI here, and not enough on how UE can make a strategic impact by leveraging its outputs most effectively to drive cross-organizational collaboration and business strategy. I think it’s important to drive home the ROI measurements, but the bigger picture is that if done right, we are squarely impacting product definition, program management, and business strategy.

    4. The authors missed a key piece of literature on the ROI for usability. There is a whole tome on the subject: “Cost-Justifying Usability” (Bias & Mayhew, 1994). Actually, one would think a thorough literature search on the topic would be in order—particularly if one is going to throw stones. There are many other sources that deal with the ROI of IT projects which can be leveraged for our discipline.

    5. Finally, when purchasing anything (“research” included), the old adage—caveat emptor—should always be applied.

  17. Hmmm…it seems to me that Messrs. Hirsch & Merholz wrote a review of a report titled Usability Return on Investment, hence the (IMO natural) focus on ROI.

    And to borrow from Eric’s comment, assessing whether the report contained “valid data, sound methodology, and valid conclusions” does not require an ROI literature search, just some grounding in basic research and statistics.

    If their findings are valid, I think this is pretty serious stuff.

  18. First, I suppose an introduction is in order. I’m the mysterious “et al.” and co-author of this literature review with Peter Merholz. While I am new to the UX field, I have a professional background in the areas of project evaluation and business analysis, and that is the viewpoint I brought to the review – I have only limited knowledge of NN/g’s past contributions; I have never read their other publications (aside from a few online essays) nor attended any of their conferences. In other words, they are not 800-pound gorillas to me.

    However, I am very interested in the subject of quantifying investments in user experience, information architecture, and web design, having focused my MBA studies on industrial design and financial information analysis. As such, my motivation for writing the review with Peter was not to lambaste any one advocate or approach, but rather to start a discussion about how to advance the credibility of the field in general. Peter asked me to read and review the NN/g paper from a business standpoint, based on my academic and professional experience — I have no axe to grind with NN/g.

    I’ve learned a lot from you by reading the comments on b&a – the community is really fortunate to have such a great forum for these types of discussions. In that spirit, I want to clarify a few points that have come up throughout this thread:

    – I think some readers may have gotten caught up in a perception that ROI is used simply to choose the projects with the biggest potential for short-term profitability. While that is most often the case, ROI is more generally a project valuation tool (to assign value to individual projects within a larger portfolio of possible projects). Its purpose is to weigh investment costs against a future cash flow of benefits. Peter and I have both read Cost-Justifying Usability and found it lacking in this investment perspective – the book provides a tremendous amount of detail on using cost/benefit analysis to justify an expenditure, but not to value longer-term investments in gaining customer insight, building brand, and strategic positioning. It focuses more on the short-term and internal goal of reducing costs.
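    The project-valuation view described here can be made concrete with a minimal sketch (all figures invented for illustration): instead of a one-period cost/benefit comparison, usability work is treated as an up-front investment against a discounted stream of future benefits.

```python
# Hypothetical illustration of ROI as a project-valuation tool:
# weigh an up-front usability expenditure against a future stream
# of benefits, discounted to present value. All numbers are invented.

def npv(rate, cash_flows):
    """Net present value: cash_flows[0] is the year-0 amount,
    later entries are amounts received at the end of each year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: usability work costs $50k; years 1-3: estimated benefits
# (reduced support calls, higher conversion) of $25k per year.
flows = [-50_000, 25_000, 25_000, 25_000]

value = npv(0.10, flows)  # assume a 10% discount rate
simple_roi = (sum(flows[1:]) + flows[0]) / -flows[0]

print(f"NPV at 10%: ${value:,.0f}")   # positive NPV -> project adds value
print(f"Simple (undiscounted) ROI: {simple_roi:.0%}")
```

    Note how the discounted figure is smaller than the naive sum of benefits; that gap is exactly the longer-term investment perspective the book is said to lack.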

    – In order to make better investment decisions, ROI calculations of the type we envision tie “soft metrics” like web traffic, page views, and click paths to “hard metrics” that align with strategic business goals like profitability, market position, and differentiation. Awareness, testing, and tweaking of such calculations gives firms a tremendous opportunity to gain insight into their customers’ needs, behavior, motivation, attitudes, and perceptions (read BJ Fogg’s Persuasive Technology for more information). This is where business people and UX/IA people need to meet to develop and share a common vision and common goals. Unlike the problem of figuring out why marketing works, web technology allows us to be in much more control of the customer experience and to collect a wealth of data that traditional marketers could only dream about. Metrics and ROI-type calculations help us to use that data to make better business decisions that manage and meet customers’ needs and expectations.
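    As a toy example of tying a “soft” metric to a “hard” one (every number here is a made-up assumption, not data from the report or the review), consider translating a change in conversion rate into a revenue figure:

```python
# Hypothetical sketch: connect a "soft" metric (visitor-to-purchase
# conversion rate) to a "hard" metric (monthly revenue).
# All figures are invented for illustration.

monthly_visitors = 200_000
avg_order_value = 45.0  # dollars

def monthly_revenue(conversion_rate):
    """Revenue implied by a given conversion rate."""
    return monthly_visitors * conversion_rate * avg_order_value

before = monthly_revenue(0.020)  # 2.0% conversion before a redesign
after = monthly_revenue(0.023)   # 2.3% after a usability fix (assumed)
uplift = after - before

print(f"Estimated monthly revenue uplift: ${uplift:,.0f}")
```

    The interesting work, of course, is in defending the assumed link between the usability change and the conversion-rate movement, which is where the scrutiny discussed in this thread comes in.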

    – Peter and I fully agree with comments that ROI is very hard to measure. However, it is not impossible, and if the right metrics are tracked, user experience improvements can be teased out from other contributing factors. Of course the hard part is choosing the right metrics – my experience is that a good ROI calculation is always a “work in progress” and is never perfect on the first attempt. The process requires lots of scrutiny and fine-tuning by experts *in the company* and not just by external consultants/researchers. However, external people are often very helpful in getting the insiders pointed in the right direction, particularly in facilitating a new type of collaboration between managers and designers – the type Mr. Norman implies is necessary in his famous quote: “usability advocates don’t understand business.” I would add that the opposite is also true: business people do not understand the unrealized value of user experience. The NN/g report does not facilitate such a meeting of minds because it is not persuasive from a business standpoint (which was the theme of our review).

    – A good point I wish we’d anticipated is that ROI does have a tendency to be abused by unscrupulous or arrogant accountants and managers who are looking for reasons to kill or advance a particular project — but this is true of just about every business analysis tool you can think of and does not invalidate its intended purpose. Corporate scandals aside, business leaders have strong incentives to use the analysis tools they have to make the right decisions for customers, employees, and shareholders — and most do just that. As such, financial analysis tools need to be more than just anecdotal: they must be quantifiable in real terms that business leaders can use to make decisions – or even just to analyze options.

    I don’t pretend that I have the answer to this difficult problem. However, I think that ROI is a logical path to follow to facilitate better understanding between senior managers and UX advocates. Together and individually, Peter and I have talked to several business leaders who intuitively understand the business value of user experience and are looking for better project valuation tools to prove their intuition. I am continuing to work in this area and look forward to updating this review with some continuing analysis.

    Sincerely,

    –scott

    p.s. I can’t help but return to the quality analogy mentioned in one post. My academic advisor at Berkeley is a PhD industrial engineer who has extensively researched quality management – many times, we’ve discussed quality as a corollary to the UX ROI problem. The fact is that quality management is now so accepted in most major firms that it’s not even thought of as a separate project or discipline. However, 25 years ago critics made the same claims—that it is too hard to measure its impact on profitability and long-term business success. A generation later, there is a body of literature (both academic and professional) on how to do this, countless case studies, programs at major universities and research institutions, entire consultancies, and even international boards to set standards for quality metrics worldwide. It’s not a perfect analogy, but I really don’t think the value of quality engineering would have been proven and accepted by business had it been based on nebulous methodology and unsubstantiated claims.

  19. I’m more interested in UX as it relates to value (marketing) than development cost (engineering), although both are relevant, as we pointed out in our paper. The former is higher up the value chain, more of a driver, and has more of an intersection with human values (which is why I got into this field in the first place).

    One interesting question I was just discussing with Luke Ball is how to create competitive advantage with UX intellectual property. Take, for example, the current battle being fought in the online music space (iTunes vs. Rhapsody, et al.). There is a lot of interest right now in first-mover advantage. But the question is, what aspects of an interface are easily copyable (e.g., Yahoo’s portal style of links) by a competitive follower, and what aspects are not easily copyable (e.g., Amazon as a whole)? We would suggest that the more intangible aspects of UX are the more valuable:
    – brand
    – subjective sense of pleasure
    – a sense that it knows me, or it feels right
    – a sense that it is intuitive and natural; it doesn’t make me think
    – a sense that the system knows what I want, has what I want.
    – fun, cute, cool
    – community

    Maybe we should all be putting more stock in attempts to quantify these intangibles, such as WAMMI (http://www.wammi.com)—which, now that I think of it, is like a UX version of Brand Asset Valuators (http://www.yr.nl/engels/nederland/hetmerk/consult/body.html).

    Additionally, I believe that user research is eventually (soon) where UX will show its real value. Customer insights (deep, complex, interwoven, inculcated understanding—not bullet points) provide building blocks that not only provide first-mover advantage, they allow a company (e.g., Amazon) to continually stay one step ahead.

    Thanks to the article authors for their nod to AM+A’s ROI white paper, and in particular to the metrics breakdown, which was my personal contribution (boy, it really is easy to plug yourself online!).

  20. Great article as always Peter. This quote came immediately to mind….

    “Get your facts first, and then you can distort them as much as you please: facts are stubborn, but statistics are more pliable.”

    ~ Mark Twain

  21. Splendid follow-on comments, Scott. The process of building a performance-management metric for the user’s experience will go a long way toward closing the gap between the Biz and the UX proponents. As in your analogy to performance/quality management, many of the proponents and practitioners in that then-new area quietly recorded their metrics and ended up showing the correlation to the long-term success of the enterprise. The idea of a balanced scorecard looks both long- and short-term across multiple strategic levers, and I suspect the nugget to proving ROI for usability is in applying this same level of balance.

    Thanks for the thoughtful review.

  22. Not to drift too off-topic, because as Scott rightly points out the quality management analogy is not a perfect one, but there is not consensus in the business community that quality management programs (Six Sigma, TQM, etc.) do indeed correlate with the long-term success of an enterprise. One criticism is that these programs engage an organization too heavily in defect prevention at the expense of customer-facing innovation. I think the quality management analogy is one that user experience professionals should use cautiously, if at all.

  23. I am not so surprised by the different comments about the questionable figures from reports like the NN Group one. As a consultant for CGE&Y working in the field of customer/user experience and usability for large companies, I am still looking for that one case where I can truly say that we got a certain hard ROI. I am fully convinced that my work adds to a better brand experience (more likely to buy and become loyal), gets more satisfied users (more people use the systems with fewer errors and less psychological stress), and provides more efficiency by using humanized technology to eliminate manual labour (more productivity at lower cost, or more consistent quality). ROI from usability or user experience is not really an isolated issue. Improvements made in usability are often accompanied by relaunches of software, more sales effort, new releases, and different timeframes than the initial period in which the old product worked.

    I think it is in a way like marketing: great vs. terrible marketing campaigns differ so much that you can adequately measure the difference. A campaign mostly has a specific launch date, and all channels are put to work to make the launch a success. To make large-scale ROI cases for usability or enhanced user experience for certain products or services, you need to take everything into account. Redesigning interfaces means new promotion to regain the lost users who never came back. It means support people trained on the new interface and functionality; it means that you still might have an image problem, or that your project took so long that the measured new productivity, profit, or saved costs are not comparable to the previous situation. Anyway: there is definitely an ROI on design, interaction design, or usability/UXP. But it is either a very simple case of keeping everything at a status quo while measuring the exact difference, or it becomes a 1 MB Excel sheet with every influence measured and weighted before making simple calculations. I could calculate a billion in improvements, but what would the money do spent in a different direction? Talk to the real accountants and you get stuff like NPV and all kinds of measures that evaluate results far more intelligently than a simple “$$$ more sold”.

    But still. There is a huge ROI to gain from good design.

Comments are closed.