Metrics for Heuristics: Quantifying User Experience (Part 1 of 2)

Posted by Andrea Wiggins

In part one of Metrics for Heuristics, Andrea Wiggins examines how web analytics can validate heuristic evaluation and quantify the online brand experience.

“When analytics data is shared with information architects, a subtler and more sophisticated user experience design can emerge.”

Web analytics typically provides intelligence for marketers and executives. While valuable for proving Return On Investment (ROI), web analytics’ real potential lies in improving the online user experience. When analytics data is shared with information architects, a subtler and more sophisticated user experience design can emerge. Information architects need to understand why visitors come to the site and what they seek, so that the content can be best presented to meet user needs and business goals. First, however, it is necessary to evaluate the site using appropriate information architecture guidelines.

Using heuristics
Providing a context for heuristics is the most useful application of web metrics in a site redesign: a framework for measurement is critical to future evaluation of the success and value of strategic but intangible investments like information architecture. Analyzing pre-existing web traffic yields insights into user behavior and measures how well a site meets user needs. By comparing analytic data to an information architect's heuristic evaluation, a basic validation emerges for otherwise subjective performance measures. In addition to heuristic validation, web analysts can use the information to engineer effective key performance indicators (KPI) for future evaluation, and the information architect can use the information to provide direction for the design.

Before further exploring the use of web analytics to support information architecture heuristics, it is necessary to note weaknesses in source data and current analytic practices. First, web traffic measurement will never reveal the complete user picture, and this is one reason to use web analytics in combination with other user experience evaluation tools, such as customer databases and user testing. Second, there are currently very few standards in measurement, methods, and terminology, which affects the analysis and interpretation of any web analytic data. Finally, available site traffic data may be suboptimal for analysis depending upon the volume and nature of the data available.

Rubinoff’s user experience audit
Cooperative selection of success measures early in the project's definition or discovery phase aligns design and evaluation from the start; it also lets both the information architect and the web analyst better prove the value of their services and keep the project's focus on business and user goals. Rubinoff's user experience audit is one of several tools information architects can use to evaluate a website and provide a useful context for design.

Rubinoff presents a format for quantifying useful, though subjective, measures that information architects can easily assess, and his example uses four broad, equally weighted categories with five evaluative statements in each. The most functional aspect of this tool is its complete customizability. Because every domain operates in a different context, information architects and web analysts will achieve the greatest success by using the evaluative points that are most verifiable and valuable for each website.

Figure 1: Spider chart from “Quantifying the User Experience,” Robert Scott Rubinoff
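
As a rough illustration of how such an audit might be tabulated for a chart like the one above, here is a minimal TypeScript sketch; the 1-to-5 scale, the equal weighting, and the scores themselves are assumptions invented for the example, not values from Rubinoff's article.

```ts
// Minimal sketch of tabulating a Rubinoff-style user experience audit.
// The 1-5 scale, equal weights, and example scores are illustrative assumptions.

interface AuditCategory {
  name: string;
  statements: { text: string; score: number }[]; // each scored 1 (poor) to 5 (excellent)
}

const audit: AuditCategory[] = [
  {
    name: "Branding",
    statements: [
      { text: "The site provides visitors with an engaging and memorable experience.", score: 4 },
      { text: "Graphics, collaterals and multimedia add value to the experience.", score: 3 },
    ],
  },
  {
    name: "Usability",
    statements: [
      { text: "The site prevents errors and helps the user recover from them.", score: 2 },
      { text: "The site helps its visitors accomplish common goals and tasks.", score: 4 },
    ],
  },
];

// Average each category's statement scores; categories are weighted equally,
// so the per-category averages can be plotted directly on a spider chart.
for (const category of audit) {
  const avg =
    category.statements.reduce((sum, s) => sum + s.score, 0) / category.statements.length;
  console.log(`${category.name}: ${avg.toFixed(1)} / 5`);
}
```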

I will examine representative evaluative points drawn directly from Rubinoff’s user experience audit that can be validated through web analytics:

Branding

  • The site provides visitors with an engaging and memorable experience.
  • Graphics, collaterals and multimedia add value to the experience.

Usability

  • The site prevents errors and helps the user recover from them.
  • The site helps its visitors accomplish common goals and tasks.

Content

  • Link density provides clarity and easy navigation.
  • Content is structured in a way that facilitates the achievement of user goals.
  • Content is appropriate to customer needs and business goals.

Ideally, the most relevant metrics or KPI should be comparable both before and after a site is redesigned, although direct comparisons are often impossible after fundamental changes to site structure are implemented. By nature, a site redesign generates new points of measurement, typically enhanced by improved data collection strategies, and only a handful of the site’s previous KPI might still be directly applicable.

Quantifying brand
A popular yet extremely subjective measure of brand requires judging whether the site provides an “engaging and memorable” experience. Direct measurement of brand is elusive in any medium, so web analytics’ indirect evidence is far more tangible than brand evaluation in most other channels. As a validation metric, the most direct measure of the ephemeral value of brand is the ratio of returning visitors. The ideal return visitor ratio varies by site: a tech support site prefers a low proportion of return visitors, while an ecommerce site requires a continual infusion of new visitors. To measure the online brand experience, the ideal proportion of return visitors for the site must be identified as a KPI and tracked over time.
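
As a minimal sketch (not any particular analytics package's method), the returning-visitor ratio could be computed from a simplified visit log along these lines; the record shape, sample data, and target value are assumptions for illustration.

```ts
// Minimal sketch: the returning-visitor ratio as a brand KPI, computed from a
// simplified visit log. Field names, sample data, and the target are assumptions.

interface Visit {
  visitorId: string;
  start: Date;
}

function returningVisitorRatio(visits: Visit[]): number {
  if (visits.length === 0) return 0;
  // Process visits in chronological order so each visitor's first visit is found correctly.
  const ordered = [...visits].sort((a, b) => a.start.getTime() - b.start.getTime());
  const seen = new Set<string>();
  let returning = 0;
  for (const v of ordered) {
    if (seen.has(v.visitorId)) returning++; // any visit after the first counts as a return
    else seen.add(v.visitorId);
  }
  return returning / visits.length;
}

const sample: Visit[] = [
  { visitorId: "a", start: new Date("2006-09-01") },
  { visitorId: "b", start: new Date("2006-09-02") },
  { visitorId: "a", start: new Date("2006-09-05") },
];

// Track the ratio over time against whatever proportion is ideal for the site.
const targetRatio = 0.4; // hypothetical target
const ratio = returningVisitorRatio(sample);
console.log(`Returning-visitor ratio: ${(ratio * 100).toFixed(0)}% (target ${targetRatio * 100}%)`);
```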

The length of the average site visit, in both time and pages viewed, also provides verification of the brand experience. A specific goal for engagement length can be set as a key performance indicator using either of these visit statistics for the site. Similarly, by considering content groups of similar pages, measuring the depth or breadth of visits to each content group can assess how engaging the users find the experience.
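
In the same hedged spirit, the sketch below derives average visit length (in pages and seconds) and per-content-group page views from simplified page-view records; the record shape, the content-group labels, and the duration approximation are all assumptions for illustration.

```ts
// Rough sketch of visit-length and content-group engagement measures computed
// from simplified page-view records. The record shape and groups are assumptions.

interface PageView {
  visitId: string;
  contentGroup: string; // e.g. "products", "support"
  timestamp: Date;
}

function engagementStats(views: PageView[]) {
  // Group page views by visit.
  const byVisit = new Map<string, PageView[]>();
  for (const pv of views) {
    const list = byVisit.get(pv.visitId) ?? [];
    list.push(pv);
    byVisit.set(pv.visitId, list);
  }
  if (byVisit.size === 0) {
    return { avgPagesPerVisit: 0, avgDurationSeconds: 0, viewsPerContentGroup: new Map<string, number>() };
  }

  let totalPages = 0;
  let totalDurationMs = 0;
  const viewsPerContentGroup = new Map<string, number>();

  for (const visit of byVisit.values()) {
    visit.sort((a, b) => a.timestamp.getTime() - b.timestamp.getTime());
    totalPages += visit.length;
    // Duration approximated as last page view minus first; the dwell time on the
    // final page is not observable from page views alone.
    totalDurationMs += visit[visit.length - 1].timestamp.getTime() - visit[0].timestamp.getTime();
    for (const pv of visit) {
      viewsPerContentGroup.set(pv.contentGroup, (viewsPerContentGroup.get(pv.contentGroup) ?? 0) + 1);
    }
  }

  return {
    avgPagesPerVisit: totalPages / byVisit.size,
    avgDurationSeconds: totalDurationMs / byVisit.size / 1000,
    viewsPerContentGroup, // depth of engagement with each group of similar pages
  };
}
```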

Unfortunately, validating an interactive element’s brand value depends on whether the interaction was created with measurement in mind. Engaging online experiences like multimedia games, Flash, and AJAX are all measurable, but only if the design incorporates JavaScript tagging to report key interactions. When this information is available, appropriate analysis can still be difficult to generate. The current analysis tools are page-oriented, not interaction-oriented, so extensive work-around measures may be required. In a redesign, KPI for interactive site elements should be a primary consideration in the experience design development process. For example, the length of time users spend with an interactive element provides a concrete idea of how long users are engaged by the application, and the average number of return visits for returning visitors would be a simple statistic to assess how memorable the experience is: just how frequently do visitors return?
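
As an illustration of the kind of tagging the paragraph describes, the sketch below instruments a hypothetical interactive element in TypeScript; `sendToAnalytics` is an invented placeholder, not a real vendor API, and would wrap whatever reporting call the chosen analytics package actually provides.

```ts
// Hedged sketch: instrumenting an interactive element so that key interactions and
// engagement time are reported. sendToAnalytics is a hypothetical stand-in for the
// chosen package's page-tagging call, not a real vendor API.

function sendToAnalytics(event: string, data: Record<string, string | number>): void {
  console.log("analytics event:", event, data); // placeholder for the real tagging call
}

function instrumentInteractiveElement(element: HTMLElement, name: string): void {
  const startedAt = Date.now();

  // Report each key interaction so the page-oriented tool sees the element at all.
  element.addEventListener("click", () => {
    sendToAnalytics("interaction", { element: name, action: "click" });
  });

  // Report a rough engagement length when the visitor leaves the page.
  window.addEventListener("beforeunload", () => {
    sendToAnalytics("interaction", {
      element: name,
      action: "engagement-time",
      seconds: Math.round((Date.now() - startedAt) / 1000),
    });
  });
}

// Hypothetical usage: instrumentInteractiveElement(document.getElementById("demo-widget")!, "demo-widget");
```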

Another branding element that Rubinoff suggests evaluating is the value of graphics and multimedia. As previously mentioned, KPI for multimedia must be determined in the site planning phase for appropriate implementation, and these same measures can determine whether multimedia applications are providing value that justifies their expense. Aside from ensuring the graphics do not make the pages load too slowly, measuring the experiential value of graphics can be more difficult, unless they are clickable graphics leading the user to further content. The challenge then is to differentiate clicks on graphics from clicks on embedded navigation links. Shrewd design can ensure this level of measurability for future evaluation, but it is unlikely for discovery-phase analysis, where it would be most useful.
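
Continuing the same assumptions, clicks on graphics could be tagged with their own label so that reporting can separate them from clicks on embedded navigation links; the data attribute name and the `sendToAnalytics` helper are invented for the example.

```ts
// Sketch: distinguish clicks on promotional graphics from ordinary navigation clicks
// by tagging graphic links with their own label. The data attribute and the
// sendToAnalytics helper (from the previous sketch) are assumptions.

declare function sendToAnalytics(event: string, data: Record<string, string | number>): void;

function tagGraphicLinks(): void {
  document.querySelectorAll<HTMLAnchorElement>("a[data-graphic-name]").forEach((link) => {
    link.addEventListener("click", () => {
      sendToAnalytics("graphic-click", {
        graphic: link.dataset.graphicName ?? "unknown",
        destination: link.href,
      });
    });
  });
}
```

Calling tagGraphicLinks() once the page has loaded would then let reports compare graphic clicks against ordinary navigation clicks.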

Editor’s note: Check back for “Part 2” of this series where Andrea explores how designers can use metrics to quantify and evaluate a website’s content, navigation, and usability.

Comments

  1. Nice article, Andrea. Providing a context is invaluable when defining heuristics in the first place. Though you have mentioned that Rubinoff's chart is customizable, there are a couple of things I would like to add. Business is left out of the gazillion heuristics that I have seen to date. While some may argue that every one of the four given broad categories affects business, in my opinion it is not so.

    The entire purpose of a customer-facing site is to realize the business objectives, and that should reflect in the design. When I say business, I mean not only the dollars that you are expecting; it can also include future strategies and the time to market of those strategies. Is your IA designed in a way that facilitates new additions? What would be the time to launch new campaigns? This involves looking not only at the site, but also at the entire design development process, including the CMS. This is just to name a few.

    If you look at the site from a customer experience perspective, then each and every “touch point” should be accounted for, and the experience should be checked against metrics. How you define those metrics is an interesting challenge, more difficult than the actual rating and evaluation process. Waiting for the next part of this article.

  2. While I certainly agree that business goals are another key point of measurement and evaluation, I did not include them here because the focus of the user experience audit is the users’ goals. I think everyone can agree that business and user goals are strongly related; in my personal philosophy of design, if a site does not meet user needs, it doesn’t matter what the business goal behind it was because it cannot have succeeded at that goal. Regardless of how user-centric or business-centric your perspective, my experience has been that the same web analytic measures often define success from both perspectives.

    You make an excellent point about other ways in which web analytics can be used to improve site management strategies, and I agree wholeheartedly that evaluation of multiple aspects of a site should be an up-front consideration. I look forward to more people adopting your perspective on the value of incorporating measurement in the entire iterative design process!

  3. Masood,

    How would you formulate some of the things you mention as heuristics we could apply for sites?

  4. Andrea,

    At the ’06 IA Summit, Steve Mulder gave a presentation about bringing more science to persona creation, in which he advocated (and this is a gross oversimplification here) that personas be developed and then validated/updated through surveys & research. Your insightful comments about using analytics data to validate a heuristic analysis got me to thinking that this same data could be used to either create or validate personas. I’m not sure if this is something you’ll be covering in Part 2, but it seems like another good use for this data.

    – F.

  5. I agree with Fred, and by extension Steve Mulder. Personas can be a highly accurate starting point for fully assessing tasks and content needs through behavioral research. There really should be some effort to validate the data by comparing what research respondents say, within the framework of issue categorization, against web analytics and analysis of behavior in a lab or other settings appropriate for contextual inquiry.

    As an example, I’m in the middle of evaluating the impact of AJAX and social/community architectures on a client’s particular industry and marketing landscape. We have performed a preliminary round of research in three cities and have established enough data to suggest 2 solid personas, 1 emerging persona type, and suggestions or mention of about 4 others. Taking the first solid persona types, it was very easy to see them represented on YouTube and blogs as self-publishing persona types directly impacting my client’s brand. However, my client has been able to present little to no meaningful Web analytic data for their current site.

    The good news is that we have some observable persona behavior for validation out there on the web, but web analytics data from my client's site would have helped us paint a more complete picture. Without it, our heuristic analyses are based more on opinion than quantifiable activity; with it, I suspect we would have gone further in validating our persona types and reduced our overall research timeframe.

  6. I can’t agree more regarding the use of web analytic data to inform persona development. While I didn’t really get into that topic in this article, I did talk about personas and usability in a recent similarly-themed but differently-focused white paper, “Data Driven Design: Leveraging Analytics to Improve your Website Overhaul,” which is available at http://www.enlighten.com/pdfs/wp_analytics_08_06.pdf.

  7. Andrea,

    I think all you’d need to do to repair the link is remove the full stop (period) after .pdf

    Chris.

  8. Andrea,

    Very interesting article. I fully agree with part 2 and I agree with most points of part 1. However, I have strong doubts about the way you define branding and separate content, usability, and branding from each other:

    Branding:
    “The site provides visitors with an engaging and memorable experience.
    Graphics, collaterals and multimedia add value to the experience.”

    I think a website provides an engaging and memorable experience if it is
    a) identifiable in its character, purpose, and value
    b) pleasing
    c) memorable
    It is easily identifiable if it has a clear purpose that it communicates, if it is consistent in its appearance, and if it follows the guidelines of corporate design when it belongs to a bigger corporation.
    It is memorable if it solved my problem quicker and better than expected.
    Graphics and multimedia should follow the corporate guidelines and be highly relevant if used within content. If they're used just to please the user, they create mistrust and annoyance.

    Usability
    “The site prevents errors and helps the user recover from them.
    The site helps its visitors accomplish common goals and tasks.”

    From my experience, usability is a crucial element of the overall brand experience. If you go to a website that doesn't work, you have a bad brand experience. First of all, it needs to work. People hardly pay attention to design; they feel it, but they don't really notice it. Design is important though, as it's critical in the first few seconds, when the user decides to stay or go. Good design doesn't mean the user stays; usability is more important here, but design helps build trust during that first judgment of the website's quality.

    Usability tests deliver good results within test environments, but they hardly assess one crucial moment of user experience: the first sight and the question, should I stay or should I go? This moment is hard to test.

    That's maybe why usability consultants have started developing an interest in branding and design in general (see Norman). Coming from a big brand company, it's quite refreshing to see a usability expert get all excited about the matter:

    http://video.google.com/videoplay?docid=5654878583447435228

    Content
    “Link density provides clarity and easy navigation.
    Content is structured in a way that facilitates the achievement of user goals.
    Content is appropriate to customer needs and business goals.”

    Again, the separation of usability and content is quite fuzzy here. Structuring content is not content; it's IA, and as such rather a matter of usability. Maybe I misread the separation, but I'd understand the three categories more like this:

    Brand > Usability > Content. And yet that's still not quite correct.

    What's my point? As briefly as I can:
    1. Contemporary branding = interface (of a company or product, depending on whether you look at corporate branding or product branding), as nowadays brand is defined as the overall experience one has in connection with a product. The best, i.e. most memorable and marketable (inciting word of mouth), experiences are interactive ones.
    2. The interface is first of all a matter of usability. Products need to work. In the end the interfaces get skinned according to corporate guidelines, but only bad corporate guidelines intrude on the product.
    3. An interactive product doesn't need too much skinning; its interface takes shape from the inherent functionality of what it's built for (see craigslist, delicious, ebay, reddit).
    4. The Internet has changed and will continue to change our understanding of quality, brand and communication. And, at least from the consumer's point of view, in a good way.
    5. Brand experience IS measurable: with web stats and usability tests on one side, and by cross-checking against the corporate design manual on the other.
    6. Brand is still widely misunderstood as a superficial discipline that starts with the logo, leads into an expensive brand manual that costs millions defining a couple of grids, colors and fonts and ends with an emotional ad on TV.
    7. Brand on the Internet does not equal special graphic effects. The most special effect a website can have is that it's useful and easy to use. If, on top of that, it looks good, that's branding. Branding and usability are not contradictory. They used to be quarreling twins, but they have grown up and now they can't be without each other anymore.

    Well, that was not so short… Anyway, I hope I've not been too intrusive with my little speech here. But I have been excited about this matter for over 4 years now, and as I am currently writing a book on usability and branding, I think this article and the comment section provide a wonderful opportunity to get more insight into a matter that is widely ignored.

  9. Oliver, good points! I was trying to stay as general as possible and so used some of the specific example items that Rubinoff provided. I hope that anyone using web analytic evaluation for information architectures would choose the audit statements that best fit the specific site, because these are certainly very broad and, as you point out, may not fit with any one person's conception of what measures a given site attribute.

    In my view, brand, usability and content are parts of an interrelated continuum, but for practicality’s sake, we have to separate these characteristics of a site somehow if they are to be evaluated. Fortunately, everyone can choose or create the audit statements that best suit their specific purposes.

  10. Hi Andrea.

    This statement seems rather contentious:
    “The length of the average site visit, in both time and pages viewed, also provides verification of the brand experience. […] Similarly, by considering content groups of similar pages, measuring the depth or breadth of visits to each content group can assess how engaging the users find the experience.”

    It's contentious because users who spend a long time on a site, going deep and across many content groups, could simply be users who can't find what they want. In this case, the site is not providing an engaging experience, but an annoying one.

    Just using web analytics, do you think there are any other ways of looking at the data and being able to distinguish between users who are truly engaged and users who are simply lost?

    What do you think?

  11. Hi Tim –

    It’s definitely true that a long visit may mean that people just couldn’t find what they are looking for. However, it’s been my experience (from looking at the long-term trend stats on dozens of sites) that in general, if visitors cannot find what they are looking for, they simply leave. For the most part, people just don’t hang around long if they don’t find what they want. Visit lengths will be short (particularly in number of pages) and there may be a lot of null search results or abandonment from a search results page.

    To drill down into the question of whether visitors are really engaged, I would specifically look for navigation trends like pogosticking to identify whether users are getting lost in the site; it’s also possible that they’re viewing many pages because they’re going back and forth between a product listing and product details, for example. Like most web analytic measures, it all depends on the site; I would always look at a combination of several metrics to get an idea of whether or not the site is really engaging visitors, and would definitely recommend using visitor segmentation on the metrics as well.
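
    A minimal sketch of what such a pogosticking check might look like, given an ordered list of page paths within a single visit; the round-trip threshold is an arbitrary assumption, and a flagged visit is a prompt to look closer (and to segment visitors), not a verdict on its own.

    ```ts
    // Rough sketch: flag "pogosticking" (bouncing back and forth between a hub page
    // and its children) from an ordered list of page paths within one visit.
    // The threshold of two round trips is an arbitrary assumption.

    function looksLikePogosticking(pagePaths: string[], threshold = 2): boolean {
      let roundTrips = 0;
      for (let i = 2; i < pagePaths.length; i++) {
        // A -> B -> A: the visitor returned to the page seen two steps earlier.
        if (pagePaths[i] === pagePaths[i - 2] && pagePaths[i] !== pagePaths[i - 1]) {
          roundTrips++;
        }
      }
      return roundTrips >= threshold;
    }

    // A listing-to-detail-and-back pattern can also be legitimate browsing.
    console.log(looksLikePogosticking(["/products", "/products/a", "/products", "/products/b", "/products"])); // true
    ```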
