Metrics for Heuristics: Quantifying User Experience (Part 2 of 2)

In part one of Metrics for Heuristics, Andrea Wiggins discussed how designers can use Rubinoff’s user experience audit to determine metrics for measuring brand. In part two, Wiggins examines how web analytics can quantify usability, content, and navigation.

“Web analytics can only serve as verification for information architecture heuristics when the analyst and information architect are evaluating the same qualities of a website.”

Rubinoff’s usability statements echo several of Jakob Nielsen’s heuristics. It is tempting to believe that web analytics technology can replace proven usability testing, but it cannot. Nearly every discussion of web analytics in support of usable design is quick to note that traffic analysis, however reliable and sophisticated, cannot match the thoroughness or depth of insight of lab testing. However, JupiterResearch’s 2005 report, “The New Usability Framework: Leverage Technology to Drive Customer Experience Management,” found that by combining professional usability testing with web analytics, customer satisfaction data, and A/B optimization, usability expenses can be cut dramatically, purportedly without loss in quality.

Using usage information for usability
Error prevention and recovery is a common evaluation point for usability. Information architects will find it easy to determine whether a site offers appropriate error handling, but harder to quantify the value that error handling creates. The most direct measures are simple proportions: the percentage of site visits including a 404 (file not found) error, the percentage encountering a 500 (server) error, and the percentage of visits ending with other errors. Combined, 404 and 500 errors should account for under 0.5% of requests logged to the server, and a quick check can reveal whether errors merit further investigation. Path, or navigation, analysis recreates errors by examining the pages most commonly viewed one step before and after an error, so they can be understood and remedied. Examining 404 errors will also identify pages to redirect in a redesign, ensuring continuity of service for visits to bookmarked URLs.
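
As a rough illustration only (not drawn from any particular analytics product, and with the log file name and format assumed), the short Python sketch below shows how these error proportions might be tallied from an Apache-style access log:

```python
import re
from collections import Counter

# Assumed: a server access log in Apache combined format, e.g.
# 127.0.0.1 - - [10/Oct/2006:13:55:36 -0700] "GET /page HTTP/1.1" 404 2326 "-" "Mozilla/..."
LOG_PATH = "access.log"  # hypothetical file name
STATUS_RE = re.compile(r'"\S+ (\S+) \S+" (\d{3}) ')

requests = 0
status_counts = Counter()
not_found_urls = Counter()

with open(LOG_PATH) as log:
    for line in log:
        match = STATUS_RE.search(line)
        if not match:
            continue
        url, status = match.group(1), match.group(2)
        requests += 1
        status_counts[status] += 1
        if status == "404":
            not_found_urls[url] += 1  # candidates for redirects in a redesign

if requests:
    error_rate = (status_counts["404"] + status_counts["500"]) / requests
    print(f"404s: {status_counts['404']}, 500s: {status_counts['500']} "
          f"of {requests} requests ({error_rate:.2%} combined)")
    print("Most common missing URLs:", not_found_urls.most_common(5))
```

The same tally of missing URLs is a natural starting point for the redirect list mentioned above.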

Analyze success rates to determine whether a site helps its visitors accomplish their goals and common tasks. Applying scenario or conversion analysis to specific tasks, analysts can examine leakage points and completion rates. Leakage points, the places where users deviate from the designed process, should be of particular interest: when visitors leave a process unexpectedly, where do they go, and do they eventually return?

The straightforward percentage calculations that comprise conversion analysis are simple to determine once the appropriate page traffic data has been collected. Conversion rate definitions and goals are often determined with the client using key performance indicators, and the information architect’s understanding of these measures will facilitate a deeper understanding of how the design should support business and user goals. A primary difficulty often lies in choosing the tasks to assess. These measures are best defined with the assistance and buy-in of the parties who are ultimately responsible for driving conversion. Shopping cart analysis is the most common application of this type of evaluation, but online forms carry just as much weight for a lead generation site as checkout does for an ecommerce site.
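
For readers who want the arithmetic spelled out, here is a minimal sketch of those percentage calculations, assuming per-step visit counts have already been pulled from the page traffic data; the step names and figures are invented:

```python
# Hypothetical funnel: visits observed at each step of a checkout process.
funnel = [
    ("View cart", 10_000),
    ("Enter shipping", 6_200),
    ("Enter payment", 4_900),
    ("Confirm order", 4_300),
]

first_step_visits = funnel[0][1]
for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
    step_conversion = next_visits / visits  # how many continue to the next step
    leakage = 1 - step_conversion           # deviation from the designed process
    print(f"{step} -> {next_step}: {step_conversion:.1%} continue, {leakage:.1%} leak")

overall = funnel[-1][1] / first_step_visits
print(f"Overall conversion: {overall:.1%}")
```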

Unfortunately, shopping cart analysis can misrepresent what it seeks to measure. A significant proportion of shoppers engage in online research to support offline purchasing, so a high shopping cart abandonment rate may not be as negative as it first appears. Shopping cart analysis, however, can be very useful for identifying shopping tools and cross-marketing opportunities that may add value by improving conversion rates. Taken to the level of field-by-field form completion analysis and A/B optimization, shopping cart optimization can produce impressive returns, and is a particularly rich and enticing aspect of user behavior to analyze.

A/B and multivariate testing are additional applications of web measurement for improving the user experience. These techniques simultaneously test two or more page designs, or series of pages such as shopping tools, against each other by randomly splitting site traffic between them. When statistical significance is reached, a clear winner can be declared. A/B testing removes time as a confounding factor by running both designs at once, so each design is subject to the same market conditions; multivariate testing does the same for more than two design variations. This approach, also known as A/B or multivariate optimization, is better suited to refining an established design than to informing a complete redesign.
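
As one hedged example of how statistical significance might be judged, the sketch below applies a standard two-proportion z-test to invented conversion figures for two designs; real optimization tools handle this internally, and the 5% threshold is only a convention:

```python
from statistics import NormalDist

# Invented figures: conversions out of visits for two page designs
# served simultaneously to randomly split traffic.
a_visits, a_conversions = 5_000, 240
b_visits, b_conversions = 5_000, 295

p_a = a_conversions / a_visits
p_b = b_conversions / b_visits

# Two-proportion z-test under the pooled null hypothesis that both designs convert equally.
p_pool = (a_conversions + b_conversions) / (a_visits + b_visits)
se = (p_pool * (1 - p_pool) * (1 / a_visits + 1 / b_visits)) ** 0.5
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("Keep the test running; no clear winner yet.")
```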

Analyzing completion of online forms depends on advance planning and requires more work during site development, but this effort is critical to measuring the forms’ performance in the hands of users. If measurements indicate that the design of an online form or process prevents a successful user interaction, this is not bad news: it is an unparalleled opportunity for improvement. Armed with the knowledge of specific user behaviors and needs, the information architect can design a more successful, usable site.

Content evaluation beyond page popularity
Rubinoff’s final category for user experience analyzes content by asking about the site’s navigation, organization, and labels. Navigation analysis can take several forms, requires some tolerance for indeterminate results, and is often subject to complications resulting from the limitations of data collection methods and analysis techniques. Navigation analysis findings are typically too valuable to disregard, and they have a host of applications from scenario analysis to behavioral modeling.

Performing navigation analysis on the site’s most popular pages shows the paths users travel to arrive at and depart from those pages. When pages with similar content do not achieve comparable traffic, web analytics can provide the insight to determine the differences in the ways that visitors navigate to the pages. Looking for trends in visits to content versus navigation pages can also indicate where to focus redesign efforts: if navigation pages are among the site’s most popular pages, there is probably good reason to spend some time considering ways the site’s navigation might better support user goals. Examining the proportions of visits using supplemental navigation, such as a site map or index, can also reveal problems with primary navigation elements. In these cases, however, web analytic data is more likely to point out the location of a navigation problem than to identify the problem itself.

Web analytics can help explain why similar pages perform differently.
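
A rough sketch of the one-step-before and one-step-after tabulation described above might look like the following, assuming visit paths are already available as ordered lists of pages (the paths, page names, and site map URL are invented):

```python
from collections import Counter

# Invented visit paths: each is an ordered list of pages viewed in one visit.
visits = [
    ["/home", "/products", "/products/widget", "/cart"],
    ["/home", "/site-map", "/products/widget"],
    ["/products", "/products/widget", "/home"],
]

TARGET = "/products/widget"  # a popular page we want to understand

before, after = Counter(), Counter()
for path in visits:
    for i, page in enumerate(path):
        if page != TARGET:
            continue
        if i > 0:
            before[path[i - 1]] += 1  # page viewed one step earlier
        if i + 1 < len(path):
            after[path[i + 1]] += 1   # page viewed one step later

print("Arrived from:", before.most_common())
print("Departed to:", after.most_common())

# Proportion of visits that relied on supplemental navigation such as a site map.
site_map_visits = sum(1 for path in visits if "/site-map" in path)
print(f"Visits using the site map: {site_map_visits / len(visits):.0%}")
```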

Determining whether content is appropriate to visitor needs and to business goals is a complex problem. To validate the information architect’s analysis of content value, individual content pages should be examined and compared on such measures as the proportion of returning visitors, average page viewing length, external referrals to the page, and visits with characteristics indicative of bookmarking or word-of-mouth referrals. For some sites, comparing inbound links and the associated tags from blogs or social bookmarking sites such as del.icio.us can provide a measure of content value.

Content group analysis is another common approach to measuring the value of website content: the content is divided into logical, mutually exclusive groupings, and traffic statistics and user behaviors are monitored within each group. Content group analysis can slice and dice data across several types of groupings, which may include audience-specific content tracks; product-related content comparisons at numerous levels of granularity, often presented in a drill-down report that allows a simultaneous view of every sub-level of a content group; and site feature-specific content groupings. For example, content groups for a prestige beauty products site would naturally include subject-based categorizations such as Cosmetics, Skin Care, and About the Company. Groupings that evaluate the same content from a different perspective might include Education, Fashion, Product Detail, and Purchase. Grouping pages by level in the site hierarchy is particularly useful in combination with entry page statistics for informing navigation choices.

Content group analysis compares performance across different parts of a site.
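
A minimal sketch of this kind of grouping, loosely following the beauty products example above, might map URLs to mutually exclusive content groups and roll up pageviews; all of the URLs, groupings, and figures here are invented:

```python
from collections import defaultdict

# Invented pageview counts keyed by URL path.
pageviews = {
    "/skincare/cleanser": 4_200,
    "/skincare/moisturizer": 3_800,
    "/cosmetics/lipstick": 6_100,
    "/about/history": 900,
}

# Mutually exclusive content groups defined here by URL prefix; real groupings
# would be agreed with the client and may not map so neatly onto the URL scheme.
groups = {
    "/skincare/": "Skin Care",
    "/cosmetics/": "Cosmetics",
    "/about/": "About the Company",
}

def group_for(url: str) -> str:
    for prefix, name in groups.items():
        if url.startswith(prefix):
            return name
    return "Ungrouped"

totals = defaultdict(int)
for url, views in pageviews.items():
    totals[group_for(url)] += views

for name, views in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name}: {views:,} pageviews")
```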

To determine how well site content matches user expectations, few tools can outperform search log analysis. If analysis of site search query terms reveals a significant disparity between the language visitors use and the language the site employs, the chances are good that the content does not fit user needs. Monitoring trends in search engine referral terms can provide insight into whether search indexing is matching queries to content well enough to meet user expectations, but the primary reason to track this information is to evaluate and improve upon search engine optimization results.

By mining search engine referral terms and onsite search queries and comparing the user’s words to the website’s own text, web analytics can identify language and terminology that may prevent the site from meeting user and business goals. If the site has multiple audiences with significantly different vocabularies, comparing search terms and site text for the pages designed for specific audience segments offers a more targeted evaluation of whether the site’s labels and content meet user expectations. For example, pharmaceutical sites are often organized to address such varied audiences as investors, doctors, and patients. Content group analysis is one of the most direct methods to achieve this type of audience segmentation, but dividing the site’s audience by behavior is also very effective. The same search term analysis can also provide insight into what users expected to find on the site but did not, identifying business opportunities or gaps in search indexing.
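
As a simple, hypothetical illustration of that comparison, the sketch below tallies onsite search terms that never appear in the site’s own labels; the queries and vocabulary are invented, and a real analysis would draw on much larger query logs and the full site text:

```python
from collections import Counter

# Invented onsite search queries and site label/heading vocabulary.
search_queries = ["lip gloss", "anti aging cream", "free shipping", "lip gloss", "wrinkle cream"]
site_vocabulary = {"lipstick", "moisturizer", "skin care", "cosmetics", "shipping"}

query_terms = Counter()
for query in search_queries:
    for term in query.lower().split():
        query_terms[term] += 1

# Terms users search for that never appear in the site's own labels
# suggest either a labeling mismatch or a genuine content gap.
site_words = {word for label in site_vocabulary for word in label.split()}
missing = {term: count for term, count in query_terms.items() if term not in site_words}

print("User terms absent from site labels:",
      sorted(missing.items(), key=lambda kv: -kv[1]))
```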

Conclusion
The hard user data of web analytics can only serve as verification for information architecture heuristics when the analyst and information architect are evaluating the same qualities of a website. Web analytic data comes with several disclaimers, primarily that the data is never complete or wholly accurate, and cannot be, due to limitations of technology. These limitations include confounding elements for navigation analysis such as browser page caching and proxies serving cached pages; for audience segmentation and return visitor analysis, issues of cookie blocking and deletion complicate data collection. While accuracy improves reliability, accuracy and reliability are significantly different qualities: uniformly inaccurate data often provides reliable intelligence. Despite challenges to the accuracy of web traffic measurement, reliable metrics can still inform decision-making and provide solid insights to improve the user experience.

An information architect’s evaluation of user experience is often highly subjective, and it gains value from the evidence-based understanding that web analytics produces. User testing and web analytic data are currently the only ways to verify the heuristic assumptions upon which a website is redesigned. Access to web analytic data during the earliest phases of a redesign informs the site’s architecture from the very beginning, allowing for a shorter engagement and resulting in a design that significantly improves the site’s usability for its primary audiences’ needs. The best way to ensure verification of a site’s heuristic analysis is for the information architect to develop the user experience audit statements in cooperation with the web analyst who will retrieve the data to validate them. In the larger picture, including measurement in site design helps prove the ROI of intangible investments in web presence, ultimately to the benefit of all.

10 comments

  1. Andrea, I enjoyed reading the first and second part of your article. I believe that the combination of usability testing and analytics can work together.

    Of course, if you are on a budget you can get away with just using a sophisticated analytics service, if it includes click path and other valuable metrics, along with someone who knows how to interpret the data.

  2. Good Article!

    Web Analytics can be a very useful set of information to test whether the Information Architect has done the job correctly or not.

    Web Analytics can help us measure the difference between “how the designer completes the IA work, thinking from the user’s perspective” and “how the target audience actually behaves and interacts with the application.”

  3. Nice summary of the field. I would point out, though, that in my own experience, anything to do with web analytics is a huge time sink – you need to be 100% clear about what it is you are trying to find out, and exactly how you are going to do it. Far too often, I find people looking at almost useless aggregate traffic reports in the hope that they will reveal some insight. I think you’ve shown that it takes a lot more than that to be useful in supporting design directions.

    By the way, I recall the presentation about SCONE with TEA a few years ago at CHI. Has anyone subsequently gone on to use that in any way?

  4. Ah – just seen “Edit (for another 15minutes)” – but you can’t delete. Oh well. Sorry about this everyone.

  5. Jonathan, you’re definitely right about the time-sink potential with web analytics. Not all statistics are useful, decision-making statistics. My favorite heuristic for deciding whether a statistic should be reported is whether it measures something that can be changed or affected by a decision of the people who would receive the report. If they can’t do anything about it, then there’s not necessarily much value in reporting on it; fortunately, we are often able to influence outcomes when we have a goal state in mind.

    Instead, my approach is to evaluate the site goals (as established by the site stakeholders) and work out the measures that can be used to reflect those site goals based on the available data set. Sometimes you can get exquisitely precise figures that really tell a tale, and sometimes you can only get the gist of it; either way, the only tenable approach is to know what you’re looking for and how you will find it before you dive in. That’s just good old-fashioned research design. You can’t build a site architecture if you don’t know what the site is intended to do and for whom, and you can’t measure that site’s success if you don’t know how that site works and what constitutes success. The other big problem with mucking about blindly in data is that you may find something of “significance” that really has no face validity.

  6. Andrea, I think the approach of evaluating site goals and then creating the measures is perfect. The only problem here is that many of the site goals or wishlists are too esoteric to work out measures for. In many a case, even the objectives are not clear. So what does the UX practitioner do? Define the objectives independently? This is something that should not be done, as the true context behind an initiative can be accurate only when it comes from, or is endorsed by, the business owners.

  7. Masood,

    Even when they’re not stated, we have assumptions about the goals.

    If you don’t have clear goals and metrics, you should definitely discover them for yourself and state them. The business owner can verify whether you’re right or not. And, once that discussion starts, not only do you have clear goals, but you’ve changed the overall project’s conversation.

  8. Masood,
    Austin has it nailed. My agency work experience has involved the discovery of site goals as a primary part of the site development process. Even if the organization already has goals for their site, they often require adjustment for a new site structure and features. For example, if having 3 or more pageviews in an average visit was a goal metric, a change to a different site technology could interfere by invisibly adding another pageview to every visit, and the original goal metric would need to be adjusted accordingly. In my experience, setting site goals is often a “committee” activity, so I definitely agree that the UX practitioner shouldn’t be defining objectives independently.
