Building a Data-Backed Persona


Incorporating the voice of the user into user experience design through personas is no longer the latest and greatest practice. Everyone is doing it these days, and with good reason. Using personas in the design process helps focus the design team’s attention and efforts on the needs and challenges of realistic users, which in turn helps the team develop a more usable finished design. While completely imaginary personas will do, it seems only logical that personas based upon real user data will do better. Web analytics can provide a helpful starting point to generate data-backed personas; this article presents an informal five-step process for building a “persona of the people.”

Metrics for Heuristics: Quantifying User Experience (Part 2 of 2)


In part one of Metrics for Heuristics, Andrea Wiggins discussed how designers can use Rubinoff’s user experience audit to determine metrics for measuring brand. In part two, Wiggins examines how web analytics can quantify usability, content, and navigation.

“Web analytics can only serve as verification for information architecture heuristics when the analyst and information architect are evaluating the same qualities of a website.”

Rubinoff’s usability statements echo several of Jakob Nielsen’s heuristics. There is a temptation to believe that web analytic technology can replace proven usability testing, which simply is not true. In every mention of web analytics in support of usable design, authors are quick to note that traffic analysis, no matter how reliable and sophisticated, cannot provide the thoroughness or level of insight that lab testing achieves. However, JupiterResearch’s 2005 report, “The New Usability Framework: Leverage Technology to Drive Customer Experience Management,” found that by combining professional usability testing with web analytics, customer satisfaction information, and A/B optimization, usability expenses can be cut dramatically, and purportedly without loss in quality.

Using usage information for usability
Error prevention and recovery is a common evaluation point for usability. Information architects will find it easy to determine whether a site offers appropriate error handling, but harder to quantify the value that error handling creates. The most direct measures are simple proportions: the percentage of site visits including a 404 (file not found) error, the percentage encountering a 500 (server) error, and the percentage of visits ending with other errors. Combined, 404 and 500 errors should occur for under 0.5% of requests logged to the server, and a quick check can reveal whether errors merit a further investigation. Path, or navigation, analysis recreates errors by examining the pages most commonly viewed one step before and after an error, so they can be understood and remedied. Examining 404 page errors will also identify pages to redirect in a redesign, ensuring a continuity of service for visits to bookmarked URLs.
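
As a minimal sketch, the error-rate proportions described above might be computed as follows; the request records and their field names are hypothetical stand-ins for whatever a real log parser or analytics export would produce.

```python
# Hypothetical parsed server-log records: one dict per request.
from collections import Counter

requests = [
    {"visit_id": "v1", "url": "/products", "status": 200},
    {"visit_id": "v1", "url": "/products/old-item", "status": 404},
    {"visit_id": "v2", "url": "/checkout", "status": 500},
    {"visit_id": "v3", "url": "/home", "status": 200},
]

total = len(requests)
by_status = Counter(r["status"] for r in requests)

print(f"404 rate: {by_status[404] / total:.2%}")
print(f"500 rate: {by_status[500] / total:.2%}")
combined = (by_status[404] + by_status[500]) / total
print(f"Combined: {combined:.2%} (investigate further if above 0.5%)")

# Per-visit proportions: the share of visits that hit at least one error.
visits = {r["visit_id"] for r in requests}
error_visits = {r["visit_id"] for r in requests if r["status"] in (404, 500)}
print(f"Visits including an error: {len(error_visits) / len(visits):.2%}")
```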

Analyze success rates to determine whether a site helps its visitors accomplish their goals and common tasks. Applying scenario or conversion analysis to specific tasks, analysts can examine leakage points and completion rates. Leakage points, the places where users deviate from the designed process, should be of particular interest: when visitors leave a process unexpectedly, where do they go, and do they eventually return?

The straightforward percentage calculations that comprise conversion analysis are simple to determine once the appropriate page traffic data has been collected. Conversion rate definitions and goals are often determined with the client using key performance indicators, and the information architect’s understanding of these measures will facilitate a deeper understanding of how the design should support business and user goals. A primary difficulty often lies in choosing the tasks to assess. These measures are best defined with the assistance and buy-in of the parties who are ultimately responsible for driving conversion. Shopping cart analysis is the most common application of this type of evaluation, but online forms carry just as much weight for a lead generation site as checkout does for an ecommerce site.
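
A sketch of those percentage calculations, including the leakage points discussed above, assuming each visit has been reduced to the ordered funnel steps it reached; the funnel and visit data are invented for illustration.

```python
# Hypothetical four-step checkout funnel and per-visit step lists.
funnel = ["cart", "shipping", "payment", "confirmation"]

visits = [
    ["cart", "shipping", "payment", "confirmation"],
    ["cart", "shipping"],                 # leaked after shipping
    ["cart"],                             # leaked after cart
    ["cart", "shipping", "payment"],      # leaked after payment
]

# Step-to-step completion rates expose the leakage points.
for prev_step, step in zip(funnel, funnel[1:]):
    reached_prev = sum(1 for v in visits if prev_step in v)
    reached = sum(1 for v in visits if step in v)
    print(f"{prev_step} -> {step}: {reached / reached_prev:.0%} continue, "
          f"{reached_prev - reached} visit(s) leak")

entered = sum(1 for v in visits if funnel[0] in v)
completed = sum(1 for v in visits if funnel[-1] in v)
print(f"Overall conversion rate: {completed / entered:.0%}")
```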

Unfortunately, shopping cart analysis can misrepresent what it seeks to measure. A significant proportion of shoppers engage in online research to support offline purchasing, so a high shopping cart abandonment rate may not be as negative as it would first appear. Shopping cart analysis, however, can be very useful for determining shopping tools and cross-marketing opportunities which may add value by improving conversion rates. Taken to the level of field-by-field form completion analysis and a/b optimization, shopping cart optimization can produce impressive returns, and is a particularly rich and enticing aspect of user behavior to analyze.

A/B and multivariate testing are additional applications of web measurement for improving the user experience. These techniques simultaneously test two or more page designs, or series of pages such as shopping tools, against each other by randomly splitting site traffic between them. When statistical significance is reached, a clear winner can be declared. A/B testing solves the issue of time by testing two designs at the same time, so each design is subject to the same market conditions; multivariate testing does the same for more than two design variations. This approach, also known as A/B or multivariate optimization, is better suited to refining an established design than to informing a complete redesign.
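
To make “statistical significance” concrete, the sketch below applies a standard two-proportion z-test to invented A/B traffic; analytics tools typically perform an equivalent test internally.

```python
# Two-proportion z-test for the difference between two conversion rates.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for A vs. B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic split: conversions and visitors per variation.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2350)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Significant: declare the better-converting design the winner.")
else:
    print("Not yet significant: keep the test running.")
```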

Analyzing completion of online forms depends on advance planning and requires more work during site development, but this effort is critical to measuring the forms’ performance in the hands of users. If measurements indicate that the design of an online form or process prevents a successful user interaction, this is not bad news: it is an unparalleled opportunity for improvement. Armed with the knowledge of specific user behaviors and needs, the information architect can design a more successful, usable site.
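
As a sketch of what that measurement might look like, assuming the form has been instrumented to emit an event when a visitor reaches each field; the field names and events are hypothetical.

```python
# Hypothetical form-interaction events, one per field reached per visit.
fields = ["email", "address", "phone", "submit"]

events = [
    {"visit": "v1", "field": "email"}, {"visit": "v1", "field": "address"},
    {"visit": "v1", "field": "phone"}, {"visit": "v1", "field": "submit"},
    {"visit": "v2", "field": "email"}, {"visit": "v2", "field": "address"},
    {"visit": "v3", "field": "email"},
]

# Field-by-field drop-off shows where the form loses its users.
prev = None
for field in fields:
    reached = len({e["visit"] for e in events if e["field"] == field})
    drop = "" if prev is None else f"  ({prev - reached} dropped off)"
    print(f"{field}: reached by {reached} visit(s){drop}")
    prev = reached
```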

Content evaluation beyond page popularity
Rubinoff’s final category for user experience analyzes content by asking about the site’s navigation, organization, and labels. Navigation analysis can take several forms, requires some tolerance for indeterminate results, and is often subject to complications resulting from the limitations of data collection methods and analysis techniques. Even so, its findings are typically too valuable to disregard, and they have a host of applications, from scenario analysis to behavioral modeling.

Performing navigation analysis on the site’s most popular pages shows the paths users travel to arrive at and depart from those pages. When pages with similar content do not achieve comparable traffic, web analytics can provide the insight to determine the differences in the ways that visitors navigate to the pages. Looking for trends in visits to content versus navigation pages can also indicate where to focus redesign efforts: if navigation pages are among the site’s most popular pages, there is probably good reason to spend some time considering ways the site’s navigation might better support user goals. Examining the proportions of visits using supplemental navigation, such as a site map or index, can also reveal problems with primary navigation elements. In these cases, however, web analytic data is more likely to point out the location of a navigation problem than to identify the problem itself.
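
A minimal sketch of this kind of path analysis, tallying the pages viewed one step before and one step after a page of interest; the visit paths and URLs are hypothetical.

```python
# Count the immediate predecessors and successors of a target page.
from collections import Counter

paths = [
    ["/home", "/sitemap", "/products/widget"],
    ["/home", "/products", "/products/widget", "/cart"],
    ["/search", "/products/widget", "/cart"],
]

target = "/products/widget"
before, after = Counter(), Counter()
for path in paths:
    for i, page in enumerate(path):
        if page == target:
            if i > 0:
                before[path[i - 1]] += 1
            if i < len(path) - 1:
                after[path[i + 1]] += 1

print("One step before:", before.most_common())
print("One step after:", after.most_common())
```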

Web analytics can help explain why similar pages perform differently.

Determining whether content is appropriate to visitor needs and to business goals is a complex problem. To validate the information architect’s analysis of content value, individual content pages should be examined and compared on such measures as the proportion of returning visitors, average page viewing length, external referrals to the page, and visits with characteristics indicative of bookmarking or word-of-mouth referrals. For some sites, comparing inbound links and the associated tags from blogs or social bookmarking sites such as del.icio.us can provide a measure of content value.
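
One way to make such comparisons systematic is a simple per-page scorecard, sketched below: each measure is normalized against the best performer and then averaged. The figures and the equal weighting are illustrative assumptions, not a standard formula; real weights should come from the client’s priorities.

```python
# Hypothetical per-page content-value measures.
pages = [
    {"url": "/guide/intro", "return_visitor_pct": 0.42,
     "avg_view_seconds": 88, "external_referrals": 130},
    {"url": "/guide/advanced", "return_visitor_pct": 0.61,
     "avg_view_seconds": 145, "external_referrals": 45},
]

measures = ("return_visitor_pct", "avg_view_seconds", "external_referrals")
best = {m: max(p[m] for p in pages) for m in measures}

def score(page):
    # Equal-weighted average of each measure relative to the best page.
    return sum(page[m] / best[m] for m in measures) / len(measures)

for p in sorted(pages, key=score, reverse=True):
    print(f"{p['url']}: content value score {score(p):.2f}")
```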

Content group analysis is another common approach that measures the value of website content. Site content performance is often measured by dividing the content into logical, mutually exclusive groupings and monitoring traffic statistics and user behaviors within these content groups. Content group analysis will slice and dice data across several types of content groupings, which may include audience-specific content tracks; product-related content comparisons by numerous levels of granularity, often presented in a drill-down report that allows a simultaneous view of every sub-level of a content group; and site feature-specific content groupings. For example, content groups for a prestige beauty products site would naturally include subject-based categorizations such as Cosmetics, Skin Care, and About the Company. Groupings that would evaluate the same content from a different perspective might include Education, Fashion, Product Detail, and Purchase. Grouping pages by level in the site hierarchy is particularly useful in combination with entry page statistics for advising navigation choices.
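
A minimal sketch of content group aggregation, using groupings like the beauty-site example above; the page-to-group mapping and traffic counts are invented.

```python
# Map each page to a mutually exclusive group, then aggregate traffic.
content_groups = {
    "/cosmetics/lipstick": "Cosmetics",
    "/cosmetics/mascara": "Cosmetics",
    "/skincare/cleanser": "Skin Care",
    "/about/history": "About the Company",
}

page_views = [
    ("/cosmetics/lipstick", 420), ("/cosmetics/mascara", 310),
    ("/skincare/cleanser", 275), ("/about/history", 80),
]

totals = {}
for url, views in page_views:
    group = content_groups.get(url, "Ungrouped")
    totals[group] = totals.get(group, 0) + views

for group, views in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {views} page views")
```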

Content group analysis compares performance across different parts of a site.

To determine how well site content matches user expectations, few tools can outperform search log analysis. If analysis of site search query terms reveals a significant disparity between the language visitors use and the language the site employs, the chances are good that the content does not fit user needs. Monitoring trends in search engine referral terms can provide insight into whether search indexing is matching queries to content well enough to meet user expectations, but the primary reason to track this information is to evaluate and improve upon search engine optimization results.

By comparing the user’s words to the website’s text, mining data from search engine referral terms and from onsite search queries, web analytics can identify language and terminology that may prevent the site from successfully meeting user and business goals. If the site has multiple audiences with significantly different vocabularies, comparing search terms and site text for the pages designed for these specific audience segments offers more targeted evaluation of whether the site’s labels and content meet user expectations. For example, pharmaceutical sites are often organized to address such varied audiences as investors, doctors, and patients. Content group analysis is one of the most direct methods to achieve this type of audience segmentation, but dividing the site’s audience by behavior is also very effective. The same search term analysis can also provide insight into what users expected to find on the site but did not, identifying business opportunities or gaps in search indexing.
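
As a sketch, the comparison below surfaces onsite search terms that never appear in the site’s own copy, candidates for label changes, content gaps, or new business opportunities; both word lists are hypothetical stand-ins for real query logs and page text.

```python
# Compare the visitor's search vocabulary with the site's vocabulary.
from collections import Counter

onsite_queries = ["cheap flights", "low fares", "flight deals", "cheap flights"]
site_text_terms = Counter(["discount", "airfare", "value", "fares", "travel"])

query_terms = Counter(
    word for query in onsite_queries for word in query.split()
)

# Terms users search for that the site's own copy never uses.
missing = {t: n for t, n in query_terms.items() if t not in site_text_terms}
print("User terms absent from site text:", missing)
```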

Conclusion
The hard user data of web analytics can only serve as verification for information architecture heuristics when the analyst and information architect are evaluating the same qualities of a website. Web analytic data comes with several disclaimers, primarily that the data is never complete or wholly accurate, and cannot be, due to limitations of technology. These limitations include confounding elements for navigation analysis such as browser page caching and proxies serving cached pages; for audience segmentation and return visitor analysis, issues of cookie blocking and deletion complicate data collection. While accuracy improves reliability, accuracy and reliability are significantly different qualities: uniformly inaccurate data often provides reliable intelligence. Despite challenges to the accuracy of web traffic measurement, reliable metrics can still inform decision-making and provide solid insights to improve the user experience.

An information architect’s evaluation of user experience is often highly subjective and gains value with an evidence-based understanding produced by web analytics. User testing and web analytic data are currently the only ways to verify the heuristic assumptions upon which a website is redesigned. Access to web analytic data during the earliest phases of a redesign process informs the site’s architecture from the very beginning, allowing for a shorter engagement and resulting in a design that significantly improves the site’s usability for its primary audiences’ needs. The best way to ensure a verification of site heuristic analysis is for the information architect to develop user experience audit statements in cooperation with the web analyst who will retrieve the information to validate the audit statements. In the larger picture, including measurement in site design helps prove the ROI of intangible investments in web presence, ultimately to the benefit of all.

Metrics for Heuristics: Quantifying User Experience (Part 1 of 2)


In part one of Metrics for Heuristics, Andrea Wiggins discusses how designers can use Rubinoff’s user experience audit to quantify brand; in part two, she examines how web analytics can quantify usability, content, and navigation.

“When analytics data is shared with information architects, a subtler and more sophisticated user experience design can emerge.”

Web analytics typically provides intelligence for marketers and executives. While valuable for proving return on investment (ROI), web analytics’ real potential lies in improving the online user experience. When analytics data is shared with information architects, a subtler and more sophisticated user experience design can emerge. Information architects need to understand why visitors come to the site and what they seek, so that the content can be best presented to meet user needs and business goals. First, however, it is necessary to evaluate the site using appropriate information architecture guidelines.

Using heuristics
Providing a context for heuristics is the most useful application of web metrics in a site redesign: a framework for measurement is critical to future evaluation of the success and value of strategic but intangible investments like information architecture. Analyzing pre-existing web traffic yields insights into user behavior and measures how well a site meets user needs. By comparing analytic data to an information architect’s heuristic evaluation, a basic validation emerges for otherwise subjective performance measures. In addition to heuristic validation, web analysts can use the information to engineer effective key performance indicators (KPI) for future evaluation, and the information architect can use it to provide direction for the design.

Before further exploring the use of web analytics to support information architecture heuristics, it is necessary to note weaknesses in source data and current analytic practices. First, web traffic measurement will never reveal the complete user picture, and this is one reason to use web analytics in combination with other user experience evaluation tools, such as customer databases and user testing. Second, there are currently very few standards in measurement, methods, and terminology, which affects the analysis and interpretation of any web analytic data. Finally, available site traffic data may be suboptimal for analysis depending upon the volume and nature of the data available.

Rubinoff’s user experience audit
Cooperative selection of success measures early in the project’s definition or discovery phase will align design and evaluation from the start, and both the information architect and web analyst can better prove the value of their services and assure that the project’s focus remains on business and user goals. To provide a useful context for design, Rubinoff’s user experience audit is one of several tools information architects can use to evaluate a website.

Rubinoff presents a format quantifying useful, though subjective, measures that information architects can easily assess, and his example uses four broad, equally weighted categories with five evaluative statements in each. The most functional aspect of this tool is its complete customizability. Because every domain operates in a different context, information architects and web analysts will achieve greatest success by using the evaluative points that will be most verifiable and valuable for each website.

Figure 1: From “Quantifying the User Experience,” Robert Rubinoff

I will examine representative evaluative points drawn directly from Rubinoff’s user experience audit that can be validated through web analytics:

Branding

  • The site provides visitors with an engaging and memorable experience.
  • Graphics, collaterals and multimedia add value to the experience.

Usability

  • The site prevents errors and helps the user recover from them.
  • The site helps its visitors accomplish common goals and tasks.

Content

  • Link density provides clarity and easy navigation.
  • Content is structured in a way that facilitates the achievement of user goals.
  • Content is appropriate to customer needs and business goals.

Ideally, the most relevant metrics or KPI should be comparable both before and after a site is redesigned, although direct comparisons are often impossible after fundamental changes to site structure are implemented. By nature, a site redesign generates new points of measurement, typically enhanced by improved data collection strategies, and only a handful of the site’s previous KPI might still be directly applicable.

Quantifying brand
A popular yet extremely subjective measure of brand requires judging whether the site provides an “engaging and memorable” experience. Direct measurement of brand is elusive in any medium, so web analytics’ indirect evidence is far more tangible than brand evaluation in most other channels. As a validation metric, the most direct measure of the ephemeral value of brand is the ratio of returning visitors. The return visitor ratio for a site will vary: tech support sites prefer a low proportion of return visitors, but an ecommerce site requires a continual infusion of new visitors. To measure the online brand experience, the ideal proportion of return visitors for the site must be identified as a KPI tracked over time.

The length of the average site visit, in both time and pages viewed, also provides verification of the brand experience. A specific goal for engagement length can be set as a key performance indicator using either of these visit statistics for the site. Similarly, by considering content groups of similar pages, measuring the depth or breadth of visits to each content group can assess how engaging the users find the experience.
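
A sketch of these brand-experience KPIs together, return visitor ratio plus average visit duration and depth, computed from hypothetical visit records.

```python
# Hypothetical visit records exported from an analytics tool.
visits = [
    {"visitor": "a", "returning": False, "seconds": 95, "pages": 4},
    {"visitor": "b", "returning": True, "seconds": 210, "pages": 7},
    {"visitor": "c", "returning": True, "seconds": 30, "pages": 2},
    {"visitor": "d", "returning": False, "seconds": 140, "pages": 5},
]

n = len(visits)
return_ratio = sum(v["returning"] for v in visits) / n
avg_seconds = sum(v["seconds"] for v in visits) / n
avg_pages = sum(v["pages"] for v in visits) / n

print(f"Return visitor ratio: {return_ratio:.0%}")
print(f"Average visit: {avg_seconds:.0f}s, {avg_pages:.1f} pages")
```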

Unfortunately, validating an interactive element’s brand value depends on whether the interaction was created with measurement in mind. Engaging online experiences like multimedia games, Flash, and AJAX are all measurable, but only if the design incorporates JavaScript tagging to report key interactions. When this information is available, appropriate analysis can still be difficult to generate. The current analysis tools are page-oriented, not interaction-oriented, so extensive work-around measures may be required. In a redesign, KPI for interactive site elements should be a primary consideration in the experience design development process. For example, the length of time users spend with an interactive element provides a concrete idea of how long users are engaged by the application, and the average number of return visits for returning visitors would be a simple statistic to assess how memorable the experience is: just how frequently do visitors return?
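
For example, if the design fires hypothetical “start” and “end” events through JavaScript tagging, the time users spend engaged with the element can be derived from the exported event timestamps, as in this sketch.

```python
# Hypothetical tagged interaction events with timestamps (seconds).
events = [
    {"visit": "v1", "event": "game_start", "t": 10.0},
    {"visit": "v1", "event": "game_end", "t": 95.5},
    {"visit": "v2", "event": "game_start", "t": 42.0},
    {"visit": "v2", "event": "game_end", "t": 61.0},
]

# Pair each visit's start with its end to get engagement durations.
starts = {e["visit"]: e["t"] for e in events if e["event"] == "game_start"}
durations = [
    e["t"] - starts[e["visit"]]
    for e in events
    if e["event"] == "game_end" and e["visit"] in starts
]

print(f"Average time engaged: {sum(durations) / len(durations):.1f}s "
      f"across {len(durations)} visit(s)")
```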

Another branding element that Rubinoff suggests evaluating is the value of graphics and multimedia. As previously mentioned, KPI for multimedia must be determined in the site planning phase for appropriate implementation, and these same measures can determine whether multimedia applications are providing value that justifies their expense. Aside from ensuring the graphics do not make the pages load too slowly, measuring the experiential value of graphics can be more difficult, unless they are clickable graphics leading the user to further content. The challenge then is to differentiate clicks on graphics from clicks on embedded navigation links. Shrewd design can assure this level of measurability for future evaluation, but it is unlikely for discovery phase analysis, where it would be most useful.

Editor’s note: Check back for “Part 2” of this series where Andrea explores how designers can use metrics to quantify and evaluate a website’s content, navigation, and usability.