Building a Data-Backed Persona

Written by: Andrea Wiggins

Incorporating the voice of the user into user experience design by using personas is no longer the latest and greatest new practice. Everyone is doing it these days, and with good reason. Using personas in the design process helps focus the design team’s attention and efforts on the needs and challenges of realistic users, which in turn helps the team develop a more usable finished design. While completely imaginary personas will do, it seems only logical that personas based upon real user data will do better. Web analytics can provide a helpful starting point to generate data-backed personas; this article presents an informal 5-step process for building a “persona of the people.”

In practice, outcomes indicate that designing with any persona is better than with no personas, even if the personas used are entirely fictitious. Better yet, however, are personas that are based on real user data. Reports and case studies that support this approach typically offer examples that incorporate data from customer service call centers, user surveys, and interviews into personas. It’s nice work if you can get it, but not all design projects have all (or even any!) of these rich and varied user data sources available.

However, more and more sites are now collecting web analytic data using vendor solutions or free options such as Google Analytics. Web analytics provides a rich source of user data, unique among the forms of evidence used to evaluate websites in that it captures users in their native habitat of use. Despite some drawbacks to using web analytics that are inherent to the technology and data collection methods, the information it provides can be very useful for informing design.

Google Analytics is readily accessible and offers great service for the price, so for the sake of example, the methods described here will refer to specific reports in Google Analytics. Any web analytics solution will provide basic reporting similar to Google Analytics, give or take a few reports, so using a different tool will just require you to determine which reports will provide data equivalent to the reports mentioned here.

To illustrate the process, an example persona design scenario is included in the description for each of the five steps:

Kate is an independent web design contractor who is redesigning the website of a nonprofit professional theater company. She has hardly any budget, plenty of content, and many audiences to consider. The theater’s website fills numerous functions: it advertises the current and upcoming plays for patrons; provides patrons information about ticketing and the live theater experience; announces auditions; specifies playwright manuscript and design portfolio requirements for theater professionals; recruits theater intern staff; serves as the central repository of collected theater history in the form of past play archives and press releases; advertises classes and outreach activities; and attempts to develop a donor base as well. As she gathers requirements, Kate decides to use the theater’s new Google Analytics account as a data source for building personas.

Step One: Collect Data

After Google Analytics has been installed on a site, you must wait for data to accumulate. Sometimes you will have the good fortune to start a project that has already been collecting data for months or years, but when this isn’t the case, try to get as much data as you can before extracting the reports you will use to build personas. Ideally, you want to have enough data for reporting to have statistical power, but not all sites generate this level of traffic. As a rule of thumb, less than two weeks of data is not sufficient for any meaningful analysis. One to three months of the most recent data is much more appropriate.

If it is reasonable, try to set up two profiles to filter on new and returning visitors. While some Google Analytics reports do allow segmentation, profile filtering on new versus returning visitor status gives you the best access to the full array of reports for each visitor segment. If this setup can be arranged early in data collection, then you can later draw on a profile that contains only new visitors to determine the characteristics of your personas who are new visitors, and likewise for returning visitors.
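If you want to see what this segmentation buys you before the filtered profiles fill up, a rough approximation can be made from a visit-level export. The sketch below splits such an export into new and returning segments and summarizes each; the file name and the “visitor_type” and “pages” columns are hypothetical stand-ins, not something Google Analytics produces directly.

```python
# A rough approximation of the new-vs-returning split, assuming a hypothetical
# visit-level CSV export ("visits_export.csv") with "visitor_type" and "pages"
# columns; Google Analytics itself does not produce this exact file.
import csv
from collections import defaultdict

segments = defaultdict(list)
with open("visits_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        segments[row["visitor_type"]].append(row)  # e.g., "new" or "returning"

for segment, visits in segments.items():
    avg_pages = sum(float(v["pages"]) for v in visits) / len(visits)
    print(f"{segment}: {len(visits)} visits, {avg_pages:.1f} pages per visit")
```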

Kate has been given administrator privileges in the theater’s Google Analytics account for the duration of her contract. The theater has just one profile that includes all site traffic, so she starts off by making two new profiles with filters to include new visitors in one profile and returning visitors in the other. Kate knows that she needs a decent sample of site data, so she monitors the profiles weekly to make sure that the data is accumulating. She starts designing her personas using the existing Google Analytics profile (all visitors), and checks back later on the custom profiles to see if the segmented data can provide any new insights to add to her personas.

Step Two: Determine How Many Personas to Use

Next, determine how many personas to use: generally no fewer than three and rarely more than seven or eight. This gives you the number of blank slates across which to proportionately distribute the user characteristics that you extract from Google Analytics reports. If there are four personas, each will be assigned the characteristics of 25% of the site audience in each report; if five personas, each represents 20% of the site audience. Although you’re working with statistics, you don’t have to be exacting in proportionately representing user segments; sometimes it is very important, for business reasons, to strongly represent a small user segment.

After thinking carefully about the many functions that the site has to fill, Kate looks at the Top Content report in Google Analytics to see what pages get the most traffic. She notices that most of the top pages are related to current shows, tickets and directions, and decides that she will have at least one persona represent a first-time patron who plans to travel from out of town. The other pages that are popular include the “About Us,” “People,” and “Classes” pages; “Auditions” is a little further down the page, but well above “Support Us.” Kate determines that she will create another persona to represent people interested in joining the theater company. Kate knows that fund development is important to the theater, but it doesn’t appear to be all that important to the website audience, so she decides to create another patron persona who has attended several plays and is interested in becoming a donor. She feels that these three roles can represent the audience the theater is most interested in reaching, and starts creating a persona document for each of them. She names her personas: Regina is the first-time out-of-town patron, Monica is the would-be theater participant, and Rex is the returning patron.

Step Three: Gather Your Reports

After allowing some data to accumulate, the next step is to acquire the Google Analytics reports, whether you’re interacting directly with the application yourself or someone else is providing you with reports. If you are not the person extracting data, make sure that you receive the PDF exports of reports, as these contain summary data that is not present in some of the other export formats. Whether or not you have profiles that are filtered on new versus returning visitor segments, you will be interested in the same handful of reports:

  • Visitors Overview Report. In one convenient dashboard-style screen, you can get the percentage of “new visits,” or visits by new visitors, and a snapshot of other visitor characteristics.
  • Browsers and OS Report. While you can look at browsers and operating systems separately in other individual reports, it usually makes more sense to look at them in combination in the Browsers and OS Report. Typically only a handful of browser and operating system combinations are required to represent well over 90% of the site’s visitors.
  • Map Overlay Report. To use this report, which provides a great deal of detail on the geographic origins of site visits, you will need to do just a little bit of math. Divide the number of visits from the top country or region (whichever is of greater use to you) by the total number of visits to get the percentage of visits from that geographical area. This allows you to determine the proportions of domestic and international visits. For the visits from your country, you will want to drill down to the city level and select a few cities from the top ranks of the list, keeping in mind that big cities will statistically generate more traffic than small ones. For your international visitors, choose from the top cities in the countries that bring the most visits. (A small worked example of this arithmetic follows this list.)
  • Keywords Report. This report shows the queries that bring users to your site. When you look at the search engine query terms, ask yourself, “What are our users looking for? What type of language do they use when searches bring them to our site?” This gives you a starting point to think about user motivations and goals.
  • Referring Sites Report. Like the Keywords Report, the Referring Sites Report gives you an opportunity to look for answers to questions like, “Where do our users come from? Are they reaching our site from search engines, other sites, or just appearing directly with no referrer, as returning visitors are more likely to do?”
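As promised in the Map Overlay item above, here is that arithmetic as a minimal Python sketch; the regions and visit counts are invented rather than drawn from any real report.

```python
# Map Overlay arithmetic: each region's share of total visits.
# The visit counts below are invented for illustration.
visits_by_region = {"Michigan": 5200, "New York": 910, "Ontario": 430, "Other": 3460}
total_visits = sum(visits_by_region.values())

for region, visits in visits_by_region.items():
    print(f"{region}: {visits / total_visits:.1%} of visits")
```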

If you have the segmented profiles set up, extract the same reports from both of these profiles, and get the Visitors Overview report from an unfiltered profile.

Kate starts looking for report data to build her personas. She has already generated user goals for her three personas, but the goals are pretty general, so she hopes to find more specific characteristics that are based on the real user population. Kate consults the Visitors Overview report and finds that about 75% of the site’s visits in the last month were from new visitors; she decides that the Regina and Monica personas will be new visitors to the site and quickly brainstorms a few questions that she thinks they might have, based on their goals, that motivate their site visits. The last persona, Rex, will be a returning visitor.

Kate knows that the overwhelming majority of patrons are local because it is a regional theater company. She checks the Map Overlay report and sees that at the state level, about half of the visitors come from Michigan, where the theater is located. She decides that Monica comes from another state, and picks New York because it’s in second place behind her state, and because of the level of activity of the theater community in New York City. Kate drills down to view the traffic from Michigan, and chooses the top city for Rex’s home–the city is near the theater, so this makes intuitive sense. For Regina, who is planning to travel a little further, she selects the #4 city, which is about an hour away, and is a much bigger city. The visitors from that city have longer visits and a lower bounce rate, so she feels these characteristics would match well with Regina’s goal of planning an out-of-town visit to the theater. Coming from that city, she will also want to have dinner and stay the night at a local bed-and-breakfast, so Kate jots down these additional goals for Regina.

Since two of her personas are new visitors, Kate looks up the Traffic Sources Overview and then the Referring Sites and Keywords reports. There’s a lot of search engine referral traffic, and some strong referrers among regional event listings sites. She decides that Regina got to the site from an event listings site that refers a lot of traffic, and that Monica arrived from a Google search on the phrase, “auditions in Michigan.” Kate thinks that a logical reason Monica would be searching for auditions in Michigan is that she’s planning to move there from New York, so Kate adds this detail to Monica’s persona.

Step Four: Fill in the Blanks

The next step is to “fill in the blanks” from the report data. Make a template for each persona, and first fill in whether they are a new or returning visitor. If you have segmented profiles on new versus returning visitor status, draw the remaining characteristics of your new visitors evenly from the new visitors profile, and likewise for the returning visitors. When you have distributed the other statistics (browser, operating system, and geographical location) among your persona templates, review them against the unfiltered “all visitors” profile for a reality check to make sure you have not unintentionally over-represented a user characteristic, which is one hazard of using segmented data. If you have no preconceptions about user goals, you can distribute the report characteristics randomly at this point, as there is not necessarily much meaningful interplay between the statistics for new/returning status, geographic location, and browser/OS. Alternately, using a goal-oriented approach as in the example, you can select persona characteristics from the user data that make sense with the goals you have established.
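The “reality check” can be as simple as comparing the share of personas carrying a given characteristic against that characteristic’s share in the unfiltered profile. The sketch below does this for browser and operating system combinations; the shares, persona assignments, and flagging threshold are all hypothetical.

```python
# Reality check: compare the share of personas assigned each browser/OS combo
# against that combo's share in the unfiltered "all visitors" profile.
# Shares, persona assignments, and the flagging threshold are all hypothetical.
all_visitors_share = {"IE / Windows": 0.62, "Firefox / Windows": 0.21,
                      "Safari / Mac": 0.12, "Other": 0.05}
personas = {"Regina": "IE / Windows", "Monica": "Firefox / Windows", "Rex": "IE / Windows"}

assigned_share = {combo: 0.0 for combo in all_visitors_share}
for combo in personas.values():
    assigned_share[combo] += 1 / len(personas)

for combo, share in all_visitors_share.items():
    flag = "  <- possibly over/under-represented" if abs(assigned_share[combo] - share) > 0.25 else ""
    print(f"{combo}: personas {assigned_share[combo]:.0%} vs all visitors {share:.0%}{flag}")
```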

Kate took a goal-oriented approach to building her personas, so she has already assigned the report data to the personas. She fills in her usual persona description template with the notes she made while reviewing the reports, and assigns each persona an operating system and browser based on the Browsers and OS report. Kate then starts drilling down into the Google Analytics reports’ segmentation to add more detail. She clicks on Rex’s city in the Map Overlay to check the average visit length, bounce rate, and number of pageviews in the visit, which she uses to help her think about which pages Rex would be looking at, given his goals and those averages. Visits from Regina’s city are a little longer, so Kate considers what pages might show up, and checks the event listings site that referred Regina’s visit to find out what Regina might already know before visiting the theater’s site. Kate also checks on the referrers and keywords for visits from NYC and verifies that they contain some phrases similar to the one she chose for Monica.

Step Five: Bring the Personas to Life

The fifth and final step is to breathe life into these rough skeletons of personas. This is the familiar practice of generating the rest of the fictitious biography of the user, the detailed picture of who that person is and what motivates her or him, and so on. Let your creativity take over and build off the initial characteristics from the web analytics data to create a coherent persona. For example, the assigned browsers and operating systems should guide the determination of the computer makes and models that your personas use. Use the new or returning visitor status to assign the personas a level of comfort with using your site and their motivations for the site visits. The geographic location determined from the user data can help generate appropriate user goals and challenges, as well as occupations and hobbies, which may differ for domestic and international users. The reports on Keywords and Referring Sites offer insight on visitors’ interests and motivations, albeit slightly abstracted, and are a good starter for writing usage scenarios.

Kate spends some more time fleshing out her personas, and eventually decides that she needs more information about Rex, the returning patron and would-be donor. She asks the theater for some information from their patron database about how often regular patrons from Rex’s city visit the theater. Kate also interviews the company’s Development Director to gain more perspective on the characteristics of the theater’s existing donors from the local area. After learning more about the types of donors that the theater attracts and the general giving patterns they have, Kate feels that Rex is a good representation of the kind of potential donor who would visit the theater’s website repeatedly, and adds in some additional details based on her interview with the Development Director.

If you have other sources of user data, this is a great time to work them in. Survey data can often provide useful demographics that web analytics cannot, such as users’ age, sex, and education level. Free answers from surveys, interviews, and focus groups are great sources of inspiration for filling in the details that make personas come to life. The Google Analytics Keywords report can sometimes provide the very questions that bring users to your site–and where better to answer them than in the design process?

Even when there is relatively little user data available to aid in the process of persona development, leveraging the resources at hand creates a stronger design tool. The 5-step process presented here aims to provide a starting point for developing personas using web analytic user data, rather than relying solely on assumption or imagination. An evidence-based approach like this one can lend structure and credibility to using personas and scenarios in the design process. At the same time, user data and statistics must be creatively synthesized to produce a useful representation, and imagination is always required to transform a user profile into a persona.

Metrics for Heuristics: Quantifying User Experience (Part 2 of 2)

Written by: Andrea Wiggins

In part one of Metrics for Heuristics, Andrea Wiggins discussed how designers can use Rubinoff’s user experience audit to determine metrics for measuring brand. In part two, Wiggins examines how web analytics can quantify usability, content, and navigation.

“Web analytics can only serve as verification for information architecture heuristics when the analyst and information architect are evaluating the same qualities of a website.”

Rubinoff’s usability statements echo several of Jakob Nielsen’s heuristics. There is a temptation to believe that web analytic technology can replace proven usability testing, which simply is not true. In every mention of web analytics in support of usable design, authors are quick to note that traffic analysis, no matter how reliable and sophisticated, is not able to provide the thoroughness or level of insight that lab testing can achieve. However, JupiterResearch’s 2005 report, “The New Usability Framework: Leverage Technology to Drive Customer Experience Management,” found that by combining professional usability testing with web analytics, customer satisfaction information, and a/b optimization, usability expenses can be cut dramatically, and purportedly without loss in quality.

Using usage information for usability
Error prevention and recovery is a common evaluation point for usability. Information architects will find it easy to determine whether a site offers appropriate error handling, but harder to quantify the value that error handling creates. The most direct measures are simple proportions: the percentage of site visits including a 404 (file not found) error, the percentage encountering a 500 (server) error, and the percentage of visits ending with other errors. Combined, 404 and 500 errors should occur for under 0.5% of requests logged to the server, and a quick check can reveal whether errors merit a further investigation. Path, or navigation, analysis recreates errors by examining the pages most commonly viewed one step before and after an error, so they can be understood and remedied. Examining 404 page errors will also identify pages to redirect in a redesign, ensuring a continuity of service for visits to bookmarked URLs.
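As a back-of-the-envelope version of that check, the sketch below totals pageviews of error pages against all pageviews; the paths and counts are made up, and a fuller analysis would work from the server log or a tagged error template rather than a page report.

```python
# Back-of-the-envelope error-rate check from a page-level report: what share
# of pageviews landed on error pages? Paths and counts are made up; a fuller
# analysis would use the server log or a tagged error template.
page_report = [("/", 41000), ("/tickets", 9800), ("/404.html", 120), ("/500.html", 15)]
error_pages = {"/404.html", "/500.html"}

total_views = sum(views for _, views in page_report)
error_views = sum(views for path, views in page_report if path in error_pages)

print(f"Error pageviews: {error_views / total_views:.2%} of {total_views} (rule of thumb: under 0.5%)")
```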

Analyze success rates to determine whether a site helps its visitors accomplish their goals and common tasks. Applying scenario or conversion analysis to specific tasks, analysts can examine leakage points and completion rates. Leakage points, the places where users deviate from the designed process, should be of particular interest: when visitors leave a process unexpectedly, where do they go, and do they eventually return?

The straightforward percentage calculations that comprise conversion analysis are simple to determine once the appropriate page traffic data has been collected. Conversion rate definitions and goals are often determined with the client using key performance indicators, and the information architect’s understanding of these measures will facilitate a deeper understanding of how the design should support business and user goals. A primary difficulty often lies in choosing the tasks to assess. These measures are best defined with the assistance and buy-in of the parties who are ultimately responsible for driving conversion. Shopping cart analysis is the most common application of this type of evaluation, but online forms carry just as much weight for a lead generation site as checkout does for an ecommerce site.
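To illustrate how straightforward those percentages are, the sketch below walks a hypothetical checkout funnel and reports step-to-step continuation rates and leakage at each step; the step names and visit counts are invented.

```python
# Conversion (funnel) analysis: step-to-step continuation rates and leakage.
# Step names and visit counts are hypothetical.
funnel = [("View cart", 4000), ("Shipping details", 2600),
          ("Payment", 1900), ("Confirmation", 1700)]

for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_visits / visits:.0%} continue, "
          f"{visits - next_visits} visits leak out")

print(f"Overall completion rate: {funnel[-1][1] / funnel[0][1]:.1%}")
```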

Unfortunately, shopping cart analysis can misrepresent what it seeks to measure. A significant proportion of shoppers engage in online research to support offline purchasing, so a high shopping cart abandonment rate may not be as negative as it would first appear. Shopping cart analysis, however, can be very useful for determining shopping tools and cross-marketing opportunities which may add value by improving conversion rates. Taken to the level of field-by-field form completion analysis and a/b optimization, shopping cart optimization can produce impressive returns, and is a particularly rich and enticing aspect of user behavior to analyze.

A/B and multivariate testing are additional applications of web measurement for improving the user experience. These techniques simultaneously test two or more page designs, or series of pages such as shopping tools, against each other by randomly splitting site traffic between them. When statistical significance is reached, a clear winner can be declared. A/B testing solves the issue of time by testing two designs at the same time, so each design is subject to the same market conditions; multivariate testing does the same for more than two design variations. This approach, also known as A/B or multivariate optimization, is better suited to refining an established design than to informing a complete redesign.
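For readers curious about what “statistical significance is reached” means in practice, here is a minimal two-proportion z-test on conversion counts; the traffic and conversion figures are fabricated, and any real testing tool (or a statistics library) performs this calculation for you.

```python
# A two-proportion z-test on conversion counts, the kind of check behind
# "statistical significance is reached." The traffic and conversion numbers
# are fabricated; a real testing tool or stats library does this for you.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=180, n_a=5000, conv_b=228, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f} -> {'declare a winner' if p < 0.05 else 'keep testing'}")
```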

Analyzing completion of online forms depends on advance planning and requires more work during site development, but this effort is critical to measuring the forms’ performance in the hands of users. If measurements indicate that the design of an online form or process prevents a successful user interaction, this is not bad news: it is an unparalleled opportunity for improvement. Armed with the knowledge of specific user behaviors and needs, the information architect can design a more successful, usable site.

Content evaluation beyond page popularity
Rubinoff’s final category for user experience analyzes content by asking about the site’s navigation, organization, and labels. Navigation analysis can take several forms, requires some tolerance for indeterminate results, and is often subject to complications resulting from the limitations of data collection methods and analysis techniques. Navigation analysis findings are typically too valuable to disregard, and they have a host of applications from scenario analysis to behavioral modeling.

Performing navigation analysis on the site’s most popular pages shows the paths users travel to arrive at and depart from those pages. When pages with similar content do not achieve comparable traffic, web analytics can provide the insight to determine the differences in the ways that visitors navigate to the pages. Looking for trends in visits to content versus navigation pages can also indicate where to focus redesign efforts: if navigation pages are among the site’s most popular pages, there is probably good reason to spend some time considering ways the site’s navigation might better support user goals. Examining the proportions of visits using supplemental navigation, such as a site map or index, can also reveal problems with primary navigation elements. In these cases, however, web analytic data is more likely to point out the location of a navigation problem than to identify the problem itself.

Web analytics can help explain why similar pages perform differently.

Determining whether content is appropriate to visitor needs and to business goals is a complex problem. To validate the information architect’s analysis of content value, individual content pages should be examined and compared on such measures as the proportion of returning visitors, average page viewing length, external referrals to the page, and visits with characteristics indicative of bookmarking or word-of-mouth referrals. For some sites, comparing inbound links and the associated tags from blogs or social bookmarking sites such as del.icio.us can provide a measure of content value.

Content group analysis is another common approach that measures the value of website content. Site content performance is often measured by dividing the content into logical, mutually exclusive groupings and monitoring traffic statistics and user behaviors within these content groups. Content group analysis will slice and dice data across several types of content groupings, which may include audience-specific content tracks; product-related content comparisons by numerous levels of granularity, often presented in a drill-down report that allows a simultaneous view of every sub-level of a content group; and site feature-specific content groupings. For example, content groups for a prestige beauty products site would naturally include subject-based categorizations such as Cosmetics, Skin Care, and About the Company. Groupings that would evaluate the same content from a different perspective might include Education, Fashion, Product Detail, and Purchase. Grouping pages by level in the site hierarchy is particularly useful in combination with entry page statistics for advising navigation choices.
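A rough version of content group analysis can be done with nothing more than a mapping from URL prefixes to group names, as in the sketch below; the paths, groupings, and figures are placeholders rather than data from any real site.

```python
# Content group analysis by URL prefix: aggregate pageviews and average bounce
# rate per group. Paths, groupings, and figures are placeholders.
from collections import defaultdict

pages = [("/cosmetics/lipstick", 3200, 0.32), ("/cosmetics/mascara", 2100, 0.41),
         ("/skin-care/cleanser", 1800, 0.28), ("/about/history", 600, 0.55)]
groups = {"/cosmetics": "Cosmetics", "/skin-care": "Skin Care", "/about": "About the Company"}

totals = defaultdict(lambda: {"views": 0, "bounce_sum": 0.0, "pages": 0})
for path, views, bounce_rate in pages:
    group = next((name for prefix, name in groups.items() if path.startswith(prefix)), "Other")
    totals[group]["views"] += views
    totals[group]["bounce_sum"] += bounce_rate
    totals[group]["pages"] += 1

for group, t in totals.items():
    print(f"{group}: {t['views']} pageviews, avg bounce rate {t['bounce_sum'] / t['pages']:.0%}")
```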

Content group analysis compares performance across different parts of a site.

To determine how well site content matches user expectations, few tools can outperform search log analysis. If analysis of site search query terms reveals a significant disparity between the language visitors use and the language the site employs, the chances are good that the content does not fit user needs. Monitoring trends in search engine referral terms can provide insight into whether search indexing is matching queries to content well enough to meet user expectations, but the primary reason to track this information is to evaluate and improve upon search engine optimization results.

By comparing the user’s words to the website’s text, mining data from search engine referral terms and from onsite search queries, web analytics can identify language and terminology that may prevent the site from successfully meeting user and business goals. If the site has multiple audiences with significantly different vocabularies, comparing search terms and site text for the pages designed for these specific audience segments offers more targeted evaluation of whether the site’s labels and content meet user expectations. For example, pharmaceutical sites are often organized to address such varied audiences as investors, doctors, and patients. Content group analysis is one of the most direct methods to achieve this type of audience segmentation, but dividing the site’s audience by behavior is also very effective. The same search term analysis can also provide insight into what users expected to find on the site but did not, identifying business opportunities or gaps in search indexing.
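One lightweight way to make this comparison is to count which searched-for terms never appear in the site’s own copy, as in the sketch below; the queries and page text are invented, and a serious analysis would normalize terms (stemming, synonyms) rather than compare raw words.

```python
# Compare visitor language with site language: which searched-for terms never
# appear in the site's own copy? Queries and page text are invented examples.
from collections import Counter

queries = ["heartburn relief", "heartburn medicine side effects", "acid indigestion"]
site_text = "Learn about treatment options for acid reflux and GERD symptoms."

site_vocab = {word.strip(".,").lower() for word in site_text.split()}
query_terms = Counter(term for q in queries for term in q.lower().split())

missing = {term: count for term, count in query_terms.items() if term not in site_vocab}
print("Query terms absent from site copy:", missing)
```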

Conclusion
The hard user data of web analytics can only serve as verification for information architecture heuristics when the analyst and information architect are evaluating the same qualities of a website. Web analytic data comes with several disclaimers, primarily that the data is never complete or wholly accurate, and cannot be, due to limitations of technology. These limitations include confounding elements for navigation analysis such as browser page caching and proxies serving cached pages; for audience segmentation and return visitor analysis, issues of cookie blocking and deletion complicate data collection. While accuracy improves reliability, accuracy and reliability are significantly different qualities: uniformly inaccurate data often provides reliable intelligence. Despite challenges to the accuracy of web traffic measurement, reliable metrics can still inform decision-making and provide solid insights to improve the user experience.

An information architect’s evaluation of user experience is often highly subjective and gains value with an evidence-based understanding produced by web analytics. User testing and web analytic data are currently the only ways to verify the heuristic assumptions upon which a website is redesigned. Access to web analytic data during the earliest phases of a redesign process informs the site’s architecture from the very beginning, allowing for a shorter engagement and resulting in a design that significantly improves the site’s usability for its primary audiences’ needs. The best way to ensure a verification of site heuristic analysis is for the information architect to develop user experience audit statements in cooperation with the web analyst who will retrieve the information to validate the audit statements. In the larger picture, including measurement in site design helps prove the ROI of intangible investments in web presence, ultimately to the benefit of all.

Metrics for Heuristics: Quantifying User Experience (Part 1 of 2)

Written by: Andrea Wiggins

In part one of Metrics for Heuristics, Andrea Wiggins discusses how designers can use Rubinoff’s user experience audit to determine metrics for measuring brand. In part two, she examines how web analytics can quantify usability, content, and navigation.

“When analytics data is shared with information architects, a subtler and more sophisticated user experience design can emerge.”

Web analytics typically provides intelligence for marketers and executives. While valuable for proving Return On Investment (ROI), web analytics’ real potential lies in improving the online user experience. When analytics data is shared with information architects, a subtler and more sophisticated user experience design can emerge. Information architects need to understand why visitors come to the site and what they seek, so that the content can be best presented to meet user needs and business goals. First, however, it is necessary to evaluate the site using appropriate information architecture guidelines.

Using heuristics
Providing a context for heuristics is the most useful application of web metrics in a site redesign: a framework for measurement is critical to future evaluation of the success and value of strategic but intangible investments like information architecture. Analyzing pre-existing web traffic yields insights to user behavior and measures how well a site meets user needs. By comparing analytic data to an information architect’s heuristic evaluation, a basic validation emerges for otherwise subjective performance measures. In addition to heuristic validation, web analysts can use the information to engineer effective key performance indicators (KPI) for future evaluation, and the information architect can use the information to provide direction for the design.

Before further exploring the use of web analytics to support information architecture heuristics, it is necessary to note weaknesses in source data and current analytic practices. First, web traffic measurement will never reveal the complete user picture, and this is one reason to use web analytics in combination with other user experience evaluation tools, such as customer databases and user testing. Second, there are currently very few standards in measurement, methods, and terminology, which affects the analysis and interpretation of any web analytic data. Finally, available site traffic data may be suboptimal for analysis depending upon the volume and nature of the data available.

Rubinoff’s user experience audit
Cooperative selection of success measures early in the project’s definition or discovery phase will align design and evaluation from the start, and both the information architect and web analyst can better prove the value of their services and assure that the project’s focus remains on business and user goals. To provide a useful context for design, Rubinoff’s user experience audit is one of several tools information architects can use to evaluate a website.

Rubinoff presents a format quantifying useful, though subjective, measures that information architects can easily assess, and his example uses four broad, equally weighted categories with five evaluative statements in each. The most functional aspect of this tool is its complete customizability. Because every domain operates in a different context, information architects and web analysts will achieve greatest success by using the evaluative points that will be most verifiable and valuable for each website.
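To make the equal weighting concrete, here is a minimal sketch of how an audit in this format might be scored: four equally weighted categories, each the average of five statement ratings. The first three category names match the ones excerpted later in this article; the fourth name and all of the ratings are placeholders.

```python
# Scoring an audit in Rubinoff's format: four equally weighted categories,
# each the average of five statement ratings (here on a 0-5 scale).
# The fourth category name and all ratings are placeholders.
audit = {
    "Branding":      [4, 3, 5, 4, 3],
    "Usability":     [3, 2, 4, 3, 3],
    "Content":       [4, 4, 3, 5, 4],
    "Functionality": [5, 4, 4, 3, 4],
}

category_scores = {category: sum(ratings) / len(ratings) for category, ratings in audit.items()}
overall = sum(category_scores.values()) / len(category_scores)  # equal weights

for category, score in category_scores.items():
    print(f"{category}: {score:.1f} / 5")
print(f"Overall user experience score: {overall:.1f} / 5")
```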

Figure 1: Spider chart from “Quantifying the User Experience,” Robert Scott Rubinoff

I will examine representative evaluative points drawn directly from Rubinoff’s user experience audit that can be validated through web analytics:

Branding

  • The site provides visitors with an engaging and memorable experience.
  • Graphics, collaterals and multimedia add value to the experience.

Usability

  • The site prevents errors and helps the user recover from them.
  • The site helps its visitors accomplish common goals and tasks.

Content

  • Link density provides clarity and easy navigation.
  • Content is structured in a way that facilitates the achievement of user goals.
  • Content is appropriate to customer needs and business goals.

Ideally, the most relevant metrics or KPI should be comparable both before and after a site is redesigned, although direct comparisons are often impossible after fundamental changes to site structure are implemented. By nature, a site redesign generates new points of measurement, typically enhanced by improved data collection strategies, and only a handful of the site’s previous KPI might still be directly applicable.

Quantifying brand
A popular yet extremely subjective measure of brand requires judging whether the site provides an “engaging and memorable” experience. Direct measurement of brand is elusive in any medium, so web analytics’ indirect evidence is far more tangible than brand evaluation in most other channels. As a validation metric, the most direct measure of the ephemeral value of brand is the ratio of returning visitors. The ideal return visitor ratio varies from site to site: a tech support site may prefer a low proportion of return visitors, while an ecommerce site wants repeat customers along with a continual infusion of new visitors. To measure the online brand experience, the ideal proportion of return visitors for the site must be identified as a KPI and tracked over time.

The length of the average site visit, in both time and pages viewed, also provides verification of the brand experience. A specific goal for engagement length can be set as a key performance indicator using either of these visit statistics for the site. Similarly, by considering content groups of similar pages, measuring the depth or breadth of visits to each content group can assess how engaging the users find the experience.

Unfortunately, validating an interactive element’s brand value depends on whether the interaction was created with measurement in mind. Engaging online experiences like multimedia games, Flash, and AJAX are all measurable, but only if the design incorporates JavaScript tagging to report key interactions. When this information is available, appropriate analysis can still be difficult to generate. The current analysis tools are page-oriented, not interaction-oriented, so extensive work-around measures may be required. In a redesign, KPI for interactive site elements should be a primary consideration in the experience design development process. For example, the length of time users spend with an interactive element provides a concrete idea of how long users are engaged by the application, and the average number of return visits for returning visitors would be a simple statistic to assess how memorable the experience is: just how frequently do visitors return?

Another branding element that Rubinoff suggests evaluating is the value of graphics and multimedia. As previously mentioned, KPI for multimedia must be determined in the site planning phase for appropriate implementation, and these same measures can determine whether multimedia applications are providing value that justifies their expense. Aside from ensuring the graphics do not make the pages load too slowly, measuring the experiential value of graphics can be more difficult, unless they are clickable graphics leading the user to further content. The challenge then is to differentiate clicks on graphics from clicks on embedded navigation links. Shrewd design can assure this level of measurability for future evaluation, but it is unlikely for discovery phase analysis, where it would be most useful.

Editor’s note: Check back for “Part 2” of this series where Andrea explores how designers can use metrics to quantify and evaluate a website’s content, navigation, and usability.