Consolidated Assessment

Posted by Seth Gordon

A user-research approach which integrates our best tools into a single session

There are several research tools at our disposal for understanding user behavior. But how many times do we get the chance to spend as much time on research as we think is required? Since resource constraints are so common these days, we need ways to make ourselves more efficient on the job. There are two ways to do this: collect better results with less testing time, or collect expanded information without increasing testing time.

One way to gain this efficiency is to integrate three techniques (scenario design, card sorting, and participatory design) into a single session. “Consolidated Assessment” isn’t necessarily the right phrase for what is discussed here, but it’s the first one that comes to mind. It’s like cross-country skiing, which works several large muscle groups and the heart all at the same time. This kind of combined exercise is much quicker than exercising each muscle separately. NordicTrack TV commercials claim that you can get twice the workout in half the time. I’ll stake a similar claim for this approach.

The findings from this type of evaluation lean strongly toward the behavioral side of user research, which studies what people do. On the other end of the spectrum, which this technique doesn’t study as closely, is attitudinal research, which studies what people think. Attitudinal research is more closely related to marketing and branding and is better explored through other testing approaches.

The consolidated testing approach is helpful for several reasons:

  • Test environment is more reflective of users’ real-life activities. For example, pure card sorting is a ‘laboratory technique’ used to evaluate users’ mental models. It’s not an activity, or even a thought process that a normal user would go through.
  • More ‘lively and engaging’ evaluation environment for respondents. Respondents tend to zone out after spending too much time on a single monotonous activity. A little variety during evaluations helps keep respondents engaged and providing good feedback.
  • More meaningful results. Rather than using more respondents to test smaller sections of the site, we can understand a broader slice of a user’s behavior by performing a holistic evaluation.
  • Improved efficiency.
    • Logistics planning and recruiting only need to happen once, which is a huge reduction in overhead.
    • A single test script
    • A single findings presentation

Now that I’ve explained why the approach is so useful, let’s discuss the research technique in more detail. There is no one correct way to conduct a consolidated assessment, so you’ll want to explore variations that are best suited to your particular project environment.

The 80/20 principle (Pareto’s Principle)
To understand the evolution of this approach, it’s important to understand my general philosophy of user research and site design: the 80/20 principle. (Lou Rosenfeld has also been discussing and presenting this topic on his site.) In the 19th century, Italian economist Vilfredo Pareto quantified a relationship between production and producers this way: 80% of production comes from 20% of producers. If ever there was a principle that applies beyond its original definition, this is it. I can only imagine how delighted Vilfredo would be to know his work is quoted on websites about site architecture and design.

By extending the 80/20 principle to the Internet, one may make general assumptions like:

  • 20% of a site’s content accounts for 80% of its use. Think front page on NYTimes.com vs. older archived news.
  • 20% of a site’s features account for 80% of use cases. Think buying and selling stocks on Etrade vs. using the Black-Scholes option pricing calculator.
  • 20% of a site’s errors account for 80% of poor user experiences. Think not being able to find a unique Hotmail e-mail address under 10 letters vs. trying to add too many names to the address book.

And, carrying the extension one step further to user research, it’s reasonable to assume the following: a well-planned consolidated assessment contributes insight into a majority of user experience issues, including scenario development, site organization, and site/page flow. Effectively understanding and designing for these most important factors (the 20%) is a much better use of time than spinning wheels (the 80%) over the minutiae that fall outside of most common use cases. Don’t worry, you’ll always be able to come back and iron out those smaller points later.
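
If you have access to usage data, this kind of 80/20 claim is easy to sanity-check rather than take on faith. Here is a minimal sketch in Python; the page names and view counts are made up for illustration, so substitute an export from your own log analysis or analytics tool.

```python
# Minimal sketch: what share of total use do the top 20% of pages account for?
# The page_views dict is hypothetical; replace it with counts from your own analytics export.
page_views = {
    "/": 52000, "/news": 31000, "/search": 18000, "/tools/booking": 9500,
    "/archive/2001": 1200, "/archive/2000": 800, "/help/faq": 600,
    "/about": 400, "/legal": 150, "/press": 90,
}

counts = sorted(page_views.values(), reverse=True)
top_n = max(1, round(len(counts) * 0.20))      # the top 20% of pages
top_share = sum(counts[:top_n]) / sum(counts)  # their share of all recorded views

print(f"Top {top_n} of {len(counts)} pages account for {top_share:.0%} of views")
```

If the top fifth of your pages really does carry most of the traffic, that tells you where the research and design attention belongs first.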

Ideal uses for the technique
This approach works best with sites that involve goal-driven users who come to a site with a purpose. They have an idea of what they are looking for, so this is an example of known-item or known-task site usage. That includes sites built around scenarios and activities rather than simple document retrieval (for which pure card sorting is well suited). To achieve these pre-specified goals, the user has to complete specific tasks that go beyond retrieving a document: we’re not talking about finding articles, but about working through a set of steps to complete a task.

This 80/20 approach to consolidated assessment works well for general consumer sites and applications where the risks are relatively low or easily remedied. I don’t trust mission-critical applications that involve national security, management of large sums of money, or diagnosis of medical conditions to systems built using the 80/20 principle. Those need to be tested, and tested again; no shortcuts there. But those aren’t the kind of projects I am talking about here.

Two-fold improvement is a reasonable expectation
By combining a variation of card sorting with scenario based participatory design, we can improve our efficiency and our research yield almost two-fold. That means the feedback collected is twice as valuable, or that it’s collected in half the amount of time.

If the basics of these techniques are old hat, skip ahead to the worked example (“Consolidated Assessments, explained by example”) for a description of how combining scenario development, card sorting, and participatory design can shorten the research time and improve the yield.

For people new to the field of user research, you should know that sometimes there are no earthshaking insights uncovered. Sometimes the testing reveals things that you knew all along. When that happens, consider it validation and reinforcement, two things you can never have too much of.

If the ideas of scenario development, card sorting and participatory design are new or you would like a refresher, continue on.

Scenario Development – summarized
Scenarios document what activities and information needs people work through to complete a task. A thorough scenario takes into account what helps people make progress, what holds them back, and the different routes that people can take to reach a goal. Those are the things you have to look for.
Some scenarios are very predictable with only one or two variations, while others are full of twists and turns.

Traditional scenario development approaches may involve activities like contextual inquiry, which is a clinical way of saying that a researcher is going to watch someone as they work in their natural environment. The researcher is hoping to learn what situations a person encounters during an activity. This is understanding through observation.

Contextual inquiry can be a lengthy and costly process because of all the observation and recording that is necessary. It’s also sometimes an annoyance to the person being watched. Even though a skilled observer can be subtle, the very nature of the observation process likely changes the way the observed person works.

Sure, it can yield effective results, but there is also opportunity for misinterpretation. Sometimes it’s simply easier to ask about a process than it is to spend time ‘shadowing’ a person as they work. That’s not to say there isn’t a time and place for more formal inquiry; it’s just not always appropriate.

Card Sorting – summarized
Card sorting is so simple a six-year-old could do it. Actually, that’s how old I was when I first started card sorting in the late 1970s. Not that I’ve been in the research field that long; card sorting just seemed a natural thing to do with my baseball card collection. On an almost weekly basis, I’d reorganize my cards. Usually I’d lay them out all over the floor and then get to work. Sometimes I’d sort by team (Go Orioles), or by position (all first basemen), or by year, or by card brand… Card sorting for research isn’t really much different.

In short, the technique is a simple exercise in grouping like items that share attributes. Sorts can be based on documents, concepts, tools, similar tasks, or just about anything that can be grouped. But it’s most often used to figure out navigation categories and which items belong to them. Or it is used to establish document or merchandise categories and related items. To sound like a real clinician, throw around terms like mental model or cognitive relationship. They are simply terms that describe the way people think about items in their mind.

Sorts can reveal four specific things:

  • Items that are consistently grouped together
  • Outliers that are inconsistently grouped
  • Items that can be grouped in multiple categories
  • Titles/headings for groups of like items

You will find that a 100% fit among all items in a group can’t always be established. No problem. As long as there is certainty that the groupings make sense to a majority of users, the activity can be considered successful; there’s that 80/20 principle again.
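
If you record each respondent’s groupings, the patterns above are easy to surface with a simple co-occurrence tally: count how often each pair of cards ends up in the same pile. Here is a rough sketch in Python; the card labels and sorts are hypothetical stand-ins for real session data.

```python
# Rough sketch: count how often each pair of cards lands in the same group.
# Each inner list below is one (hypothetical) respondent's grouping of the cards.
from collections import Counter
from itertools import combinations

sorts = [
    [["flights", "hotels", "car rental"], ["maps", "travel tips"]],
    [["flights", "hotels"], ["car rental", "maps"], ["travel tips"]],
    [["flights", "hotels", "car rental"], ["maps", "travel tips"]],
]

pair_counts = Counter()
for respondent_groups in sorts:
    for group in respondent_groups:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Pairs grouped together by most respondents suggest stable categories;
# cards that never pair consistently with anything are the outliers.
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: together in {n} of {len(sorts)} sorts")
```

Cards that pair with a different partner in every session are your candidates for multiple categories, or for a better label.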

Participatory Design – summarized
Participatory design is exactly what it sounds like. Participants actually design the site and pages, with the moderator helping to guide them through the process. That’s not to say you sit them down in front of Dreamweaver and watch them hack at it. Rather, a design moderator works with the respondent to sketch out, in rough terms, page layouts and process flows. It might be a matter of determining what items belong on a page and the relative prominence they should receive. Or it might be a matter of walking through a process, such as finding and purchasing merchandise or trading stocks, and sequencing the key steps the user must complete.

I imagine there are some similarities to the process of working with a police sketch artist to come up with a composite image of a perpetrator, although I’m not personally familiar with how that works.

Assuming these three activities happen during the course of a project, they are usually handled separately. But can you imagine the value and time savings of combining them?

Consolidated Assessments, explained by example
To illustrate, let’s use the example of a travel website that includes travel planning tools and destination content.

Broad research goal: Understand how users approach the process of online travel planning, what issues they encounter along the way, and how those issues can be resolved online.

We gain insight by:

  • Developing scenarios and use cases
  • Learning what content and tools people need to understand and complete tasks
  • Having users sequence page flows and prioritize page elements

At the conclusion of the research, we will have generated the following artifacts:

  • Representative scenario narratives
  • Site flow diagrams
  • Page level schematics for key pages in the scenario
  • Prioritized content grouped by activity

Audience assumptions: Users of travel sites usually arrive with specific goals that are either task-related or fact-finding. Even if the user lands on the site by chance, let’s assume the site clearly states what it offers and the user is engaged enough to jump right in and explore.

Our sample travel site offers the following main features:

Online booking, email alerts for price reductions, a vacation planning wizard, and a very sophisticated search.

In addition to these important features, the site offers interesting content as well: syndicated content for major cities, travel tips, and maps.

Since this kind of site is targeted toward a specific group of users, it’s important that respondents are well screened. The respondents must have a realistic need for what the site offers so they can help build accurate use cases. Scenario development doesn’t work if respondents are required to fantasize about the hypothetical or work through scenarios that don’t apply to them. We want realistic scenarios with personal relevance to the respondent. In this situation, friends, family and co-workers are usually poor substitutes for a properly qualified respondent.

The new way: three steps
Step 1 – Scenario definition and selection
One goal of testing is to identify generalizations or recurring themes. To that end we might keep the scenarios flexible but still focused. We would provide users with a list of five to seven travel planning scenarios, and they would select three with which they identify.

These scenarios are brief, including little more than a few sentences about the goal and another sentence or two about what constitutes the end of the scenario. An example might read as follows:

You and your wife are planning a vacation. You have one week in July, $2500, and want to go somewhere warm where English is widely spoken.

Variation 1: Ask them to add details to the supplied scenario so that it’s more relevant to them. The more realistic the scenario, the easier it is for respondents to provide firsthand feedback. They might mention facts about sites they want to see, foods they want to eat, or maladies they want to avoid.

Variation 2: Ask them to describe, from scratch, a situation that is current and relevant to them. This gives a lot more latitude to the scenario and allows more focus on exploring new scenarios rather than validating existing ones.

Once we have a few scenarios established, we need to figure out how users will work through them. Since these scenarios are goal driven, we need to learn what information is needed for the user to achieve their goal. The goal here is to research and book a vacation.

As researchers, what can we expect to collect and understand from the scenario development part of the evaluation?

  • The information respondents believe they need to reach the goal; note that some information comes from external offline sources. For example, people are open to travel suggestions from friends and word of mouth. Just because it doesn’t happen online doesn’t mean it’s not worth knowing about —we still need to capture that information.
  • The tools respondents believe they need to reach the goal
  • The sequence of steps respondents might follow
  • The respondent’s level of confidence that they have completed the task accurately

Step 2 – Identifying required content and tools (a card sorting variant)
Although the focus of traditional card sorting is grouping like items, sometimes that’s not appropriate. Our goal here is loftier: to relate content items to specific tasks rather than to each other. In other words, we are grouping information with corresponding tasks/activities.

Using this consolidated approach, users identify the information and tools they need to complete the tasks in the scenario before beginning the card sorting activity. The respondent tells us which questions need to be answered and where they expect to find the answers. They could be given a pool of article titles and travel planning tools and asked to choose those which they think would be required, as well as those which they think would be helpful.

As researchers, we must keep a keen eye open during the observation. Watch how users sequence their information needs. Which resources do they seek first? What information/tools do they need to get to the answer? Which are helpful but not required? Which are dependent on other activities/information? What do they go offline for? What’s extraneous? What’s missing?

Variation 1: Don’t provide any cards. Rather, have the users tell you exactly the information they need to complete the task.

Step 3 – Participatory paper design
Once we have an idea of important content, data, and functionality, we work with users to define page/activity flows, then we build basic page schematics that support these tasks.

We first ask the user to construct a logical task sequence that incorporates the key steps in the scenario they defined. Then we request that they “build” pages with a template that has basic global elements in place (navigation, promotional space, footer, etc). Users then draw the other key items onto the page.

Variation 1: Allow users to sketch their ideal page layouts from scratch (e.g., a travel booking tool and teasers to interesting articles on the front page).

Variation 2: Show users sections of sites that offer similar features and have them select the ‘best of breed’ approaches and build a page using the cut-and-paste method. Sort of like Colorforms, where people and objects are slapped down onto a background (e.g., the booking engine from Expedia, the promotion section from hotwire.com, and content from Lonely Planet).

By the time several users have gone through the same scenario, the research team should have enough feedback to know the critical content/tools and the key steps of the flow.

The evaluation would probably work just as well with the order of steps 2 (card sort) and 3 (participatory design) reversed. A reasonable case could be made for either ordering. If the page flows and layouts are drawn out first, the content can be organized and then dropped onto the pages. If the content is identified first, the pages can be constructed as containers for that content. Either way works.

Conclusion
By and large, this type of testing has worked well for me, both in terms of getting good insight, and in terms of communicating findings to clients. Clients appreciate the fact that respondents create their own scenarios, which gives the whole process an extra level of authenticity. By keeping your ears open, you will find that people will say all sorts of interesting and useful things during the evaluation. Clients love quotes.

In my experience, while results have been mostly positive, there was one time when things didn’t work so well. I wasn’t able to get the respondents to describe the scenario in enough detail to finish the rest of the activities in the evaluation. They were stuck in a rut and I wasn’t able to nudge them out of it. Had I done a practice run or two with the scripts (piloting) before testing, I would have caught the problem in advance. Always pilot the test scripts before the evaluation and have alternative activities ready if needed.

If you have a chance to apply this technique, I’d be interested in hearing what works well for you, what doesn’t, and any suggestions for how to refine it. In particular, I’m interested to know how well it works in areas outside of travel. Drop me a line, or join the discussion in the forum.

Appendices:
Suggestions for the moderator:

  • Pre-test the evaluation script to make sure that it flows well and that it can be completed within the timeframe users will be given.
  • Don’t lead users to conclusions, only facilitate discussion.
  • Don’t let users spend too much time revisiting their responses/recommendations. Their gut reactions are usually right. More time does not usually produce more accurate responses.
  • Remain flexible. Don’t be afraid to divert from the test plan as long as you’re collecting quality feedback.

Sample test plan:
Respondents: Recruit six and expect to test five. One of the six will most likely be a no-show or not much of a communicator.

Materials Required: Labeled cards for sorting; participatory design materials such as sketch paper (graph paper works well for those who like to draw inside the lines) and big sticky poster board; possibly A/V recording equipment for a highlights reel.

Time: 1.5 – 2 hours
The evaluation should move relatively quickly. This format isn’t designed for introspection. Rather, we want to run with user gut reactions. Once they make a choice, they need to stay with it, rather than overanalyze and edit.

  • 10 minutes for orientation
  • 20-30 minutes for scenario development (3 scenarios)
  • 20-30 minutes for sorting
  • 25-35 minutes for participatory design
  • 20 minutes for debrief / wrap-up

Report Highlights: There will be no shortage of discussion material to include in the final report. Direct user quotes, photos, and artifacts go a long way to constructing a compelling report. Here is an example outline for your report:

  • Introduction
  • Methodology Overview
  • Executive summary of entire evaluation
  • Scenario development highlights
    • Consistencies
    • Inconsistencies
    • Surprises
  • Sorting highlights
    • Consistencies
    • Inconsistencies
    • Surprises
  • Participatory design highlights
    • Consistencies
    • Inconsistencies
    • Surprises
  • Actionable recommendations for IA and content

Seth Gordon uses his understanding of user research and IA to improve user experiences and solve business problems. He has recently completed consulting projects for the Nielsen Norman Group and Razorfish. Visit him at www.gordy.com, where there isn’t a drop of content about user experience.