Four Modes of Seeking Information and How to Design for Them

Written by: Donna Spencer

I discovered the concepts in this article while preparing material for an introductory information architecture workshop. In the workshop, I thought it important to highlight that one aspect of designing for users was to understand the ways in which they may approach an information task. I was already familiar with the concepts of known-item and exploratory information seeking: they are common in the library and information science literature and are also discussed in Information Architecture for the World Wide Web.

In my work on intranets and complex websites, I noticed a range of situations where people didn’t necessarily know what they needed to know. Additionally, when I opened my browser history to look for examples from recently-visited sites, I noticed that the majority of my own time was spent trying to find things that I had already discovered. These two modes didn’t fit into the concepts of known-item and exploratory information seeking. I call these “don’t know what you need to know” and re-finding.

I spent a while letting this rattle around my head, talking with IAs and designers, and realized that most only thought in terms of known-item searching. When discussing the other types of tasks, they’d ask with a horrified look, “So how do you design for that?”

Let’s look at the modes of seeking information in some depth and their implications for web design.

1. Known-item
Known-item information seeking is the easiest to understand. In a known-item task, the user:

  • Knows what they want
  • Knows what words to use to describe it
  • May have a fairly good understanding of where to start

In addition, the user may be happy with the first answer they find (though not always) and the task may not change significantly during the process of finding the answer.

Some examples include finding out whether Katharine Kerr has a new novel, learning how the CSS color: transparent value works, and getting a copy of the travel form. These are all clearly defined, easy to describe, and the starting point is straightforward.

There are a number of design approaches to help with this type of task:

  • Search. This is a particularly good solution: people can articulate what they need and are able to type it into a search box. As long as the search results show the word in context or show a clear description of results, they are likely to recognise suitable pages from the search results.
  • A-Z indexes. These are great at supporting this mode, as users are able to articulate the word that they are looking for. As long as the A-Z contains the word the user is thinking of, all they need to do is read down the list and spot the right item. One way to make sure that the list of terms in an A-Z index matches the words that users think of is to look at the terms used during user research or in the search logs.
  • Quick links. Links to frequently used items allow easy access to them. Again, the terms in the list must match the users’ terms.
  • Navigation. Browsing via navigation can support this behavior. It is most likely to be effective when the user can clearly identify which navigation heading to choose from.

For this mode, it is important that people are able to answer their question quickly.
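The search-log check suggested for A-Z indexes can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical log with one query per line; real search logs will need their own parsing:

```python
from collections import Counter

def top_search_terms(log_lines, n=20):
    """Count query frequency from raw search-log lines.

    Assumes one query per line (a hypothetical log format);
    real logs will need their own parsing.
    """
    counts = Counter(line.strip().lower() for line in log_lines if line.strip())
    return counts.most_common(n)

def missing_from_index(log_lines, az_index_terms):
    """Return frequent queries that have no entry in the A-Z index."""
    index = {t.lower() for t in az_index_terms}
    return [(term, n) for term, n in top_search_terms(log_lines)
            if term not in index]

# Example: users search for "travel form", but the index says "trip request"
log = ["travel form", "travel form", "maternity leave", "travel form"]
print(missing_from_index(log, ["Trip request", "Maternity leave"]))
# → [('travel form', 3)]
```

Frequent queries with no matching index entry are candidates for new A-Z terms or synonyms.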

2. Exploratory
In an exploratory task, people have some idea of what they need to know. However, they may or may not know how to articulate it and, if they can, may not yet know the right words to use. They may not know where to start to look. They will usually recognise when they have found the right answer, but may not know whether they have found enough information.

In this mode, the information need will almost certainly change as they discover information and learn, and the gap between their current knowledge and their target knowledge narrows.

As an example, a few years ago I was looking for information on the cognitive mechanisms that allow people to navigate the physical world (I was comparing the concept of online and physical navigation). I knew what I was after, but couldn’t describe it (‘navigation’ in a search engine would return results for web navigation). I had no idea where to start. I tried a number of places and didn’t succeed at all. (Six months later I stumbled across some wayfinding papers and realised that was the term I needed).

Other examples of exploratory tasks include looking for history on the technique of card sorting, finding examples of sites with complex forms laid out using CSS, and finding music I like.

The first challenge can be getting the user to a good starting point (this was the main problem in the navigation example). This is less of a problem on an intranet as staff may only have one place to explore. Portal sites, subject-based directories, or sites with a wide range of content (such as Wikipedia) can provide avenues to follow on the open Web.

Design approaches for this mode include:

  • Navigation. The most successful design solution is browsing, via navigation of all types. Browsing allows people to take some chances and follow a path, exploring, discovering, and learning as they go. Users may go deeper or broader in a hierarchy, or to related information.
  • Related information. Related links may be created from a list of related topics, a manually created list of relevant pages, or lists based on items purchased or recommended by other users. Contextual links may also be included in the body of the content.
  • Search. Search can be useful for exploratory tasks, but can be problematic due to the user’s inability to articulate what they are after. An initial search can help the user to learn about the domain and get some ideas for keywords. It can also be useful to provide synonyms for the search term as they may help the user to better articulate their query.

For this mode, it is critical that there are always avenues for exploration and that the visitor never reaches a dead end.

3. Don’t know what you need to know
The key concept behind this mode is that people often don’t know exactly what they need to know. They may think they need one thing but need another; or, they may be looking at a website without a specific goal in mind.

This mode of seeking information occurs in a number of situations:

  • Complex domains such as legal, policy, or financial. For example, a staff member may want to know how many weeks of maternity leave they are entitled to, but may also need to know the conditions surrounding that leave. We should all read the terms and conditions of new products and services, as there may be important restrictions, but these are too often buried in legalese that we skip over.
  • Any time we wish to persuade the user. For example, we would love people to know more about information architecture and usability, but they often don’t know that the concepts even exist. They may think they want to know how to make an accessible nested fly-out menu; we think they need to know more about organising the content properly.
  • Unknown domains. For example, when someone is told by friends that he or she should check out a new service, product or website, but does not yet know why he or she would want to know about it.
  • Keeping up to date. People often want to make sure they keep up to date with what is happening within an industry or topic, but are not looking for a specific answer.

The challenge is providing an answer while exposing people to the necessary information, thus showing what they may need to know. This can be achieved by:

  • Straightforward answers. Simple, concise answers allow people to have their initial information need met. For example, in the four situations above the websites could include a summary of the maternity leave benefit, the key issues of concern in the terms and conditions, an outline of the benefits of the new website or service, and a list of latest releases respectively.
  • More detailed information. Make more detailed information easily available. This may take the form of related links or contextual links in the body of the content.

The solutions allow people the satisfaction of getting an answer and then the opportunity to get additional information.

4. Re-finding
This mode is relatively straightforward—people looking for things they have already seen. They may remember exactly where it is, remember what site it was on, or have little idea about where it was. A lot of my personal information seeking is hunting down information I have already seen. I don’t know how prevalent this is, but discussions with others indicate that I am not alone.

Design solutions can be active (where the user takes explicit action to remember an item) or passive (where the user takes no action but items are remembered).

Active solutions exist on many web sites: wishlists, “save for later” (emusic), and favorites (Pandora). These solutions work well but require a conscious effort from the user, who needs to know they will want to return to an item in the future. Social bookmarking services are another example of an active solution for the web as a whole.

A good passive solution allows users to see items they have seen before, order them by frequency of use, easily get to the content, and the information within it persists over time (longer than the current session).
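The requirements for a passive solution can be made concrete with a small sketch. Everything here is illustrative (the class name, the JSON file standing in for per-user storage): a store that records every viewed item, persists beyond the session, and orders items by frequency:

```python
import json
import os
import tempfile
from collections import Counter
from pathlib import Path

class RecentlyViewed:
    """Minimal sketch of a passive re-finding store: records every item a
    user views, persists across sessions, and lists items by frequency.
    All names here are illustrative, not a real API."""

    def __init__(self, store_path):
        self.path = Path(store_path)
        if self.path.exists():
            self.counts = Counter(json.loads(self.path.read_text()))
        else:
            self.counts = Counter()

    def record_view(self, item_id):
        self.counts[item_id] += 1
        self.path.write_text(json.dumps(self.counts))  # persist beyond the session

    def most_viewed(self, n=10):
        return self.counts.most_common(n)

# Demo with a throwaway file standing in for per-user storage
path = os.path.join(tempfile.mkdtemp(), "viewed.json")
rv = RecentlyViewed(path)
for item in ["house-42", "house-42", "house-7"]:
    rv.record_view(item)
rv2 = RecentlyViewed(path)   # a later session reloads the same data
print(rv2.most_viewed(1))    # → [('house-42', 2)]
```

A real implementation would key the store per user and surface it as a “recently viewed” list ordered by use.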

Domains where passive solutions offer value include the following:

  • Shopping sites. Users may look at a number of products and may comparison shop before purchasing (e.g. Target, Anthropologie, Classy Groundcovers, Expansys).
  • Weblogs. Readers may revisit favorite posts and watch comments on a post.
  • Article sites. Sites like Boxes and Arrows may have readers returning to their favorite articles frequently.
  • Support sites. Readers need to return to the same help topics.
  • Real estate sites. Potential buyers look at their favorite house over and over.
  • Complex search facilities. Users may wish to retain their search, modify it, or rerun it.

Identifying the modes
Once you understand the modes, examples are easy to spot during user research.

Known-item seeking shows up in heavy use of search with accurate keywords, when users can easily list what they need from the site, and when support e-mails ask for specific content.

Exploratory information seeking shows up in search when vague phrases or repeated searches for similar keywords are used; when users express that they are researching, looking for background information, or “finding out about” something; and when support e-mails ask for general information.

“Don’t know what you need to know” is a little harder to identify. In interviews, users may express that they just want to keep up with things. It may also be clear that users do not have sufficient background knowledge or have not read information they should have. You can identify gaps in content by walking through the content, acting out a scenario from the user perspective, and checking that sufficient information is available.

Re-finding is easy to identify if your site has user registration and the logs show what pages people visit. You can also look at the number of items in wish lists.
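If your logs record which pages each user visits, repeat visits can be counted directly. A rough sketch over a hypothetical log of (user, page) pairs:

```python
from collections import Counter

def repeat_visits(visits):
    """Given (user, page) pairs, return the pages each user re-visited
    and how often — a rough signal of re-finding behaviour."""
    counts = Counter(visits)
    return {pair: n for pair, n in counts.items() if n > 1}

# Hypothetical log data
visits = [("ann", "/leave-policy"), ("ann", "/leave-policy"),
          ("ann", "/contact"), ("bob", "/leave-policy")]
print(repeat_visits(visits))
# → {('ann', '/leave-policy'): 2}
```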

The most important issue is not whether you notice a mode of seeking information that fits into one of these categories, but that a range of modes exist. Observe how your users approach information, consider what it means, and design to allow them to achieve what they need.

Note: Thank you to IAI members for suggestions for sites that offer navigation for the re-finding task.

Card sorting: a definitive guide

Written by: Donna Spencer


Card sorting is a technique that many information architects (and related professionals) use as an input to the structure of a site or product. With so many of us using the technique, why would we need to write an article on it?

While card sorting is described in a few texts and a number of sites, most descriptions are brief. There is not a definitive article that describes the technique and its variants and explains the issues to watch out for. Given the number of questions posted to discussion groups, and discussions we have had at conferences, we thought it was time to get all of the issues in one place.

This article provides a detailed description of the basic technique, with some focus on using the technique for more complex sites. This article does not cover some issues such as the use of online tools, which will be covered in a future article.


Card sorting is a quick, inexpensive, and reliable method, which serves as input into your information design process. Card sorting generates an overall structure for your information, as well as suggestions for navigation, menus, and possible taxonomies.

While card sorting might not provide you with a final structure, it can help you answer many questions you will need to tackle throughout the information design phase. For example, there will likely be some areas where users disagree about groupings or labels. In these cases, card sorting can help identify trends, such as:

  • Do the users want to see the information grouped by subject, process, business group, or information type?
  • How similar are the needs of the different user groups?
  • How different are their needs?
  • How many potential main categories are there? (typically relates to navigation)
  • What should those groups be called?

Card sorting can help answer these types of questions, making you better equipped to tackle the information design phase.


Card sorting is a user-centered design method for increasing a system’s findability. The process involves sorting a series of cards, each labeled with a piece of content or functionality, into groups that make sense to users or participants.

According to Information Architecture for the World Wide Web, card sorting “can provide insight into users’ mental models, illuminating the way that they often tacitly group, sort and label tasks and content within their own heads.”

Card sorting is a great, reliable, inexpensive method for finding patterns in how users would expect to find content or functionality. Those patterns are often referred to as the users’ mental model. By understanding the users’ mental model, we can increase findability, which in turn makes the product easier to use.


There are two primary methods for performing card sorts.

  • Open Card Sorting: Participants are given cards showing site content with no pre-established groupings. They are asked to sort cards into groups that they feel are appropriate and then describe each group. Open card sorting is useful as input to information structures in new or existing sites and products.
  • Closed Card Sorting: Participants are given cards showing site content with an established initial set of primary groups. Participants are asked to place cards into these pre-established primary groups. Closed card sorting is useful when adding new content to an existing structure, or for gaining additional feedback after an open card sort.

Closed card sorting will be detailed in a future article.

Advantages and disadvantages

As with any other method, card sorting has both advantages and disadvantages. Keeping these in mind will help you determine whether the technique is appropriate for your situation and make decisions about how you run the activity.

Advantages
  • Simple – Card sorts are easy for the organizer and the participants.
  • Cheap – Typically the only costs are a stack of 3×5 index cards, sticky notes, a pen or printed labels, and your time.
  • Quick to execute – You can perform several sorts in a short period of time, which provides you with a significant amount of data.
  • Established – The technique has been used by many designers for over 10 years.
  • Involves users – Because the information structure suggested by a card sort is based on real user input, not the gut feeling or strong opinions of a designer, information architect, or key stakeholder, it should be easier to use.
  • Provides a good foundation – It’s not a silver bullet, but it does provide a good foundation for the structure of a site or product.

Disadvantages
  • Does not consider users’ tasks – Card sorting is an inherently content-centric technique. If used without considering users’ tasks, it may lead to an information structure that is not usable when users are attempting real tasks. An information needs analysis or task analysis is necessary to ensure that the content being sorted meets user needs and that the resulting information structure allows users to achieve tasks.
  • Results may vary – The card sort may provide fairly consistent results between participants, or may vary widely.
  • Analysis can be time consuming – The sorting is quick, but the analysis of the data can be difficult and time consuming, particularly if there is little consistency between participants.
  • May capture “surface” characteristics only – Participants may not consider what the content is about or how they would use it to complete a task and may just sort it by surface characteristics such as document types.

When should card sorting be used?

Card sorting is a user-centered, formative technique. It should be used as an input to:

  • designing a new site
  • designing a new area of a site
  • redesigning a site

[Figure: Card sorting in the overall design process]

Card sorting is not an evaluation technique and will not tell you what is wrong with your current site.

Card sorting is not a silver bullet to create an information structure. It is one input in a user-centered design process and should complement other activities such as information needs analysis, task analysis, and continual usability evaluation. It is most effective once you have completed:

  • research into what users need out of the site
  • a content (functionality) audit/inventory (for an existing site) or detailed content list (for a new site). For an existing site, it is crucial that the content inventory is examined carefully to include only content that is needed by users.

Card sorting will provide benefit to most sites, but can be challenging to use against some sets of information. The table below summarizes when card sorting works well and provides good results, and when it is challenging both to run and to analyze.

                         Easy                                          Challenging
  Site size              Small                                         Large
  Type of content        Homogeneous (e.g., product catalogues,        Heterogeneous (e.g., intranets,
                         lists of services, directories of web sites)  government web sites)
  Complexity of content  Participants understand most of the content   Complex or specialist content

Table 1.1

For sites with the characteristics listed in the “Challenging” column, card sorting will provide less direct input into the information structure; you may need to undertake a range of card sorts and more user-centered design activities.

Card sorting can be useful to demonstrate to people that others think differently. We have successfully included it as an exercise in workshops for web site and intranet authors.


Preparing for a typical card sorting exercise requires the following:

  1. Selecting content
  2. Selecting participants
  3. Preparing the cards

Selecting content
The first step in conducting a card sort is to determine the list of topics. This list should be drawn from a wide variety of sources:

  • existing online content
  • descriptions of business groups and processes
  • planned applications and processes
  • potential future content

By including potential future content it becomes possible to create a structure that not only works now, but also will work for future content and functionality. Adding new items in the future should require minimal rework if the structure is designed correctly.

Granularity and sampling content
Content selected for the cards can be individual pages, functionality, small groups of pages, or whole sections of the site. Be consistent with your chosen granularity — participants will find it difficult to group content at different levels of granularity.

If you choose to use small groups of pages or sections of the site, ensure that the groups are of items that belong together. For example, don’t include a grouping of “media releases,” as this may not suit users and their tasks (they may prefer individual media releases to be grouped with other pages on a similar topic). Instead, include some individual media releases and see what participants do with them.

The content for the card sort should be representative of the site (or the part of site that you are investigating). It is important to ensure that the content has enough similarity to allow groupings to be formed. If the content chosen is too varied, participants will not be able to create natural groupings.

Selecting participants

Card sorting may be performed individually or in groups. Keep in mind that the exercise will be performed multiple times. So, if you’re using individuals, try to get seven to ten for a good sampling. If you’re using groups (our preferred method), five groups of three participants (a total of 15 participants) works best. Whether you choose to use individuals or groups, the most important aspect of selecting participants is that they come from and are representative of your user group. (If you have multiple user groups, it is important to include a representative sample from each, as they may view the information differently.)

Scheduling individuals can be easier than scheduling groups of participants, especially if you have individuals located remotely. However, individuals can find it difficult to sort larger numbers of cards, providing less valuable input.

A benefit of group sorts is that they typically provide richer data than individual sorts. Whereas individuals need to be prompted to “think aloud,” groups tend to discuss their decisions openly. Combine this with a group’s ability to handle larger numbers of cards effectively and their tendency to walk each other through questions about content or functionality, and you have a rich data set with greater insight into users’ mental models.

The number of groups needed may depend upon the size and complexity of the site or product. However, we’ve found that patterns tend to emerge within five groups. These patterns become the basis for the site or product’s information architecture.

When inviting participants, it’s not necessary to tell them they’ll be performing a card sort. Instead, simply tell them they’ll be asked to perform a simple task or exercise that will help you (re)design the site or product. Additionally, let them know they don’t need to prepare ahead of time; they should simply come as they are.

Preparing the cards

Each item on your list should be placed on a card. The labels you use on the cards are extremely important. They should be short enough that participants can quickly read the card, yet detailed enough that participants can understand what the content is. When necessary, the label can be supplemented with a short description or image on the back of the card.

Labels may be printed on standard (Avery) mailing labels, or printed by hand. We recommend using mailing labels as this saves time and the labels will be more legible.

Mark each card with a letter or number to make analysis easier once the sorting is done.

You can use whatever cards you have on hand, but we recommend 3″ x 5″ (7.5cm x 12.5cm) index cards. They are durable, easy to see from a distance, and readily available at office supply stores. You may also use Post-it® notes, but in our experience cards are more durable and easier to handle.

Number of cards

While there is no magic number, we have found that between 30 and 100 cards works well. Fewer than 30 cards typically does not allow for enough grouping to emerge and more than 100 cards can be time consuming and tiring for participants. However, we have performed successful card sorts with over 200 cards where participants understood the content well.

In addition to the labeled cards, be sure to include some blank cards in case participants need to add something. And don’t forget a pen.


For the purpose of this article, we will describe an ideal execution for a card sorting exercise. Keep in mind that there are several variations, as described above.

The cards have been labeled using Avery labels on 3″ x 5″ index cards. On the back of each card is a letter/number combination, as well as a short description or image as necessary. The letter/number combination will be used during analysis; the short description or image is provided to clarify titles that might prove confusing. The cards are shuffled prior to participants entering the room. The shuffled cards, a stack of 20 blank cards, and an ink pen are placed on the table. Three participants are brought into the room and given an introduction with some basic instructions, like these:


First of all, we’d like to thank you for coming. As you may be aware, we’re in the initial stages of (re)designing a (web site, product, intranet). In order to make it as easy to use as possible, we’d like to get some input from the people who will be using it. And that’s where you come in. We’re going to ask you to perform a very simple exercise that will give us some great insight into how we can make this (web site, product, intranet) easier to use.

Here’s how it works. In front of you is a stack of cards. Those cards represent the content and functionality for this (web site, product, intranet). Working together, you should try and sort the cards into groups that make sense to you. Don’t worry about trying to design the navigation; we’ll take care of that. Also, don’t be concerned with trying to organize the information as it is currently organized on your (web site, product, intranet). We’re more interested in seeing how you would organize it into groups you would expect to find things in.

Once your groups are established, we’d like to have you give each group a name that makes sense to you. You are allowed to make sub-groups if you feel that’s appropriate. If you feel something is missing, you can use a blank index card to add it. Additionally, if a label is unclear, feel free to write a better label on the card. Finally, if you think something doesn’t belong, you can make an “outlier” pile.

Oh, and one last thing. Feel free to ask questions during the exercise if you feel the need. I can’t guarantee that I can answer them during the exercise, but I’ll do my best to answer them when you’re finished.

Facilitating card sorts can be tricky. During the exercise, your main job is to observe and listen. Your secondary job is to keep the momentum going without leading the participants. Take notes on a small notepad to keep track of insightful comments made by participants, or questions that come up during the session.

Try to make sure each participant has the opportunity to provide input. If one of the participants tries to “take over” the sort, gently prompt the other participants. If one participant sits back, gently prompt that participant. If the group creates a “miscellaneous” group, ask them if they are satisfied with that group, or if they would like to take another look at it to see if it needs to be sorted further. Make sure not to lead them too much.

Once the sort is complete, you may see something that looks like this:

[Figure: Sample of a completed card sorting exercise]

Once the participants are finished, walk them through a particular task. This helps validate the results. For example, if the site has some type of account management, or profile feature, ask them to walk you through updating their address information.

Analyzing the results/next steps

Analyzing card sort data is part science, part magic. Analysis can be done in two ways: by looking for broad patterns in the data or by using cluster analysis software.

When performing analysis on smaller numbers of cards, you may be able to see patterns by simply laying the groups out on a table, or taping them on a whiteboard. You will be able to see patterns through similar groupings and labeling.

When performing analysis on larger numbers of cards, we suggest using a spreadsheet. Enter the results into a spreadsheet, making sure to capture the title and number on each card. If the participants changed the label on a card, record the new label and place the old label in parentheses. Once you’ve entered the data, begin looking for patterns across the groups. Keep in mind the discussions held between the group participants during the sort, as they provide additional insight that might not appear in the spreadsheet. At this point, you are not looking for a definitive answer, but for insights and ideas.
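One common way to surface the broad patterns described above is a pair co-occurrence count: how often each pair of cards ended up in the same group across participants. A minimal sketch, with hypothetical card IDs and sort data:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """How often each pair of cards was placed in the same group.

    `sorts` has one entry per participant: a list of groups, each group a
    set of card IDs. Returns {(card_a, card_b): count} with the pair IDs
    sorted, giving the raw input to a broad-pattern (or cluster) analysis.
    """
    pairs = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Three hypothetical participants sorting four cards
sorts = [
    [{"A1", "A2"}, {"B1", "B2"}],
    [{"A1", "A2", "B1"}, {"B2"}],
    [{"A1", "A2"}, {"B1", "B2"}],
]
pairs = co_occurrence(sorts)
print(pairs[("A1", "A2")])  # grouped together by all three participants → 3
```

Pairs with high counts are strong candidates to sit together in the final structure; pairs that never co-occur point to natural boundaries.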

Another technique for analyzing data can be found in “Analyzing Card Sort Results with a Spreadsheet Template” by Joe Lamantia. Follow the instructions in Lamantia’s article to prepare the spreadsheet. As he mentions, look at the results for high-agreement and low-agreement cards.
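Per-card agreement can be approximated as the share of participants who placed the card under its most common (normalized) group label. This is a rough sketch of the idea, not Lamantia’s actual template, and the placement data and labels are hypothetical:

```python
from collections import Counter

def card_agreement(placements):
    """placements: {card: [group label chosen by each participant]}.

    Agreement = fraction of participants who used the card's most common
    label. A rough proxy for high- vs low-agreement cards.
    """
    scores = {}
    for card, labels in placements.items():
        top_count = Counter(labels).most_common(1)[0][1]
        scores[card] = top_count / len(labels)
    return scores

# Four hypothetical participants' placements of two cards
placements = {
    "travel form": ["forms", "forms", "forms", "travel"],
    "media release": ["news", "about us", "marketing", "news"],
}
print(card_agreement(placements))
# → {'travel form': 0.75, 'media release': 0.5}
```

Low-agreement cards like “media release” here are the ones worth revisiting: they may be poorly understood, belong in more than one place, or need an alternative path.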

In both types of analysis, patterns will emerge. These patterns will likely be sensible for the actual users. It is important to note that areas of difference also provide useful insights. Areas of difference tell us about:

  • content that participants haven’t understood well
  • content that could belong to more than one area
  • alternative paths to content (for example, a list of all “how-to” articles could be created)
  • how different types of participants see information

There definitely is some magic in the analysis step, and it is difficult to provide exact instructions on what to look for. Allow yourself some time to explore more than one organizational model based on the information provided from your analysis. Remember that it is not necessary to jump straight to a taxonomy at this point. Your card sort results can be supplemented with additional user research and task analysis.


There are a range of additional tasks that you can ask participants to do during the exercise, including these:

  • Home page content: ask participants to put to one side content that they would use so often that they would want a link on the home page to it.
  • Information-seeking task: after the exercise, bundle up the piles of cards on the table so only the top level is showing. Ask participants where they put particular content. (It is worth doing this if you suspect that the participants were not thinking about how they would use the content as they sorted.)

The resulting draft information architecture can be evaluated using Donna’s card-based classification evaluation. This technique provides additional information about the grouping of the content, as it focuses on tasks that users would do rather than just focusing on content. Frequently, participants will create groupings of content in a card sort that they then cannot use when asked to perform a scenario.


In summary, card sorting is a simple, reliable, and inexpensive method for gathering user input for an overall structure. It is most effective in the early stages of a (re)design. And while it’s not intended to be a silver bullet, when done correctly, it is instrumental in capturing helpful information to answer questions during the information design phase – ultimately making the product easier to use.


One reason we wanted to write this article was to get a detailed explanation of card sorting in one place. Please expand this article into a definitive card sorting resource by adding comments with your own variations or observations.

Donna Maurer works as a usability specialist and information architect for Step Two Designs, an Australian consultancy focusing on intranets, content management, usability and information architecture. She is currently researching, designing and testing information systems for Australian government and public sector clients, and is presenting usability evaluation workshops.

In her spare time Donna tutors Information Systems Design at the University of Canberra, studies for a Masters in Human Factors, and maintains a weblog, imaginatively called DonnaM, about IA, usability, and interaction design.

Todd Warfel is a Principal User Experience Architect at MessageFirst in upstate NY. With over 10 years of experience practicing user research, information architecture, interaction/interface design, and usability, his work has produced several industry firsts and patented products. His work has included projects for Fortune 500 firms, government agencies, and educational institutions, including Adobe, Albertsons, Apple, AT&T Wireless, Bank of America, Charles Schwab, Cornell University, Dell, EDS, Macromedia, Palm, and Philips Electronics. In 1996, Todd developed DIVE©, a proprietary process for improving products’ ease of use. The DIVE process has been used across more than a hundred products, many of which are industry firsts.

Todd is currently working on a PhD in Information Science at Cornell University and has a B.A. in English and Cognitive Psychology from Ball State University.

Additional Resources

Card-Based Classification Evaluation

Written by: Donna Spencer
“This testing method can be very effective in ensuring that your classification will help your users find what they need.”

We hear and talk a lot about card sorting in various forms, and how it can be used as input to a hierarchy or classification system (or a taxonomy, if you prefer more technical words). We hear that we should test our hierarchies, but we don’t talk about how. Of course, we can test them as part of a standard usability test, but on the screen there are a lot of things competing for a user’s attention. How do we tell if a problem is a result of the classification or the way the interface is presented?

I have developed and practiced a card-based system that allows me to evaluate a classification outside of its implementation. It is simple, requiring little input from individual users (10 minutes from 20 users is not a significant amount of time for them, but provides me with a significant amount of feedback). Using this technique means that I can focus my in-depth usability testing on interface issues, rather than the classification.

A bit of history
It started while I was working for the Australian Bureau of Statistics. I poked my nose into someone else’s project. I’d heard about a new hierarchy being designed for our external website and decided to have a look.

After much discussion with the creators, making some changes, and addressing some remaining concerns, I said the fateful words, “We should test this.” They agreed wholeheartedly—as long as I did all of the work.

I was already in contact with a lot of customers, and knew I would be able to get people to participate. That was the easy part. The difficult part was figuring out how to test it. In this case, I faced the following problems:

  • The development work hadn’t started, and the timetable was already tight. I couldn’t wait until the system was developed to test it on screen (and if I waited, there was little chance of getting any changes implemented).
  • The classification was for an external site and, due to the technical infrastructure, we couldn’t get a prototype on a live site to test.
  • I didn’t want to bring customers into the office to test, as I would only need a short time from them. Yet I didn’t really want to lug around a laptop to customers’ offices either.
  • I knew there were usability issues with the current site and didn’t want the existing problems to impact the test of the classification system.

Given these constraints, and after practicing a few times, I developed what I call “Card-Based Classification Evaluation.” This testing method can be organized and run quickly without a lot of overhead, yet can be very effective in ensuring that your classification will help your users find what they need.

OK. Computer off, pens out.

Here’s what you need to run your own card-based classification evaluation:

  • A proposed classification system or proposed changes to an existing system. Some uncertainty, mess, and duplication are OK.
  • A large set of scenarios that will cover information-seeking tasks using the classification.
  • A pile of index cards and a marker.
  • Someone to scribe for you.

Go through your classification and number it as shown in the example below, as far into the hierarchy as you need:

   1. Heading
      1.1. Sub-heading
         1.1.1 Topic

Here’s an example of the hierarchy I tested for the Australian Bureau of Statistics:

   1. Economy
      1.1. Balance of Payments
         1.1.1 Current Account/Capital Account
         1.1.2 Exchange Rates
         1.1.3 Financial Accounts
      1.2 Business Expectations
      1.3 Business Performance
      1.4 Economic Growth
   2. Environment and Energy
   3. Industry
   4. Population/People

Next, transcribe your classification system onto the index cards. On the first index card, write the Level 1 headings and numbers. If you need to use more than one index card, do so. Write large enough that the card can be read at a distance by someone sitting at a desk.

Repeat for Level 2 onward, with just one level on each card or set of cards. Bundle all the cards with elastic bands.
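If your classification already lives in an outline or spreadsheet, a short script can do the numbering and the per-level grouping for you. The sketch below (Python, using a toy subset of the ABS hierarchy above; the nested-dict format and function names are my own, not part of the original method) numbers a nested structure depth-first and collects the entries for each level’s card or cards:

```python
# Sketch: auto-number a classification hierarchy and group its entries by
# level, ready to transcribe onto one set of index cards per level.
from collections import defaultdict

def number_hierarchy(tree, prefix=""):
    """Yield (number, label) pairs depth-first for a nested {label: subtree} dict."""
    for i, (label, children) in enumerate(tree.items(), start=1):
        number = f"{prefix}.{i}" if prefix else str(i)
        yield number, label
        yield from number_hierarchy(children, number)

# A toy subset of the ABS hierarchy from the example above.
classification = {
    "Economy": {
        "Balance of Payments": {
            "Current Account/Capital Account": {},
            "Exchange Rates": {},
            "Financial Accounts": {},
        },
        "Business Expectations": {},
    },
    "Industry": {},
}

# Print the numbered outline...
for number, label in number_hierarchy(classification):
    print("   " * number.count(".") + f"{number}. {label}")

# ...and collect the entries for each level's card(s):
# by_level[1] is the Level 1 card, by_level[2] the Level 2 cards, and so on.
by_level = defaultdict(list)
for number, label in number_hierarchy(classification):
    by_level[number.count(".") + 1].append(f"{number}. {label}")
```

From `by_level` you can write out (or print onto card stock) one bundle per level, as described above.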

On a separate set of index cards, write your scenarios. For the ABS example, a scenario might be “What is the current weekly income?” or “Which Australian city has the highest crime rate?” On one corner of each card, write a letter (A, B, C, etc.) to represent the scenario.


Running the evaluation
Arrange 10-15 minute sessions with each participant. How you make arrangements will vary depending on your audience, so I’ll leave it to you to figure out the best way. In an office situation, I sometimes let people know that I’ll be around at a particular time, and that I’ll come to talk with them. This saves people the worry of meeting me at an exact time.

The introduction
For each evaluation, take a minute or two to introduce the exercise, and let the participant know why you are doing it. Remind them that it will only take a short time, and give them any other background information they need. These few minutes give them a chance to settle, and can provide you with some initial feedback.

My usual intro goes:

“Hi, I’m Donna Maurer, and this is [my colleague], and we work for [my department]. I’m working on a project that involves making improvements to our website. I’ve done some research and have come up with a different way of grouping information. But before it is put onto the computer, I want to make sure that it makes sense for real people. I need about 15 minutes of your time to do an exercise. You don’t need any background information or special skills.”

(They usually laugh at the “real people” part and nod hesitantly at the end.)

“I’m going to ask you to tell me where you would look for some information. On these cards is a set of things that I know people currently look for on the website. I’ll ask you to read an activity; then I’ll show you a list. I want you to tell me where in the list you would look first to find the information.

“I’ll then show you some more cards that follow the one that you choose, and get you to tell me where you would look next. If it doesn’t look right, choose another, but we won’t do more than two choices. This isn’t a treasure hunt; it is me checking that I’ve put things in places that are sensible for you. Don’t worry if this sounds strange – once we have done one, you’ll see what I mean. And if there are tasks on those cards that you wouldn’t do or don’t make sense, just skip them.

“[My colleague] is going to take some notes as we go.”

They usually still look hesitant at this point. But don’t worry—they figure it out after the first card.

Presenting the scenarios
Put the top-level card on the table, and check that the participant is reading the first scenario. Ask, “Out of these choices, where would you look for that information?”

The participant points or tells you his choice. For that item, put the second-level card on the table. Ask, “Where would you look now?”

For each item the participant chooses, put the corresponding card on the table until you get to the lowest level of your classification.

Have your colleague write down the scenario ID and classification number. Then change the scenario card, pick up all the cards except the top level, and start again.

During a scenario, if a participant looks hesitant, ask him if he’d like to choose again. I only offer two choices at any level because I want to see where the person would look first, not get him to hunt through the classification endlessly. Record all the participant’s choices.

Do this for about 10 minutes, or until the participant seems tired. I usually get through 10-15 scenarios. Wrap up briefly: If you noticed something unusual or think the participant may have something more to add, talk about it. Don’t just stop and say goodbye—let the participant wind down. Thank the person, and then move on to the next participant. Start the scenarios where you left off in the previous session. If you start from the beginning every time, you may never get through all your scenarios.

You’ll probably want to practice this procedure a bit before you do your first real test. It takes a few tries to get the hang of presenting the cards and re-bundling them, particularly with a big classification.

Record your results in a spreadsheet. In the first column of the spreadsheet, list the classification items (by number, or by number and name). Across the top row, list the scenario IDs.

Mark each response at the intersection of the scenario and the classification item the participant selected. I usually use capital letters for first choices and lowercase letters for second choices. If I’m testing more than one group of participants, I use different letters for each group. (In the example below, I have used X and O to represent two different groups of participants.)

                 A         B         C
   1.1           XXO                 X
   1.2.1                             OO
   1.2.3         o         XXXOOO
   1.4                               XO

For scenarios or parts of the classification that work well, you will see clusters of responses. For instance, in Scenario B in the table above, the Xs and Os are in a cluster on item 1.2.3, indicating that all the participants chose that item. Assuming the choice is the one you wanted them to make, a cluster means that your classification works well in that scenario. In scenarios where the appropriate choice is less clear, people may look in a range of places (scenario C in the table), and the responses will be more spread out.

There will be some labels that participants consistently and confidently select, some that every participant ponders over, and some that no participant selects. Keep the first, consider the second, and ask questions about the third. Think about why some things worked well: Was it because of good labelling? Straightforward concepts? Similarity to an existing system? Think about why other things didn’t work: Was the scenario able to be interpreted in more than one way? Was there no obvious category to choose? Were there labels that people skipped entirely?
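Spotting clusters by eye works fine for a handful of scenarios; with many sessions, a few lines of code can summarise the spreadsheet for you. Here is a minimal sketch in Python, assuming responses were logged as (scenario, item, choice) tuples; the log format and the “agreement” score are my own assumptions, not part of the original method:

```python
# Sketch: tally evaluation results and measure how strongly each scenario's
# first choices cluster on a single classification item.
from collections import Counter, defaultdict

# choice 1 = first choice (a capital letter on the paper spreadsheet),
# choice 2 = second choice (a lowercase letter).
responses = [
    ("B", "1.2.3", 1), ("B", "1.2.3", 1), ("B", "1.2.3", 1),           # a tight cluster
    ("C", "1.1", 1), ("C", "1.4", 1), ("C", "2", 1), ("C", "1.1", 2),  # spread out
]

def tally(responses):
    """Count first choices per scenario, per classification item."""
    counts = defaultdict(Counter)
    for scenario, item, choice in responses:
        if choice == 1:
            counts[scenario][item] += 1
    return counts

def agreement(counter):
    """Return (most popular item, fraction of first choices it received).
    Values near 1.0 indicate a cluster; low values mean responses were spread."""
    (top_item, top_count), = counter.most_common(1)
    return top_item, top_count / sum(counter.values())

counts = tally(responses)
for scenario in sorted(counts):
    item, score = agreement(counts[scenario])
    print(f"Scenario {scenario}: top choice {item} ({score:.0%} agreement)")
```

High-agreement scenarios are your “keep” pile; low-agreement ones are the labels and groupings to question in the wrap-up.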

Don’t be afraid to revise and iterate your classification until you are happy with the results (or rather, until the participants are happy with the results). Change labels and groupings, reword ambiguous scenarios, and ask further questions in the wrap-up. A bit of shuffling at this point is much easier than changing the navigation after implementation.

At the end of the evaluation, I usually write up a short report for the project team, which I also send out to the participants. This is a good chance to show them that you are doing something with their input, and maybe an opportunity to get some extra feedback.

A few other things to keep in mind:

One of the major benefits of this technique is its speed. Because it is fast and done on paper, you can:

  • Get people to participate easily—even busy people can spare a few minutes.
  • Get a lot of participants (which is necessary to show the clustering of responses).
  • Cover many scenarios and much of the classification.
  • Change the classification as you go or test alternatives on the fly. This is a good reason to write out the cards rather than type them—it makes them much easier to change. (You can see that I did this in the example above. The numbering system is not sequential.)
  • Rerun the evaluation whenever you make changes.

With this method, not only do you see whether your classification will be successful, you gather valuable information about how people think. Some participants will race through the exercise, hardly giving you time to get the cards out. Some will ponder and consider all of the options. Some take the new classification at face value, while others think about where things may be in an existing scheme. Some learn the classification quickly and remember where they saw things previously. Take notes of participants’ comments—they are always useful.

One trick with this process is to get people comfortable quickly so they will perform well for the short time they participate. Pay attention to the non-verbal signals they give you, and adjust your introduction and pace for each participant. (For instance, this evaluation can work with vision-impaired participants, by reading the options to them.)

The wrap-up at the end is especially useful for getting additional feedback from participants. If I am having trouble with a label, I often ask what it means to them. If there is a section of the classification that they didn’t choose (which may be because it is a new concept), I explain what might be there and see how they respond.

Make sure your classification goes down to a fairly detailed level, not just broad categories. Even though you may tell participants that this isn’t an information-seeking exercise, people are pleased when they “find it.” For this reason, it is also worth starting with an “easy” scenario.

I have used this method mostly with internal audiences, where it is easy to get people to give you a few minutes. My experience with external audiences has only been with existing customers, but there are many ways to get other external participants. For example, you could run this evaluation as part of a focus group or product demonstration.

To date, the only criticism of this technique I’ve heard is that it just tests known-item searching (where users know what they are looking for), and that it doesn’t test unknown-item searches (where the user is looking for something less well-defined). But if you have figured out how to test unknown-item searching, please let me know!

So far, I’m pleased with the results of this technique. Getting a lot of feedback from many participants has proven to be amazingly useful, both for developing the best classification system for users, and getting the important people involved and interested.

Donna Maurer works as a usability specialist and information architect for the Australian government. In this role, she hangs around with users, redesigns intranets, and designs browser-based business applications. Her colleagues have become quite accustomed to the piles of index cards and brightly colored Post-It Notes that decorate her desk.

In her spare time Donna tutors Human Computer Interaction at the University of Canberra, studies for a Masters in Internet Communication, and maintains a weblog, imaginatively called DonnaM, about IA, usability, and interaction design.