One of the dirty little secrets about being an information architect is that most of
It all started so well
- Increase card-sending statistics,
- Reorganize the collection (taxonomy/controlled vocabulary),
- Improve navigation and searching,
- Suggest key places for ads and promotions (need to “monetize”),
- Find an approach for organizing the music greeting collection,
- Improve the checkout process.
The team consisted of four Argonauts: a Lead Information Architect (myself), an Assisting Information Architect (Michele de la Iglesia), a Project Manager (Shawn Stemen), and a Usability Specialist (Keith Instone) who worked part time on the project to advise us.
We began our work by conducting a strategy and recommendations phase, knowing that Egreetings was hoping for a major look-and-feel redesign with the target of a fall 2000 relaunch. An information architect’s approach should always involve an investigation of the content, the organizational context and the users. Often the user research part of the methodology gets less emphasis than it deserves because of time and budget constraints. This project, however, included user testing and research during each phase.
Furthermore, we encouraged them to create controlled vocabularies for these facets so the cards could be consistently indexed. We also delivered wireframes at this point, including one for the new main page of the site to show how to integrate our taxonomy suggestions into the site.
In addition, we drafted lists of controlled terms for the other facets. Then we tested these with users and made changes accordingly. After we delivered the new taxonomy to Egreetings, we worked with their team to provide guidance on how to apply the terms consistently as they reclassified the entire collection of cards. By the middle of summer, the client was busy handling all of the details and issues that go with a major redesign.
The problem of search
At that point we began our work on the search interface, which was planned as a future enhancement to be added after the fall relaunch. From our first meetings with Egreetings, there was controversy about how to best implement search. From the experience of the Egreetings team and our own observations during testing, we knew that users were strongly drawn to browsing rather than searching when selecting cards. This has a lot to do with the mental model formed by shopping for traditional paper cards. However, after talking to several rounds of users, I felt that I had a good idea of what they would want in a search interface. While the majority seemed to enjoy the shopping and browsing process, there was a great opportunity to shorten this process for people in a hurry. Many users came to the site with a particular occasion, recipient or emotion for a card in mind. Some also looked for particular types of subject matter or images.
For a time, the site included a search interface which was intended to allow users to select different card criteria from categories like “collection” and “publisher.” There was a lot of overlap between these categories, and users frequently got zero results when they selected more than a few choices. We didn’t shed many tears when technical changes to the content management system unexpectedly made this search interface disappear. This allowed us to start from scratch.
Most search interfaces offer an open text box for a user to type in a query. In this case, we felt that the ubiquitous search box could be optional. Egreetings was cautious about getting involved with a search engine vendor because of the costs involved. From a practical perspective, any free-text search on a collection of a few thousand objects (rather than hundreds of thousands of objects) would need to be fairly sophisticated in order to avoid offering users null results. We felt we could provide a great deal of utility to users by exposing the choices and controlled vocabularies for selections that would be guaranteed to deliver results. The content management database Egreetings had built could be adapted for fielded searching.
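The idea of fielded searching against controlled vocabularies can be sketched in a few lines. The card records and vocabulary terms below are hypothetical, not actual Egreetings data; the point is that because every value comes from a controlled vocabulary, matching is exact comparison on fields, with no need for a sophisticated free-text engine.

```python
# Minimal sketch of fielded search over controlled-vocabulary metadata.
# Card records and facet values here are invented for illustration.

CARDS = [
    {"id": 1, "occasion": "birthday", "recipient": "brother", "tone": "humorous"},
    {"id": 2, "occasion": "birthday", "recipient": "mother", "tone": "sentimental"},
    {"id": 3, "occasion": "anniversary", "recipient": "spouse", "tone": "romantic"},
]

def fielded_search(cards, **criteria):
    """Return cards whose fields exactly match every supplied criterion.

    Since criteria values come from a controlled vocabulary, exact string
    comparison suffices -- no fuzzy text matching is needed.
    """
    return [
        card for card in cards
        if all(card.get(field) == value for field, value in criteria.items())
    ]

print(fielded_search(CARDS, occasion="birthday", tone="humorous"))  # card 1 only
```

In a real implementation the comparisons would be database queries against indexed fields, but the guarantee is the same: any value the interface exposes is a value that exists in the collection.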
Lastly, I had some definite opinions and ideas about how search should work:
- I felt that search should leverage the work we had done to define the facets and metadata for the cards.
- I was inspired by sites like Epicurious and Virtual Vineyards. These sites combine searching and browsing via databases of content objects and products that are well classified with rich metadata.
- The new search interface should NOT disappoint users with an empty results page. On an ecommerce and advertising site like Egreetings, it is important to suggest something to the user even if it doesn’t meet all of the criteria.
With these parameters in mind, I set out to create draft wireframes of a design. My philosophy was that by using a step-by-step wizard interface, we would create an interface that would be a shopping assistant to the user, which would allow them to narrow their choices down using faceted criteria. Each page of the wizard would concentrate on a separate facet. This would give users a reasonable number of cards to browse while making it less likely that they would be returned null results.
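The wizard’s narrowing logic can be illustrated with a short sketch. The facets and card data are hypothetical; what matters is that each step filters by a single facet and reports the remaining count, so users see how many cards match before they commit to the next choice and are steered away from empty result sets.

```python
# Hypothetical sketch of the wizard's step-by-step narrowing: one facet per
# page, with a running count of matching cards shown after each choice.

CARDS = [
    {"occasion": "birthday", "recipient": "brother", "tone": "humorous"},
    {"occasion": "birthday", "recipient": "brother", "tone": "sentimental"},
    {"occasion": "birthday", "recipient": "friend", "tone": "humorous"},
]

def narrow(cards, facet, value):
    """Apply one wizard step: keep only cards matching the chosen facet value."""
    remaining = [card for card in cards if card[facet] == value]
    print(f"{facet}={value!r}: {len(remaining)} cards match")
    return remaining

step1 = narrow(CARDS, "occasion", "birthday")   # all 3 cards match
step2 = narrow(step1, "recipient", "brother")   # 2 cards match
step3 = narrow(step2, "tone", "humorous")       # 1 card matches
```

This mirrors the paper-prototype sessions described below, where a facilitator wrote the running match count on a slip of paper after each user choice.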
When we next met with Egreetings they liked many of my ideas, especially the
The test took place over the course of three days with 12 users. We used a market research firm in Southfield, Michigan to recruit a variety of representative users. We were lucky enough to be able to perform the tests in this firm’s well-appointed facilities, complete with a two-way mirror for observation and videotaping equipment so that Egreetings could also review the tests.
While planning this round of user testing, I got really excited about the idea of prototyping and how to get the most out of this kind of test during the design process. A colleague and close friend, Dennis Schleicher, had just returned from the UPA 2000 conference with some great ideas on prototyping. I found that different professionals had diverging opinions on how to create effective prototypes. I learned a great deal by considering the arguments for both low- and high-fidelity prototypes, and came to some of my own conclusions about how to conduct this particular test. (See What IAs Should Know About Prototypes for User Testing for some of my ideas and research on prototyping.) After some pondering, we decided to proceed with testing the search inputs with a low- to medium-fidelity prototype created with Visio printouts that we cut up into pieces users could interact with. This made sense because it meant that we didn’t need much help from the Egreetings technical and creative teams to create high-fidelity interactive prototypes. At the time, they were much too busy with the relaunch to worry about that. However, they did help us by providing some high-fidelity screen comps to use in our test sessions to get reactions from users (we showed one of each style of search interface and another of the main page access points for navigating to the search page). In order to make the test feel as automated as possible, we asked the users to imagine interacting with a computer to perform tasks with both interfaces.
We prepared about eight tasks, such as, “You regularly share jokes with your favorite brother and his birthday is next week. Find a card for him.” We made sure to compose these so that they included multiple facets and there was more than one possible answer. During testing, we varied the order in which we gave the users the tasks and we also alternated the prototype presented first between users to eliminate any first-last bias. For each task, we asked the user to interact with the interface on the tabletop and to pretend that they were using the computer. We had laminated the Visio printouts so that the users could write on them with a thin whiteboard marker. One of us took notes while the other facilitated and simulated the feedback given by the computer. For example, in the wizard interface we wrote down the number of matching cards on a slip of paper as the user made each choice.
Since it would have been very difficult to show users actual results, we stopped each task when users told us they were ready to hit the “search” button. The best way to get feedback from the users under these circumstances was to determine how confident they felt about the search. So, after each task, we asked a series of questions:
- How confident are you that the Card Finder would find cards that match the task?
1 – Not confident at all
2 – Not very confident
3 – Somewhat confident
4 – Confident
5 – Very confident
- How many cards do you think the Card Finder might find?
- Do you have any comments on this version of the Card Finder?
We devoted the second half of each test session to a separate activity focused on the design of the results page. We asked users to select from cutouts of elements that could appear on a results screen and to build their ideal results page.
Facing the music
Nobody likes being wrong. I pride myself on my efforts not to bias tests by leading with my own opinions. I must have done a good job. I was able to hide my pain over the three days of the test as the majority of users chose the one-page search interface over my wizard approach. Our testing and analysis revealed the following:
- Users preferred to see multiple criteria on a single page.
- They had difficulty noticing “show more” functionality, which expanded their options. Some preferred to see complete lists of options by default.
- Users offered opinions on the ordering and priority of criteria. “Reason to send” and “tone” were both high priorities. “Recipient” was more important than we anticipated.
The decision was clear: I may have lost, but the users won. In the aftermath of the test and the subsequent report we delivered to the client, I needed to create an interface based on the one-page paradigm. So I updated my design for the test according to the feedback from the users. In particular, I reorganized the way the facets were presented so that “Reason to send” was the most prominent and the other facets were given equal secondary emphasis. I relied heavily on the idea that users would see a relatively short page at first that could expand as needed. We also recommended that search provide “smart” results. Because the one-page search interface presented a high risk of offering null results, we specified that the search engine would need to present best-bets if not all criteria matched.
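The best-bets idea can be sketched simply: rather than requiring every criterion to match, score each card by how many criteria it satisfies and return the highest scorers. The cards and query below are invented examples, not the actual Egreetings implementation.

```python
# A sketch of "smart" best-bets results: when no card matches every
# criterion, rank cards by partial-match score instead of returning nothing.

CARDS = [
    {"occasion": "birthday", "recipient": "brother", "tone": "humorous"},
    {"occasion": "birthday", "recipient": "sister", "tone": "romantic"},
]

def best_bets(cards, criteria, limit=5):
    """Rank cards by the number of matching criteria; drop zero-score cards."""
    scored = [
        (sum(card.get(field) == value for field, value in criteria.items()), card)
        for card in cards
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [card for score, card in scored[:limit] if score > 0]

QUERY = {"occasion": "birthday", "recipient": "brother", "tone": "sentimental"}
# No card matches all three criteria, but the page is never empty: the
# brother card matches two criteria, the sister card one.
results = best_bets(CARDS, QUERY)
```

A production engine would weight some facets more heavily than others (e.g., “Reason to send” above “tone”), but the principle is the same: never hand the user an empty results page.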
All for naught?
Once I got over my angst about losing the battle over the search interface, I felt really great about the conclusion of the project. I had swallowed my pride and designed a direction for the interface based on what the users wanted. Moreover, I felt good because our months of working with Egreetings finished with a very successful relaunch which happened on time. Even better, the initial statistics after the launch showed a positive impact from the new card categories and navigation that we’d recommended. Transactions (cards sent) and the number of visits went up immediately, a rare achievement in any redesign because it usually takes users some time to adjust when a site undergoes major changes. Egreetings had implemented roughly half of our recommendations and the others were put onto their priority list for future updates. We even received thanks from the CEO and VP.
This was certainly disturbing news for me. I felt sorry that the Egreetings team would no longer be together. I also worried that the site on which I’d spent such a considerable amount of time and effort would be wiped out. That has not
Chris Farnum is an information architect with over four years’ experience, and is currently with Compuware Corporation. Three of those years were spent at Argus Associates working with Lou Rosenfeld and Peter Morville, the authors of Information Architecture for the World Wide Web.
This comment isn’t directly related to the article (you’ve heard that one before haven’t you…).
Reading the article made me wonder: how much empirical research is there into the value of facets, controlled vocabularies, etc. in the domain of websites, intranets, and so forth? I can easily understand the value of these approaches in a library context, but how well do they translate to the tasks that we do on the web? I’d just be interested to know what research has been done.
The main thing that got me thinking about this was the fact that you tried to apply theory/knowledge to the search issue, but your informed design proved to be unsuccessful when you did your user testing.
Hey, I don’t mind tangents. To address a couple of your points:
– I didn’t think that the user testing actually discredited the faceted org scheme. Instead it killed the wizard UI approach. I definitely included facets in the alternate design and in the final recommendations.
– I have to admit that I didn’t do a comprehensive lit survey on facet research during the project. I felt that facets were a design pattern that was already well established (invented by S.R. Ranganathan in the 1930s) in the realm of classification and info retrieval. Of course, part of my not-so-secret librarian agenda has always been to try to apply ideas from LIS to web design.
If anyone knows of some good research studies on applying classification/facets to web design, please feel free to share.
Although I found the case study interesting, I have a few comments about your process and your approach to the results.
First of all, you started the search discussion by describing this as an opportunity to “shorten this process for people in a hurry” — i.e. the goal of search was to make the process of shopping for a card super fast. Right off the bat this argues directly against having a multi-page search wizard. As someone who has designed many a wizard and many a search, the one thing I can say is that even if having multiple pages makes things quicker in the long run, users still have the perception that having multiple pages is slower and more draining when they are trying to do something fast. Wizards seem to work best when there is a repetitive process that by its nature will take a while but that always follows the same steps and has a clear finish. This does not really relate to search. As a result, I was SHOCKED that the client agreed to a 12-person user study to compare the two ideas. Not only do 12 people seem like overkill for deciding which of these two ideas works better, but having such a formal test with a two-way mirror for observation and videotaping in order to make a decision about two formats that were still in rough paper prototype form seems like a particular waste of money. Note that I am not arguing against user study in this situation…I just think that you would have been able to get the exact same results testing 5 users in person without the fancy facilities and the incredible outlay of time.
However, this takes me to my next concern about the article. In the end, you discuss your pain and angst at “losing” and celebrate your decision to swallow your pride. As a fellow designer, this commentary was a bit disappointing to me. I see our role as designers as being the lone champions of users in a world of people trying to program things that users don’t like or sell them things they don’t want. Growing one’s ego about being the design expert and therefore the one with presumably the best ideas is a negative approach to interaction design. When I work with clients, I always emphasize that I don’t have a monopoly on good ideas; I am just the one they hire to make sure that the good ideas get through and the user always wins. Of course in the end, you were happy with the results, but I am surprised that your article presented your shock at being incorrect.
Ouch! I humbly accept your criticism. Your scolding is well founded. In my defence I’d like to offer a few comments:
– Hindsight vision = 20/20. At the time, there were a number of design constraints that led me to consider a wizard. In hindsight, I’m glad that I took the time to flesh out an alternate design. In the end I learned from both.
– For the purposes of the article I’m perhaps overstating my angst. I think that everyone should have an experience like this at least once. It’s a wonderful learning opportunity when you are proven wrong, especially when you are really attached to a misguided idea. But I hope you’ll forgive me for the momentary pain I experienced in the process.
– Spending on user testing was a little different in 2000 than now. Even so, one of the outcomes the client specifically requested throughout the project was a good audio/visual record of the tests. There was also a practical advantage: testing locally was cheaper than the travel and hotel costs of moving multiple people between San Francisco and Detroit.
That was very eloquently put. Nice website too (reminds me a tad of the 37 signals site).
It’s *very* refreshing and important to hear stories about how people other than us IAs can be right about UI issues. You are a brave soul to write an article about how you “disproved” your own best idea (at the time) and had the clarity of thought to see that in the end you DID achieve your goal because the users won. While it’s great to hear IA success stories, I also know that lessons like yours happen more often than not. They are just as valuable for us as individuals and as a collective as the case studies outlining why the IA was right from the get-go. Nicely done.
Great story, I really appreciate you sharing the results of your search wizard testing (people often ask me about that).
I’m finding more and more situations where facets apply and avoiding dead-ends is a huge plus: all kinds of e-commerce, especially high-ticket items like jewelry or expensive vacations. Even Internet Yellow Pages are taking this approach. So I think you were really on the right track.
Comments are closed.