Quick Turnaround Usability Testing, Part II

In part I, I discussed how to make the first three steps of quick turnaround usability testing (QTUT)—sales and kickoff, recruitment, and preparation—as short and efficient as possible. In part II, I discuss the final two steps: testing, and analysis and reporting.

Steps in the QTUT process

  • Step 1: Sales and kickoff
  • Step 2: Recruitment
  • Step 3: Preparation
  • Step 4: Testing
  • Step 5: Analysis and reporting

Testing

It’s testing day. You have successfully recruited enough participants for the first day, but you feel a bit of panic as you put the finishing touches in place before the first participant arrives. You have a rough but solid test script. You have five attentive stakeholders in the observation room, ready to take notes. Now you need to execute the test and compile results as you go.

The lack of time you have had to plan and refine your method can create a bit of panic as the testing phase begins. Often, we are working on the script until the very last second, incorporating changes that stakeholders hand off when they arrive at our testing facility.

Early on the test day, I print out a screenshot of every important page and component (e.g., the primary navigation). I number these screenshots and then tape them above a large whiteboard that we keep in an “idea” room that is adjacent to our usability lab’s observation room. We use the whiteboard to keep track of issues and metrics across participants.

After you finish each participant session, immediately note changes that you need to make to your test script or the application. Then go talk with your stakeholders about the results. Here, the time that you have budgeted between sessions for discussion really pays off. If your stakeholders have been watching and taking notes, they are likely already talking about the results. They may also be already talking about potential fixes.

The whiteboard can be useful for focusing the discussion on issues. It’s often useful to set ground rules for the discussion. First, the discussion should focus on results and not solutions. It is important to manage your stakeholders by telling them to be patient and to let the results play out over several people before drawing conclusions. Second, as the person facilitating the study, you should lead the discussion on each topic by first summarizing your notes on the whiteboard. After we summarize a page, we ask for any additional feedback from the stakeholders and then quickly move on to the next page. Occasionally, we need to remind the stakeholders that since we have limited time between participants, we cannot dwell on any one finding.

You may think that stakeholders will not have much valuable feedback to add, but we have found that they often see things that we don’t because of their knowledge of the history of the application. For example, one client had just made a political decision to change a button label. Since the participants understood the new button name, we didn’t think to list it as a finding. But for the stakeholders, it was helpful to track it and mention it in the report.

After the first participant’s session, you will likely realize that some of your tasks and questions were not worded correctly. Through the whiteboard session, you may think of additional questions to ask participants, or you might even want to add, delete, or change tasks. Do not rely on your memory to do any of this. Instead, make the changes to your script and print out a new copy for the next participant.

At the end of your first day of testing, you should have a board full of findings, including some task completion data. If you test five participants per day, by the fifth participant trends are becoming clear. You may decide to stop a task because you already know the issues, which allows you to test other tasks that you couldn’t fit in. Discuss this at the end of the day so that you can make the necessary edits to your test script before you leave for the day.
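If you want to tally completion data digitally as sessions finish, a minimal sketch in Python might look like the following. The task names and pass/fail results here are hypothetical examples, not data from our studies.

```python
# Minimal sketch: tally task completion across participants.
# Task names and results below are hypothetical placeholders.
completions = {
    "find_store_hours": [True, True, False, True, False],   # one entry per participant
    "checkout_as_guest": [False, False, False, True, False],
}

for task, results in completions.items():
    rate = sum(results) / len(results)
    print(f"{task}: {sum(results)}/{len(results)} completed ({rate:.0%})")
```

With only five participants per day, these percentages are directional rather than statistically reliable, which is why it pays to let trends emerge before stopping a task.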

In addition, if your stakeholders are already discussing potential fixes while you facilitate the test, it is important for you to be part of that discussion. In their eagerness to fix things, our stakeholders occasionally solve a problem in such a way that a larger problem is created.

If you are in a fast-paced development environment, where developers stay until 10 p.m. to make changes, your stakeholders may want to change things that night. Because of the potential to affect your testing, you should be very careful to attempt only easy fixes. You do not want to rework the navigation or radically alter a task flow overnight. However, you may want to change a button’s behavior or the text of a label.

As sessions progress, your whiteboard may start to fill up. Usually it is easy to condense the notes. We use a Post-It note for each issue and write participant numbers on the note for each person who experiences the same issue (see Figure 1). We take pictures of the whiteboard throughout the process, prune non-issues, and type up results for tasks and questions that we have stopped using.


Figure 1: Whiteboard full of findings after two days of testing. Post-It notes contain recurring issues.
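For teams that want a digital backup of the Post-It method, a simple stand-in is a mapping from each issue to the set of participant numbers who hit it. This is only a sketch; the issue labels below are invented for illustration.

```python
# Digital stand-in for the Post-It method: map each issue to the set of
# participant numbers who experienced it. Issue labels are invented examples.
from collections import defaultdict

issues = defaultdict(set)

def log_issue(issue: str, participant: int) -> None:
    """Record that a participant experienced an issue."""
    issues[issue].add(participant)

log_issue("Missed the 'Reports' link in the primary navigation", 1)
log_issue("Missed the 'Reports' link in the primary navigation", 3)
log_issue("Confused by the checkout button label", 2)

# List issues by frequency, mirroring how recurring Post-Its accumulate numbers.
for issue, who in sorted(issues.items(), key=lambda kv: -len(kv[1])):
    print(f"{issue}: participants {sorted(who)} ({len(who)} total)")
```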

Finally, as you reach the last few sessions, increase the level of detail about potential recommendations. If possible, test these ideas with your participants. Get a sense from your stakeholders about what recommendations are feasible and cost effective on a short timeline, and which require long-term attention.

Testing tips

  • Keep the testing as simple as possible—for QTUT, avoid technology such as eye tracking and anything else that makes your job more difficult or confusing.
  • For changes to your tasks and questions, modify the test script and do not rely on your ability to remember the changes.
  • Be willing to change questions and tasks. It is better to find and to fix the issues with your testing script early on.
  • Remind your stakeholders that because of the aggressive timeline your test script is not perfect, so you may have to change tasks and questions on the fly for the first couple of participants.
  • Keep a whiteboard in the observation room and use it to keep track of and to discuss problems with your stakeholders after each session.
  • At the end of the day, summarize the results and discuss potential recommendations. Consider making a list of the most important findings from the day.
  • Expect long work hours.
  • Don’t duplicate effort by typing up results for tasks you are still testing; write up results only once you have stopped testing a task or asking a question.

Analysis and reporting

You have made it through two days of testing. The stakeholders have left and now you have to create a report that clearly communicates the issues and your recommended fixes.

The beauty of the whiteboard method is that your report becomes simply a summary of what you have already written on the whiteboard, including completion metrics, findings, and recommendations that have been vetted by key stakeholders.

Although the people who attended the sessions may understand the issues, you still need to translate them from shorthand whiteboard notations to a format that a wider audience can read. We use a presentation-style report because it is much faster to create than a report with lots of text.

Because time is important to the stakeholders, we classify recommendations as short-term and long-term fixes. Short-term fixes can be finished in a few days, or before the application is released. Long-term fixes require more time than your clients have, but they are severe enough that they need to be addressed eventually.
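To make the timeline explicit in the report, you could tag each recommendation with its expected fix window, as sketched below. The recommendation text is invented for illustration, not drawn from an actual study.

```python
# Minimal sketch: bucket recommendations by expected fix timeline.
# The recommendation text below is invented for illustration.
recommendations = [
    ("Rename the 'Submit' button to 'Place order'", "short-term"),
    ("Restructure the primary navigation", "long-term"),
    ("Clarify the login failure error message", "short-term"),
]

for bucket in ("short-term", "long-term"):
    print(f"{bucket.title()} fixes:")
    for text, timeline in recommendations:
        if timeline == bucket:
            print(f"  - {text}")
```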

Our QTUT ends when we present the results to the stakeholders, a group that often includes several interested people who were not able to attend the sessions. We present our findings in person so that the stakeholders feel more comfortable asking questions about the test and can get immediate clarification on any finding we have described inadequately.

Analysis and reporting tips

  • Use a presentation-style report format that does not require long text passages.
  • If you will be presenting results to stakeholders who did not attend the sessions, give the necessary context to your findings.
  • If you change recommendations that you discussed with your stakeholders, let them know in advance so that they do not feel undermined.
  • Classify your recommendations in terms of the expected timeline for the fix.
  • Take pictures of your whiteboard in order to preserve your notes quickly and safely, enabling you to write the report from another location.

8 comments

  1. I’ve never been present for an actual usability test, but I love the process. I wish that renting a usability lab wasn’t so costly. Do you happen to have any usability reports lying around for an interested party to read? I’d like to see how they are structured.

  2. Hi Andrew. This is off topic, but usability testing does not have to be expensive. While the QTUT method shows you how to make it shorter (which can reduce cost), you can reduce cost even further by not using a lab at all. Any conference room or quiet area will do (as long as your stakeholders are willing to watch video instead of a live session). Regarding reports, we have client confidentiality agreements which prohibit us from sharing reports, but there are a lot out there on the web.

  3. Great article. In-depth usability studies like these are invaluable for making your product better; we used them extensively at previous ecommerce companies I’ve been a part of. I especially like the whiteboard approach.

    I often found that the cost and effort of running usability tests like this prevented companies I’ve been at from doing enough of them. To that end, I created a “quick and dirty” way to usability test against a specific target market that costs only $15 per tester and gives you results in less than 24 hours. Check it out; I’d love any comments – http://EasyUsability.com.

    Doug Breaker
    Founder – http://EasyUsability.com

  4. Hi. This is great; thanks for putting this together and for sharing. There is both quantitative and qualitative data that can be collected during a usability test. How do you reconcile quantitative metrics or scores after a question’s wording has been changed and presented differently across participants? I understand that this method takes many research-method liberties in exchange for speed – this is why I like your method… I am just curious about your experience explaining results after questions change mid-testing. Thanks! Az

  5. Good question, Andres. The answer depends on the specific question that I’m changing. If it is just a tweak to clarify the task, then I usually make the change after the first participant, so explaining the results is not a big deal. If the task changes significantly later in the study (or is replaced by another task), I treat it like a separate task for analysis. Often, though, many of the issues you find overlap, which actually strengthens your results.

    The main problem with switching tasks halfway through is the low N on some issues. It requires some intuition to determine which issues are likely to project to a larger audience, which are not, and which require more evidence. My decision is usually based on the severity of the issue (persistence, impact, and frequency) and the type of participant. If it’s a minor problem, then I might mention it with other low-priority issues. If it is a big problem, like a mislabeled button that prevents 25-50% of people from ever finding a webpage, then I might include it with high-priority fixes. Participant type matters if a participant is unreliable or if only one type of participant finds the issue.

    In quick turnaround studies, you have an additional constraint: the release timeline. If an issue can be fixed quickly, you do it. If it is going to take a long time, you delay it.

  6. Great article!
    A lot of people out there (as Andrew Maier pointed out) think that a usability study has to take weeks, involve expensive equipment and an interrogation room. I’m glad you’re getting the message out 🙂

    A few comments of my own:

    Regardless of how tight your deadline is, I think it is an absolute must to run at least one or two pilot tests on internal users. The panic you describe and the likelihood of having to change tasks (which could compromise your metric data) could be greatly reduced. More importantly, you lower the risk of wasting a participant’s time (a big no-no if you are testing an executive or VP user).

    I very much liked the whiteboard strategy. I’ve found that when running a study of my own, stakeholders often run into the room after a session has ended and bounce a million ideas off of me… perhaps I’ll try maintaining a living storyboard throughout the process.

    One other thing: if you find that this is going to be the kind of usability study where developers are likely to make changes between days of testing and tasks are likely to be altered, consider doing a rapid iterative-style usability test. In this kind of test, metrics are charted out in iterations based on changes to the UI that affect specific tasks. My company found this technique to be a powerful tool that blends nicely with a team’s agile development style. Our site has some PDFs on the topic.

    –Etan
    UX engineer
    EchoUser Inc

  7. Doug,

    I think you should call your service EasySurvey or something else. That isn’t a usability test at all.
