It was not an easy recruit. Directors of IT are busy people. Oddly, they’re hard to get hold of. They don’t answer calls from strangers. They don’t answer ads on web sites. And the ones who do answer ads on web sites we had to double-check by calling their company HR departments to verify they held the titles they claimed.
And now this.
“Hi! So we have some executives coming in tomorrow to observe the test sessions.” This was the researcher phoning. He was pretty pleased that his work was finally getting some attention from management. I would have been, too. But. He continued, “I need you to [oh yeah, the Phrase of Danger] call up the participants and move some of them around. We really want to see the experienced guy and the novice back-to-back because Bob [the head of marketing] can only come at 11:30 and has to leave at 1:00.”
“Sure,” I say, “we can see if the participants can flex. But your sessions are each an hour long. And they’re scheduled at 9:00, 10:30, 12:00, and 2:00. So I’m not quite clear about what you’re asking us to do.”
“I’m telling you to move the sessions,” the researcher says, “so the experienced guy is at 11:30 and the novice is at 12:30. Do whatever else you have to do to make it work.”
“Okay, let me check the availability right now while we’re on the phone,” I say. I pull up the spreadsheet of participant data. I can see that the experienced guy was only available at 9:00 am. “When we talked with Greg, the experienced guy, the only time he could come in was 9:00 am. He’s getting on a plane at 12:30 to go to New York.”
“Find another experienced guy then.” What?!
Five signs that you’re dissing your participants
You shake hands. You pay them. There’s more to respecting participants? These are some of the symptoms of treating user research participants like lab rats:
They seem interchangeable to you.
If you’re just seeing cells in a spreadsheet, consider taking a step back to think about the purpose and goals of your study.
You’re focused on the demographics or psychographics.
If it’s about segmentation, consider that unless you’re running a really large study, you don’t have a representative sample anyway. Loosen up.
Participants are just a way to deliver data.
You’ve become a usability testing factory, and putting participants through the mill is just part of your life as a cog in the company machine.
You don’t think about the effort it takes for a person to show up in your lab.
Taking part in your session is a serious investment. The session is only an hour. But you ask participants to come early. Most do. You might go over time a little bit. Sometimes. It’ll take at least a half hour for the participant to get to you from wherever she’s coming from. It’ll take another half hour for her to get wherever she’s going afterward. That’s actually more than 2 hours all together. Think about that and the price of gas.
You don’t consider that these people are your customers and this is part of their customer experience.
You and your study make another touch point between the customer and the organization that most customers don’t get the honor of experiencing. Don’t you want it to be especially good?
They’re “study participants” not “test subjects.”
Don’t forget that you couldn’t do what you do without interacting with the people who use (or might use) your organization’s products and services. When you meet with them in a field study or bring them into a usability lab, they are doing you a massive favor.
Although you conduct the session, the participant is your partner in exploration, discovery, and validation. That is why we call them “participants” and not “test subjects.” There’s a reason it’s called “usability testing” and not “user testing.” As we so often say in the introductions to our sessions, “We’re not testing you. You’re helping us evaluate the design.”
Throw away your screener: Tips on recruiting humans
I’m not kidding. Get rid of your screener and have a friendly chat with your market research people. Tell them you’re not going to recruit to match the market segments anymore. Why not? Because they usually don’t matter for what you’re doing. In a usability test, you focus on behavior and performance, right? So recruit for that.
Focus on behavior, not demographics
Why, if you’re testing a web site for movie buffs, will selecting for household income matter? What you want to know is whether they download movies regularly. That’s all. Visualize what you will be doing in the session, and what you want to learn from participants. This should help you define what you absolutely require.
Limit the number of qualifiers
Think about whether you’re going to compare groups. Are you really going to compare task success between people from different sized companies, or who have multiple rental properties versus only one, or different education levels? You might if you’re doing a summative test, but if most of your studies are formative, then it’s unlikely that selecting for multiple attributes will make a big difference when you’re testing 5 or 6 people in each audience cell.
Ask open-ended questions
Thought you covered everything in the screener, but fakers still got into your study? Asking multiple-choice questions forces people to choose the answer that best fits. And smart people can game the questionnaire to get into the study because they can guess what you’re seeking. Instead, ask open-ended questions: Tell me about the last time you went on a trip. Where did you go? Where did you stay? Who made the arrangements? You’ll learn more than if you ask, Of your last three trips taken domestically, how many times did you stay in a hotel?
Learn from the answers
You get “free” research data when you pay attention to the answers to open-ended screening questions: people volunteer information about their lives, giving you context you can use in decisions about your study design and the resulting data.
Flex on the mix
If you use an outside recruiting firm, ask to review a list of candidates and their data before anyone gets scheduled. You know the domain better than the recruiters do. You should know who will be in the testing room (besides you). You should make the trade-offs when there’s a question about how closely someone meets the selection criteria.
Remember, we’re all human, even your participants. These steps will help you respect the human who is helping you help your company design better experiences for customers.
Nice article, Dana. I’d add an assumption to your recommendations on the screener: that you are using an internal recruiter. To get rid of multiple-choice questions, you have to have a recruiter who is very knowledgeable about the participant behavior and demographics that you need. The recruiter also must not “lead” the participants into the correct answer, which is difficult to prevent in people who are not trained in interview techniques. External recruiters do not necessarily have good interviewing skills, they will typically have little knowledge of your project, and they have a conflict of interest (the less time it takes to recruit someone, the more money they make). So multiple-choice questions, while not perfect, are very difficult to get rid of. However, this article does present a compelling case for using internal recruiting whenever possible.
Thank you for sharing your experience with us!
I think we can all agree that dealing with participants is a sensitive matter. I strongly agree that we should make sure participants feel that they are helping us and that we are not testing them. Recently I participated in a usability test for a website, and there were moments during the session when I felt I was being judged for my choices (while trying to accomplish some tasks), mainly because of the way the questionnaire was structured. So apart from making the participant feel comfortable at the beginning of the usability test, we should make sure that the tools we use point in that direction too.
You make a really good point. I am not *assuming* an internal recruiter for the open-ended interviews, but I can imagine that’s the situation it is most likely to occur in. (I’m a consultant, and my recruiting consultant does interviews rather than multiple-choice questionnaires.) I’ve seen teams that work with outside agencies be successful with the open-ended approach, but it takes a few rounds of working together before it gets really smooth. They had to train and coordinate.
In the end, it’s about tradeoffs. Do you have time to do the recruiting? If you don’t have a lot of time or resources to do your own recruiting, are you willing to take what you get from most agencies?
In regards to your 5 points to see if we are dissing our participants, you are so right! It is good to be reminded once in a while that we work with “participants.” When UX professionals have conducted many usability sessions, we sometimes start to see usability tests as a factory line: participant comes in, we perform the study, participant gets paid, we deliver results.
As for recruiting participants, this is a tough job in itself. Usually, UX professionals don’t have enough time to set up a usability test, and the time allocated to recruiting participants is very short, so we can’t use open-ended questions, because that would mean reading and analyzing every answer. What we do instead is work with multiple-choice questions, but we give a weight to each answer. That way, even if a participant is smart enough to figure out the screening, their answers will sum to a different score than other participants’, and we can select the ones with the highest scores. Not the best method, but something that works for us when we are running short on time.
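The weighted-scoring approach described in this comment can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not the commenter’s actual tool: the questions, answer options, and weights below are all invented, and a real screener would substitute its own.

```python
# Hypothetical weighted screener scoring, as described in the comment above.
# All questions, options, and weights here are invented for illustration.

SCREENER_WEIGHTS = {
    "How often do you download movies?": {
        "Never": 0,
        "A few times a year": 1,
        "Monthly": 3,
        "Weekly or more": 5,
    },
    "Who usually makes your travel arrangements?": {
        "A travel agent": 1,
        "Someone else in my household": 2,
        "I do, online": 4,
    },
}

def score_candidate(answers):
    """Sum the weights of a candidate's multiple-choice answers.

    `answers` maps question text to the option the candidate chose.
    Unrecognized questions or answers score zero rather than raising.
    """
    return sum(
        SCREENER_WEIGHTS.get(question, {}).get(choice, 0)
        for question, choice in answers.items()
    )

def rank_candidates(candidates):
    """Return (name, score) pairs, highest-scoring candidate first."""
    scored = [(name, score_candidate(answers))
              for name, answers in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Candidates whose answer patterns match the target behavior accumulate higher totals, so even someone gaming individual questions tends to land at a different point in the ranking than a genuine match.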
I like the idea of weighting the items in a questionnaire. That could definitely resolve some of the issues with multiple-choice answers.
The point I’m trying to emphasize is that we user researchers should be taking the time to do the interviewing (or find someone inside who can) for a few reasons. First, you get better data because it is more likely that the participant fits the profile. Second, you get more data because you learn things in the process of recruiting this way that you don’t learn in a typical usability test session — or if you do, it’s too late to use it. Finally, participants are much more likely to show up because you’ve established a relationship that they’re invested in. It is the beginning of that practice of treating participants like humans rather than data points.
I get that there are trade offs. Outsourcing the recruiting frees up the researcher in a dozen ways. Maybe there’s someone on your team who could help. Teams I know who recruit the way I’m suggesting just get phenomenal results that they know will be valid because they are confident that they have the right user in the room.
How are you? I love this website; I feel like my people are here. Although I have been listening to your advice for 10 years, it’s still fresh. It seems that you are still making genuine insights infused with your frank sense of humor.
Cheers + Alis
Loved the article. It’s good to have a tidy checklist to help remind us, and our clients, of the value of our participants.
Another tip to share: if you’re working with a recruiter, ask their opinion about how hard or easy it will be to recruit the participants you want. That can be a valuable piece of information about whatever it is you want to test. If recruiting is hard, could be that the precise participants don’t really exist, or it could be that they’re not as interested in the product/site/whatever that you are offering as you hoped they would be.
Thanks for this, and I am glad to see that we’re treating our participants the right way. But throwing away the screener is, in my eyes, not such a good idea. In our projects this document is very important for making the case to the research departments of our clients, who are normally salespeople. It’s also important for checking the quality of the participants before the test starts. What we have to keep in mind is to get to the right criteria: not only market segment, age, income… also the behavior. Here I am on your side.
Thank you for this thought-provoking article. I think the way we treat our participants reflects the way the company treats their customers. I really like your use of the term “partners”. In my recent article I proposed the concept of “experience partners” as a whole new way of thinking about our customers as partners in holistic product experiences. I think it dovetails nicely into your point about respecting participants:
I’ve seen other practitioners be a little looser with time than I am, so I appreciate seeing you make note of being careful of participants’ time and not running over. Because of the “brand experience” they have by participating in studies, I strive to treat participants just a notch higher than clients, which helps assure I don’t miss a detail. In our introductions in the lobby, I always ask participants how their travel was getting to our facility, and how well the instructions worked for them. This serves a two-fold benefit: getting usability feedback on our facility directions, and warming them up to providing candid feedback. I also ask, “When do you need to be back at the office?” or “When do you need to be at your next appointment?” so I can assure them before we get started that I’m closely timing our session and can be trusted to get them out on time. When I do this, I RARELY see anyone concerned about the time, thus freeing their mind to be fully involved in the study.
You seem to be talking about two topics here, treating the user with respect and recruiting. I like the way you’ve combined them, but there’s more than one way to accomplish both goals. I, for example, like to work very closely with my recruiter. Sounds like I’ve probably had more luck doing that than most people. It does require effort up front. I prefer that, though, as it typically doesn’t require as much work for each test.
As for treating the user with respect, my opinion is that if this isn’t 100% obvious to you and you’re not aware of this all the time, you’re in the wrong profession. I find every user that comes in the door absolutely fascinating. I’m not sure I’ve ever had a user who didn’t teach me things. To tell you the truth, I’m in it for the users. If anyone’s not, I’m not sure this is the right job for you.
Dana, thank you for clearing up the nomenclature issues regarding participants vs. subjects. Every time I hear “user testing” it just irks me. We are not testing users; we are testing a prototype or product. So it is always “usability testing,” not “user testing.” Also, the subject is always the product or prototype we are testing, not the person who is helping us test it. They are participants! I know you said this, but I just want to reinforce it even more.
A reference I would highly recommend is chapter 18 of “A Practical Guide to Usability Testing” by Joe Dumas and Janice Redish. The chapter is called “Caring for Test Participants”. The chapter really helps to explain the participant’s perspective while taking part in a study and it is a great way to do what we are supposed to be doing every day..put ourselves in the user’s shoes. There are some great practical tips in the chapter for how to sequence what takes place, what to say and what not to say, how to deal with different sensitive situations that might come up. The entire book is actually fantastic.
First, thanks for all the great comments! I’m delighted that people care about this topic as much as I do. (Unfortunately, the reviewers for UPA didn’t see value in having a session about recruiting at the conference.)
To everyone who has issues with my suggestion that you move away from using a screener:
I get that it is uncomfortable to move away from using a questionnaire because a) you’re used to thinking of recruiting that way; b) you feel you can’t trust your recruiter to work without one; and c) it seems like more work for you to do recruiting if you don’t codify the recruit with multiple-choice answers.
My first suggestion is that you work on training your recruiter. Teams I know who work with the same recruiter over time (either an in-house person or an agency) find that they can help that person or group learn about the product, the users, and the methods — and that doing so helps the recruiter make better decisions for you. I’ve written about this a couple of times recently on my blog. Check these two posts:
Would love to continue the conversation…
Great article. It can be frustrating when some people fail to recognize that your participants are your customers and how valuable their feedback is. Like some people mentioned, I always try to build a rapport with participants and make them feel comfortable talking freely. Sometimes having a bunch of observers around during a session can put a participant on edge; the moderator’s job is to work around that and make the participant feel comfortable regardless. That being said, the moderator also has to work around the participant’s desire to please the moderator.
When it comes to recruiting, I truly believe that behavior is what really counts, and getting to that is part of how the screener is written. In cases in which the researcher can’t do the recruiting him or herself, it’s important to work with the recruiter closely, write the screener with open-ended questions targeting specific key behaviors as well as close-ended demographic questions, and go over the research goals, participant criteria, and screener questions with the recruiter. Once you have a good relationship with a good recruiter, it really makes your life easy.