The Right Way to Do Lean Research

StartX, a nonprofit startup accelerator, recently devoted an entire day to the role of design in early-stage companies. One panel included Laura Klein, Todd Zaki-Warfel, Christina Wodtke, and Mike Long.

Each panelist had made their mark on how design is done in startups: Laura wrote the influential O’Reilly book on UX for Lean Startups, and Todd penned the bestselling Rosenfeld Media Prototyping book. Christina has been cross-teaching design to entrepreneurs and entrepreneurship to designers at institutions such as California College of the Arts, General Assembly, Copenhagen Institute of Interaction Design, and Stanford. Mike founded an influential Lean UX community in San Francisco.

Although the conversation ranged widely, they kept coming back to research: the heart of the lean build-measure-learn cycle. As the hour-long panel drew to a close, Christina jumped up and scribbled on the board the key themes of the conversation: right questions, right people, right test, right place, right attitude and right documentation.

Below, Laura Klein expounds on these key themes of lean research. Boxes and Arrows is grateful for her time.

Right questions: Make sure you know what you need to know

Too many people just “do research” or “talk to customers” without having a plan for what they want to learn. What they end up with is a mass of information with no way of parsing it.

Sure, you can learn things just by chatting with your users, but too often what you’ll get is a combination of bug reports, random observations, feature suggestions, and other bits and bobs that will be very difficult to act on.

A better approach is to think about what you’re interested in learning ahead of time and plan the questions that you want to ask. For example, if you need to know about a particular user behavior, come up with a set of questions that is designed to elicit information about that behavior. If you’re interested in learning about the usage of a new feature, ask research participants to show you how they use the feature.

The biggest benefit to planning your research and writing questions ahead of time is that you’ll need to talk to far fewer people to learn something actionable. It will be quicker and easier to learn what you need to know, make a design change, and then test that change, since you will see patterns much more quickly when you ask everyone the same set of questions.

Right people: Talk to people like your users

Let’s say you’re building a brand new product. You want to get everybody’s opinion about it, right? Wrong! You want to get the opinions of people who might actually use the product, and nobody else.

Why? Well, it’s pretty obvious if you think about it. If you’re building a product for astronauts, you almost certainly don’t want to know whether I like the product. I’m not an astronaut. If you make any changes to your product based on anything I say, there is still no conceivable way that I’m going to buy your product. I am not your user.

Yet, this happens over and over. Founders solicit feedback about their product from friends, family, investors…pretty much anybody they can get their hands on. What they get is a mashup of conflicting advice, none of it from the people who are at all likely to buy the product. And all the time you spend building things for people who aren’t your customer is time you’re not spending building things for people who are your customer.

So, stop wasting your time talking to people who are never going to buy your product.

Right test/methodology: Sometimes prototypes, sometimes Wizard of Oz

Figuring out the right type of test means understanding what you want to learn.

For example, if you want to learn more about your user–their problems, their habits, the context in which they’ll use your product–you’re very likely to do some sort of ethnographic research. You’ll want to run a contextual inquiry or an observational study of some sort.

If, on the other hand, you want to learn about your product–whether it’s usable, whether the features are discoverable, whether parts of it are incredibly confusing–you’ll want to do some sort of usability testing. You might do task-based usability testing, where you give the user specific tasks to perform, or you might try observational testing, where you simply watch people interact with your product.

There is another type of testing that is not quite as well understood, and that’s validation testing. Sometimes I like to call it “finding out if your idea is stupid” testing. This type of testing could take many forms, but the goal is always to validate (or invalidate) an idea or assumption. For example, you might test whether people want a particular feature with a fake door. Or you might learn whether a particular feature is useful with a concierge test. Or you could gauge whether you’re likely to have a big enough market with audience building. Or you could test to see whether your messaging is clear with a five second test.
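
To make the fake door idea concrete, here is a minimal sketch in browser TypeScript. It is mine, not from the article: a button for a feature that doesn’t exist yet simply records the click as a signal of demand and tells the user the feature is coming. The "/api/events" endpoint, the "#export-to-pdf" element id, and the event name are hypothetical placeholders.

```typescript
// Fake-door sketch: measure demand for a feature before building it.
// Assumes a page with a button whose id is "export-to-pdf" and a
// hypothetical "/api/events" endpoint that accepts beacon payloads.

function recordInterest(eventName: string): void {
  // sendBeacon fires even if the user navigates away right after clicking.
  const payload = JSON.stringify({ event: eventName, at: Date.now() });
  navigator.sendBeacon("/api/events", payload);
}

const exportButton = document.querySelector<HTMLButtonElement>("#export-to-pdf");

exportButton?.addEventListener("click", () => {
  recordInterest("fake_door_export_to_pdf_clicked");
  // Be honest with the user instead of leaving a dead button.
  alert("Export to PDF isn't available yet. Thanks for letting us know you'd use it!");
});
```

Comparing how often people click the fake door against a baseline feature gives you a rough, early read on demand before you write any real code.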

All of these approaches are useful, but the trick is to pick the right one for your particular stage of product development. A five second test won’t do you any good if what you want to learn is whether your user is primarily mobile. A concierge test doesn’t make sense for many simple consumer applications. Whatever method you use, make sure that the results will give you the insights you need in order to take your product to the next level.

Right place: When do you go onsite?

If you talk to serious researchers, they will often tell you that you’ll never get good data without being in the same room with your subject. You’ll learn so much more being able to see the context in which your participant is using the product, they’ll tell you.

And they’re right. You do learn more. You also spend more. Kind of a lot more, in some cases.

So, what do you do if you don’t have an infinite budget? What do you do if you have users on multiple continents? What do you do if, in short, you are a typical startup trying to make decisions about a product before going out of business? You do what people have been doing since the dawn of time: You compromise.

Part of deciding whether or not to do remote research comes down to how difficult the remote research would be and what you need to learn. For example, it’s much harder at the moment to do remote research on mobile products, not just because there isn’t great screen sharing software but also because mobile products are often used while…well, mobile. If you simply can’t do an in-person observation, though, consider doing something like a diary study, or tracking behaviors through analytics and then doing a follow-up phone interview with the user.
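
If analytics plus a follow-up call is your fallback for mobile, the instrumentation can stay very small. The sketch below is mine, not the article’s, and the endpoint, event names, and participant id are hypothetical: it just queues named events per participant and flushes them in batches, so you have a concrete timeline of what the person actually did to walk through together on the phone.

```typescript
// Hedged sketch: minimal in-app event logging to support a remote
// "analytics + follow-up interview" study. Endpoint and names are made up.

interface UsageEvent {
  participantId: string;
  name: string;       // e.g. "opened_checklist", "abandoned_checkout"
  timestamp: string;  // ISO 8601, so the interview can walk events in order
}

const queue: UsageEvent[] = [];

export function track(participantId: string, name: string): void {
  queue.push({ participantId, name, timestamp: new Date().toISOString() });
}

export async function flush(): Promise<void> {
  if (queue.length === 0) return;
  const batch = queue.splice(0, queue.length);
  // Batching keeps network chatter low on flaky mobile connections.
  await fetch("https://example.com/research/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  });
}
```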

Other types of research, on the other hand, are pretty trivial to do remotely. Something like straightforward, task-based web usability testing is almost as effective through screen sharing as it is in person. In some cases, it can be more effective, because it allows the participant to use her own computer while still allowing you to record the session.

Also, consider if you’re truly choosing between remote testing and in-person testing. If you don’t have the budget to travel to different countries to test international users, you may be choosing between remote testing and no testing at all. I’ll take suboptimal remote testing over nothing any day of the week.

Choosing whether your testing is going to be remote, in person, or in a lab setting all comes down to your individual circumstances. Sure, it would be better if we could do all of our testing in the perfect conditions. But don’t be afraid to take 80% of the benefit for 20% of the cost and time.

Right attitude: Listen, don’t sell

I feel very strongly that the person making product decisions should be the person who is in charge of research. This could mean a designer, a product owner, an entrepreneur, or an engineer. Whatever your title, if you’re responsible for deciding what to make next, you should be the one responsible for understanding your user’s needs.

Unfortunately, people who don’t have a lot of experience with research often struggle with getting feedback. The most common problem I see when entrepreneurs talk to users is the seemingly overwhelming desire to pitch. I get it. You love this idea. You’ve probably spent the last year pitching it to anybody who would listen to you. You’ve been in and out of VC offices, trying to sell them on your brilliant solution.

Now stop it. Research isn’t about selling. It’s about learning. Somehow, you’re going to have to change your mode from “telling people your product is amazing” to “learning more about your user and her needs.”

The other problem I see all the time is defensiveness. I know, I know. It’s hard to just sit there and listen to someone tell you your baby is ugly. But wouldn’t you really rather hear that it’s ugly before you spend several million dollars on building a really ugly baby?

If you open yourself up to the possibility that your idea may be flawed, you have a chance of fixing the things that don’t work. Then your baby will be pretty, and everybody will want to buy it. Ok, the metaphor gets a little creepy, but the point is that you should stop being so defensive.

Right documentation: Record!

You should be taking all of this down. Specifically, you should be recording whatever you can. Obviously, you need to get permission if you’re going to record people, but if that’s at all possible, do it.

The main reason recording is so important is so that you can be more present while interviewing. If you’re not busy writing everything down, you can spend time actually having a conversation with the participant. It makes for a better experience for everybody.

If you can’t get everything on video, or really even if you can, it’s also good to have someone in the room with you taking extensive notes. You’re not going for a transcript, necessarily, but just having somebody record what was said and what was done can be immensely helpful in analyzing the sessions later.

Another important tactic for remembering what was said is the post-session debrief. After conducting the interview or observation, spend 15 minutes with any other observers and write down the top five or ten takeaways. Do it independently. Then, compare notes with the other observers and see if you all learned the same things from the session. You may be surprised at how often other people will have a different understanding of the same interview.

~~

Boxes and Arrows thanks Laura for sharing these insights with our readers! If you want to learn more about fast and effective research, we strongly recommend her book UX for Lean Startups: Faster, Smarter User Experience Research and Design and her talk “Beyond Landing Pages” from the 2013 Lean Startup Conference.

Comments

  1. In a concierge test, instead of writing code, humans do the things a computer would do, i.e., shop for you and make appointments. That way you can refine your offering without changing code. Unlike Wizard of Oz testing, the end users know it’s humans. Watch Laura’s video for more.

  2. The other thing to note about concierge testing is that it’s generative research, while Wizard of Oz is evaluative testing.

    Meaning that you need to have a set hypothesis to execute a Wizard of Oz test, and you’ll come out of the test with a yes/no answer: it worked, or it didn’t.

    With a concierge test, you typically leave the test with ten more ideas on how you might solve the problem than when you went in. You also don’t need to have an ironclad way of solving the problem. You can figure it out as you go, much as any consultant would.

  3. The problem with recording is that it doesn’t communicate that you are really listening to the person the way taking notes in front of them does. I also find most people are much less candid when you ask, “Do you mind if I record this?”, at least in business (B2B) settings. Note taking also forces you to pay attention and enables you to summarize at the end of the session. The recording can make for a backstop if you miss something, but I find it’s more effective to have two people take part and trade off the roles of primary interviewer and note taker.

    I have some notes on how to interview at http://www.skmurphy.com/blog/2011/10/19/tips-for-b2b-customer-development-interviews/

    But I don’t do user experience research so you may have to take my observation with a grain of salt:
    http://www.skmurphy.com/blog/2014/09/05/user-experience-research-vs-customer-discovery/
