Clicking Fast and Slow

Through social psychology and cognitive science, we now know a great deal about our own frailties in the way that we seek, use, and understand information and data. On the web, user interface design may work to either exacerbate or counteract these biases. This article will give a brief overview of the science, then look at possible ways that design and implementation can be employed to support better judgements.

Fast and slow cognitive systems: How we think

If you are even remotely interested in psychology, you should read (if you haven’t already) Daniel Kahneman’s masterwork “Thinking, Fast and Slow.”1 In it, he brings together a mass of findings from his own and others’ research into human psychology.

The central thesis is that there are two distinct cognitive systems: a fast, heuristic-based and parallel system, good at pattern recognition and “gut reaction” judgements, and a slower, serial, and deliberative system which engages more of the processing power of the brain.

We can sometimes be too reliant on the “fast” system, leading us to make errors in distinguishing signal from noise. We may incorrectly accept hypotheses on a topic, and we can be quite bad at judging probabilities. In some cases we overestimate the extent of our own ability to exert control over events.

The way of the web: What we’re confronted with

We are increasingly accustomed to using socially-oriented web applications, and many social features are high on the requirements lists of new web projects. Because of this, we need to be more aware of the way people use social interface cues and how or when these can support good decision-making. What we do know is that overreliance on some cues may lead to suboptimal outcomes.

Social and informational biases

Work with ecommerce ratings and reviews has noted the “bandwagon” effect, whereby any item with a large number of reviews tends to be preferred, often when there is little knowledge of where the positive reviews come from.2 A similar phenomenon is the “Matthew” effect (“whoever has, shall be given more”), where items or users with a large number of up-votes tend to attract more up-votes, regardless of the quality of the item itself.3

Coupled with this is an “authority” effect, whereby any apparent signal of authenticity or expertise on the part of the publisher is quickly accepted as a cue to credibility. But users may be poor at distinguishing genuine from phony authority cues, and both types may be overridden by the stronger bandwagon effect.

A further informational bias known as the “filter bubble” phenomenon has been much publicized and can be examined through user behavior or simple link patterns. Studies of linking between partisan political blogs, for instance, may show few links between the blogs of different political parties. The same patterns are true in a host of topic areas. Our very portals into information, such as the first page of a Google search, may only present the most prevalent media view on a topic and lack the balance of alternative but widely-held views.4

Extending credibility and capability through the UI (Correcting for “fast” cognitive bias)

Some interesting projects have started to look at interface “nudges” which may encourage good information practice on the part of the user. One example is the use of real-time usage data (“x other users have been viewing this for xx seconds”), which may, by harnessing social identity, extend the time users spend interacting with an item of content, since there is clear evidence of others’ behavior.

Another finding from interface research is that the way the user’s progress is presented can influence their willingness to entertain different hypotheses or reject currently held ones.5

Screen grab from ConsiderIt showing empty arguments

The mechanism at work here may be similar to that found in a study of the deliberative online application ConsiderIt. Here, there was a suggestion that users will seek balance when their progress clearly shows that they have neglected one side of a debate: human nature abhors an empty box!6

In online reviews, much work is going on to detect and remove spammers and gamers and so provide better quality heuristic cues. Amazon now shows verified reviews; any way that a reviewer’s qualifications can be validated helps keep the raw review count from misleading.

Screen grab showing an Amazon review.

To improve quality in collaborative filtering systems, it is important to understand that early postings have a temporal advantage. Later postings may be more considered, better argued, and evidence-based, but fail to make the big time because they never gain collective attention and the early upvotes.
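One common algorithmic counterweight to this early-mover advantage is a time-decayed ranking score, in the style popularized by Hacker News. A minimal sketch in Python follows; the function name, the gravity exponent, and the constants are illustrative assumptions, not a prescription:

```python
from datetime import datetime, timezone

def decayed_score(upvotes: int, posted_at: datetime, gravity: float = 1.8) -> float:
    """Rank items by votes discounted by age, so a well-received newcomer
    can overtake an older item that merely banked its votes early
    (the 'Matthew' effect described above)."""
    age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600
    # The +2 dampens the score of brand-new items; the exponent controls
    # how quickly old items sink in the ranking.
    return (upvotes - 1) / (age_hours + 2) ** gravity
```

With numbers like these, an item a day or two old needs many times the votes of a fresh one to hold the same rank, which gives quality new entries a window in which to surface.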

In any sort of collaborative resource, ways to highlight good quality new entries and rapid risers are important, whether this is done algorithmically or through interface cues. It may also be important to encourage users to contribute to seemingly “old” items, thereby keeping them fresh or taking account of new developments and alternatives. On Stack Overflow, for instance, badges exist to encourage users to contribute to old threads:

Screen grab from Stack Overflow showing a call to action.


Designing smarter rather than simpler

We know that well-presented content and organized design makes information appear more credible. Unfortunately, this can also be true when the content itself is of low quality.

Interaction time and engagement may actually increase when information is slightly harder to decipher or digest. This suggests that simplification of content is not always desirable if we are designing for understanding over and above mere speedy consumption.

Sometimes, perhaps out of fear of high bounce rates, we ignore the fact that we can afford to lose a percentage of users if those who stick around are motivated to really engage with our content. In that case, the level of detail to support this deeper interaction needs to be there.

Familiarity breeds understanding

Transparency about the social and technical mechanics of an interface is very important. “Black boxing” user reputation or content scoring, for instance, makes it hard for us to judge how much weight it should carry in decision-making. Hinting and help can be used to educate users about the mechanics behind the interface. In the Amazon example above, for instance, a verified purchase is defined separately, but not linked to the label in the review itself.

Where there is abuse of a system, users should be able to understand why and how it is happening and undo anything that they may have inadvertently done to invite it. In the case of the “like farming” dark pattern on Facebook, it took a third party to explain how to undo rogue likes, information that should have been available to all users.

There is already evidence that expert users become more savvy in their judgement through experience. Studies of Twitter profiles have, for instance, noted a “Goldilocks” effect, where excessively high or low follower/following numbers are treated with suspicion, but numbers more in the middle are seen as more convincing.7 Users have come to associate such profiles with more meaningful and valued content.

In conclusion: Do make me think, sometimes

In dealing with information overload, we have evolved a set of useful social and algorithmic interface design patterns. We now need to understand how these can be tweaked or applied more selectively to improve the quality of the user experience and of the interaction outcomes themselves. Where possible, the power of heuristics may be harnessed to guide the user rapidly from A to B. But in some cases this is undesirable, and we should look instead at how to engage more of the mind’s deliberative power.

Do you have examples of interface innovations that are designed either to encourage “slow” engagement and deeper consideration of content, or to improve on the quality of any “fast” heuristic cues? Let me know through the comments.

References

1 Kahneman D. Thinking, fast and slow. 1st ed. New York: Farrar, Straus and Giroux; 2011.

2 Sundar SS, Xu Q, Oeldorf-Hirsch A. Authority vs. peer: how interface cues influence users. Proceedings of CHI 2009. New York, NY: ACM; 2009.

3 Paul SA, Hong L, Chi EH. Who is Authoritative? Understanding Reputation Mechanisms in Quora. 2012 http://arxiv.org/abs/1204.3724.

4 Simpson TW. Evaluating Google as an Epistemic Tool. Metaphilosophy 2012;43(4):426-445.

5 Jianu R, Laidlaw D. An evaluation of how small user interface changes can improve scientists’ analytic strategies. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY: ACM; 2012.

6 Kriplean T, Morgan J, Freelon D, Borning A, Bennett L. Supporting Reflective Public Thought with ConsiderIt. Proceedings of CSCW 2012. New York, NY: ACM; 2012.

7 Westerman D, Spence PR, Van Der Heide B. A social network as information: The effect of system generated reports of connectedness on credibility on Twitter. Computers in Human Behavior 2012; 1;28(1):199-206.

The Story’s the Thing

This is an excerpt from UX Storytellers (http://uxstorytellers.blogspot.com). If you enjoy it, consider getting the Kindle edition of UX Storytellers – Connecting the Dots, with all the stories!

Here’s something I believe in: stories are what make us human. Opposable thumbs? Other animals have those. Ability to use tools? Ditto. Even language is not exclusive to human beings.

From my amateur reading of science, the story behind our stories goes something like this: the human brain evolved with an uncanny knack to recognize and create patterns; and through some strange twist of natural selection, gradually over millions of years, our brains started turning the incredible power of that pattern-making machinery on ourselves, until we became self-aware.

Aware of ourselves—our own faces, bodies, journeys, homes, children, tools, and everything else around us. Over eons, we went from being creatures that lived in each moment as it came and went, to protagonists in our own myths. Everything in our midst became the material for making stories, strands of moments woven into tapestries that we call things like “nation,” “family,” “love,” or “discovery.”

And “design.” Because design is, ultimately, a story we make. And designing is an act of weaving a new story into an existing fabric in such a way that it makes it stronger, better, or at least more interesting, and hopefully more delightful.


An Origin Story

My identity as an information architect happened accidentally, and gradually. I just kept doing things I liked, that people were willing to pay me for, until I woke up one day and realized I had a career. And the things I liked doing were invariably surrounded by people’s stories.

One of the earliest jobs I had out of college (after trying my hand at carpet cleaning, waiting tables and telemarketing) was as an office manager in a medical office. It was 1990, and this office of five or six providers was running entirely on a phone, a copier and an electric typewriter. No computer in sight. Every bill, insurance claim, or patient report had to be typed anew … as if the 80s had never happened. I talked the owner into getting a computer and a database management package—a sort of Erector set for database application design that I’d seen at a Mac user group a year before—so I could make the office more efficient.

It would’ve been pretty easy to create a quick application with a minimal user interface, if I were the only one using it. But the owner also had a couple of people helping in the office part-time who needed to use the system too—people who had never even used a computer before. Did I mention this was 1990?

So I had a challenge: how to make it work so that total computer newbies could use it? It was frustrating, fascinating, and probably the single most important experience of my career, because it was a crucible for acknowledging the importance of understanding the user.

To understand the people who were to use the application, I had to talk to them, get a sense of what they’d done before and what sort of forms they had used in the past. What sorts of technology? What terminology was going to make sense for them? How do they tend to learn—by written instruction or hands-on activity, by rote or through improvisation? I learned these things by watching and conversing. Eventually I had enough of a sense of those “users” that I had a full story in my head about how they came to the experience of this particular application, in this particular place.

I wasn’t conscious of this at the time; I was working completely by intuition. I would’ve done a better job if I’d had the experience, methods and tools I’ve picked up since. But looking back, the experience itself has become a story I tell myself when I need a rudder to remind me of my direction as a designer: even when I have nothing else to go on, if I just watch, listen, and absorb the stories of the people for whom I’m designing, my design will generally head in the right direction.


An Architecture Story

Much later, about ten years ago, I was working at a web design agency, and our client was an organization that acted as a sort of confederation of research scientists, universities and technology corporations. The organization funneled research money from “investor” companies to the scientists and their students in the universities, and in return the companies received “pre-competitive” research and dibs on some of the brightest graduates.

Their website had evolved like so many in those days—having started from a few linked documents, it had grown by the addition of ad-hoc sections and content created in response to member requests and complaints, until it had become a horribly unwieldy mass of links and text. We had been called in to clean it up and organize it. That sounded straightforward enough. But when we started interviewing its users, we found people who were unhappy with the organization and its community in general—scientists who had become more entrenched in their own sub-disciplines, and divisions between those managing the community and those merely dwelling there. Not to mention the natural enmity between academics and business leaders.

We realized that the website had become a visible instantiation of that discord: a messy tangle of priorities in tension. A new information architecture would mean more than just making things more “findable.” It meant trying to make a digital place that structurally encouraged mutual understanding. In essence, a more hospitable online home for people with different backgrounds, priorities and personalities. It was a chance to create a system of linked contexts—an information architecture—that could help to heal a professional community, and in turn strengthen the organization founded to support it.

That project provided an insight that has forever shaped how I understand the practice of information architecture: the web isn’t just a collection of linked content, it’s a habitat. And the structures of habitable digital places have to be informed by the stories of their inhabitants.


A Survival Story

Much more recently, I had the opportunity to work with a non-profit organization whose mission was to educate people about breast cancer, as well as provide an online forum for them to share and learn from one another. When interviewing the site’s users, it soon became clear how important these people’s stories were to them. They would tell the tale of their cancer, or the cancer of a loved one, and in each case the story was one of interrupted expectation—a major change of direction in what they assumed to be the storyline of their lives.

I learned that this website was merely one thread in a great swath of fabric that the site would never, ultimately, touch. But the site was most valuable to these people when it supported the other threads, buttressed them, added texture where it was needed, especially when it helped fill in the gaps of their stories: How did I get cancer? What do my test results mean? What treatment should I choose? What can I eat when getting chemo? How do I tell my children?

They wanted information, certainly. Articles full of facts and helpful explanations. And the site did that very well by translating medical research and jargon into information people could use. But even more than the packaged articles of information, so many people wanted—needed—to share their stories with others, and find other stories that mirrored their own. The most valuable learning these people discovered tended to be what they found in the forum conversations, because it wasn’t merely clinical, sterile fact, but knowledge emerging organically from the personal stories, rich in context, written by other people like them.

One woman in particular lived on an island in the Caribbean, and had to fly to the mainland for treatment. There were no support groups around her home, and few friends or family. But she found a community on this website; one that would cheer her on when she was going to be away for tests, console her or help her research alternatives if the news was bad, and celebrate when news was good. She made a couple of very close friends through the site, and carried on relationships with them even after her cancer had been beaten into submission.

Here were stories that had taken hard detours, but had found each other in the wilderness and had become intertwined, strengthening one another on the new, unexpected journey.

This work, more than any other I’d done before, taught me that stories aren’t merely an extra layer we add to binary logic and raw data. In fact, it’s reversed—the stories are the foundations of our lives, and the data, the information, is the artificial abstraction. Information is just the dusty mirror we use to reflect upon ourselves, merely a tool for self-awareness.

It was through listening to the whole stories as they were told by these digital inhabitants that I learned about their needs, behaviors and goals. A survey might have given me hard data I could’ve turned into pie charts and histograms, but it would’ve been out of context, no matter how authoritative in a board room.

And it was in hearing their stories that I recognized, no matter how great my work or the work of our design team might be, we would only be bit players in these people’s lives. Each of them happens to be the protagonist in their own drama, with its own soundtrack, scenery, rising and falling action, rhyme and rhythm. What we made had to fit the contours of their lives, their emotional states, and their conversations with doctors and loved ones.


The Moral of the Story

Design has to be humble and respectful in the presence of the user’s story, because it’s the only one that person has. Stories can’t be broken down into logical parts and reconstituted without losing precious context and background. Even though breaking the story down into parts is often necessary for technological design, the story lives only if we serve as witness to the whole person, with a memory of his or her story as it came from that person’s mouth, in that person’s actions.

Keeping the story alive keeps the whole idea of the person alive. Whether we use “personas” or “scenarios” or task analysis or systems thinking, the ultimate aim should be to listen to, understand and remember the stories, precisely because the stories are the beating heart of why we’re designing anything at all.

So, now, when I’m working on more mundane projects that don’t touch people in quite the same way as some of the others I’ve done, I still try to remember that even for the most everyday task, something I design still has to take into account the experience of the whole person using the product of my work. That, after all, is what we should mean when we say “user experience”—that we seek first to listen to, observe and understand the experience of the people for whom we design. We honor them in what we make, when we honor their stories.

The Stranger’s Long Neck


Show Time: 33 minutes 42 seconds

Download mp3 (audio only)
Download m4a (with visuals, requires iTunes, Quicktime, or similar)

Boxes and Arrows Podcast theme music generously provided by Bumper Tunes


Ireland’s Gerry McGovern shares a few of the key ideas in his recent publication The Stranger’s Long Neck – How to Deliver What Your Customers Really Want. Mr. McGovern, who will be teaching a Masterclass series in Canada on the importance of task management this November, discusses several of the key findings in his new book, including:

Trading with strangers

– The customer is a stranger. On the Web, the customer isn’t king—they’re dictator. When they come to your website, they have a small set of tasks (long neck) that really matter to them. If they can’t complete these top tasks quickly, they leave.
– There is an existential challenge going on right now between organization-centric and customer-centric thinking. Customer-centric thinking is winning.

From Long Tail to Dead Zone

– The Long Tail theory says that the Web allows you to sell more of less, that we are seeing the decline of the blockbuster and the rise of the niche.
– The Long Tail is often a Dead Zone of extremely low demand and hard-to-find, poor-quality products.

The rise of the Long Neck

– The Web is exploding with quantity, but quality is still relatively finite. Quality is the ‘long neck’: the small set of stuff that really matters to the customer.
– Understanding and managing the long neck has never been more important.
– Remember that the customer’s long neck—what really matters to the customer—is rarely the organization’s long neck—what really matters to the organization.

A secret method for understanding your customers

– A unique voting method that identifies your customers’ long neck.
– Developed over 10 years, with over 50,000 customers voting in multiple languages and countries.
– Used by the BBC, Tetra Pak, IKEA, Schlumberger, Wells Fargo, Microsoft, Cisco, OECD, Vanguard, Rolls-Royce, US Internal Revenue Service, etc.

Organization thinking versus customer thinking

– Case study showing how car company managers think differently from customers themselves about how customers buy cars.
– Explanation of how to frame the task identification question.

Deliver what customers want—not what you want

– Case study of Microsoft Pinpoint, a website to help businesses find approved Microsoft IT vendors and consultants.
– What’s the top task of US small and medium businesses when it comes to IT? Security.

Measuring success: Back to basics

– Why traditional web metrics such as page views, number of visitors, etc., are often misleading.
– Observation-based techniques to measure online behaviour.
– The key metrics of task measurement: completion rate, disaster rate, and completion time.
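Those three task metrics are simple to compute from per-session results. The sketch below is a plausible reading, not McGovern's own code: the field names are invented for illustration, and "disaster" is taken here to mean a participant who failed the task but believed they had completed it (my understanding of his definition).

```python
from statistics import median

def task_metrics(sessions):
    """Each session is a dict with 'completed' (bool), 'thought_completed'
    (bool), and 'seconds' (float). A 'disaster' is assumed to be a failed
    task that the participant believed they completed."""
    n = len(sessions)
    completed = [s for s in sessions if s["completed"]]
    disasters = [s for s in sessions if not s["completed"] and s["thought_completed"]]
    return {
        "completion_rate": len(completed) / n,
        "disaster_rate": len(disasters) / n,
        # Median is less distorted by one very slow participant than a mean.
        "median_completion_seconds": median(s["seconds"] for s in completed) if completed else None,
    }
```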

Carrying out a task measurement

– The benefits of remote measurement
– How to run an actual measurement session

This podcast has been sponsored by:

Morgan Kaufmann: Publishers of world class content for students, researchers, and practitioners in the UX and HCI fields. To learn more visit http://www.mkp.com/hci

Axure: From concepts to rich prototypes and detailed specifications, all in one tool. Get your free 30-day trial at www.axure.com

Boxes & Arrows: Since 2001, Boxes & Arrows has been a peer-written journal promoting contributors who want to provoke thinking, push limits, and teach a few things along the way.


Research Logistics

With more companies today putting a stronger emphasis on gaining a deeper understanding of their customers, it’s not unusual for us to be called in for a project and find that our clients don’t have a lot of experience with research and don’t know what to expect. This article is for every designer, architect, manager, engineer, and stakeholder who wants to know more about research; it is intended to provide you with the most critical tools for interacting with researchers and for understanding how the work that we do can make your job easier.

This article will also outline what to expect from researchers and some ways to recognize when you’re working with a good one. These are indicators, not standards, based on what we’ve found to be effective. There are many ways to do research, and every research study is different, so it doesn’t mean that a researcher is incompetent if he or she doesn’t conform to these indicators. One sign of a strong researcher is that he or she will educate you throughout the process so that you know what to expect. With that in mind, this article is ultimately intended to provide a useful starting point.

Recruiting

One of the most critical and time-consuming elements of test preparation is defining the right target audience and recruiting participants. Participant recruiting is usually conducted by professional recruiters who typically consult databases of potential participants. Sometimes researchers will do the recruiting themselves, but it’s usually more cost effective to use a specialist.

Recruiting will almost always take two weeks or more depending on the number of participants and the type of research, so make sure that you get started early enough for the recruiter to have enough time to find the appropriate participants for the study. Recruiting for phone interviews may take slightly less time and any kind of home visit will likely take longer (ethnography or contextual interview). Your researcher should be able to provide you with an estimate at the time of initial engagement.

A week for recruiting tends to be difficult and any less than that is pretty much unthinkable. Short-changing the recruiting could result in participants who don’t properly fit the target market segment, don’t provide quality feedback, or just don’t show up at all. All of these can have a negative impact on the data. Even if it is possible to get participants faster, it’s usually better to take the time to ensure that you are getting the right people. Your researcher should know all of this, and recruiting participants is where he or she will start after getting a basic understanding of your product and schedule.

A recruiter will need a screener to get started. A screener is a description of the target user, with open- and closed-ended questions that will help the recruiter select the right people. What you can do to smooth the process along is to have a prepared concept of your target user. This does not need to be a full market research report—just an outline of the types of users that will use your product.

Your researcher should dig deeper than demographic information by asking behavioral questions, on topics such as TV watching behavior, purchasing behavior, internet use, etc. Behavioral questions will typically give you a stronger understanding of those being recruited than demographics alone. These are important elements of market segmentation that are sometimes organized into profiles called personas.

Personas are useful because they create a consistent concept of the intended market segment that can guide the design process through multiple iterations. Personas can also be adjusted following deeper discovery research, such as in-depth interviews, as more information about the intended user comes to light. Within a few days, the researcher should present a screener that includes behavioral questions as well as demographics.

Scheduling

When creating a schedule for data collection, the researcher should know that you cannot run participants back to back. It’s generally not feasible to squeeze eight one-hour sessions into a single day, because of all of the activity that must occur between sessions. In an eight-hour day, a researcher can run four (maybe five) one-hour sessions; anything more will spill into another day. Here are the reasons why:

One-hour sessions rarely run exactly one hour; some are shorter, and quite a few will run longer. This can be due to a variety of reasons, such as the product malfunctioning, the participant arriving late, or the participant providing lots of feedback. My rule of thumb is to allocate 50% of the session length as a buffer between sessions to allow for overrun, not including the time needed to set up for the next session.

For sessions at an office or lab, some participants will arrive 10-20 minutes early, at which time they will need to use the restroom, sign NDAs and consent forms, and generally get comfortable. Comfortable participants give useful feedback, while uncomfortable participants tend to clam up and provide short, unemotional responses.

The researcher needs to set up and get ready. For usability or experience testing, the test will need to be reset, and notes and documents need to be filed and new ones prepared. For any kind of home or location visit, the researcher will need to pack up all equipment, travel to the new location, and set up again.

Thus for every one-hour usability or experience testing session, there’s forty-five minutes to an hour of buffer and setup time. Home visits can take much longer.
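That arithmetic can be turned into a quick back-of-the-envelope calculator. The 50% buffer follows the rule of thumb above; the 15-minute setup figure and the function name are my own illustrative assumptions:

```python
def sessions_per_day(session_min: int = 60, workday_min: int = 480,
                     buffer_frac: float = 0.5, setup_min: int = 15) -> int:
    """How many sessions fit in a workday: each slot is the session itself,
    plus a buffer of 50% of its length for overruns, plus setup time."""
    slot = session_min * (1 + buffer_frac) + setup_min
    return int(workday_min // slot)
```

With these numbers a one-hour session occupies a 105-minute slot, so an eight-hour day holds four sessions, which matches the four-(maybe five)-per-day estimate above.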

Test Plan

A test plan should take no more than a week to develop and the researcher should give it to you for review and approval before being finalized. The test plan should specify the research and business goals associated with the project. During this period the researcher will need a significant amount of time with the product, either with a prototype or available concepts, while writing and checking the test plan. The better the researcher understands the intended final product, the more valuable the information he or she can get from the participant.

For usability or experience testing, the researcher will test the tasks with the product prior to a pilot test. He or she will need to make sure that there are no glitches, no unexpected areas under construction, and nothing giving away future tasks when performing each of the tasks with the product. With that in mind, it’s important to give the researcher a stable product or prototype and avoid drastic changes to the product prior to the test.

You should receive a well-written and organized test plan that details each research question and how it will be addressed. For usability testing this will include a list of tasks, what each task is intended to examine, approximate wording for the task (avoiding leading language), and detail on how each task will be scored or evaluated. For discovery research, it will include a list of topics to be addressed such as processes, environment and context, and expected pain points and needs.

Data Collection

When the data collection starts, it’s important to let the moderator work. During this time, the participant should feel comfortable enough to open up and provide honest feedback. In order to do this, it’s important to try to minimize observer impact during the testing session.

If you don’t have a separate place to watch the session (e.g. behind a two-way mirror or through a video feed), don’t make it obvious that you are paying close attention. Think about bringing in a laptop during the session to make it look like you’re doing other work. One way of doing this is telling the participant that you are also a researcher but you’re just going to be taking notes.

When you’re observing, remain objective and don’t make judgments based on one or two participants. It’s not uncommon to see a couple participants have a completely opposite reaction to a product compared to ten other participants. The researcher’s job is to sort through all the noise and report the real trends in the research. Take what you see with a grain of salt and listen to your researcher.

At the same time, it’s important to try to observe as many sessions as possible and give your researcher feedback between sessions if there are certain aspects of the user experience you want to know more about. The researcher should put the participant at ease and extract a great deal of information, including details that might have been overlooked or emotions that the person experiences. Different researchers achieve this in different ways, as everyone has their own style, but you can gauge it by watching whether the participant seems relaxed or nervous throughout testing.

Findings

Frequently, stakeholders will want to make immediate changes to a design, product, or prototype and won’t have the time to wait for the researcher’s final report. People have schedules that need to be met, so it’s understandable that a project can’t always wait for the final report, but the researcher should be able to provide you with quick findings within 24 hours of the last session.

For usability research, these quick findings should consist of a couple of short paragraphs including problems in the interface, possible solutions to these problems, and participants’ general reactions to the product, its look and feel, and expected usage. For ethnography or other forms of discovery research, quick findings will tend to consist of expected usage of the product, expected value, high and low value features, and general trends about the intended user. Quick findings aren’t comprehensive and come before the researcher can get a complete look at the data, but they will give you the overall themes from the study.

When you do get the final report, make sure you take a look at it. It will tell you two things:
* The detailed findings regarding the interface, product, features, and intended user.
* The quality and clarity of the report itself, which will tell you quite a bit about the quality of your researcher.

There’s one other thing to keep in mind when you are processing the findings from a usability test. The participants will tend to focus on the more obvious problems with a product or interface. There could be other, smaller or more abstract problems that are not identified in the first pass of usability testing. It’s usually a good idea to perform another test on the product after making changes to ensure that the changes you made were effective and identify any additional issues.

Summary

In summary, here are the most important points for non-researchers to know about the research process:
* Recruiting will almost always take two weeks or more.
* For every one-hour usability or experience testing session, there’s forty-five minutes to an hour of buffer and setup time; home visits can take much longer.
* The researcher will need a significant amount of time with the product (prototypes or concepts) while writing and checking the test plan.
* Try to minimize your impact during the testing session.
* Remain objective and don’t make judgments based on one or two participants.
* Ask your researcher to provide you with quick findings within 24 hours of the last session.

Any comments, feedback, or suggestions are very much appreciated.