Monitoring User Experience Through Product Usage Metrics

Introduction

User experience (UX) teams have many types of data at their disposal to ascertain the quality of a digital product’s user experience. Traditionally, these sources have focused on direct customer feedback through methods such as interviews and usability studies, as well as surveys[1] and in-product feedback mechanisms. Beyond survey methodologies, however, it can be difficult to turn these traditional UX research methods into a recurring channel of in-depth insights, because each study takes time to conduct, analyze, and report.


How to Avoid UX Burnout

As I watched the app go live across the various app stores, I felt exhausted.

The steps leading up to the launch had been intense, involving multiple stakeholders, scores of different user personas, and innumerable iteration cycles spread across a multitude of design teams. We shipped the project on time and shared high-fives all around, but after the dust had settled, I realized how truly tired each step of this project had made me.

After the launch, I was all UX’ed out. Even the sight of a Post-It note felt exhausting. Attributing the fatigue to creative block, I planned to take a few days off to recharge. But because my version of “recharge” also means “process everything,” I also decided to write an article for creatives about how to deal with this kind of block.


User Research With Small Business Owners: Best Practices and Considerations

The majority of our work at Google has involved conducting user research with small business owners: the small guys that are typically defined by governmental organizations as having 100 or fewer employees, and that make up the majority of businesses worldwide.

Given the many hurdles small businesses face, designing tools and services to help them succeed has been an immensely rewarding experience. That said, the experience has brought a long list of challenges, including those that come with small business owners being constantly on-call and strapped for time; when it comes to user research, the common response from small business owners and employees is, “Ain’t nobody got time for that!”

To help you overcome common challenges we’ve faced, here are a few tips for conducting successful qualitative user research studies with small businesses.

Recruiting

Recruiting tip #1: Give yourself an extra week, and then some

It generally takes more time to recruit for research projects with small businesses than what’s typical for consumer studies. There are several reasons why this is the case.

Existing user research participant pools tend to be light on small business representation, meaning recruiting for your project may have to start completely from scratch. Also, it can take time to track down the appropriate person at a small business to talk to—are you trying to reach the owner, the accountant, customer service staff, or…?  

Finally, small businesses are accustomed to companies trying to sell them new offerings or get them to sign up for product pilots, and many have been scammed into signing up for “free” pilots or services that turn into a perpetual sales pitch. Because of this, the chances of a small business owner or employee saying “No!” to participating in your user research are especially high.

Recruiting tip #2: Make sure you’re crystal clear on what type of business you want to recruit

There’s quite a bit of variation in terms of business environment, priorities, strategies and other factors across different types of businesses. Accidentally overlooking important criteria could be detrimental to a study.

For example, do you want to talk to a certain type of business, such as professional service, service area, or brick-and-mortar? Does it matter if your study participants are from B2B vs. B2C companies? What about online vs. offline businesses? Additional points of consideration include number of employees, business goals (e.g., does the business want to grow?), and revenue.

If you’re not sure if you’ve overlooked important criteria, ask for feedback from product managers, marketing professionals, and other user researchers who may have relevant information. It can also be helpful to see how entities such as the Small Business Administration categorize business types.

Recruiting tip #3: Make sure you’re crystal clear on whom you want to interview

When conducting research with small business owners, it’s common to assume that the business owner is involved in most decisions, but that’s often not the case.

Is it actually the business owner you’re interested in speaking to? Or do you need to talk to someone who’s responsible for a specific task, such as someone who manages online marketing or handles the company’s financials? The larger the company, the higher the chances are that the owner has delegated responsibilities.

We typically ensure we’re speaking to the right type of person by asking screening questions specific to roles and responsibilities (see examples at the end of this article).

Recruiting tip #4: Avoid hobbyists disguised as business owners

It’s common for hobbyists—for example, people who casually sell certain services or offerings for personal enjoyment—to sign up for user research involving small business owners. On the surface they pass many of the screening criteria, but in reality their motivations and behaviors are quite different from a full-time business owner or employee of a business. We typically screen out hobbyists via recruiting screening surveys by asking if potential study participants spend at least 30 hours per week in their role as business owner or employee of the business.

Recruiting tip #5: Recruit extra participants

When conducting research with consumers, we always recruit one extra participant in the event there is a no-show. When conducting research with small businesses, we’ll increase that number to two or more. Given how unpredictable the small business environment can be, we’ve found that the chances of last-minute cancellations or no-shows are much higher with small business owners and employees than with consumers.

Incentives

Incentive tip #1: Provide incentives other than cash

While incentives are a nice gesture, cash or gift cards are not a huge motivator: they aren’t viewed as a worthwhile tradeoff for the inconveniences that come with stepping away from running a business in order to participate in an interview.

What is motivating is providing small business owners and their employees with information and tips on how to run the business successfully: things like offering free accounting software, coaching on social media best practices, and personal access to a member of the support team for assistance. Another approach is to offer 15 minutes after the interview for free coaching and/or advice on a topic that makes sense given the study focus.

The small business community is tightly knit, and small businesses are often invested in each other’s success. Because of this, another option is to frame the study as an opportunity to improve offerings for all small businesses.

Even better, small businesses owners and employees love the opportunity to share feedback on tools and services they routinely use to run their business. If the product you’re testing or exploring touches upon tools and services already in use, it can be motivating to frame user interviews as an opportunity to shape the future of the offering being reviewed.

Finally, consider offering small business owners and employees the opportunity to participate in an exclusive Trusted Testers community, which provides the option to share feedback, receive “insiders” information and tips, and interact with and learn from other small businesses. We’ve found this option can be especially motivating for engaging in user research.

Interviewing

Interview tip #1: Consider in-person interviews

It can be hard for small business owners and employees to take time away from the business to participate in research that might be conducted at your lab or office. Likewise, for remote interviews, small business owners and employees don’t always have convenient access to needed technology at their place of business.

For these reasons, we’ve found that small businesses are much more likely to participate in user research if interviews take place at their place of business. This way they can tend to the business during interviews if needed and don’t have to waste valuable time setting up technology to participate in the interviews.

Also, conducting in-person interviews provides context often needed to understand complicated processes and workflows that business owners and their employees face.

Interview tip #2: Be flexible with scheduling

We’re also always especially flexible with scheduling when conducting research with small business owners and employees. In addition to leaving extra time between interviews, we usually also leave an interview slot open in the event we have to move the schedule around suddenly. We’re also mindful of offering early morning or late evening interview times, especially if the verticals we’re focused on are service oriented (restaurants, spas, etc.); trying to conduct a field visit during peak hours can be really intrusive for these types of businesses.

Interview tip #3: Be prepared for last-minute changes

The world of small business owners and their employees can be unpredictable, which is why we always schedule extra, backup participants for research. We’ve run into countless situations where a research participant cancels an interview at the last second on account of unexpected business or emergencies.

It’s also common for small business owners or employees to request location changes at the last second. For example, one time I (Chelsey) was scheduled to interview a business owner at his home (which is where he ran the business). He called five minutes before the interview explaining he wanted to be respectful of his roommates and asking if we could meet elsewhere. Good thing I had scoped out the area before this happened and had a nearby coffee shop in mind where we could talk!

Interview tip #4: Emphasize participant expertise early

When interviewing small business owners and employees, it’s common for them to want to seize the opportunity to get insider information or training on whatever topic is being explored. When this happens at the start of an interview, the interviewer becomes the expert for the remainder of the conversation, which can prevent an open, honest dialogue.

To establish the participant as the expert early on in the conversation, there are a few things we’ll typically do. For starters, we always state that the goal of the study is to learn from the participant.

Next, we’ll ask the participant to give a tour of the business (if a site visit) and to explain what the day-to-day looks like in running it. During the tour and/or day-to-day explanation, we’ll call out pieces of information that are new to us and ask a few follow-up questions. This strategy usually does the trick in placing the research participant in the role of expert and the researchers in the role of students.

Interview tip #5: Bring extra NDAs  

I (Chelsey) will never forget, in kicking off an interview with a business owner in India, when I unexpectedly discovered several family members waiting to enthusiastically participate in the conversation!

The reality is that running a small business—whether in India, the US, or elsewhere—is rarely a solo operation. Consequently, we’ve found it’s common for interviewees to ask family, friends, and employees to join interviews. This isn’t a bad thing. In fact, in many situations it’s a wonderful surprise that can lead to an engaging, insightful conversation.

Because this scenario is so common, we now always make sure to bring extra copies of NDAs.

Reporting

Reporting tip #1: Provide context

When socializing your findings to a product team, building empathy for the participants and their challenges is key. Reporting on consumer insights is relatively easy because most of us face similar challenges in our daily lives and we can easily identify with the participants. However, small business owners and employees face challenges that are less relatable.

Therefore, it’s important to create a narrative that includes the context of the merchant’s business and business practices. For example, what vertical do they work in? What does their day to day look like? How has their business evolved? What do they feel their customers’ needs are, and how does that in turn translate to their use of your product?

Additionally, keep in mind that your product won’t exist in a vacuum. Small business merchants are experimental, and are willing to try out numerous tools and services until they find one that meets most (usually not all) of their needs. Small business owners also value integration and may find creative ways to DIY integrations that don’t already exist. It’s therefore not unusual for small business research to occasionally graze the edges of competitive analysis.

When crafting your report, create a story around the participants — what are their challenges and successes? How do they feel about their customers? How does (or could) your product fit into their business processes? Finally, video recordings and direct quotes are highly impactful and help emphasize the person behind the findings.

Reporting tip #2: Limit, but embrace, variation

Because small businesses are so varied in terms of vertical, structure, and practices, it takes a careful eye to draw unified or cohesive themes across what can sometimes seem like disparate participants.

Often in user research there is an impulse to sweep outliers under the rug. However, in small business research it can actually be helpful to call out and explain the outliers. They may represent an edge case that your team has an opportunity to address, or they might reveal something new about a vertical, business, or merchant type.  

Of course, as we mentioned earlier, it’s important to clearly define your intended participant group. Even with a clear definition of who you want to talk to, you can expect to see a healthy amount of variation among your study participants.

Concluding thoughts

Small businesses have different pressures and motivations than consumers, and these differences are important to consider in setting up a successful user study with business owners and those who help run small businesses. To get the most out of your time and theirs, study up on what might relieve those pressures and speak to those motivations, and adjust your recruiting, incentives, and interview techniques accordingly.

Sample screening questions

Which of the following best describes the business where you work? Please select one.

Food and dining (e.g., restaurant, bar, food truck, grocery store) 1
Retail and shopping (e.g., clothing boutique, online merchandise store) 2
Beauty and fitness (e.g., nail salon, gym, hair salon, spa) 3
Medical and health (e.g., doctor, dentist, massage therapist, counselor) 4
Travel and lodging (e.g., hotel, travel agency, taxi, gas station) 5
Consulting services (e.g., management consulting, business strategy) 6
Legal services (e.g., lawyer, paralegal, bail bondsman) 7
Home services and construction (e.g, contractor, HVAC, plumber, cleaning services) 8
Finance and banking (e.g., accounting, insurance, financial planner, investor, banker) 9
Education (e.g., tutoring, music lessons, public or private school, daycare, university) 10
Entertainment (e.g., movie theatre, sports venue, comedy club, bowling alley) 11
Art / design (e.g., art dealers, antique restoration, photographer) 13
Automotive services (e.g., auto repairs, car sales) 14
Marketing services (e.g., advertising, marketing, journalism, PR) 15
Other 16

Thinking about the next 12 months, which of the following are overall goals for the business you own or work for? Select all that apply.

Acquire new customers 1
Conduct more business with existing customers 2
Target specific customer segments 3
Improve operational efficiency/capabilities 4
Expand to more locations 5
Develop new products/services 6
Offer training or development for my employees 7
Invest in improvements to physical locations (e.g., new paint, interior remodeling, etc.) 8
Maintain current business performance 9
Acquire competitors 10
Other 11
None of the above 99


How does your business operate? Please select all that apply.

You have a physical business location that customers visit (e.g., store, salon, restaurant, hotel, doctor’s office etc.) 1
Your business serves customers at their locations (e.g., taxi driver, realtor, locksmith, wedding photographer, plumber) 2
Your customers can purchase products and services from any location, online or by phone 3
Other 4

Which of the following best describes your role in your business?

Owner 1
Employee 2
Other 3

Which of the following are you responsible for at the business you own or work for? Please select all that apply.

Hiring employees 1
Managing employees 2
Business planning/Strategy 3
Marketing/Promotions 4
Finance/Accounting 5
Sales/Customer Service 6
Legal 7
IT 8
Other 9
None of the above   99

Which of the following best describes your current employment status? Please select one.

Work full-time (30 or more hours per week) 1
Work part-time (fewer than 30 hours per week) 2
Not employed 4
Student 5
Retired 6

Unleash Your Visual Superpower!

From start-ups to banks, design has never been more central to business. Yet at conference after conference, I meet designers at firms talking about their struggle for influence. Why is that fabled “seat at the table” so hard to find, and how can designers get a chair?

[Illustration: a superhero, partially wireframed and partially illustrated. Designers have the magical ability to visualize the future.]

Designers yearn for a world where companies depend on their ideas but usually work in a world where design is just one voice. In-house designers often have to advocate for design priorities versus new features or technical change. Agency designers can create great visions that fail to be executed. Is design just a service, or can designers* lead?

*Meaning anyone who provides the vision for a product, whether it be in code, wireframes, comps, prototypes, or cocktail napkins.

Does a designer just make pictures?

In years of presentations working at an agency, I learned to sense the tension building before a design was revealed. At a certain point, clients stop listening to the strategy—they just want to get to the pictures.

But does that mean that designers should just make pictures and leave the strategy to others? No. No one wants to be a mere stylist, but it can be hard to lead when you feel typecast as a pixel pusher.

Communication is the core skill of a designer—every other ability depends on it.

The best designers transcend the gap between strategy and execution. They know that communication is the core skill of a designer: We don’t make the thing; we show how it should be made. But rather than seeing picture-making as a mere stepping stone toward strategy, we can use it as our “super power” to lead.

I have the good fortune to work with a talented illustrator. She uses the same tools I have, but when she touches them, drawings appear. It’s like a super power! That is the same impression people have of a good designer. But how do you lead with pictures and not simply push pixels? Let’s take a closer look at unlocking the designer super power.

Pictures are power

Imagine getting two ideas for a new product. One person describes the idea in an elevator pitch; another shows you a mockup. Which one would you listen to? Which one seems to have made more effort and explored the idea further?

A good visualization gives the designer influence that goes beyond rank or title. Pictures expand the collective imagination, making abstract ideas tangible and buildable.

A junior designer I once worked with made a comp of a new dashboard idea in her spare time and emailed it out. It bounced around the organization and got taken up by the president as their future vision. Was it fully thought out? No. Did it instigate change? Definitely.  

Anyone can write bullet points; few can communicate what an experience will feel like.

Designers complain about executives meddling with designs, but seen another way, designers have access to leaders that most other roles never get. The only time a database admin gets executive attention is when the website crashes. The best designers take advantage of the opportunity to engage in a strategic discussion.

Design itself is a product

Many groups in an organization have a thing that they “own” that gives them leverage in decisions. Developers own the code, business owns the proposition, yet design is considered a “service.” One opportunity for designers is to “own” the future, to use their unique skills to document and maintain the future state of the product.

We all sketch before we design, but too often the sketches go in a drawer when the project starts, often never to be seen again. What if designers were responsible for delivering the short term design documentation AND keeping the long-term vision alive?

“If a picture is worth 1,000 words, a prototype is worth 1,000 meetings.” —a saying at IDEO

One team I’ve worked with helps bridge this gap on strategic projects by maintaining an “experience roadmap.” The roadmap is a collection of prototypes showing what each release will look like. The team also keeps the design vision up to date, updating it as they learn from customers. Agile teams can lose the long term direction by focusing on small sprints. These designers influence the strategy by showing how each release moves toward the ultimate vision.

[Illustration: a superhero supervises releases. An experience roadmap shows how a product can evolve—and whether it has weak points along the way.]

Sort of like version control for something that doesn’t exist yet, the roadmap highlights the main “branch,” shows the variations explored, and documents the business and design decisions.

The roadmap doesn’t just benefit the project team or the designer. When you work at a large firm with many teams, communication is essential. As one business partner said, “I could spend all my time just keeping up with what everyone else is doing.” Creating design documents that solve problems for the whole team increases your influence.

Incubate ideas with visuals

I’ll let you in on a business secret: Most business plans are only sketches; even their creators aren’t half as confident as they want to be. They’d like to engage a designer, but many don’t think they can afford it. UX ideology can hurt, too. Hearing that design means a “four-step user-centered design process with a team of five people for six months” can convince people to skip design right when it is most helpful: the beginning.

Working with half-baked ideas could be a nightmare project (I know several designers who have vowed to never work with start ups again), but it’s also an opportunity for designers to lead. The key is being willing to collaborate and get outside of our comfort zone.

“Designers can be solitary people who emerge from their work with the answers that will fix everything. Wrong.”  –Bradford Shellhammer

Designers everywhere complain about being brought in too late to a project. To break this conflict between early access and full process, our group developed a process to help executives visualize their ideas. Nicknamed “FutureMap,” it is a creative session to document an idea in its earliest stage.

We gather a small group of partners to hash through the idea on whiteboards with a designer listening in and working live on the projector in the background. In a few hours, we have enough to communicate the idea and build excitement. We also build relationships. When the project is ready, you know that designer is going to have a seat at the table.

[Illustration: a superhero guides a meeting. Live visualization sessions quickly get ideas out of people’s heads, show conflicting needs, and bring people together.]

Always be closing

In the heat of a project, most designers focus on the product itself, but every product will have to be promoted and sold. Whether the customer is an individual consumer or an internal partner, your success depends on communicating what a product is and why someone should care.

Embracing marketing is a powerful way designers can lead product strategy. In the earliest stage, before any code is written, only designers can get customer feedback. Lean startups like to talk about the minimum viable product as the smallest amount of effort needed to validate an assumption, but it is odd that they jump right into coding. The fastest way is almost always making pictures. Every designer already has the tools on their computer to answer product questions by engaging with the customer.

Even before you design the full product, design how you are going to communicate the value.

  • Use InDesign to make a brochure. Get feedback in a train station.
  • Use iMovie to make a fake TV ad. Get feedback online.
  • Use Dreamweaver to make a home page. Measure clicks on Google Analytics.

Some may feel that marketing is not part of user experience, but talking to real customers is the only way to get honest feedback. If you can’t sell a feature, maybe you shouldn’t build it. This has the potential to focus the product and save hundreds of hours of development. If you do it right, you will end up helping to drive the product strategy.

A user researcher I work with became a leader when she demonstrated that you could usability test a value proposition. She made A/B variations of a marketing page, each showing different features. Online testing tools like Loop11 or usertesting.com make it easy to show a page to a couple hundred people and see who clicks the link. You can then follow up with questions about how well they understood the offer.

Making pictures helps us think

There is an old joke about politician logic:

  1. There is a problem. We must do something.
  2. [X] is something.
  3. Therefore, we must do [X].

Sadly, this isn’t limited to politicians. One of the less appreciated audiences for making pictures is the team itself. Any group of people working closely under pressure faces the invisible risk of tunnel vision or groupthink. Who hasn’t been surprised when a design idea that seemed obviously right stumps users in testing?

The best way to avoid it is to not have an idea. Have two ideas. Think of it as A/B testing throughout the design cycle. It’s one of the core ideas of our team’s UXD process: any concept phase must produce at least two options that can be tested. We do this to make sure we fully develop each idea instead of compromising between the two. The most common conflict is between the expert user and the novice. Melding their needs is the best way to create bad design. Instead, design for each and test. Who knows—maybe the experts enjoy the simplicity of the novice design!

There is a psychological angle to this. Many designers struggle when they see a specific design as “their baby.” Having multiple children, so to speak, helps us fall in love with the problem, not the solution. This keeps us focused on leading the team toward the best solution and away from defending a flawed design.

Defeating “designer kryptonite”

There is one last aspect of your super power: You will lose it if you stop using it. I interviewed a smart candidate recently. He had great things to say about Scrum, meeting delivery deadlines, business metrics… but his visuals were no different from any business analyst’s. He had lost the creative spirit.

Designers on long-term projects risk exposure to “designer kryptonite”—the thousand tiny compromises due to politics, budget, legal, and our old friend “time to market.”

[Illustration: a superhero is threatened by kryptonite. Designers need to refresh their powers, or lose them.]

Putting your creative self out there is draining. Design that inspires your company requires that we reignite the creative passion from time to time. Conferences can help, but sometimes it is as simple as having a supportive event to remind us why we became designers in the first place.

Our team runs design challenges every few months. These are open-entry design competitions where any designer can sketch a design without the usual requirements. Doing pure design in a social environment is great fun and produces great work. More than once, these radical visions have inspired our partners to kick off a project and take it to market.

Lead by (visual) example

People usually think leadership means telling everyone what to do, but a more effective style is servant leadership. By solving other people’s problems, by making them successful, you become a great leader. In the spirit of user-centered design itself, designers can use their skills to help the people on their team.

Yes, this is more effort, but it is the kind of effort we got into design for. You don’t need to be bitten by a radioactive spider or come from another planet. What special powers do you have? How can you use them to help others and lead?


Illustrations by Laura Fish.

How to Determine When Customer Feedback Is Actionable

One of the riskiest assumptions for any new product or feature is that customers actually want it.

Although product leaders can propose numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.

To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.

This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, it is clear that there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.

Quantify risk tolerance

You’ve probably been on one end of an argument that cited a “statistically significant” finding to support a course of action. The problem is that statistical significance is often equated to having relevant and substantive results, but neither is necessarily the case.

Simply put, statistical significance exclusively refers to the level of confidence (measured from 0 to 1, or 0% to 100%) you have that the results you obtained from a given experiment are not due to chance. Statistical significance alone tells you nothing about the appropriateness of the confidence level selected nor the importance of the results.
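
To make that definition concrete, here is a minimal sketch in Python (the conversion counts are invented for illustration) that computes the confidence that an A/B difference is not due to chance, using a standard two-proportion z-test:

    # Invented A/B results: did the variant really convert better,
    # or could the gap be due to chance?
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [130, 165]  # control, variant
    visitors = [1500, 1520]

    stat, p_value = proportions_ztest(conversions, visitors)
    print(f"confidence the difference is not chance: {1 - p_value:.1%}")

    # Note what this does NOT tell you: whether 95% is the right bar
    # for your business, or whether the lift is big enough to matter.
    print("significant at the conventional 95% level:", p_value < 0.05)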

To begin, confidence levels should be context-dependent, and determining the appropriate confidence threshold is an oft-overlooked proposition that can have profound consequences. In statistics, confidence levels are closely linked to two concepts: type I and type II errors.

A type I error, or false-positive, refers to believing that a variable has an effect that it actually doesn’t.

Some industries, like pharmaceuticals and aeronautics, must be exceedingly cautious about false-positives. Medical researchers, for example, cannot afford to mistakenly think a drug has an intended benefit when in reality it does not. Side effects can be lethal, so the FDA’s threshold for proof that a drug’s health benefits outweigh its known risks is intentionally onerous.

A type II error, or false-negative, has to do with the flip side of the coin: concluding that a variable doesn’t have an effect when it actually does.

Historically, though, statistical significance has been primarily focused on avoiding false-positives (even if it means missing out on some likely opportunities), with the default confidence level at 95% for any finding to be considered actionable. The reality that this value was arbitrarily determined by scientists speaks more to their comfort level with being wrong than it does to its appropriateness in any given context. Unfortunately, this particular confidence level is used today by the vast majority of research teams at large organizations and remains generally unchallenged in contexts far different from the ones for which it was formulated.

[Figure: matrix visualizing Type I and Type II errors, as described in the text.]

But confidence levels should be representative of the amount of risk that an organization is willing to take to realize a potential opportunity. There are many reasons for product teams in particular to be more concerned with avoiding false-negatives than false-positives. Mistakenly missing an opportunity due to caution can have a more negative impact than building something no one really wants. Digital product teams don’t share many of the concerns of an aerospace engineering team and therefore need to calculate and quantify their own tolerance for risk.

To illustrate the ramifications that confidence levels can have on business decisions, consider this thought exercise. Imagine two companies, one with outrageously profitable 90% margins, and one with painfully narrow 5% margins. Suppose each of these businesses is considering a new line of business.

In the case of the high margin business, the amount of capital they have to risk to pursue the opportunity is dwarfed by the potential reward. If executives get even the weakest indication that the business might work they should pursue the new business line aggressively. In fact, waiting for perfect information before acting might be the difference between capturing a market and allowing a competitor to get there first.

In the case of the narrow margin business, however, the buffer before going into the red is so small that going after the new business line wouldn’t make sense with anything except the most definitive signal.

Although these two examples are obviously allegorical, they demonstrate the principle at hand. To work together effectively, research analysts and their commercially driven counterparts should have a conversation about their organization’s particular level of comfort and make statistical decisions accordingly.
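
As a back-of-the-envelope sketch of that conversation, the break-even confidence level falls directly out of margin and capital at risk. The figures below are illustrative assumptions, not numbers from the examples above:

    # Break-even probability of success: the point where the expected
    # value of pursuing the opportunity is zero.
    #   EV = p * reward * margin - (1 - p) * capital_at_risk
    def breakeven_confidence(margin, reward, capital_at_risk):
        return capital_at_risk / (reward * margin + capital_at_risk)

    # Both firms weigh the same hypothetical $1M opportunity with
    # $100k of capital at risk.
    for name, margin in [("90% margin firm", 0.90), ("5% margin firm", 0.05)]:
        p = breakeven_confidence(margin, reward=1_000_000, capital_at_risk=100_000)
        print(f"{name}: worth pursuing above ~{p:.0%} confidence")

Under these toy numbers, the high-margin firm breaks even on even a weak 10% signal, while the narrow-margin firm needs roughly 67% confidence before the pursuit makes sense.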

Focus on impact

Confidence levels only tell half the story. They don’t address the magnitude to which the results of an experiment are meaningful to your business. Product teams need to combine the detection of an effect (i.e., the likelihood that there is an effect) with the size of that effect (i.e., the potential impact to the business), but this is often forgotten on the quest for the proverbial holy grail of statistical significance.

Many teams mistakenly focus energy and resources acting on statistically significant but inconsequential findings. A meta-analysis of hundreds of consumer behavior experiments sought to qualify how seriously effect sizes are considered when evaluating research results. They found that an astonishing three-quarters of the findings didn’t even bother reporting effect sizes “because of their small values” or because of “a general lack of interest in discovering the extent to which an effect is significant…”

This is troubling, because without considering effect size, there’s virtually no way to determine what opportunities are worth pursuing and in what order. Limited development resources prevent product teams from realistically tackling every single opportunity. Consider for example how the answer to this question, posed by a MECLABS data scientist, changes based on your perspective:

In terms of size, what does a 0.2% difference mean? For Amazon.com, that lift might mean an extra 2,000 sales and be worth a $100,000 investment…For a mom-and-pop Yahoo! store, that increase might just equate to an extra two sales and not be worth a $100 investment.

Unless you’re operating at a Google-esque scale for which an incremental lift in a conversion rate could result in literally millions of dollars in additional revenue, product teams should rely on statistics and research teams to help them prioritize the largest opportunities in front of them.
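
The arithmetic behind that MECLABS example is simple enough to sketch. The traffic and order values below are assumptions chosen only to reproduce its orders of magnitude:

    # Projected monthly impact of the same 0.2% absolute lift at two
    # very different scales (all inputs are illustrative).
    def projected_impact(monthly_visitors, lift_abs, value_per_conversion):
        return monthly_visitors * lift_abs * value_per_conversion

    for name, visitors in [("large retailer", 1_000_000), ("mom-and-pop store", 1_000)]:
        extra = projected_impact(visitors, lift_abs=0.002, value_per_conversion=50)
        print(f"{name}: ~${extra:,.0f} of extra monthly revenue")

Pairing every statistically significant finding with an estimate like this turns prioritization into a ranking exercise rather than a debate.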

Sample size constraints

One of the most critical constraints on product teams that want to generate user insights is the ability to source users for experiments. With enough traffic, it’s certainly possible to generate a sample size large enough to pass traditional statistical requirements for a production split test. But it can be difficult to drive enough traffic to new product concepts, and it can also put a brand unnecessarily at risk, especially in heavily regulated industries. For product teams that can’t easily access or run tests in production environments, simulated environments offer a compelling alternative.

That leaves product teams stuck between a rock and a hard place. Simulated environments require standing user panels that can get expensive quickly, especially if research teams seek sample sizes in the hundreds or thousands. Unfortunately, strategies like these again overlook important nuances in statistics and place undue hardship on the user-insight generation process.

A larger sample does not necessarily mean a better or more insightful sample. The objective of any sample is for it to be representative of the population of interest, so that conclusions about the sample can be extrapolated to the population. It’s assumed that the larger the sample, the more likely it is going to be representative of the population. But that’s not inherently true, especially if the sampling methodology is biased.

Years ago, a client fired an entire research team in the human resources department for making this assumption. The client sought to gather feedback about employee engagement and tasked this research team with distributing a survey to the entire company of more than 20,000 global employees. From a statistical significance standpoint, only 1,000 employees needed to take the survey for the research team to derive defensible insights.

Within hours after sending out the survey on a Tuesday morning, they had collected enough data and closed the survey. The problem was that only employees within a few timezones had completed the questionnaire, with a solid third of the company asleep, and therefore ignored, during collection.

Clearly, a large sample isn’t inherently representative of the population. To obtain a representative sample, product teams first need to clearly identify a target persona. This may seem obvious, but it’s often not explicitly done, creating quite a bit of miscommunication for researchers and other stakeholders. What one person may mean by a ‘frequent customer’ could mean something different entirely to another person.

After a persona is clearly identified, there are a few sampling techniques that one can follow, including probability sampling and nonprobability sampling techniques. A carefully-selected sample size of 100 may be considerably more representative of a target population than a thrown-together sample of 2,000.
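
As one hedged illustration of a probability-sampling technique, a stratified sample drawn per region would have protected the survey in the story above from its timezone bias. The file and column names here are hypothetical:

    import pandas as pd

    # Hypothetical roster: one row per employee, with a "region" column.
    employees = pd.read_csv("employees.csv")

    # Draw 5% from every region so each timezone is represented in
    # proportion to its share of the workforce, not by who happened
    # to be awake when the survey went out.
    sample = employees.groupby("region").sample(frac=0.05, random_state=42)
    print(sample["region"].value_counts(normalize=True))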

Research teams may counter with the need to meet statistical assumptions that are necessary for conducting popular tests such as a t-test or Analysis of Variance (ANOVA). These types of tests assume a normal distribution, which generally occurs as a sample size increases. But statistics has a solution for when this assumption is violated and provides other options, such as non-parametric testing, which work well for small sample sizes.
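
For instance, with a handful of task timings per design, where a t-test’s normality assumption is shaky, a rank-based test such as Mann-Whitney U still applies. This sketch uses invented timings:

    from scipy import stats

    # Invented seconds-to-complete-checkout times, n=8 per design;
    # the outliers make a normality assumption dubious.
    design_a = [34, 41, 29, 120, 38, 44, 31, 95]
    design_b = [25, 28, 33, 30, 27, 45, 26, 29]

    _, t_p = stats.ttest_ind(design_a, design_b)     # assumes normality
    _, u_p = stats.mannwhitneyu(design_a, design_b)  # rank-based, non-parametric
    print(f"t-test p = {t_p:.3f}; Mann-Whitney U p = {u_p:.3f}")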

In fact, the strongest argument left in favor of large sample sizes has already been discounted. Statisticians know that the larger the sample size, the easier it is to detect small effect sizes at a statistically significant level (digital product managers and marketers have become soberly aware that even a test comparing two identical versions can find a statistically significant difference between the two). But a focused product development process should be immune to this distraction because small effect sizes are of little concern. Not only that, but large effect sizes are almost as easily discovered in small samples as in large samples.

For example, suppose you want to test ideas to improve a form on your website that currently gets filled out by 10% of visitors. For simplicity’s sake, let’s use a confidence level of 95% to accept any changes. To identify just a 1% absolute increase to 11%, you’d need more than 12,000 users, according to Optimizely’s stats engine formula! If you were looking for a 5% absolute increase, you’d only need 223 users.
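
A classical fixed-horizon power calculation makes the same point. Optimizely’s sequential stats engine computes this differently, so the exact counts below won’t match the figures above, but the shape of the result is the same:

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    analysis = NormalIndPower()
    baseline = 0.10
    for target in (0.11, 0.15):  # +1% and +5% absolute lifts
        effect = proportion_effectsize(target, baseline)
        n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8)
        print(f"{baseline:.0%} -> {target:.0%}: ~{n:,.0f} users per variation")

Either way, the lesson holds: the sample a test needs shrinks dramatically as the effect you care about grows.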

But depending on what you’re looking for, even that many users may not be needed, especially if conducting qualitative research. When identifying usability problems across your site, leading UX researchers have concluded that “elaborate usability tests are a waste of resources” because the overwhelming majority of usability issues are discovered with just five testers.

An emphasis on large sample sizes can be a red herring for product stakeholders. Organizations should not be misled away from the real objective of any sample, which is an accurate representation of the identified, target population. Research teams can help product teams identify necessary sample sizes and appropriate statistical tests to ensure that findings are indeed meaningful and cost-effectively attained.

Expand capacity for learning

It might sound like semantics, but data should not drive decision-making. Insights should. And there can be quite a gap between the two, especially when it comes to user insights.

In a recent talk on the topic of big data, Malcolm Gladwell argued that “data can tell us about the immediate environment of consumer attitudes, but it can’t tell us much about the context in which those attitudes were formed.” Essentially, statistics can be a powerful tool for obtaining and processing data, but it doesn’t have a monopoly on research.

Product teams can become obsessed with their Omniture and Optimizely dashboards, but there’s a lot of rich information that can’t be captured with these tools alone. There is simply no replacement for sitting down and talking with a user or customer. Open-ended feedback in particular can lead to insights that simply cannot be discovered by other means. The focus shouldn’t be on interviewing every single user though, but rather on finding a pattern or theme from the interviews you do conduct.

One of the core principles of the scientific method is the concept of replicability—that the results of any single experiment can be reproduced by another experiment. In product management, the importance of this principle cannot be overstated. You’ll presumably need any data from your research to hold true once you engineer the product or feature and release it to a user base, so reproducibility is an inherent requirement when it comes to collecting and acting on user insights.

We’ve far too often seen a product team wielding a single data point to defend a dubious intuition or pet project. But there are a number of factors that could and almost always do bias the results of a test without any intentional wrongdoing. Mistakenly asking a leading question or sourcing a user panel that doesn’t exactly represent your target customer can skew individual test results.

Similarly, and in digital product management especially, customer perceptions and trends evolve rapidly, further complicating data. Look no further than the handful of mobile operating systems which undergo yearly redesigns and updates, leading to constantly elevated user expectations. It’s perilously easy to imitate Homer Simpson’s lapse in thinking, “This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.”

So how can product and research teams safely transition from data to insights? Fortunately, we believe statistics offers insight into the answer.

The central limit theorem is one of the foundational concepts taught in every introductory statistics class. It states that the distribution of averages tends to be Normal even when the distribution of the population from which the samples were taken is decidedly not Normal.

Put as simply as possible, the theorem acknowledges that individual samples will almost invariably be skewed, but it offers statisticians a way to combine them to collectively generate valid data. Regardless of how confusing or complex the underlying data may be, relatively simple individual experiments can culminate in a result that cuts through the noise.
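
A ten-line simulation shows the theorem in action; the exponential population here is an arbitrary stand-in for messy real-world data:

    import numpy as np

    rng = np.random.default_rng(0)
    # A heavily skewed population: decidedly not Normal.
    population = rng.exponential(scale=10.0, size=100_000)

    # Each "experiment" is a small sample; collect the mean of each.
    sample_means = [rng.choice(population, size=50).mean() for _ in range(2_000)]

    # Individual samples are noisy and skewed, but their means cluster
    # tightly around the truth (a histogram of them would look Normal).
    print(f"population mean: {population.mean():.2f}")
    print(f"mean of sample means: {np.mean(sample_means):.2f}")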

This theorem provides a useful analogy for product management. To derive value from individual experiments and customer data points, product teams need to practice substantiation through iteration. Even if the results of any given experiment are skewed or outdated, they can be offset by a robust user research process that incorporates both quantitative and qualitative techniques across a variety of environments. The safeguard against pursuing insignificant findings, if you will, is to be mindful not to consider data to be an insight until a pattern has been rigorously established.

Divide no more

The moral of the story is that the nuances in statistics actually do matter. Dogmatically adopting textbook statistics can stifle an organization’s ability to innovate and operate competitively, but ignoring the value and perspective provided by statistics altogether can be similarly catastrophic. By understanding and appropriately applying the core tenets of statistics, product and research teams can begin with a framework for productive dialog about the risks they’re willing to take, the research methodologies they can efficiently but rigorously conduct, and the customer insights they’ll act upon.