10 Practical Tips for Increasing the Impact of Your Research Insights

User experience (UX) researchers tasked with improving customer-facing products face many challenges on a daily basis—perhaps none more daunting than translating their research insights into positive change. This article presents 10 tips I have learned over the course of my career to help UX researchers increase the impact of their research insights in applied settings. These tips are intended primarily for in-house research teams, but they may apply to consultancies as well.

When Words Are Not Enough

A frequently raised objection to qualitative research, UX research included, is that its conclusions are drawn from participants' self-reported declarations. However, there are methods that capture participants' real behaviors, and they can easily be built into a research scenario.

During exploratory research, respondents are often unable to articulate their needs or opinions. In usability tests or satisfaction surveys, meanwhile, respondents' answers are often limited to vague opinions that, unless the moderator explores them further, don't yield much data.

Respondents also often hide their opinions: something feels "not quite right" to say, something makes them feel ashamed, or their behavior is governed by mechanisms they don't even perceive. After all, who would admit to holding certain prejudices or desires that are not fully socially accepted?

How, then, does one discover respondents' real opinions?

User Research With Small Business Owners: Best Practices and Considerations

The majority of our work at Google has involved conducting user research with small business owners: the small guys that are typically defined by governmental organizations as having 100 or fewer employees, and that make up the majority of businesses worldwide.

Given the many hurdles small businesses face, designing tools and services to help them succeed has been an immensely rewarding experience. That said, the experience has brought a long list of challenges, many stemming from the fact that small business owners are constantly on call and strapped for time. When it comes to user research, the common response from small business owners and employees is, “Ain’t nobody got time for that!”

To help you overcome common challenges we’ve faced, here are a few tips for conducting successful qualitative user research studies with small businesses.

How to Determine When Customer Feedback Is Actionable

One of the riskiest assumptions for any new product or feature is that customers actually want it.

Although product leaders can propose numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.

To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.

This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge is this: how do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, I can say there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.

Your Guide to Online Research and Testing Tools

The success of every business depends on how well it meets its customers’ needs. To do that, it is important to optimize your offer, your website, and your selling methods so that your customers are satisfied. The fields of online marketing, conversion rate optimization, and user experience design offer a wide range of online tools that can guide you through this process smoothly. Many companies use only the one or two tools they are familiar with, but that might not be enough to gather the data necessary for improvement. To help you better understand which tool is valuable to use, and when, I created a framework that can help in your assessment. Once you broaden your horizons, it will be easier to choose the set of tools aligned with your business’s needs.

Online Surveys On a Shoestring: Tips and Tricks

Design research has always been about qualitative techniques. Increasingly, our clients ask us to add a “quant part” to projects, often without much or any additional budget. Luckily for us, there are plenty of tools available to conduct online surveys, from simple ones like Google Forms and SurveyMonkey to more elaborate ones like Qualtrics and Key Survey.

Whichever tool you choose, there are certain pitfalls in conducting quantitative research on a shoestring budget. Based on our own experience, we’ve compiled a set of tips and tricks to help avoid some common ones, as well as make your online survey more effective.

We’ve organized our thoughts around three survey phases: writing questions, finding respondents, and cleaning up data.

Creativity Must Guide the Data-Driven Design Process

Collecting data about design is easy in the digital world. We no longer have to conduct in-person experiments to track pedestrians’ behavior in an airport terminal or the movement of eyeballs across a page. New digital technologies allow us to easily measure almost anything, and apps, social media platforms, websites, and email programs come with built-in tools to track data.

And, as of late, data-driven design has become increasingly popular. As a designer, you no longer need to convince your clients of your design’s “elegance,” “simplicity,” or “beauty.” Instead of those subjective measures, you can give them data: click-through and abandonment rates, statistics on the number of installs, retention and referral counts, user paths, cohort analyses, A/B comparisons, and countless other analytical riches.

After you’ve mesmerized your clients with numbers, you can draw a few graphs on a whiteboard and begin claiming causalities. Those bad numbers? They’re showing up because of what you told the client was wrong with the old design. And the good numbers? They’re showing up because of the new and improved design.

But what if it’s not because of the design? What if it’s just a coincidence?

There are two problems with the present trend toward data-driven design: using the wrong data, and using data at the wrong time.

A Beginner’s Guide to Web Site Optimization—Part 3

Web site optimization has become an essential capability for today’s conversion-driven web teams. In Part 1 of this series, we introduced the topic and discussed key goals and philosophies. In Part 2, I presented a detailed and customizable process. In this final article, we’ll cover communication planning and how to select the appropriate team and tools for the job.

Communication

For many organizations, communicating the status of optimization tests is an essential practice. Imagine that your team has just launched an A/B test on your company’s homepage, only to learn that another team released new code the previous day that changed the homepage design entirely. Or imagine a customer support agent trying to walk a customer through the website’s forgot-password flow, unaware that the customer is seeing a different version because of an A/B test your team is running.

A Beginner’s Guide to Web Site Optimization—Part 2

In the previous article we talked about why site optimization is important and presented a few key goals and philosophies to impart to your team. I’d like to switch gears now and talk about more tactical stuff, namely, process.

Optimization process

Establishing a well-formed, formal optimization process is beneficial for the following reasons:

  1. It organizes the workflow and sets clear expectations for completion.
  2. It establishes quality control standards that reduce bugs and errors.
  3. It adds legitimacy to the whole operation, so that if stakeholders question the process, you can explain the logic behind it.

A Beginner’s Guide to Web Site Optimization—Part 1

Web site optimization, commonly known as A/B testing, has become an expected competency among many web teams, yet there are few comprehensive and unbiased books, articles, or training opportunities aimed at individuals trying to create this capability within their organization.

In this series, I’ll present a detailed, practical guide on how to build, fine-tune, and evolve an optimization program. Part 1 will cover some basics: definitions, goals and philosophies. In Part 2, I’ll dive into a detailed process discussion covering topics such as deciding what to test, writing optimization plans, and best practices when running tests. Part 3 will finish up with communication planning, team composition, and tool selection. Let’s get started!

The basics: What is web site optimization?

Web site optimization is an experimental method for testing which designs work best for your site. The basic process is simple:

  1. Create a few different design options, or variations, of a page/section of your website.
  2. Split up your web site traffic so that each visitor to the page sees either your current version (the control group) or one of these new variations.
  3. Keep track of which version performs better based on specific performance metrics.

The performance metrics are chosen to directly reflect your site’s business goals and might include things like how many product purchases were made on your site (a sales goal), how many people signed up for the company newsletter (an engagement goal), or how many people watched a self-help video in your FAQ section (a customer service goal). Performance metrics are often referred to as conversion rates: the percentage of visitors who performed the action being tested out of the total number of visitors to that page.
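
To make this concrete, here is a minimal Python sketch of steps 2 and 3: deterministically bucketing each visitor into the control group or one of two variations, then computing each version’s conversion rate. The variation names and tallies are invented for illustration.

```python
# A minimal sketch of traffic splitting and conversion-rate tracking.
# All names and numbers here are hypothetical.
import hashlib

VARIATIONS = ["control", "variation_a", "variation_b"]

def assign_variation(visitor_id: str) -> str:
    """Hash the visitor ID so each visitor always sees the same version."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return VARIATIONS[int(digest, 16) % len(VARIATIONS)]

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate: visitors who performed the action / total visitors."""
    return conversions / visitors if visitors else 0.0

# Hypothetical tallies collected while the test ran:
results = {
    "control":     {"visitors": 5000, "conversions": 150},
    "variation_a": {"visitors": 5000, "conversions": 185},
    "variation_b": {"visitors": 5000, "conversions": 140},
}
for name, r in results.items():
    print(f"{name}: {conversion_rate(r['conversions'], r['visitors']):.2%}")
```

Hash-based bucketing is one common way to keep each visitor’s assignment stable across visits without storing per-visitor state; in practice, optimization tools handle this for you.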

Optimization can be thought of as one component in the web site development ecosystem. Within optimization, the basic process is to analyze data, create and run tests, then implement the winners of those tests.

A/B vs. multivariate

There are two basic types of optimization tests: A/B tests (also known as A/B/N tests) and multivariate tests.

A/B tests

In an A/B test, you run two or more fixed design variations against each other. The variations might differ in only one element (such as the color of a button, or swapping out an image for a video) or in many elements at once (such as changing the entire page layout and design, or turning a long form into a step-by-step wizard).

Example 1: A simple A/B/N test trying to determine which of three different button texts drives more clicks.

Example 2: An A/B test showing large variations in both page layout and content.

In general, A/B tests are simpler to design and analyze and also return faster results since they usually contain fewer variations than multivariate tests. They seem to constitute the vast majority of manual testing that occurs these days.
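
Analyzing an A/B test ultimately comes down to deciding whether the difference in conversion counts is larger than chance alone would explain. As a rough illustration of one conventional approach (not tied to any particular testing tool), here is a chi-squared test on a 2x2 table of hypothetical counts, using SciPy:

```python
# A sketch of significance testing for a two-variation A/B test.
# The conversion counts are invented for illustration; requires SciPy.
from scipy.stats import chi2_contingency

# Rows: control, variation. Columns: converted, did not convert.
observed = [
    [150, 4850],  # control:   150 of 5000 visitors converted (3.0%)
    [185, 4815],  # variation: 185 of 5000 visitors converted (3.7%)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("The data don't show a significant difference yet; keep testing.")
```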

Multivariate tests

Multivariate tests vary two or more attributes on a page and test which combination works best. The key difference between A/B and multivariate tests is that the latter are designed to tease apart how two or more dimensions of a design interact with each other and lead to that design’s success. In the example below, the team is trying to figure out what combination of button text and color will get the most clicks.

Example 1: A simple multivariate test with 2 dimensions (button color and button text) and 3 variations on each dimension.

The simplest form of multivariate testing is called the full-factorial method, which involves testing every combination of factors against each other, as in the example above. The biggest drawback of these tests is that they generally take longer to reach statistically significant results, since you are splitting the same amount of site traffic between more variations than in a typical A/B test. A quick sketch after this paragraph shows how fast the combinations multiply.
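
The button texts and colors below are hypothetical, mirroring the example above:

```python
# Enumerating every combination in a full-factorial multivariate test.
from itertools import product

button_texts = ["Buy now", "Add to cart", "Get started"]  # dimension 1
button_colors = ["green", "orange", "blue"]                # dimension 2

# Full factorial: every text paired with every color -> 3 x 3 = 9 variations.
variations = list(product(button_texts, button_colors))
for i, (text, color) in enumerate(variations, start=1):
    print(f"Variation {i}: '{text}' button in {color}")
print(f"Total variations splitting the same traffic: {len(variations)}")
```

Nine variations splitting the same traffic means each one collects roughly a ninth of the visitors, which is exactly why these tests take longer to reach significance.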

Other, fractional-factorial methods use statistics to estimate the results of certain combinations, thereby reducing the traffic needed to test every single variation. Many of today’s optimization tools let you play around with these different multivariate methods; just keep in mind that fractional-factorial methods are often complex, named after deceased Japanese mathematicians, and require a degree in statistics to fully comprehend. Use at your own risk.

Why do we test? Goals, benefits, and rationale

There are many benefits to moving your organization toward a more data-driven culture. Optimization establishes a metrics-based system for determining design success versus failure, allowing your team to learn with each test. No longer will people argue ad nauseam over design details. Cast away the chains of the HiPPO effect—in which the Highest Paid Person in the Office determines what goes on your site. Once you have established a clear set of goals and the appropriate metrics for measuring them, the data should speak as the deciding voice.

Optimization can also drastically improve your organization’s product innovation process by allowing you to test new product ideas at scale and quickly figure out which are good and which should be scrapped. In his article “How We Determine Product Success,” John Ciancutti of Netflix describes it this way:

“Innovation involves a lot of failure. If we’re never failing, we aren’t trying for something out on the edge from where we are today. In this regard, failure is perfectly acceptable at Netflix. This wouldn’t be the case if we were operating a nuclear power plant or manufacturing cars. The only real failure that’s unacceptable at Netflix is the failure to innovate.

So if you’re going to fail, fail cheaply. And know when you’ve failed vs. when you’ve gotten it right.”

Top three testing philosophies

1. Rigorously focus on metrics

I personally don’t subscribe to the philosophy that you should test every single change on your site. However, I do believe that every organization’s web strategy should be grounded in measurable goals that map directly to its business goals.

For example, if management tells you that the web site should “offer the best customer service,” your job is to then determine which metrics adequately represent that conceptual goal. Maybe it can be represented by the total number of help tickets or emails answered from your site combined with a web customer satisfaction rating or the average user rating of individual question/answer pairs in your FAQ section. As Galileo supposedly said, “Measure what is measurable, and make measurable what is not so.”

Additionally, your site’s foundational architecture should allow, to the fullest extent possible, the measurement of true conversions and not simply indicators (often referred to as macro vs. micro conversions). For example, if your ecommerce site is only capable of measuring order submissions (or worse yet, leads), make it your first order of business to track each order submission through to a true paid sale. Then ensure that your team always has an eye on these true conversions in addition to any intermediate steps and secondary website goals. There are many benefits to measuring micro conversion rates, but the work must be done to map them to a tangible macro conversion, or you run the risk of optimizing for a false conversion goal.
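
As a toy illustration of the micro-versus-macro distinction, with invented figures:

```python
# A sketch of why micro conversions must be mapped to a macro conversion.
# All figures are hypothetical.
visitors = 10_000
order_submissions = 400  # micro conversion: the indicator the site sees
paid_sales = 320         # macro conversion: submissions that became revenue

print(f"Micro (submission) rate: {order_submissions / visitors:.1%}")    # 4.0%
print(f"Submission-to-sale rate: {paid_sales / order_submissions:.1%}")  # 80.0%
print(f"Macro (paid sale) rate:  {paid_sales / visitors:.1%}")           # 3.2%

# A design change that raises submissions but lowers follow-through could
# "win" on the micro rate while actually reducing paid sales.
```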

2. Nobody really knows what will win

I firmly believe that even experts can’t consistently predict the outcomes of optimization tests with anything close to 100% accuracy. This is, after all, the whole point of testing. Someone with good intuition and experience will probably have a higher win rate than others, but for any individual test, anyone can be right. With this in mind, don’t let certain members of the team bully others into design submission. When in doubt, test it out.

3. Favor a “small-but-frequent” release strategy

In other words, err on the side of only changing one thing at a time, but perform the changes frequently. This strategy will allow you to pinpoint exactly which changes are affecting your site’s conversion rates. Let’s look at the earlier A/B test example to illustrate this point.

An A/B test showing large variations in both page layout and content.

Let’s imagine that your new marketing director decides that your company should completely overhaul the homepage. After a few months of work, the team launches the new “3-column” design (above-right). Listening to the optimization voice inside your head, you decide to run an A/B test, continuing to show the old design to just 10% of the site visitors and the new design to the remaining 90%.

To your team’s dismay, the old design actually outperforms the new one. What should you do? It would be difficult to simply scrap the new design in its entirety, since it was a project that came directly from your boss and the entire team worked so hard on it. There are most likely a number of elements of the new design that actually perform better than the original, but because you launched so many changes all at once, it is difficult to separate the good from the bad.

A better strategy would have been to constantly optimize different aspects of the page in small but frequent tests, gradually evolving toward a new version. This process, in combination with other research methods, would give your team a better foundation for making site changes. As Jared Spool argued in his article “The Quiet Death of the Major Relaunch,” “the best sites have replaced this process of revolution with a new process of subtle evolution. Entire redesigns have quietly faded away with continuous improvements taking their place.”

Conclusion

By now you should have a strong understanding of optimization basics and may have started your own healthy internal dialogue related to philosophies and rationale. In the next article, we’ll talk about more tactical concerns, specifically, the optimization process.