User experience (UX) researchers tasked with improving customer-facing products face many challenges on a daily basis—perhaps none more daunting than translating their research insights into positive change. This article presents 10 tips I have learned over the course of my career to help UX researchers increase the impact of their research insights in applied settings. These tips are intended primarily for in-house research teams, but they may apply to consultancies as well.
Benjamin Franklin once said: “Tell me and I forget; teach me and I may remember; involve me and I learn.”
At the SAP Design & Co-Innovation Center (DCC), we regularly hold “Method Mondays,” a one-hour meeting series in which team members share, practice, and test different methods.
In this article, I would like to share with you the five methods that work best for us—they’re worth trying!
The majority of our work at Google has involved conducting user research with small business owners: the small guys, typically defined by government agencies as businesses with 100 or fewer employees, that make up the majority of businesses worldwide.
Given the many hurdles small businesses face, designing tools and services to help them succeed has been an immensely rewarding experience. That said, it has also brought a long list of challenges, including those that come with small business owners being constantly on call and strapped for time. When it comes to user research, the common response from small business owners and employees is, “Ain’t nobody got time for that!”
To help you overcome common challenges we’ve faced, here are a few tips for conducting successful qualitative user research studies with small businesses.
From start-ups to banks, design has never been more central to business. Yet at conference after conference, I meet designers at firms talking about their struggle for influence. Why is that fabled “seat at the table” so hard to find, and how can designers get a chair?
Designers yearn for a world where companies depend on their ideas but usually work in a world where design is just one voice. In-house designers often have to advocate for design priorities against competing demands for new features or technical changes. Agency designers can create great visions that fail to be executed. Is design just a service, or can designers* lead?
*Meaning anyone who provides the vision for a product, whether it be in code, wireframes, comps, prototypes, or cocktail napkins.
One of the riskiest assumptions for any new product or feature is that customers actually want it.
Although product leaders can propose numerous “lean” methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.
To the dismay of product teams desiring to “move fast and break things,” their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.
This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, it is clear that there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.