A Beginner’s Guide to Web Site Optimization—Part 3


Web site optimization has become an essential capability for today’s conversion-driven web teams. In Part 1 of this series, we introduced the topic and discussed key goals and philosophies. In Part 2, I presented a detailed and customizable process. In this final article, we’ll cover communication planning and how to select the appropriate team and tools for the job.

Communication

For many organizations, communicating the status of your optimization tests is an essential practice. Imagine that your team has just launched an A/B test on your company’s homepage, only to learn that another team released new code the previous day that changed the homepage design entirely. Or imagine a customer support agent trying to help a user through the website’s forgot-password flow, unaware that the customer is seeing a different version because of an A/B test your team is running.


A Beginner’s Guide to Web Site Optimization—Part 2


In the previous article we talked about why site optimization is important and presented a few key goals and philosophies to impart to your team. I’d like to switch gears now and talk about more tactical matters, namely process.

Optimization process

Establishing a well-formed, formal optimization process is beneficial for the following reasons.

  1. It organizes the workflow and sets clear expectations for completion.
  2. It establishes quality control standards that reduce bugs and errors.
  3. It adds legitimacy to the whole operation, so that if stakeholders question the process, you can explain the logic behind it.


A Beginner’s Guide to Web Site Optimization—Part 1


Web site optimization, commonly known as A/B testing, has become an expected competency among many web teams, yet there are few comprehensive and unbiased books, articles, or training opportunities aimed at individuals trying to create this capability within their organization.

In this series, I’ll present a detailed, practical guide on how to build, fine-tune, and evolve an optimization program. Part 1 will cover some basics: definitions, goals and philosophies. In Part 2, I’ll dive into a detailed process discussion covering topics such as deciding what to test, writing optimization plans, and best practices when running tests. Part 3 will finish up with communication planning, team composition, and tool selection. Let’s get started!

The basics: What is web site optimization?

Web site optimization is an experimental method for testing which designs work best for your site. The basic process is simple:

  1. Create a few different design options, or variations, of a page/section of your website.
  2. Split up your web site traffic so that each visitor to the page sees either your current version (the control group) or one of these new variations.
  3. Keep track of which version performs better based on specific performance metrics.
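
To make the split in step 2 concrete, here is a minimal sketch, in Python, of how a testing tool might bucket visitors. Everything in it is illustrative: the variation names, weights, and visitor ID are made up. The key idea is that hashing a stable visitor ID makes assignment random across visitors but consistent for any one visitor, so nobody flips between versions mid-test.

```python
import hashlib

# Hypothetical variations and traffic weights (must sum to 1.0).
VARIATIONS = [("control", 0.34), ("variation_a", 0.33), ("variation_b", 0.33)]

def assign_variation(visitor_id: str, experiment: str) -> str:
    """Hash the visitor ID with the experiment name to get a stable
    number in [0, 1), then walk the cumulative weights to pick a bucket."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0x100000000  # 8 hex chars -> [0, 1)
    cumulative = 0.0
    for name, weight in VARIATIONS:
        cumulative += weight
        if point < cumulative:
            return name
    return VARIATIONS[-1][0]  # guard against floating-point rounding

print(assign_variation("visitor-1234", "homepage-hero"))
```

In practice your optimization tool handles this bucketing for you; the sketch just shows why a returning visitor always sees the same variation.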

The performance metrics are chosen to directly reflect your site’s business goals and might include how many product purchases were made on your site (a sales goal), how many people signed up for the company newsletter (an engagement goal), or how many people watched a self-help video in your FAQ section (a customer service goal). Performance metrics are often expressed as conversion rates: the percentage of visitors who performed the action being tested out of the total number of visitors to that page.
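
As a worked example of that arithmetic, the sketch below computes conversion rates for two variations and applies a two-proportion z-test, one common way of checking whether the difference is statistically meaningful. The counts are invented for illustration.

```python
from math import sqrt

# Invented counts: (conversions, visitors) for the control and a variation.
control = (180, 4000)    # 4.5% conversion rate
variation = (220, 4000)  # 5.5% conversion rate

def z_score(a, b):
    """Two-proportion z-test: how many standard errors separate the rates.
    |z| > 1.96 corresponds to roughly 95% confidence."""
    (ca, na), (cb, nb) = a, b
    pooled = (ca + cb) / (na + nb)  # overall conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb))
    return (cb / nb - ca / na) / se

print(f"control:   {control[0] / control[1]:.1%}")      # 4.5%
print(f"variation: {variation[0] / variation[1]:.1%}")  # 5.5%
print(f"z = {z_score(control, variation):.2f}")         # ~2.05, just past 1.96
```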

Optimization can be thought of as one component in the web site development ecosystem. Within optimization, the basic process is to analyze data, create and run tests, then implement the winners of those tests.

Visual of where optimization fits in site development
Optimization can be thought of as one component in the website development ecosystem.


A/B vs. multivariate

There are two basic types of optimization tests: A/B tests (also known as A/B/N tests) and multivariate tests.

A/B tests

In an A/B test, you run two or more fixed design variations against each other. The variations might differ in only one element (such as the color of a button, or swapping an image for a video) or in many elements at once (such as changing the entire page layout and design, or turning a long form into a step-by-step wizard).

Three buttons for testing, each with different copy.
Example 1: A simple A/B/N test trying to determine which of three different button texts drives more clicks.


Visuals showing page content in different layouts.
Example 2: An A/B test showing large variations in both page layout and content.


In general, A/B tests are simpler to design and analyze and also return faster results since they usually contain fewer variations than multivariate tests. They seem to constitute the vast majority of manual testing that occurs these days.

Multivariate tests

Multivariate tests vary two or more attributes on a page and test which combination works best. The key difference between A/B and multivariate tests is that the latter are designed to tease apart how two or more dimensions of a design interact with each other and lead to that design’s success. In the example below, the team is trying to figure out what combination of button text and color will get the most clicks.

Buttons with both different copy and different colors
Example 1: A simple multivariate test with 2 dimensions (button color and button text) and 3 variations on each dimension.

The simplest form of multivariate testing is the full-factorial method, which tests every combination of factors against each other, as in the example above. The biggest drawback of these tests is that they generally take longer to reach statistically significant results, since the same amount of site traffic is split across more variations than in an A/B test.
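
A quick sketch shows why traffic is the bottleneck. Enumerating the full-factorial combinations for a hypothetical version of the button example above (the texts and colors are stand-ins) yields nine test cells, each of which receives only a fraction of the traffic a single A/B variation would.

```python
from itertools import product

# Stand-in values for the two dimensions in the button example.
button_texts = ["Buy now", "Add to cart", "Get started"]
button_colors = ["green", "orange", "blue"]

cells = list(product(button_texts, button_colors))
for text, color in cells:
    print(f"{color:>6} button: {text!r}")

# 3 texts x 3 colors = 9 cells, so each cell gets ~1/9 of the traffic,
# which is why full-factorial tests take longer to reach significance.
print(len(cells))  # 9
```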

Other, fractional-factorial methods use statistics to infer the results of certain combinations without testing them directly, thereby reducing the traffic needed to test every single variation. Many of today’s optimization tools let you experiment with these different multivariate methods; just keep in mind that fractional-factorial methods are often complex, named after deceased Japanese mathematicians, and require a degree in statistics to fully comprehend. Use at your own risk.

Why do we test? Goals, benefits, and rationale

There are many benefits of moving your organization to a more data-driven culture. Optimization establishes a metrics-based system for determining design success vs. failure, thereby allowing your team to learn with each test. No longer will people argue ad nauseam over design details. Cast away the chains of the HiPPO effect—in which the Highest Paid Person in the Office determines what goes on your site. Once you have established a clear set of goals and the appropriate metrics for measuring those goals, the data should speak as the deciding voice.

Optimization can also drastically improve your organization’s product innovation process by letting you test new product ideas at scale and quickly figure out which are good and which should be scrapped. In his article “How We Determine Product Success,” John Ciancutti of Netflix describes it this way:

“Innovation involves a lot of failure. If we’re never failing, we aren’t trying for something out on the edge from where we are today. In this regard, failure is perfectly acceptable at Netflix. This wouldn’t be the case if we were operating a nuclear power plant or manufacturing cars. The only real failure that’s unacceptable at Netflix is the failure to innovate.

So if you’re going to fail, fail cheaply. And know when you’ve failed vs. when you’ve gotten it right.”

Top three testing philosophies

1. Rigorously focus on metrics

I personally don’t subscribe to the philosophy that you should test every single change on your site. However, I do believe that every organization’s web strategies should be grounded in measurable goals that are mapped directly to your business goals.

For example, if management tells you that the web site should “offer the best customer service,” your job is to then determine which metrics adequately represent that conceptual goal. Maybe it can be represented by the total number of help tickets or emails answered from your site combined with a web customer satisfaction rating or the average user rating of individual question/answer pairs in your FAQ section. As Galileo supposedly said, “Measure what is measurable, and make measurable what is not so.”

Additionally, your site’s foundational architecture should allow, to the fullest extent possible, the measurement of true conversions and not simply indicators (often referred to as macro vs micro conversions). For example, if your ecommerce site is only capable of measuring order submissions (or worse yet, leads), make it your first order of business to be able to track that order submission through to a true paid sale. Then ensure that your team always has an eye on these true conversions in addition to any intermediate steps and secondary website goals.  There are many benefits of measuring micro conversion rates, but the work must be done to map them to a tangible macro conversion or you run the risk of optimizing for a false conversion goal.

2. Nobody really knows what will win

I firmly believe that even the experts can’t consistently predict the outcome of optimization tests with anything close to 100% accuracy. This is, after all, the whole point of testing. Someone with good intuition and experience will probably have a higher win rate than others, but on any individual test, anyone can be right. With this in mind, don’t let certain members of the team bully others into design submission. When in doubt, test it out.

3. Favor a “small-but-frequent” release strategy

In other words, err on the side of only changing one thing at a time, but perform the changes frequently. This strategy will allow you to pinpoint exactly which changes are affecting your site’s conversion rates. Let’s look at the earlier A/B test example to illustrate this point.

Visuals showing page content in different layouts.
An A/B test showing large variations in both page layout and content.

Let’s imagine that your new marketing director decides that your company should completely overhaul the homepage. After a few months of work, the team launches the new “3-column” design (above-right). Listening to the optimization voice inside your head, you decide to run an A/B test, continuing to show the old design to just 10% of the site visitors and the new design to the remaining 90%.

To your team’s dismay, the old design actually outperforms the new one. What should you do? It would be difficult to simply scrap the new design in its entirety, since it was a project that came directly from your boss and the entire team worked so hard on it. There are most likely a number of elements of the new design that actually perform better than the original, but because you launched so many changes all at once, it is difficult to separate the good from the bad.

A better strategy would have been to constantly optimize different aspects of the page in small but frequent tests, gradually evolving toward a new version. This process, in combination with other research methods, would give your team a better foundation for making site changes. As Jared Spool argued in his article “The Quiet Death of the Major Relaunch,” “the best sites have replaced this process of revolution with a new process of subtle evolution. Entire redesigns have quietly faded away with continuous improvements taking their place.”

Conclusion

By now you should have a strong understanding of optimization basics and may have started your own healthy internal dialogue related to philosophies and rationale. In the next article, we’ll talk about more tactical concerns, specifically, the optimization process.

UX Researcher: A User’s Manual


This article is a guide on what to expect, and how to get the most from your UX researcher–a user manual, if you will.

You will invest a lot in your researcher and you deserve the greatest return. You should have high expectations for this critical component of your UX team, and following the recommendations presented in this article will help maximize your return.

A long and prosperous future

Congratulations on hiring a user experience design researcher! When maintained correctly, a full-time researcher will give you many years of strategic insight and validation, eliciting oohs and ahs from jealous shops that have chosen to forgo a researcher, and cheers from your many satisfied clients. There are many benefits to having a researcher on staff, including:

  • Generating insights through on-site observation
  • Validating business hypotheses through customer research
  • Discovering usability issues through user testing
  • Initiating new projects in an effort to constantly expand their interests and skills

First, let’s spend a minute discussing the return component of return on investment. Incorporating user research into your product ensures its usability. According to Forrester (2009, p. 2), product experience is what creates value and establishes power in the marketplace. Specifically, they found that companies providing a superior user experience saw:

  • 14.4% more customers willing to purchase their product
  • 15.8% fewer customers willing to consider doing business with a competitor
  • 16.6% more customers likely to recommend their product or services

Investing in a UX researcher is a key part of ensuring you provide your users with the superior experience Forrester notes as such a critical differentiator. Everything covered in this article applies to teams of researchers as well as a department of one.

Expectations

You should have high expectations for the quality and quantity of your researcher’s work. She should be a main contributor to your organization, a team player, and someone you look to for new ideas and fresh perspectives on long-standing issues. Her unique background in asking questions and finding solutions, along with the ample time she likely spends listening to your clients, gives her insight she can offer your team on how to address various issues.

You might be saying anyone can accomplish the tasks in the paragraph above. You’re correct. I’m pointing out you should expect this from your researcher fresh out of the box, no questions asked.

You might have hired your researcher with specific duties in mind; however, you should expect her to want to know what others are working on, to be a part of the bigger picture, and to ask for feedback allowing her to become more proficient at what she does.

The following are some of the key expectations you should have for your researcher.

Asking questions

Asking the right questions is a basic expectation. Don’t laugh. This is harder than it looks. Asking questions involves the preliminary step of listening to understand what the issue actually is. Not everyone can do this.

Solving a problem isn’t as simple as asking the question you want answered.

For example, your overarching question might be “Does this website work well?” You could ask 1,000 people this question, and you wouldn’t know much after counting the “yes” and “no” responses.

What you need to know is “what about this site works, what doesn’t, and why?” Responses to these questions can be obtained in a variety of ways, allowing solutions to be identified. You can rely on your researcher to determine the most appropriate questions to ask in situations like this.

Researchers spend years listening to professors, clients, peers, and stakeholders to identify the core issues to solve, as well as the questions that will provide data to find a solution. When meeting with a new client’s project staff, don’t assume your researcher isn’t engaged if she is quiet. It is likely she is observing the verbal and physical interactions in the room as she designs a plan of attack.

Navigating relevant literature

Most likely, other researchers have published findings from studies related to what your researcher will examine. Your researcher should easily navigate and compile reports and studies from the body of knowledge in UX, HCI, and other relevant fields. The fact that someone else has explored questions similar to those of a project you’re asking your researcher to tackle helps shape her thinking on how to move forward, using existing resources to their fullest potential.

Literature can serve to inspire your researcher. For example, studies of ecommerce sites suggest trust is a key factor in determining users’ purchasing behavior. If you have a client developing a site meant to provide information, not selling a product, how might trust be developed? Your researcher can use findings from ecommerce studies to shape her questions and study design and then potentially publish a report contributing to the field, beyond the needs of your client.

Using the right method

Asking the right questions and reading up on relevant literature leads to the next critical expectation for your researcher: Using the right method.

UX research is more than usability testing. Your researcher knows that methods shouldn’t dictate the questions asked, but the opposite: Methods should be tailored to gather relevant data for the questions being asked.

Picking a method is hard work; this is why you need a researcher in the first place. She has the training and experience needed to select the right method for the question being asked. Use your researcher to do this. Your researcher carries a toolbox of methods. She might have preferences, or be more comfortable with certain methods, but she should not be a one-method pony. Some researchers are on a constant quest to define or refine new methods to answer questions. These can be exciting models to work with–the sports cars of UX researchers–willing to push the pedal to the metal to see where things go.

Regardless of the amount of planning, you often find yourself in a situation less than the ideal one written up in a methods textbook. Adapting to on-the-ground scenarios is something to expect from your researcher. Whether it’s using her smartphone to record an interview when her digital voice recorder dies, or adjusting on the fly when a busy client decides they only have 45 minutes to complete a 90-minute interview, your researcher should walk away from each scenario maximizing her ability to be flexible and still collect relevant data.

Translating findings

You’ve asked the right questions and selected the right method to collect data; now your researcher should serve as a translator for the application of research findings. Study results can be confusing if not interpreted appropriately. Translation includes verbal and written reports tailored to the experience and expectations of your audience. Your researcher should embrace the opportunity and challenge of making the results of her labor relevant to her peers.

Silo-busting

Researchers should come with the ability to break down silos, serving as ambassadors internally and externally, across teams and projects. Researchers are often deployed with surgical precision at specific intervals in a project timeline. This means your researcher might be actively involved in five or six projects simultaneously, giving her a breadth of insights. Few others within your organization are as able to communicate on the goals and achievements of multiple projects as she is. If findings from one study being conducted for client A would impact a recommendation for client G, your researcher should ensure everyone working with client G is aware of this.

Academia: A land far, far away

To make the best use of your researcher, it’s important to know where she comes from. Especially if she is one of the PhD models, she was likely assembled in a faraway land called “Academia.”

In Academia, your researcher gained or honed some of her most useful attributes: critical thinking; exposure to broad topics; research methods, both quantitative and qualitative; analyzing, interpreting, and presenting results; and connections with fellow researchers and academics.

Academia is the land of publish or perish. There are plenty of opportunities to give presentations to groups, write papers, teach courses, and create visual displays of data for various projects. This experience should leave your researcher well polished at speaking and presenting research in various formats well before they land at your front door. Although not all researchers are the best orators in the room, they should all be highly proficient at tailoring the message to their audience.

Additionally, your researcher has navigated an unbelievable amount of bureaucracy to escape Academia with a degree. She comes with the skills of diplomacy, patience, interpreting technical documents, and correctly filling out these documents under duress. This contributes to refining her ability to successfully reach the finish line and receive the prize. Your researcher is a doer and a finisher!

There are some things done in Academia, however, that don’t translate as well in the “real world.”

Academics have a unique language beyond the jargon typically found in professional fields. An example of research-ese: the statement “I don’t think the items in this scale are valid at measuring the factor they purport to” translates to “We might not be asking the right questions on this survey.”

Using obscure words–sometimes in different languages–becomes second nature to those moving through Academia. It is perfectly acceptable to tell your researcher she isn’t speaking your language. She should be able to translate for you; you just need to be clear when this is necessary.

Academia instills an unrealistic sense of time, as well. Your researcher may have spent one, two, or more years working on a single research project while earning her degree. Anyone who’s spent time in the real world knows you are lucky to have a timeline of one or two months to complete a study–more realistically, about three weeks.

Adjusting the timeline for conducting a study is something you can expect your researcher to come to grips with rather quickly. You might see smoke coming out of her ears as gears that have been set to snail’s pace spin at hyper speed, but trust me, the adjustment will happen.

Be clear about your expectations for timelines at the beginning of a project, particularly if your researcher is fresh out of Academia.

The attributes instilled by Academia have become ingrained in your researcher. Enjoy them while you provide coaching to help her adapt to your business’s requirements. Experiences in Academia are part of what makes your researcher quirky, unique, and invaluable to your organization.

As time passes, she will become more polished, especially if you provide her with explicit feedback on what she is doing well and what she can do to improve. Patience is key when helping your researcher transition from Academia; if you exercise it, you will find the results quite rewarding.

Care and maintenance

Addressing the following will ensure your researcher keeps running in optimal condition.

Continuous learning opportunities

Researchers have an inherent love of learning. Why else would someone voluntarily go to 20th grade? Your researcher probably believes “everyone is a lifelong learner.”

It’s critical to offer educational opportunities and training. You must allot time and money for her to attend classes and seminars on topics ranging from research methods, to statistical analysis, to how to visualize data.

You should offer these opportunities to all of your staff; learning opportunities are key for ensuring a high level of morale throughout your organization. These opportunities aren’t always costly. Many organizations offer free or low cost webinars lasting the time of a reasonable lunch break.

Membership in professional organizations

Professional organizations allow your researcher opportunities to keep a pulse on the current state of their field. Professional organizations often host events and distribute publications promoting professional development and networking among professionals.

You should provide your researcher funds to join a professional organization; however, there are organizations that do not charge a fee. For example, I am a member and current Vice Chair of PhillyCHI, the ACM-chartered professional organization serving Philadelphia and the Delaware Valley region. There’s no charge to join, and monthly events are free for anyone to attend.

I suggest encouraging your researcher to attend meetings and allowing her time to serve as a volunteer or board member of professional organizations. There are numerous legitimate professional organizations at local, national, and international levels affiliated with ACM, IxDA, UXPA, and more.

Attending conferences and workshops

There’s a subconscious desire for researchers to congregate to drink beer and exchange ideas. Attending conferences allows researchers to meet peers from around the world and across topics, to learn the state of the art in their field.

Your researcher is most likely aware of the various local UX organizations, such as ACM SIGCHI and UXPA sponsored groups, UX book clubs, and other UX meetups. Many of these groups offer workshops and one-day events that are low or no cost (thanks, sponsors!). So if you need convincing on the value of attending conferences, you can dip your toe in the water without blowing the budget. There’s also no shortage of national and international UX conferences that would satisfy your researcher’s needs. You can start with this list compiled by usertesting.com.

Besides getting a chance to feed off the ideas of others, interacting with professionals in her field, and allowing her to show off her work, there is another way of getting value from having your researcher attend conferences:

At Intuitive Company, staff give presentations on any conference they attend using company funds. This promotes the value of attending conferences to your staff, with the added benefit of allowing your researcher to present information to their peers, something most researchers already enjoy doing.

Reading

Reading was mentioned under expectations, but allowing your researcher time to read is your responsibility. She is one of those rare birds who actually recharges her batteries by reading, particularly when it relates to her research and practice interests.

Here’s a secret: You benefit from your researcher’s desire and ability to read! By allowing your researcher to read, you are actually allowing her to work, so long as you structure it correctly. For example, tell her you want her to conduct a literature review; you thereby give her permission to read while setting the expectation that her reading will produce a usable product. A literature review on a relevant topic can inform future research you engage in as well as design recommendations you make.

Win-win.

If you still can’t fathom giving your researcher time to read on the job, you should at least provide her with a book budget to purchase some of the must reads in UX.

Publishing and presenting

What good would research, professional development, conference attending, and reading do if your researcher couldn’t share her newfound knowledge with others?

Academia has hammered the need for dissemination into the fiber of your researcher’s being. Allowing time for writing and presenting is another area of maintenance that is your responsibility. You should encourage her to present at conferences and publish articles, blog posts, and white papers on relevant topics.

This is a way for her and your organization to build a strong brand in the communities you work in. For example, having your researcher cited as an expert on responsive design because she’s published on the topic is something you can include in future proposals and presentations you make to potential clients.

Conclusion

The success of your researcher is a two-way street. If you’ve already begun the journey with your researcher, this article might have highlighted expectations or maintenance items that you’ve overlooked. If so, it isn’t too late to implement change; she can adapt to it as readily as she adapted to that dead voice recorder, and you can enhance the relationship you have with her. If you haven’t started the journey, the advice provided here can help ensure you get the most from your well-maintained researcher for years to come.

What would you add or change to this manual based on your experience?

Additional resources

Forrester Report on best practices in UX (2009): https://www.adobe.com/enterprise/pdfs/Forrester_Best_Prac_In_User_Exp.pdf

Sandy Greene of Intuitive Company on evolving a creative workplace: http://boxesandarrows.wpengine.com/author/sgreene/

Five Things They Didn’t Teach Me in School About Being a User Researcher


Graduate school taught me the basics of conducting user research, but it taught me little about what it’s like working as a user researcher in the wild. I don’t blame my school for this. There’s little publicly-available career information for user researchers, in large part because companies are still experimenting with how to best make use of our talents.

That said, even as companies experiment with how to get the most from user researchers, there are a few things I’ve learned specific to the role that have held true across the diverse companies I’ve worked for. Some of these learnings were a bit of a surprise early in my career, and I hope that in sharing them I’ll save a few readers from making the career mistakes I made for lack of knowing better.

There’s a ton of variation in what user researchers do.

In my career, I’ve encountered user researchers with drastically varying roles and skillsets: many who focus solely on usability, a few who act as hybrid designers and researchers, some that are specialists in ethnography, and yet others who are experts in quantitative research. I’ve also spoken with a few who are hybrid market/user researchers, and I know of one tech company that is training user researchers to own certain product management responsibilities.

If you take a moment to write down all of the titles you’ve encountered for people who do user research work, my guess is that your list will be a long one. My list includes user experience researcher, product researcher, design researcher, consumer insights analyst, qualitative researcher, quantitative researcher, usability analyst, ethnographer, data scientist, and customer experience researcher. Sometimes companies choose one title over another for specific reasons, but most of the time they’ll use a title simply because of tradition, politics, or lack of knowing the difference.

At one company I once worked for, my title was user researcher, but I was really a usability analyst, spending 80% of my time conducting rapid iterative testing and evaluation (RITE) studies. When I accepted the job at that company, I assumed–based on my title–that I’d be involved in iterative research and more strategic, exploratory work. I quickly learned that the title was misleading and should have been usability analyst.

What does this all mean for your career?

For starters, it means you should do a ton of experimentation while in school or early on in your career to understand what type of user research you enjoy and excel at most. It also means that it’s incredibly important to ask questions about the job description during an interview to make sure you’re not making faulty assumptions, based on a title, about the work you’d be doing.

Decisions influence data as much as data influences decisions.

I used to think “the more data the better” applied to most situations–something I’ve recently heard referred to as “metrics fetishism.” I’ve now observed many situations in which people use data as a crutch, make mistakes by interpreting “objective” data incorrectly, or become paralyzed by too much data.

The truth is that there are limitations to every type of data, qualitative and quantitative. Even data lauded by some as completely objective–for example, data from website logs or surveys–oftentimes includes a layer of subjectivity.

At the beginning and end of any research project there are decisions to be made. What method should I use? What questions should I ask and how exactly should they be asked? Which metrics do we want to focus on? What data should we exclude? Is it OK to aggregate some data? What baselines should we compare to? These decisions should themselves be grounded in data and experience as much as possible, but they will almost always involve some subjectivity and intuition.

I’ll never forget one situation in which a team I worked with refused to address obvious issues and explore solutions without first surveying users for feedback (in large part because of politics). In this situation, the issues were so obvious that we should have felt comfortable using our expertise to address them. Because we didn’t trust making decisions without data in this case, we delayed fixing the issues, and our competitors gained a huge advantage. There’s obviously a lot more detail to this story, but you get the point: In this circumstance, I learned that relying on data as a crutch can be harmful.

What does this mean for your career?

Our job as user researchers is not only to deliver insights via data, but also to make sure people understand the limitations of data and when it should and shouldn’t be used. For this reason, a successful user researcher is one who’s comfortable saying “no” when research requests aren’t appropriate, in addition to explaining the limitations of research conducted. This is easier said than done, especially as a new user researcher, but I promise it becomes easier with practice.

You’re not a DVR.

Coming out of school, I thought my job as a user researcher was solely to report the facts: 5 out of 8 users failed this task, 50% gave the experience a score of satisfactory, and the like. I was to remain completely objective at all times and to deliver massive reports with as much supporting evidence as I could find.

I now think it’s old-school for user researchers to not have an opinion informed by research findings. Little is accomplished when a user researcher simply summarizes data; that’s what video recordings and log data are for. Instead, what’s impactful is when researchers help their teams prioritize findings and translate them into actionable terms. This process requires having an opinion, oftentimes filling in holes where data isn’t available or is ambiguous.

One project I supported early in my career involved a large ethnography. Six user researchers conducted over 60 hours of interviews with target users throughout the United States. Once all of the interviews were completed, we composed a report with over 100 PowerPoint slides and hours of video footage, summarizing all that was learned without making any concrete recommendations or prioritizing findings. Ultimately we received feedback that our report was mostly ignored because no one had time to read through it and it wasn’t clear how to respond to it. Not feedback you want to receive as a user researcher!

What does this mean for your career?

The most impactful user researchers I’ve encountered in my career take research insights one step further by connecting the dots between learnings and design and product requirements. You might never be at the same depth of product understanding as your fellow product managers and designers, but it’s important to know enough about their domains to translate your work into actionable terms.

Having an opinion is a scary thought for a lot of user researchers because it’s not always possible to remain 100% objective when bridging the gap between research insights and design and product decisions. But remember that there are almost always limitations and a subjective layer to data, so remaining 100% objective just isn’t realistic to begin with.

Little is accomplished when data is simply regurgitated; our biggest impact comes from contributing to the conversation with actionable insights and recommendations that help decision makers question their assumptions and biases.

Relationships aren’t optional, they’re essential.

As a student, my success was often measured by how hard I worked relative to others, resulting in a competitive environment. I continued the competitive behavior I learned in school when I first started working as a user researcher; I put my nose to the grindstone and gave little thought to relationships with my colleagues. What I quickly learned, however, is that taking time to establish coworker relationships is just as important as conducting sound research.

Work shouldn’t be a popularity contest, right? Right–but solid coworker relationships make it easier to include colleagues in the research process, transforming user research into the shared process it should be. And trust me, work is way more fun and meaningful if you enjoy your coworkers!

What does this mean for your career?

Take the time to get to know your coworkers on a personal level, offer unsolicited help, share a laugh, and take interest in the work that your colleagues do. I could share a personal example here, but instead let me refer you to Dale Carnegie’s book How to Win Friends and Influence People. Also check out Tomer Sharon’s book It’s Our Research.

Expect change–and make your own happiness within it.

Change is a constant for UX’ers. I’m on my eighth manager as a user researcher, and in my career I’ve been managed by user researchers, designers, product managers, and even someone with the title of VP of Strategic Planning. I’ve also been through four reorganizations and a layoff.

What does this mean for your career?

Change can be stressful, but when embraced and expected, you’ll find that there are benefits to change. For example, change can provide needed refreshment and new challenges after a period of stagnation. Change can also save you from a difficult project or a bad manager.

I remember a conversation with a UX leader in which he shared he once quit a job because he couldn’t get along with a peer who just didn’t get the user experience process. A few months after he quit, the peer was fired. If only he had stuck around for a while.

The U.S. Navy SEALs have a saying: “Get comfortable being uncomfortable,” which refers to the importance of remaining focused on the objective at hand in the middle of ongoing change. Our objective as user researchers is to conduct research for the purpose of improving products and experiences for people. Everything else is secondary–don’t get distracted.

For more detailed recommendations on how to deal with change as a user researcher, I highly recommend watching Andrea Lindman’s talk “Adapting to Change: UX Research in an Ever-Changing Business Environment.”

Concluding thoughts

I’ve been happy to see in the past two years that the user experience community has stepped up in making career advice more readily available (we could do even better, though). For user researchers wanting advice beyond what I’ve shared in this article, here are four of my favorite resources:

  • Judd Antin’s talk in which he covers many opportunities and challenges of doing user research: http://vimeo.com/77110204.
  • You in UX, an online career conference for user experience professionals.
  • Tomer Sharon’s book It’s Our Research.
  • A special issue of UXPA’s UX Magazine, with the theme of UX careers.

The Right Way to Do Lean Research


StartX, a nonprofit startup accelerator, recently devoted an entire day to the role of design in early-stage companies. One panel included Laura Klein, Todd Zaki-Warfel, Christina Wodtke, and Mike Long.

Each panelist had made their mark on how design is done in start-ups: Laura wrote the influential O’Reilly book UX for Lean Startups, and Todd penned the bestselling Rosenfeld Media book on prototyping. Christina has been cross-teaching design to entrepreneurs and entrepreneurship to designers at institutions such as California College of the Arts, General Assembly, Copenhagen Institute of Interaction Design, and Stanford. Mike founded an influential Lean UX community in San Francisco.

Although the conversation ranged widely, they kept coming back to research: the heart of the lean build-measure-learn cycle. As the hour-long panel drew to a close, Christina jumped up and scribbled on the board the key themes of the conversation: right questions, right people, right test, right place, right attitude and right documentation.

Below, Laura Klein expounds on these key themes of lean research. Boxes and Arrows is grateful for her time.

Right questions: Make sure you know what you need to know

Too many people just “do research” or “talk to customers” without having a plan for what they want to learn. What they end up with is a mass of information with no way of parsing it.

Sure, you can learn things just by chatting with your users, but too often what you’ll get is a combination of bug reports, random observations, feature suggestions, and other bits and bobs that will be very difficult to act on.

A better approach is to think about what you’re interested in learning ahead of time and plan the questions that you want to ask. For example, if you need to know about a particular user behavior, come up with a set of questions that is designed to elicit information about that behavior. If you’re interested in learning about the usage of a new feature, ask research participants to show you how they use the feature.

The biggest benefit to planning your research and writing questions ahead of time is that you’ll need to talk to far fewer people to learn something actionable. It will be quicker and easier to learn what you need to know, make a design change, and then test that change, since you will see patterns much more quickly when you ask everyone the same set of questions.

Right people: Talk to people like your users

Let’s say you’re building a brand new product. You want to get everybody’s opinion about it, right? Wrong! You want to get the opinions of people who might actually use the product, and nobody else.

Why? Well, it’s pretty obvious if you think about it. If you’re building a product for astronauts, you almost certainly don’t want to know whether I like the product. I’m not an astronaut. If you make any changes to your product based on anything I say, there is still no conceivable way that I’m going to buy your product. I am not your user.

Yet, this happens over and over. Founders solicit feedback about their product from friends, family, investors…pretty much anybody they can get their hands on. What they get is a mashup of conflicting advice, none of it from the people who are at all likely to buy the product. And all the time you spend building things for people who aren’t your customer is time you’re not spending building things for people who are your customer.

So, stop wasting your time talking to people who are never going to buy your product.

Right test/methodology: Sometimes prototypes, sometimes Wizard of Oz

Figuring out the right type of test means understanding what you want to learn.

For example, if you want to learn more about your user–their problems, their habits, the context in which they’ll use your product–you’re very likely to do some sort of ethnographic research. You’ll want to run a contextual inquiry or an observational study of some sort.

If, on the other hand, you want to learn about your product–whether it’s usable, whether the features are discoverable, whether parts of it are incredibly confusing–you’ll want to do some sort of usability testing. You might do task-based usability testing, where you give the user specific tasks to perform, or you might try observational testing, where you simply watch people interact with your product.

There is another type of testing that is not quite as well understood, and that’s validation testing. Sometimes I like to call it “finding out if your idea is stupid” testing. This type of testing could take many forms, but the goal is always to validate (or invalidate) an idea or assumption. For example, you might test whether people want a particular feature with a fake door. Or you might learn whether a particular feature is useful with a concierge test. Or you could gauge whether you’re likely to have a big enough market with audience building. Or you could test to see whether your messaging is clear with a five second test.

All of these approaches are useful, but the trick is to pick the right one for your particular stage of product development. A five second test won’t do you any good if what you want to learn is whether your user is primarily mobile. A concierge test doesn’t make sense for many simple consumer applications. Whatever method you use, make sure that the results will give you the insights you need in order to take your product to the next level.
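
As one illustration, a fake-door test can be as small as a single route that logs interest in a feature that doesn’t exist yet. Below is a hedged sketch using Flask; the route, log file, and copy are all hypothetical.

```python
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)

@app.route("/reports/export")
def export_fake_door():
    """The 'Export report' button exists in the UI, but the feature doesn't.
    Each click is appended to a log as a signal of demand."""
    with open("fake_door_clicks.log", "a") as log:
        log.write(f"{datetime.now(timezone.utc).isoformat()}\t"
                  f"{request.remote_addr}\texport-report\n")
    return "Report export is coming soon! Thanks for your interest."

if __name__ == "__main__":
    app.run()
```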

Right place: When do you go onsite?

If you talk to serious researchers, they will often tell you that you’ll never get good data without being in the same room with your subject. You’ll learn so much more being able to see the context in which your participant is using the product, they’ll tell you.

And they’re right. You do learn more. You also spend more. Kind of a lot more, in some cases.

So, what do you do if you don’t have an infinite budget? What do you do if you have users on multiple continents? What do you do if, in short, you are a typical startup trying to make decisions about a product before going out of business? You do what people have been doing since the dawn of time: You compromise.

Part of deciding whether or not to do remote research has to do with the difficulty of the remote research and what you need to learn. For example, it’s much harder at the moment to do remote research on mobile products, not just because there isn’t great screen sharing software but also because mobile products are often used while…well, mobile. If you simply can’t do an in person observation though, consider doing something like a diary study or tracking behaviors through analytics and then doing a follow up phone interview with the user.

Other types of research, on the other hand, are pretty trivial to do remotely. Something like straightforward, task-based web usability testing is almost as effective through screen sharing as it is in person. In some cases, it can be more effective, because it allows the participant to use her own computer while still allowing you to record the session.

Also, consider if you’re truly choosing between remote testing and in-person testing. If you don’t have the budget to travel to different countries to test international users, you may be choosing between remote testing and no testing at all. I’ll take suboptimal remote testing over nothing any day of the week.

Choosing whether your testing is going to be remote, in person, or in a lab setting all comes down to your individual circumstances. Sure, it would be better if we could do all of our testing in the perfect conditions. But don’t be afraid to take 80% of the benefit for 20% of the cost and time.

Right attitude: Listen, don’t sell

I feel very strongly that the person making product decisions should be the person who is in charge of research. This could mean a designer, a product owner, an entrepreneur, or an engineer. Whatever your title, if you’re responsible for deciding what to make next, you should be the one responsible for understanding your user’s needs.

Unfortunately, people who don’t have a lot of experience with research often struggle with getting feedback. The most common problem I see when entrepreneurs talk to users is the seemingly overwhelming desire to pitch. I get it. You love this idea. You’ve probably spent the last year pitching it to anybody who would listen to you. You’ve been in and out of VC offices, trying to sell them on your brilliant solution.

Now stop it. Research isn’t about selling. It’s about learning. Somehow, you’re going to have to change your mode from “telling people your product is amazing” to “learning more about your user and her needs.”

The other problem I see all the time is defensiveness. I know, I know. It’s hard to just sit there and listen to someone tell you your baby is ugly. But wouldn’t you really rather hear that it’s ugly before you spend several million dollars building a really ugly baby?

If you open yourself up to the possibility that your idea may be flawed, you have a chance of fixing the things that don’t work. Then your baby will be pretty, and everybody will want to buy it. Ok, the metaphor gets a little creepy, but the point is that you should stop being so defensive.

Right documentation: Record!

You should be taking all of this down. Specifically, you should be recording whatever you can. Obviously, you need to get permission if you’re going to record people, but if that’s at all possible, do it.

The main reason recording is so important is so that you can be more present while interviewing. If you’re not busy writing everything down, you can spend time actually having a conversation with the participant. It makes for a better experience for everybody.

If you can’t get everything on video, or really even if you can, it’s also good to have someone in the room with you taking extensive notes. You’re not going for a transcript, necessarily, but just having somebody record what was said and what was done can be immensely helpful in analyzing the sessions later.

Another important tactic for remembering what was said is the post-session debrief. After conducting the interview or observation, spend 15 minutes with any other observers and write down the top five or ten take-aways. Do it independently. Then, compare notes with the other observers and see if you all learned the same things from the session. You may be surprised at how often other people will have a different understanding of the same interview.

~~

Boxes and Arrows thanks Laura for sharing these insights with our readers! If you want to learn more about fast and effective research, we strongly recommend her book UX for Lean Startups: Faster, Smarter User Experience Research and Design and her talk “Beyond Landing Pages” from the 2013 Lean Startup Conference.

User Experience Research at Scale


An important part of any user experience department should be a consistent outreach effort to users both familiar and unfamiliar. Yet it is hard to establish and sustain a continued voice amid the busyness of our schedules.

Recruiting, screening, and scheduling daily or weekly one-on-one walkthroughs can be daunting for someone in a small department with more than just user research responsibilities, and the investment of time eventually outweighs the returns as both the number of participants and the size of the company grow.

This article is targeted at user experience practitioners at small- to mid-size companies who want to incorporate a component of user research into their workflow.

It first makes the case for building user research into a company’s ethos from the very start and explains why relying upon standard analytics packages is not enough. The article then addresses some of the challenges of automating, scaling, documenting, and sharing these efforts as your user base (hopefully) grows.

Finally, the article proposes a methodology that allows for an adjustable balance between a department’s user research and product design, and highlights the evolution of trends, best practices, and common pitfalls found within the user research industry, especially as they relate to SaaS-based products.

Why conduct usability sessions?

User research is imperative to the success and prioritization of any software application–or any product, for that matter. Research should be established as an ongoing cycle, one that is woven into the fabric of the company, and should never drop off nor be simply ‘tacked on’ as acceptance testing after launch. By establishing a constant stream of unbiased opinions and open lines of communication immune to politics and ever-shifting strategies, research keeps design and development efforts grounded in what should already be the application’s first priority–the user.

A primary benefit of working with SaaS products is that you’re able to gain feedback in real time when any feature is changed. You don’t have to worry about obsolete versions or download packages–web-based software enables you to change directions quickly. Combining an ongoing research effort with popular software development methods such as agile or waterfall allows for immediate response when issues with an application’s usability are found.

Different from analytics

SaaS applications are unique in that they don’t call for the same type of in-product tracking. Metrics such as page views or bounce rates are largely irrelevant, because the user could spend an entire session configuring the functions of a single feature on a single page.

For example, in our application here at Loggly, the user views an average of ~2 pages (predominantly login and then search) and spends, on average, 8x as long on search as on any other page. Progression happens within page-level functions, not among multiple pages in the application’s structure.

JavaScript-heavy applications don’t have the URL and tree structure that content-heavy sites are built around; instead, they make calls to different states of the application from within the same page.

Say your analytics package indicates that something is wrong with the setup flow or configuration screen, but you don’t yet know at what point in the process users are getting stuck.

Perhaps a button is getting click after click because it is confusing and unresponsive, not because it’s useful. Trying to solve this exclusively with an analytics package will pale in comparison to the feedback you’ll get from a single, candid user who hits the wall. As discussed later in this article, with screensharing you’re able to see the context in which the user is trying to achieve a specific task; the ‘why’ behind their confusion becomes more apparent than just the ‘what’ they are clicking on.
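
When progression happens inside a single page, you can still instrument it by emitting your own state-change events and measuring drop-off between steps. Here is a minimal sketch of that analysis; the event names, funnel steps, and data are all invented.

```python
from collections import Counter

# Invented in-app events: (user_id, step) pairs emitted as users move
# through a hypothetical setup flow, independent of any URL change.
events = [
    ("u1", "setup_start"), ("u1", "configure_source"), ("u1", "verify_data"),
    ("u2", "setup_start"), ("u2", "configure_source"),
    ("u3", "setup_start"),
    ("u4", "setup_start"), ("u4", "configure_source"),
]

FUNNEL = ["setup_start", "configure_source", "verify_data"]

# Count distinct users reaching each step, then report per-step drop-off.
reached = Counter(step for _, step in set(events))
for prev, step in zip(FUNNEL, FUNNEL[1:]):
    drop = 1 - reached[step] / reached[prev]
    print(f"{prev} -> {step}: {reached[step]}/{reached[prev]} "
          f"users ({drop:.0%} drop-off)")
```

The numbers tell you where users stall; a session with a single, candid user tells you why.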

Determining a testing audience

The first component of any research effort should be defining who you want to talk to. Ideally, you’ll have a mix of new users and veterans who can provide a well-rounded feedback loop: initial impressions of your application as well as historical perspective on its evolution and the shortcomings discovered after repeated use. Not all companies have this luxury, though.

Once in the door

Focus first on the initial steps the user has to take when interacting with your application. It seems obvious, but if these steps aren’t completed with maximum efficiency, the user will never progress to more advanced features.

Increasing the effectiveness of the flow through setup and configuration, and properly defining a measure of activation, will pay dividends to all areas of the application. Activation should be a metric that is tested, measured, and monitored closely, as it functions as a type of internal bounce rate. Ensuring that the top of the stream is sound for the majority of application users will guarantee improved usage further down the road, in the deeper, buried interactions.

These advanced features should also be tracked and measured, with correlations that start to paint a profile of conversion. Some companies define conversion as free-to-paid; others define it in a more viral sense–someone who has shared on social media or similar.

As you start to itemize these important features, you’ll get a better sense of the usage profile you’re trying to point the user toward. For example, adding a listing record or customizing a page might match the profile of someone who is primed for repeat visits, someone who has created utility and a lasting connection and is ultimately ready to convert.

Avoiding overlap

If you focus on recruiting newly signed-up users, you'll likely overlap with outbound sales efforts. Your company's sales and marketing funnel works as hard as possible to convert trial users to paid, or paid users to upgrades, so the company's priority will be conversion, not research.

Further, if a researcher reaches out for usability surveys at this point, then from the user's perspective (especially for those deemed potential high-value customers) it means different prompts for different conversations with different people from various groups within your company, all competing for spots on their calendar. This gives a hectic, frenetic impression of your company and should be avoided.

In the case of a SaaS product, the sales team has sometimes already made contact with potential customers, and many of these sales discussions involve demonstrations built around populated, best-case scenarios that showcase the full features of your product.

As a result, you may find that a participant has been able to 'peek behind the curtain' by watching the sales team give these demonstrations, skewing how much they know before finally trying the product themselves. For the inexperienced user, your goal is to capture the genuine instinct of the uninitiated, not of those who have seen the 'happy path' and are trying to retrace the steps to that fully populated view.

To make sure you're not bumping heads with the sales and conversion team, ask if you can take their castoffs: the customers they don't think will convert. You can pull these from their CRM application and automate personalized emails asking for their time. I'll outline this method in more detail in the following section, because it pertains to the veteran users as well.

Photo of people in a conference exhibit hall.
Conferences are a great way to survey new and existing users.

As described in a previous post, guerrilla testing at conferences is a great way of discovering what gets seen and what parts of the interface or concept get ignored. These participants provide honest, unbiased feedback, having been exposed to nothing beyond some initial impressions of the concept.

Desiring the messy room

But what about the users who have been using your product for months, who have skin in the game and have already put their sweat and dollars into customizing their experience? Surveying these participants lets us see where they've found utility and which areas need to be expanded. Surveying only the uninitiated won't provide feedback on the nagging functional roadblocks that are found only after repeated use. These are the participants who will provide the most useful feedback, in sessions where you can observe the environment they've created for themselves: the 'messy room.'

To make an observational research analogy, a messy room is more telling of its occupant's personality than an empty one. Given your product's limitations, how has the participant been forced to find workarounds? Despite those workarounds, they've continued to use the product, regardless of how we expected them to use it; the two can be strikingly different.

Online feedback form for Loggly UK.
Example of a feedback form, initiated via email.
User is able to schedule a 1:1 screensharing session on the confirmation page.

Automated recruitment

Find your friendly marketing representative or sales engineer at your company (or just roll your own) and discuss the best way to integrate a user experience outreach email into the company's post-funnel strategy. 'Post-funnel' here means after the trial period has long since expired and the user is either comfortable in their freemium state or fully paid up.

As mentioned earlier, you can also harvest leads from the top of the funnel among the discarded CRM leads. However, you'll likely have a greater percentage of sessions that are misfires: users who are indifferent or only poking around the app, without a full understanding yet of what it might do. Thankfully, the opt-in approach to participation filters most of this out.

Focusing again on recruiting the veteran, experienced users: another, more complex scenario is to trigger the UX outreach email once a specific set of features has been used, giving off the desired signature of an advanced, informed user.

From a purely legacy-based perspective, six months of paid, active use should be enough time to establish a relationship with a piece of software, whether the user loves it or hates it. If there is enough insight on the analytics side of the sales process, it would behoove you to also make sure the user has logged in a minimum number of times across those six months (or however long you allow your users to mature).
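
As an illustration, the eligibility check can be a simple filter over a CRM export by plan, tenure, and login count. This is a hedged sketch: the field names and thresholds are assumptions, not a real CRM schema.

```python
# Hypothetical sketch: select veteran users for UX outreach
# (paid plan, six-plus months of tenure, and a minimum login count).
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=182)
MIN_LOGINS = 20  # arbitrary threshold; tune to your product

customers = [
    {"email": "vet@example.com",  "plan": "paid", "signup": date(2014, 1, 10), "logins": 45},
    {"email": "new@example.com",  "plan": "paid", "signup": date(2014, 9, 1),  "logins": 3},
    {"email": "free@example.com", "plan": "free", "signup": date(2013, 6, 5),  "logins": 80},
]

def is_veteran(customer, today=date(2014, 10, 1)):
    """True if the customer matches the advanced-user signature."""
    return (customer["plan"] == "paid"
            and today - customer["signup"] >= SIX_MONTHS
            and customer["logins"] >= MIN_LOGINS)

outreach_list = [c["email"] for c in customers if is_veteran(c)]
print(outreach_list)  # ['vet@example.com']
```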

Outreach emails triggered through the CRM should empower the recipient to make the product better, both for themselves and for their fellow customers. Netflix does a great job of this by continually asking about streaming quality or delays in the arrival of their product.

I also recommend asking users a couple of quantitative and qualitative questions, as this is a metric you should be capturing for your greater UX efforts already. These questions follow the general guidelines of the System Usability Scale (SUS), practices that have been around for decades. Make the questions general enough that they can be reused and compared going forward, without fear of needing to move the goalposts when features or company priorities change.

Screen grab of the user's desktop.
A peek into an active user’s work environment.

When engineering this survey, be sure to track which tier of customer is filling it out, because both their experience and their expectations could be wildly different. Remember also to capture the user's email address as a hidden field so you can cross-reference it against any CRM or analytics packages that already identify existing customers.

Setting boundaries

It depends on the complexity of your product, but typically 20-30 minutes is enough time to cover at least the main functional areas. Any longer, and you may find people unwilling to fit an entire hour-long block into their schedule. If the recorded sessions are kept to a half-hour, I find that $25 is sufficient compensation for the time, but your results may certainly vary.

In any type of session, do reiterate that this is neither a sales call nor a support call: you're researching how to make the product better. However, you should be comfortable steering participants away from (or sometimes toward) workarounds that optimize their experience and give them greater value of use.

Tools of the trade

To implement the questionnaire, I hacked the HTML/CSS from a Google Form so that it lives as a self-hosted page but still pushes results, via the matching form and input IDs, through to the extensible Google Spreadsheet.

There are a few tutorials that explain how to retain your branding while using Google's services. I went through the trouble so I could share the URL of either the form or the raw results with anyone, with no account or login needed. This will become more important when we discuss the sharing component of these user research efforts. Although closed systems like SurveyMonkey or Wufoo are easy to get up and running, they don't compare to the extensibility of a raw, hosted result set.
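
The trick, in essence, is that a Google Form will accept a plain HTTP POST to its formResponse endpoint. Here is a minimal sketch of that submission; the form ID and the entry.* field IDs are hypothetical, so pull the real ones from your own form's page source.

```python
# Hypothetical sketch: submit a self-hosted questionnaire's answers to the
# Google Spreadsheet backing a Google Form, by matching its input IDs.
import requests

FORM_URL = "https://docs.google.com/forms/d/e/YOUR_FORM_ID/formResponse"

payload = {
    "entry.1111111": "4",                 # a 1-5 SUS-style rating
    "entry.2222222": "Search is great, but setup was confusing.",
    "entry.3333333": "user@example.com",  # hidden email field for cross-referencing
}

response = requests.post(FORM_URL, data=payload, timeout=10)
response.raise_for_status()  # on success, the row appears in the spreadsheet
```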

Insert a prompt at the end of the questionnaire inviting the user to participate in a compensated user research session, linking to a scheduling application such as Calendly. This application has been indispensable for opt-in mass scheduling like this. Features such as Google Calendar syncing, time zone conversion, daily session caps, email reminders, and custom messaging are all imperative for a public-facing scheduling board. Anyone can grab a 30-minute time slot from your calendar with just your custom URL, embeddable at the end of your questionnaire.

To really scale this user research effort to the point where it can be automated, you cannot spend time negotiating mutually available times, converting time zones, and following up with confirmations. Calendly lets you cap the number of participants who can grab blocks of your time, so you can set a maximum number of sessions per day and prevent a complete overload of bookings in your schedule.

As part of the scheduling flow within Calendly, a customizable input field asks the participant for their Skype handle so you can screen share together. I'd advise practitioners to create a separate Skype account for this usability effort: with every session, you'll add more and more seemingly random contacts, and any semblance of organization in your personal contact list will be gone.

Screen grab of Calendly booking utility.
Calendly booking utility – a publicly accessible reservation system.


Once the user is on the Skype call, ask for permission to record it, and give a disclaimer that their information will be kept private and shared with no one outside the company. You might also mention up front that you'll be happy to direct any support questions that come up to the proper technicians.

Permissions granted, be sure to reiterate the purpose and goal of the call, and give the participant license to say whatever they want, good or bad: you want to hear it, and your feelings won't be hurt if they have frustrations or complaints about certain approaches or features of your product.

For recording the call, there are plenty of options out there, but I find SnagIt is a good tool for capturing video, especially since the resolution and dimensions of the screen share tend to change with the participant's monitor size. When compressing the output, a slow frame rate of 5-10 fps should suffice, saving you considerable file size when managing these large recordings.

Tagging annotations

When you’re walking the participant through the paces of the survey, be sure to annotate the time started and any high/lowlights you see along the way. While in front of your desktop, a basic note-taking utility application (or even pad and paper) should suffice. This will allow you to go back after the survey is finished and pull quotes for use elsewhere, such as powerpoint presentations or similar.

I always write a running diary of the transcript, plus a summary at the end covering which areas of the application we explored and what feedback we gathered. Summarizing the typed transcript and posting the associated video files should take no more than 10 minutes, which keeps your total per-participant time (including processing) under an hour each: certainly manageable as part of your greater schedule.

Share the love (or hate)

I want these sessions to be available for the executive and product management teams to refer to in their prioritization strategy. Setting up an instance of MAMP/WordPress on a local box (we're using one of the Mac Minis that power a dashboard display) lets me pass the link around internally without dealing with the issues of uploading large video files, and it alleviates any permissions concerns about these sessions being out in the wild.

Screen grab of the session archive interface.
Our UX session archive, with hundreds of recorded and tagged sessions.

It's also important to tag the posts attached to these files when you upload them. Tagging allows faster indexing when you're trying to find evidence about a certain feature or function. Insert your written summary into the post content, and you'll be able to search on the memorable quotes you wrote down.

These resources can be very good for internal motivation, especially among the engineers, who don't often get to see people using the product they continually pour themselves into. They'll also resonate with the product team, who will see first-hand what needs to be re-prioritized for the next sprint.

After a while, you'll build a great library of clips to draw knowledge from. There's also a certain satisfaction in watching the product's interface evolve through these screen grabs: what was shown to be confusing at one time may now be fixed!

Follow-up

Participant compensation can be fulfilled through Amazon or other online retailers; you can send a gift card to an email address, which you can pull as a hidden field from the spreadsheet of user inputs. Keep a running list of those you've reached out to and those who have responded.
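
For example, if you export the response spreadsheet as a CSV, collecting the addresses for gift-card fulfillment takes only a few lines. A sketch, assuming a hypothetical file and column name that match your form's hidden field:

```python
# Hypothetical sketch: pull participant emails from the exported survey CSV.
import csv

with open("responses.csv", newline="") as f:
    reader = csv.DictReader(f)
    emails = sorted({row["Email"].strip() for row in reader if row.get("Email")})

# Feed this list into gift-card fulfillment and your outreach tracking sheet.
for email in emails:
    print(email)
```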

You might also incorporate contacts met during the sessions described in the Guerrilla Usability at Conferences article, so you can follow up and recruit again at next year's conference. After enough participants and feedback, think about establishing a customer experience council that you can approach with specific requests and outreach, even for quick vetting of opinions.

Conclusion

This article first outlined the strategies and motivations behind the research, advocating an automated workflow of continually scheduled screen shares with customers rather than recruiting participants individually. The methodology was then broken down into distinct steps: recruiting via email, gathering quantitative and qualitative feedback, and automating the opt-in booking of the sessions themselves. Finally, the article discussed how best to organize and leverage this content internally, so that everyone might benefit from your process.

User research is imperative to the success and prioritization of any software application (or any product, for that matter). Yet too often we forget to consume our own product. Whether it's server log management, as in my case, or apartment listings or ecommerce purchases, shake off complacency and try to spend 30 minutes a week accomplishing typical user tasks from start to finish.

Also make it a point to conduct some of these sessions with the people you work alongside; you'll be surprised what you can find through simple repetition with a fresh set of eyes and ears. The research process and its dependencies do not have to be as intricate as the one outlined above.


When your company starts to incorporate user opinion into its design and development workflow, it will begin to pay dividends, both in the perceived usability of your application and in the gathered metrics of user satisfaction.


Honing Your Research Skills Through Ad-hoc Contextual Inquiry

by:   |  Posted on

It’s common in our field to hear that we don’t get enough time to regularly practice all the types of research available to us, and that’s often true, given tight project deadlines and limited resources. But one form of user research–contextual inquiry–can be practiced regularly just by watching people use the things around them and asking a few questions.

I started thinking about this after a recent experience returning a rental car to a national brand at the Phoenix, Arizona, airport.

My experience was something like this: I pulled into the appropriate lane and an attendant came up to get the rental papers and send me on my way. But, as soon as he started, someone farther up the lane called loudly to him saying he’d been waiting longer. The attendant looked at me, said “sorry,” and ran ahead to attend to the other customer.

A few seconds later a second attendant came up, took my papers, and jumped into the car to check it in. She was using an app on a tablet attached to a large case with a battery pack, which she carried over her shoulder. She started quickly tapping buttons, but I noticed she kept navigating back to the previous screen to tap another button.

Curious being that I am, I asked her if she had to go back and forth like that a lot. She said “yes, I keep hitting the wrong thing and have to go back.”

Continue reading Honing Your Research Skills Through Ad-hoc Contextual Inquiry

Three Ways to Improve Your Design Research with Wordle

by:   |  Posted on

“Above all else show the data.”
–Edward Tufte

Survey responses. Product reviews. Keyword searches. Forums. As UX practitioners, we commonly scour troves of qualitative data for customer insight. But can we go faster than line-by-line analysis? Moreover, how can we provide semantic analysis to project stakeholders?

Enter Wordle. If you haven’t played with it yet, Wordle is a free Java application that generates visual word clouds. It can provide a compelling snapshot of user feedback for analysis or presentation.

Using Wordle for content strategy

Wordle excels at comparing company and customer language. Here's an example featuring one of Apple's crown jewels, the iPad. This text comes from the official iPad Air web page, after common words are removed and the rest are stemmed:

iPad Air Wordle

Apple paints a portrait of exceptional “design” with great “performance” for running “apps.” Emotive adjectives like “incredible,” “new,” and “Smart [Cover]” are thrown in for good measure. Now compare this to customer reviews on Amazon.com:

Wordle of iPad Air customer reviews on Amazon.com.

To paraphrase Jakob Nielsen, systems should speak the user’s language. And in this case, customers speak more about the iPad’s “screen” and “fast[er]” processor than anything else. Apps don’t even enter the conversation.

A split test on the Apple website might be warranted. Apple could consider talking less about apps, because users may consider them a commodity by now. Also, customer lingo should replace engineering terms. People don’t view a “display,” they look at a “screen.” They also can’t appreciate “performance” in a vacuum. What they do appreciate is that the iPad Air is “faster” than other tablets.

What do your company or clients say on their "About Us," "Products," or "Services" web pages? How does it compare to what users are discussing?

Using Wordle in comparative analysis

Wordle can also characterize competing products. For example, take Axure and Balsamiq, two popular wireframing applications. Here are visualizations of recent forum posts from each website (again, common words removed or stemmed).

Axure Wordle

Balsamiq Wordle

Each customer base employs a distinct dialect. In the first word cloud, Axure users speak programmatically about panels (Axure’s building blocks), widgets, and adaptive design. In the Balsamiq cloud, conversation revolves more simply around assets, text, and projects.

These word clouds also illustrate product features. Axure supports adaptive wireframes; Balsamiq does not. Balsamiq supports Google Drive; Axure does not. Consider using Wordle when you want a stronger and more immediate visual presentation than, say, a standard content inventory.

Beyond comparative analysis, Wordle also surfaces feature requests. The Balsamiq cloud contains the term “iPad” from users clamoring for a tablet version. When reviewing your own Wordle creations, scan for keywords outside your product’s existing features. You may find opportunities for new use cases this way.

Using Wordle in iterative design

Finally, word clouds can be compared over time. This is helpful when you're interested in trends between time intervals or product releases.

Here’s a word cloud generated from recent Google Play reviews. The application of interest is Temple Run, a game with over 100 million downloads:

Temple Run Wordle

As you can see, players gush about the game. It’s hard to imagine better feedback.

Now let’s look at Temple Run 2, the sequel:

Temple Run sequel Wordle

Still good, but the phrase “please fix” clearly suggests technical problems. A user researcher might examine the reviews to identify specific bugs. When comparing word clouds over time, it’s important to note new keywords (or phrases) like this. These changes represent new vectors of user sentiment.

Word clouds can also be generated at fixed time intervals, not just at software releases. Sometimes user tastes and preferences evolve without any prompting.

Summary

Wordle is a heuristic tool that visualizes plain text and RSS feeds. It can be quite convenient for UX practitioners evaluating customer feedback, and when seen by clients and stakeholders, the immediacy of a word cloud is more compelling than a typical PowerPoint list. However, keep the following in mind when you use Wordle:

  • Case sensitivity. Wordle counts "Good" and "good" separately, so normalize your words to lower (or upper) case.
  • Stemming. Stem any significant words in your text blocks so that variants like "fast" and "faster" are counted together (see the preprocessing sketch below).
  • Accuracy. You can't get statistical confidence from Wordle. However, it accepts essentially unlimited text input, so copy in as much text as possible for best results.
  • Negative phrases. Wordle won't distinguish positive from negative phrasing: "good" and "not good" count as two instances of the word "good."
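
Since the case and stemming cleanup is tedious by hand, here is a minimal preprocessing sketch, assuming NLTK is installed (pip install nltk) for its stopword list and Porter stemmer:

```python
# Sketch: lowercase, drop common words, and stem the rest before pasting
# the output into Wordle.
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)  # one-time fetch of the stopword list

def prepare_for_wordle(raw_text: str) -> str:
    stemmer = PorterStemmer()
    common = set(stopwords.words("english"))
    words = re.findall(r"[a-z']+", raw_text.lower())  # normalize case
    return " ".join(stemmer.stem(w) for w in words if w not in common)

reviews = "The screen is good, but the setup was not good at all."
print(prepare_for_wordle(reviews))  # "screen good setup good"
```

Note that this does nothing for the negative-phrase caveat above: "not" is a stopword, so "not good" still surfaces as "good."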

That’s it. I hope this has been helpful for imagining text visualizations in your work. Good luck and happy Wordling.

Guerrilla Usability at Conferences

by:   |  Posted on

Does your company have display booths at trade shows and conferences? Typically, these are marketing-dominated efforts, but if you can make the case to travel, working the booth can double as user research. Here's how I've done it.

Positioning and justification

At times it can be a hard internal sell to justify the costs and diversion of taking your one- or two-person show on the road, all the while piggybacking on another department's efforts. Yet standing on your feet for 12 hours a day doubles as a high-intensity 'product booth-camp.' Say what you will about sales folks, but they are well trained in knowing how to answer any question that comes their way (or finding someone who can). As an in-house UX professional, the more I understand technically about our SaaS product, the more context I have about our users' needs.

I've found that having prospective customers participate in a usability session is a great way to show that we're taking the time to invest in them and their opinions of the product. Specific features that were proposed as small sound bites of feedback during these sessions have been rolled into our application in the very next sprint. It shows we were listening, and it makes a great justification for a follow-up phone call.

Recruiting and screening

To recruit, I scan Twitter for people tweeting that they're excited about attending the upcoming conference. I cross-reference their Twitter handles with their names on LinkedIn to see whether, based on job title and industry, they would be good participants.

I reach out to see if they'd be willing to sign up for a slot, proposing times between presentation sessions or before/after lunch so as not to conflict with their conference attendance.

Because the expo halls are generally open the entire day, even when no one is booked for a specific slot, I also grab people just milling about to keep the sessions going. If you do this, be sure to quickly scan their badge: where they work gives you a good sense of what they do and what knowledge they might have.

Booking

For the time bookings, I find that Calendly.com is a flexible, free, user-friendly way to book slots with strangers, using just a URL and no account sign-ups. In addition to custom time buckets (18 minutes, anyone?), Calendly provides the option of a buffer increment after every session, so I can take notes and regroup.

Screen shot of a calendar with appointments booked.
Pick a time, (most) anytime.

Calendly does a good job of reminding participants when to show up and how to find me (all the important things), and it integrates well with all the major calendaring applications.

Come conference time, I have a slate of appointments, along with contact information and reminders of when participants are coming. It couldn't be easier. If expo hall hours change, I can easily message participants to let them know of the reschedule.

Duration

In a normal, controlled setting, I would typically want a full hour with a participant to properly delve into the subject matter and work through a number of different tasks and scenarios. "Pick a few and grade on a curve," as Nielsen once said.

However, with the participant's attention scattered by the sensory overload of the conference floor, anything more than 20 minutes starts to feel too long. At conferences, you're going for quantity over quality. One advantage of this staccato method: when you find a vein of usability you want to explore in more depth and detail, there's likely another participant right around the corner (either scheduled or random) to confirm or refute the notion.

Script and tone

The main challenge of this technique is that in the role of testing moderator you're not supposed to 'sell' but rather to guide and respond. I wear many hats when working a booth; when not conducting these sessions, I sell the product alongside marketing.

As a result, 90% of the conversations in the booth are indeed sales, and switching roles so quickly is sometimes hard. I have to check myself when the testing script bleeds into 'did you know that there are these features…', because after three-plus days and what feels like a thousand conversations, I tend to put my conversations on a programmed sales loop, letting my brain rest a bit by running off a script.

A pre-written task list helps keep me on point as a moderator. However, given the variety among participants, I use the script much more as a guide than a mandate.

As with any usability session, I let participants veer into whatever area of the app interests them most and bring them back to the main road ever so subtly. With so many participants in such a short period of time, these unintended diversions sometimes become part of the next participant's testing script, since it's easy to quickly validate or refute any prior assumptions.

Tools

Following the 'guerrilla gorilla' theme of this article, I use Silverback for recording sessions. Silverback is a lightweight, low-cost UX research tool that works very well.

At one event, finding myself without the Bluetooth remote that drives Silverback's built-in marker/highlight feature, I paired an iPhone with an app called HippoRemote. Meant initially to provide 'layback' DVR/TV functionality, Hippo also supports custom macros, letting you develop third-party libraries.

Integrated with Silverback, this meant Hippo could mark the start of new tasks, highlight sound bites, and start/stop recording: all the things the Apple Remote should have done natively.

Despite some of the challenges in peripherals, Silverback is absolutely the right tool for the job. It’s lightweight, organized, and marks tasks and highlights efficiently.

Screen grab of the Silverback UI
Silverback UI

I recommend a clip-on or directional microphone, given the background noise from the conference floor. Any isolation you can provide for the participant's voice will save you time in the long run, because you won't have to scrub the audio in post-processing. Moving the sessions somewhere quiet is a hard proposition, since the center of activity is where the impromptu recruitment tends to occur.

Wi-Fi

With a data-intensive SaaS product, the biggest challenge comes when trying to use the conference wi-fi. With attendees swamping the access points, there is no guarantee that I can pair the testing laptop and the iPhone used for marking, because both need to be on the same network router to integrate with Silverback.

An ad-hoc network on the Mac won't work, because I still need web access to use the application. Using my mobile phone as an access point brings bandwidth constraints, and choppy downloads are not a good reflection of the speed of our application.

Unfortunately, then, every session begins with an apology for how slowly the application is performing on the shared conference wi-fi. A high-speed private access point or a hardline into your booth cures all of these issues and would be worth the temporary investment for sales demonstrations and usability sessions alike.

Summary

There are a few adaptations we, as usability professionals, have to make from a traditional sit-down, one-way-glass setting. Conference booth testing is a much more informal process, with an emphasis on improvisation and repetition. Some of the tools and methods used in guerrilla testing certainly are not as proven or stable, but the potential recruitment numbers outweigh the inconveniences of a non-controlled setting.

From an educational standpoint, being inside the booth for days at a time will raise your knowledge level considerably. You'll hear, again and again, the kinds of questions and dialog prospective customers have about the product, and you'll start to recognize the pain points coming from the industry.

After a half-dozen conferences, you'll start to understand the differences in the average participant. Among technology-centric attendees, some conferences provide a recruitment base of high-level generalists, while others draw people who are executionally closer to the ground and detail-oriented. I tailor my scripts accordingly, focusing on principles and concepts with the generalists and on accomplishing specific tasks with the more programmatic participants.

One good thing about working for Loggly, over here in the startup world, is the ability to create paths and practices where there were none before. Pairing with the marketing team, using a portion of the presentation table to recruit participants off the expo hall floor, and sitting them down for a quick walkthrough of the product is a great way to become inspired about what you do and who you're working for. As someone who still gets excited to travel, meet new people, and play off crowds, conducting guerrilla usability in front of my customers, peers, and co-workers is always a highlight for me.