How to Determine When Customer Feedback Is Actionable

Written by: Naira Musallam

One of the riskiest assumptions for any new product or feature is that customers actually want it.

Product leaders can draw on numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, but anything short of launching a product or feature and monitoring its performance in the market over time is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions about when customer feedback is actionable.

To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.

This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, it is clear that there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.

Quantify risk tolerance

You’ve probably been on one end of an argument that cited a “statistically significant” finding to support a course of action. The problem is that statistical significance is often equated to having relevant and substantive results, but neither is necessarily the case.

Simply put, statistical significance exclusively refers to the level of confidence (measured from 0 to 1, or 0% to 100%) you have that the results you obtained from a given experiment are not due to chance. Statistical significance alone tells you nothing about the appropriateness of the confidence level selected nor the importance of the results.

To begin, confidence levels should be context-dependent, and determining the appropriate confidence threshold is an oft-overlooked proposition that can have profound consequences. In statistics, confidence levels are closely linked to two concepts: type I and type II errors.

A type I error, or false-positive, refers to believing that a variable has an effect that it actually doesn’t.

Some industries, like pharmaceuticals and aeronautics, must be exceedingly cautious about false-positives. Medical researchers, for example, cannot afford to mistakenly conclude that a drug has an intended benefit when in reality it does not. Side effects can be lethal, so the FDA’s threshold for proof that a drug’s health benefits outweigh its known risks is intentionally onerous.

A type II error, or false-negative, has to do with the flip side of the coin: concluding that a variable doesn’t have an effect when it actually does.

Historically, though, statistical significance has been primarily focused on avoiding false-positives (even if it means missing out on some likely opportunities), with the default confidence level at 95% for any finding to be considered actionable. The reality that this value was arbitrarily chosen by scientists speaks more to their comfort level with being wrong than it does to its appropriateness in any given context. Unfortunately, this particular confidence level is used today by the vast majority of research teams at large organizations and remains largely unchallenged in contexts far different from the ones for which it was formulated.

Matrix visualising Type I and Type II errors as described in text.

But confidence levels should be representative of the amount of risk that an organization is willing to take to realize a potential opportunity. There are many reasons for product teams in particular to be more concerned with avoiding false-negatives than false-positives. Mistakenly missing an opportunity due to caution can have a more negative impact than building something no one really wants. Digital product teams don’t share many of the concerns of an aerospace engineering team and therefore need to calculate and quantify their own tolerance for risk.

To illustrate the ramifications that confidence levels can have on business decisions, consider this thought exercise. Imagine two companies, one with outrageously profitable 90% margins, and one with painfully narrow 5% margins. Suppose each of these businesses is considering a new line of business.

In the case of the high-margin business, the amount of capital they have to risk to pursue the opportunity is dwarfed by the potential reward. If executives get even the weakest indication that the business might work, they should pursue the new business line aggressively. In fact, waiting for perfect information before acting might be the difference between capturing a market and allowing a competitor to get there first.

In the case of the narrow-margin business, however, the buffer before going into the red is so small that going after the new business line wouldn’t make sense with anything except the most definitive signal.

Although these two examples are obviously allegorical, they demonstrate the principle at hand. To work together effectively, research analysts and their commercially driven counterparts should have a conversation about their organization’s particular level of comfort and make statistical decisions accordingly.

Focus on impact

Confidence levels only tell half the story. They don’t address the magnitude to which the results of an experiment are meaningful to your business. Product teams need to combine the detection of an effect (i.e., the likelihood that there is an effect) with the size of that effect (i.e., the potential impact to the business), but this is often forgotten in the quest for the proverbial holy grail of statistical significance.

Many teams mistakenly spend energy and resources acting on statistically significant but inconsequential findings. A meta-analysis of hundreds of consumer behavior experiments sought to assess how seriously effect sizes are considered when evaluating research results. It found that an astonishing three-quarters of the studies didn’t even report effect sizes “because of their small values” or because of “a general lack of interest in discovering the extent to which an effect is significant…”

This is troubling, because without considering effect size, there’s virtually no way to determine which opportunities are worth pursuing and in what order. Limited development resources prevent product teams from realistically tackling every single opportunity. Consider, for example, how the answer to this question, posed by a MECLABS data scientist, changes based on your perspective:

In terms of size, what does a 0.2% difference mean? For Amazon.com, that lift might mean an extra 2,000 sales and be worth a $100,000 investment…For a mom-and-pop Yahoo! store, that increase might just equate to an extra two sales and not be worth a $100 investment.

Unless you’re operating at a Google-esque scale, at which an incremental lift in a conversion rate can translate into literally millions of dollars in additional revenue, product teams should rely on statistics and research teams to help them prioritize the largest opportunities in front of them.

Sample size constraints

One of the most critical constraints on product teams that want to generate user insights is the ability to source users for experiments. With enough traffic, it’s certainly possible to generate a sample size large enough to pass traditional statistical requirements for a production split test. But it can be difficult to drive enough traffic to new product concepts, and it can also put a brand unnecessarily at risk, especially in heavily regulated industries. For product teams that can’t easily access or run tests in production environments, simulated environments offer a compelling alternative.

That leaves product teams stuck between a rock and a hard place. Simulated environments require standing user panels that can get expensive quickly, especially if research teams seek sample sizes in the hundreds or thousands. Unfortunately, strategies like these again overlook important nuances in statistics and place undue hardship on the user insight generation process.

A larger sample does not necessarily mean a better or more insightful sample. The objective of any sample is for it to be representative of the population of interest, so that conclusions about the sample can be extrapolated to the population. It’s assumed that the larger the sample, the more likely it is going to be representative of the population. But that’s not inherently true, especially if the sampling methodology is biased.

Years ago, a client fired an entire research team in the human resources department for making this assumption. The client sought to gather feedback about employee engagement and tasked this research team with distributing a survey to the entire company of more than 20,000 global employees. From a statistical significance standpoint, only 1,000 employees needed to take the survey for the research team to derive defensible insights.

Within hours of sending out the survey on a Tuesday morning, they had collected enough data and closed the survey. The problem was that only employees within a few time zones had completed the questionnaire, with a solid third of the company asleep, and therefore ignored, during collection.

Clearly, a large sample isn’t inherently representative of the population. To obtain a representative sample, product teams first need to clearly identify a target persona. This may seem obvious, but it’s often not done explicitly, creating quite a bit of miscommunication between researchers and other stakeholders. What one person means by a ‘frequent customer’ may be something entirely different to another.

After a persona is clearly identified, there are a few sampling techniques that one can follow, including probability sampling and nonprobability sampling techniques. A carefully-selected sample size of 100 may be considerably more representative of a target population than a thrown-together sample of 2,000.

Research teams may counter with the need to meet statistical assumptions that are necessary for conducting popular tests such as a t-test or Analysis of Variance (ANOVA). These types of tests assume a normal distribution, which generally occurs as a sample size increases. But statistics has a solution for when this assumption is violated and provides other options, such as non-parametric testing, which works well for small sample sizes.

In fact, the strongest argument left in favor of large sample sizes has already been discounted. Statisticians know that the larger the sample size, the easier it is to detect small effect sizes at a statistically significant level (digital product managers and marketers have become soberly aware that even a test comparing two identical versions can find a statistically significant difference between the two). But a focused product development process should be immune to this distraction because small effect sizes are of little concern. Not only that, but large effect sizes are almost as easily discovered in small samples as in large samples.

For example, suppose you want to test ideas to improve a form on your website that currently gets filled out by 10% of visitors. For simplicity’s sake, let’s use a confidence level of 95% to accept any changes. To identify just a 1% absolute increase to 11%, you’d need more than 12,000 users, according to Optimizely’s stats engine formula! If you were looking for a 5% absolute increase, you’d only need 223 users.
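
Optimizely’s stats engine does sequential calculations, but you can get a feel for where numbers like these come from with the classic fixed-horizon formula for comparing two proportions. A rough sketch (the figures won’t match a sequential engine exactly, but the lesson holds: required sample size explodes as the effect you’re hunting shrinks):

// Classic fixed-horizon sample size per variant for a two-proportion test.
// Not Optimizely's sequential engine, so the exact figures differ from the
// ones above, but the relationship between effect size and sample size holds.
function sampleSizePerVariant(p1, p2, zAlpha, zBeta) {
  var variance = p1 * (1 - p1) + p2 * (1 - p2);
  var effect = p2 - p1;
  return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (effect * effect));
}

// 95% confidence (z = 1.96), 80% power (z = 0.84):
sampleSizePerVariant(0.10, 0.11, 1.96, 0.84);  // ~14,700 users per variant for a 1% lift
sampleSizePerVariant(0.10, 0.15, 1.96, 0.84);  // ~680 users per variant for a 5% lift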

But depending on what you’re looking for, even that many users may not be needed, especially if conducting qualitative research. When identifying usability problems across your site, leading UX researchers have concluded that “elaborate usability tests are a waste of resources” because the overwhelming majority of usability issues are discovered with just five testers.

An emphasis on large sample sizes can be a red herring for product stakeholders. Organizations should not be misled away from the real objective of any sample, which is an accurate representation of the identified, target population. Research teams can help product teams identify necessary sample sizes and appropriate statistical tests to ensure that findings are indeed meaningful and cost-effectively attained.

Expand capacity for learning

It might sound like semantics, but data should not drive decision-making. Insights should. And there can be quite a gap between the two, especially when it comes to user insights.

In a recent talk on the topic of big data, Malcolm Gladwell argued that “data can tell us about the immediate environment of consumer attitudes, but it can’t tell us much about the context in which those attitudes were formed.” Essentially, statistics can be a powerful tool for obtaining and processing data, but it doesn’t have a monopoly on research.

Product teams can become obsessed with their Omniture and Optimizely dashboards, but there’s a lot of rich information that can’t be captured with these tools alone. There is simply no replacement for sitting down and talking with a user or customer. Open-ended feedback in particular can lead to insights that simply cannot be discovered by other means. The focus shouldn’t be on interviewing every single user though, but rather on finding a pattern or theme from the interviews you do conduct.

One of the core principles of the scientific method is the concept of replicability—that the results of any single experiment can be reproduced by another experiment. In product management, the importance of this principle cannot be overstated. You’ll presumably need any data from your research to hold true once you engineer the product or feature and release it to a user base, so reproducibility is an inherent requirement when it comes to collecting and acting on user insights.

We’ve far too often seen a product team wielding a single data point to defend a dubious intuition or pet project. But there are a number of factors that could and almost always do bias the results of a test without any intentional wrongdoing. Mistakenly asking a leading question or sourcing a user panel that doesn’t exactly represent your target customer can skew individual test results.

Similarly, and in digital product management especially, customer perceptions and trends evolve rapidly, further complicating data. Look no further than the handful of mobile operating systems which undergo yearly redesigns and updates, leading to constantly elevated user expectations. It’s perilously easy to imitate Homer Simpson’s lapse in thinking, “This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.”

So how can product and research teams safely transition from data to insights? Fortunately, we believe statistics offers insight into the answer.

The central limit theorem is one of the foundational concepts taught in every introductory statistics class. It states that the distribution of averages tends to be Normal even when the distribution of the population from which the samples were taken is decidedly not Normal.

Put as simply as possible, the theorem acknowledges that individual samples will almost invariably be skewed, but offers statisticians a way to combine them to collectively generate valid data. Regardless of how confusing or complex the underlying data may be, by performing relatively simple individual experiments, the culminating result can cut through the noise.
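
To see the theorem at work, a few lines of code will do. A minimal simulation sketch, with a skewed exponential distribution chosen purely for illustration:

// Draw from a heavily skewed distribution: most values small, a few huge.
function skewedDraw() {
  return -Math.log(1 - Math.random());   // exponential with mean 1
}

// Average n draws.
function sampleMean(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) sum += skewedDraw();
  return sum / n;
}

// Collect 10,000 sample means of 30 draws each. Histogram `means` and you
// get the familiar bell curve centered on 1, even though no individual
// draw looks remotely Normal.
var means = [];
for (var j = 0; j < 10000; j++) means.push(sampleMean(30));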

This theorem provides a useful analogy for product management. To derive value from individual experiments and customer data points, product teams need to practice substantiation through iteration. Even if the results of any given experiment are skewed or outdated, they can be offset by a robust user research process that incorporates both quantitative and qualitative techniques across a variety of environments. The safeguard against pursuing insignificant findings, if you will, is to be mindful not to consider data to be an insight until a pattern has been rigorously established.

Divide no more

The moral of the story is that the nuances in statistics actually do matter. Dogmatically adopting textbook statistics can stifle an organization’s ability to innovate and operate competitively, but ignoring the value and perspective provided by statistics altogether can be similarly catastrophic. By understanding and appropriately applying the core tenets of statistics, product and research teams can begin with a framework for productive dialog about the risks they’re willing to take, the research methodologies they can efficiently but rigorously conduct, and the customer insights they’ll act upon.

Online Surveys On a Shoestring: Tips and Tricks

Written by: Gabriel Biller

Design research has always been about qualitative techniques. Increasingly, our clients ask us to add a “quant part” to projects, often without much or any additional budget. Luckily for us, there are plenty of tools available to conduct online surveys, from simple ones like Google Forms and SurveyMonkey to more elaborate ones like Qualtrics and Key Survey.

Whichever tool you choose, there are certain pitfalls in conducting quantitative research on a shoestring budget. Based on our own experience, we’ve compiled a set of tips and tricks to help avoid some common ones, as well as make your online survey more effective.

We’ve organized our thoughts around three survey phases: writing questions, finding respondents, and cleaning up data.

Writing questions

Writing a good questionnaire is both art and science, and we strongly encourage you to learn how to do it. Most of our tips here are relevant to all surveys, but they are particularly important for low-budget ones. When respondents are compensated only a little, if at all, good survey-writing practices matter even more.

Ask (dis)qualifying questions first

A sacred rule of surveys is to not waste people’s time. If there are terminating criteria, gather those up front and disqualify respondents as quickly as you can if they do not meet the profile. It is also more sensitive to terminate them with a message “Thank you for your time, but we already have enough respondents like you” rather than “Sorry, but you do not qualify for this survey.”

Keep it short

Little compensation means that respondents will drop out at higher rates. Focus only on what is truly important to your research questions. Ask yourself how exactly the information you collect will contribute to your research. If the answer is “not sure,” don’t ask.

For example, it’s common to ask about a level of education or income, but if comparing data across different levels of education or income is not essential to your analysis, don’t waste everyone’s time asking the questions. If your client insists on having “nice to know” answers, insist on allocating more budget to pay the respondents for extra work.

Keep it simple

Keep your target audience in mind and be a normal human being in framing your questions. Your client may insist on slipping in industry jargon and argue that “everyone knows what it is.” It is your job to make the survey speak the language of the respondents, not the client.

For example, in a survey about cameras, we changed the industry term “lifelogging” to a longer, but simpler phrase “capturing daily routines, such as commute, meals, household activities, and social interactions.”

Keep it engaging

People in real life don’t casually say, “I am somewhat satisfied” or “the idea is appealing to me.” To make your survey not only simple but also engaging, consider using more natural language for response choices.

For example, instead of using standard Likert-scale “strongly disagree” to “strongly agree” responses to the statement “This idea appeals to me” in a concept testing survey, we offered a scale “No, thanks” – “Meh” – “It’s okay” – “It’s pretty cool” – “It’s amazing.” We don’t know for sure if our respondents found this approach more engaging (we certainly hope so), but our client showed a deeper emotional response to the results.

Finding respondents

Online survey tools differ in how much help they provide with recruiting respondents, but most common tools will assist in finding the sample you need, if the profile is relatively generic or simple. For true “next to nothing” surveys, we’ve used Amazon Mechanical Turk (mTurk), SurveyMonkey Audience, and our own social networks for recruiting.

Be aware of quality

Cheap recruiting may easily result in low quality data. While low-budget surveys will always be vulnerable to quality concerns, there are mechanisms to ensure that you keep your quality bar high.

First of all, know what motivates your respondents. Amazon mTurk commonly pays $1 for the so-called “Human Intelligence Task” that may include taking an entire survey. In other words, someone is earning as little as $4 an hour if they complete four 15-minute surveys. As such, some mTurk Workers may try to cheat the system and complete multiple surveys for which they may not be qualified.

SurveyMonkey, on the other hand, claims that their Audience service delivers better quality, since the respondents are not motivated by money. Instead of compensating respondents, SurveyMonkey makes a small donation to the charity of their choice, thus lowering the risk of people being motivated to cheat for money.

Use social media

If you don’t need thousands of respondents and your sample is pretty generic, the best resource can be your social network. For surveys with fewer than 300 respondents, we’ve had great success with tapping into our collective social network of Artefact’s members, friends, and family. Write a request and ask your colleagues to post it on their networks. Of course, volunteers still need to match the profile. When we send an announcement, we include a very brief description of who we look for and send volunteers to a qualifying survey. This approach costs little but yields high-quality results.

We don’t pay our social connections for surveys, but many will be motivated to help a friend and will be very excited to hear about the outcomes. Share with them what you can as a “thank you” token.

For example, we used social network recruiting in the early stages of Purple development. When we revealed the product months later, we posted a “thank you” link to the article on our social networks. To our surprise, many remembered the survey they took and were grateful to see the outcomes of their contribution.

Over-recruit

If you are trying to hit a certain sample size of “good” data, you need to over-recruit to offset the “bad” data you’ll remove. No survey is perfect and all can benefit from over-recruiting, but it’s almost a must for low-budget surveys. There are no hard rules, but we suggest over-recruiting by at least 20% to hit the sample size you need at the end. Since the whole survey costs little, over-recruiting costs little too.

Cleaning up data

Cleaning up your data is another essential step of any survey, and it is particularly important for one on a tight budget. A few simple tricks can increase the quality of responses, particularly if you use public recruiting resources. When choosing a survey tool, check what mechanisms are available for you to clean up your data.

Throw out duplicates

As mentioned earlier, some people may be motivated to complete the same survey multiple times, even under multiple profiles. We spotted this when working with mTurk respondents by checking their Worker IDs. We had multiple cases where the same IDs were used to complete a survey several times. We ended up throwing away all responses associated with the “faulty IDs,” which gave us more confidence in our data.
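
If your survey tool lets you export raw responses, a few lines of code can do this check for you. A minimal sketch (the workerId field name is a stand-in for whatever your export calls it):

// Keep only responses whose Worker ID appears exactly once.
function removeDuplicates(responses) {
  var counts = {};
  responses.forEach(function (r) {
    counts[r.workerId] = (counts[r.workerId] || 0) + 1;
  });
  return responses.filter(function (r) {
    return counts[r.workerId] === 1;
  });
}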

Check response time

With SurveyMonkey, you can calculate the time spent on the survey using the StartTime and EndTime data. We benchmarked the average completion time by piloting the survey in the office. This benchmark can serve as a pretty robust screening mechanism.

If the benchmark time is eight minutes and you have surveys completed in three, you may question how carefully respondents were reading the questions. We flag such outliers as suspect and don’t include them in our analysis.
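
This check is just as easy to script. A minimal sketch, assuming exported startTime and endTime fields and treating anything under half the piloted benchmark as suspect (pick whatever cutoff your pilot suggests):

var BENCHMARK_MINUTES = 8;

// Flag suspiciously fast completions without deleting them outright.
function flagTooFast(responses) {
  return responses.map(function (r) {
    var minutes = (new Date(r.endTime) - new Date(r.startTime)) / 60000;
    r.suspect = minutes < BENCHMARK_MINUTES / 2;
    return r;
  });
}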

Add a dummy question

Dummy questions help filter out respondents who answer at random. They require the respondent to read carefully and then respond. People who click and type at random might answer correctly, but it is unlikely. If the answer is incorrect, this is another flag we use to mark a respondent’s data as suspect.

Low-budget surveys are challenging, but not necessarily bad, and with a few tricks you can make them much more robust. If they are used as an indicative, rather than definitive, mechanism to supplement other design research activities, they can bring “good enough” insights to a project.

Educate your clients about the pros and cons of low-budget surveys and help them make a decision whether or not they want to invest more to get greater confidence in the quantitative results. Setting these expectations up front is critical for the client, but you never know, it could also be a good tool for negotiating a higher survey budget to begin with!

Enhancing the Mind-Meld

Written by: Mark Richman

Which version of the ‘suspended account’ dashboard page do you prefer?

Version A

Version A highlights the address with black text on a soft yellow background.

Version B

Version B does not highlight the service address.

Perhaps you don’t really care. Each one gets the job done in a clear and obvious way.

However, as the UX architect of the ‘overview’ page for a huge telecom leader, it was my job to tell the team which treatment we’d be using.

I was a freelancer with only four months tenure on this job, and in a company as large, diverse, and complex as this one, four months isn’t a very long time. There are a ton of things to learn—how their teams work, the latest visual standards, expected fidelity of wireframes, and most of all, selecting the ‘current’ interaction standards from a site with thousands of pages, many of which were culled from different companies following acquisitions or created at different points in time. Since I worked off-site, I had limited access to subject matter experts.

Time with the Telecom Giant’s UX leads is scarce, but Nick, my lead on this project, was a great guy with five years at the company, much of it spent on the Overview page and similar efforts. He and I had spent a lot of phone time going over this effort’s various challenges.

Version A, the yellow note treatment, had been created to highlight the suspended location if the “Home Phone” account covered more than one address. After much team discussion, we realized that this scenario could not occur, but since the new design placed what seemed like the proper emphasis on the ‘Account Suspended’ situation, I was confident that we’d be moving forward with version A.

So, why was I surprised when Nick said we’d “obviously” go with version B?

Whenever I start with a new company, I try to do a mind meld with co-workers to understand their approach, why they made certain decisions, and learn their priorities. Unless I’m certain there is a better way, I don’t want to go in with my UX guns blazing—I want to know whether they’d already considered other solutions, and if so, why they were rejected. This is especially true in a company like Telecom Giant, which takes user experience seriously.

I’d worked so closely with Nick on this project that I thought I knew his reasoning inside out. And when he came to a different conclusion, I wondered whether I’d ever be able to understand the company’s driving forces. If I wasn’t on the same page with someone who had the same job and a similar perspective, with whom I’d spent hours discussing the project, what chance did I have of seeing eye-to-eye with a business owner on the other side of the country or a developer halfway across the world?

Historical perspective

Version A (the yellow note treatment) was created by Ken, a visual designer who had an intimate knowledge of the telco’s design standards. It matched other instances where the yellow note was used to highlight an important situation.

Version B was the existing model, which had worked well in a section of the site that had been redesigned a year ago following significant user testing. Because of its success, this section–“Home Usage”–was earmarked as the model for future redesigns.

Once I had a bit of distance from the situation, I realized what the problem was. Although I had worked very closely with Nick, I didn’t have the same understanding of the company’s priorities.

My priorities were:

  • Consistency across the site
  • Accessibility
  • Using the most up to date and compelling interaction and design patterns
  • Modeling redesign efforts on “Home Usage” where possible

Because Nick had a background in visual design, I thought that he would want to use Ken’s design pattern, which seemed both more visually distinct and a better match for the situation. But Nick preferred the Home Usage pattern, and he may have had good reasons for doing so.

First, Home Usage had been thoroughly tested, and since this was an ecommerce site with many hard-to-disentangle components, testing could have provided insight into its success factors, especially if individual components had been tested separately.

Second, by following the existing pattern, we wouldn’t wind up with two different treatments for the same situation. Even though the yellow note treatment might be more prominent, was it significant enough to shoulder the cost of changing the pattern in the existing Home Usage flow?

Now that I knew at least one piece of the puzzle, I wondered how I might have achieved a more complete ‘mind meld’ with Nick, so that we were more closely in sync.

Know your priorities—and check them out

Just being aware of the priorities I was following would have offered me the chance to discuss them directly with Nick. With so much information to take in, I hadn’t thought to clarify my priorities and compare them with my co-workers’, but doing so would have made it easier to sync up.

Other barriers to knowledge transfer

Gabriel Szulanski1 identified three major barriers to internal knowledge transfer within a business. Although these are aimed at firm-wide knowledge, they seem relevant here for individuals as well:

Recipient’s lack of absorptive capacity

Absorptive capacity is defined as a firm’s “ability to recognize the value of new information, assimilate it, and apply it to commercial ends.”2 To encourage this, companies are urged to embrace the value of R&D and continually evaluate new information.

Szulanski notes that such capacity is “largely a function of (the recipient’s) preexisting stock of knowledge.”3 If existing knowledge can either help or hinder the absorption of new information, how might this apply to an individual?

  • As information load increases, your ability to understand new information and properly place it within a mental framework decreases.
  • While the new company may have hired you for your experience and knowledge, you might need to reevaluate some of that knowledge. For instance, it may be difficult to shed and reframe your priorities to be in sync with the new firm.

Causal ambiguity

Causal ambiguity refers to an inability to precisely articulate the reasons behind a process or capability. According to Szulanski, this exists “when the precise reasons for success or failure in replicating a capability in a new setting cannot be determined.”

How did causal ambiguity affect this transfer? While the site’s Home Usage section was promoted because of its successful testing and rollout, the reasons behind its success were never clear. Success of an ecommerce site depends on many factors, among them navigation, length and content of copy and labels, information density, and the site’s interaction design. Since Home Usage’s advantages had never been broken down into its components, and I hadn’t been there when usability tests were conducted, I could only see it as a black box.

To truly assimilate new knowledge, you need context. If none is provided, you need to know how to go out and get it. Ask about the reasons behind a model site. If possible, read any test reports. Keep asking until you understand and validate your conclusions.

An arduous relationship between the source and the recipient

Finally, knowledge transfer depends on the ease of communication and ‘intimacy’ between the source and recipient. Although my relationship with Nick was close, I worked off-site, which eliminated many informal opportunities for knowledge sharing. I couldn’t ask questions during a chance meeting or ‘ambush’ a manager by waiting for her to emerge from a meeting. Since I didn’t have access to Telecom Giant’s internal messaging system, I was limited to more formal methods such as email or phone calls.

A model for knowledge transfer

Thomas Jones offered this approach to knowledge transfer in a Quora post: “As they say in the Army: ‘an explanation, a demonstration, and a practical application.’ Storytelling, modeling, and task assignment … share your stories, model the behaviors you want to see and assign the tasks required to build competency.”4

Keeping “Home Usage” in mind, the story could be “how we came to follow this model,” the demonstration could be the research paper, and a practical application could be your work, evaluated by your lead.

In conclusion

Your ability to retain new information is essential to your success at a new company. However, your ability to understand the reasons behind the information and place them within a framework is even more important. Some techniques to help you do so are:

  • Be aware of your own design priorities and how they match with the firm’s. Treat the company’s priorities like any user research problem and check them out with your leads and co-workers.
  • To increase your absorptive capacity, evaluate your preconceptions and be prepared to change them.
  • Ask for the reasons behind a ‘model’ design. Read research reports if available.
  • Maximize your contact points. Follow-up emails can target ambiguous responses. If time with the UX leads is scarce, ask your co-workers about their view of priorities, patterns and the reasons behind them.

Further reading

1 Szulanski, G 1996, ‘Exploring Internal Stickiness: Impediments to the Transfer of Best Practice within the Firm’, Strategic Management Journal, vol. 17, pp. 27-43.

2 Absorptive capacity. Wikipedia entry.

3 Dierickx, Ingemar and Karel Cool. 1989. “Asset stock accumulation and sustainability of competitive advantage.” Management Science. 35 (December): 1504-1511.

4 “What patterns of behavior have proven to be most helpful in knowledge transfer?” Quora post.

Forms: The Complete Guide—Part 4

Written by: Martin Polley

In which we take a look at selection-dependent inputs, and see that they’re a lot simpler to put together than they look.

Forms. They’re often the bane of users’ online lives. But it doesn’t look like they’re going away any time soon. So it’s up to us, UX designers, to make them as smooth and easy to use as possible for our users while still achieving the best business outcomes.

If we prototype our forms, we can get them in front of users earlier and get feedback sooner, which we can use to iterate our designs. Previous posts in this series covered form layout and alignment, input types, and grouping and inline help.

In this, the fourth post in this series, we take a look at selection-dependent inputs. (I’ve also seen this technique referred to as “responsive enabling” and/or “responsive disclosure.”) All this means is changing the fields that the user sees based on some selection that they have made or some information that they have provided. For example, if we’re asking the user for their postal address, we might have a field where they can enter (or select) the country that they live in. Then, depending on the country, we can change the other fields to match the standard address format for that country.

Note: “Responsive enabling” refers to having one or more fields that are disabled until the user makes a particular selection, in which case they become enabled, allowing the user to interact with them. “Responsive disclosure,” on the other hand, hides the fields, and makes them appear if the user makes a particular selection. I don’t recall seeing many forms that use responsive enabling of late, but I’m sure it’s useful in some circumstances, so I’ll include an example.

Responsive enabling

I think one of the reasons that this pattern has fallen out of favor in recent years is that it can be distracting. You have all these disabled fields, and they just take up space and draw the user’s attention away from the stuff that matters unless they click on a particular checkbox (which may only be relevant for a small percentage of users).

One benefit is that things stay put. When you use responsive disclosure to make fields appear and disappear, the content below them suddenly jumps downward, which can be jarring and disorienting. But nowadays, it’s easy to animate transitions like this, which can reduce the problem.

Anyway, in some cases, responsive enabling is going to be the right approach (where there aren’t too many disabled fields, and where it would be confusing not to show them), so let’s press on with an example.

In this example, we’re going to let the user opt in to receiving text messages from us to let them know about special offers. You would typically find things like this at the bottom of an order form. We’ll have a field where they can enter their mobile number, which will be disabled until they click the opt-in checkbox.

After you’ve set up a new Foundation project (this post explains how), add this HTML just after the opening <body> tag. (We’re not going to waste time prototyping the whole form–we’ll just prototype the bit we’re interested in.)
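
Something like this does the trick (the copy and the name attribute are stand-ins; the IDs and classes are the ones the walkthrough below relies on):

<div class="row">
  <div class="small-8 small-offset-2 columns">

    <p class="placeholder">[Imagine the rest of the order form here]</p>

    <div class="row">
      <div class="small-12 columns">
        <input type="checkbox" id="cb" name="optin">
        <label for="cb">Text me about special offers</label>
      </div>
    </div>

    <div class="row">
      <div class="small-3 columns">
        <label for="mobile" id="mobilelabel" class="right align disabled">Mobile number</label>
      </div>
      <div class="small-9 medium-6 columns end">
        <input type="tel" id="mobile" disabled>
      </div>
    </div>

  </div>
</div>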

Somewhere within the page’s <head> tags, add this <style> block:
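
Something like this (the exact values are approximate):

<style>
  .placeholder {
    color: #999;             /* grey italic stand-in for the real form fields */
    font-style: italic;
  }
  label.disabled {
    color: #999;             /* greyed-out look for the disabled label */
  }
  label.align {
    padding-top: 0.5rem;     /* rough vertical alignment with the input */
  }
</style>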

The result looks like this:

Our form, with the phone number input disabled

What I’ve done here is not very complicated. Like in the previous posts, I’ve used Foundation’s grid to lay out the form. In the second line of the HTML, I’m just squishing the form down into eight of Foundation’s available twelve columns, and adding a two-column offset to center it on the page.

The first bit of actual content is a paragraph to show where the actual form fields and labels would go if this was a real form.

Then we’ve got a row containing a column that takes up all twelve columns. (small-12 columns means it is twelve columns wide on small screen sizes and anything bigger, i.e., all screen sizes.) This contains the checkbox <input> (with id and name) and its label (which uses for to link it to the checkbox, so that you can click on it to toggle the checkbox too).

After that, there’s another row that contains two columns, for the phone number <input> and its <label>. Notice how this divvies up the available space differently for small and medium (and bigger) screens.

For small screens, we’ve got a three/nine column split, so together they fill up all the available width. But on medium-and-up screens, they are three and six columns wide. This is to prevent the <input> from being ridiculously wide on larger screens—Foundation makes each input 100% of the width of the column that it’s in. (By default, if the number of columns doesn’t add up to twelve, Foundation scoots the last column over so it is right-aligned. I’ve added the end class to stop this happening.)

The label gets the right class to right-align it, and the align class to align it vertically with the <input>. It also gets an ID so we can style it and do stuff to it. And the <input> gets the appropriate type (tel) and the disabled property to make it disabled initially.

The CSS in the <style> block in <head> just applies some styling to the placeholder text that shows where the real form fields would go and makes the “Mobile number” label grey so it looks disabled. (You can add the disabled property to <input>s, but not to <label>s—you have to style these yourself so they’ll look disabled.)

It looks OK so far. The phone number input is disabled, and its label looks disabled too. But it doesn’t get enabled when you check the checkbox. For that, we need a bit of JavaScript. Not much; just a smidge.

At the bottom of the page, right before the closing </body> tag, add this <script> block:
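
Just these few lines:

<script>
  $(document).ready(function() {
    $('#cb').on('change', function() {
      $('#mobilelabel').toggleClass('disabled');
      var checked = !$('#cb').is(':checked');
      $('#mobile').prop('disabled', checked);
    });
  });
</script>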

This is pretty similar to what we saw in the last post. All the code is inside the jQuery $(document).ready() function, so it will only get run after the page has finished loading.

Within $(document).ready(), this part of the first line selects the checkbox:

$('#cb')  

that is, the element with the ID of cb, then this part:

.on('change', function(){  

calls the on() function to detect when its state changes. The three lines within the curly braces ({}) specify what happens when this change event is detected:

  • In the first line, this bit:
    $('#mobilelabel')
    

    selects the label for the phone number field, which has an ID of mobilelabel, then this bit:

    .toggleClass('disabled')
    

    calls toggleClass() on it. This adds the disabled class to the element if it doesn’t already have it, and removes it if it does. This is the class that gives it the grey color, so removing it switches it back to its default color, black.

  • The second line declares a variable called checked. Then this part:
    $('#cb')
    

    selects the checkbox again and then checks whether it is checked or not like this:

    .is(':checked')
    

    This gives us an answer of true or false. The ! reverses this value. If it is true, this makes it false, and vice-versa. Then this value gets stored in our checked variable. I’ll explain why we need to reverse the value in a minute.

  • The third line selects the phone number input, which has an ID of mobile, like this:
    $('#mobile')
    

    and calls prop() on it to set the value of its disabled property. And the value we give it is whatever we just stored in our checked variable. This way, if the checkbox is checked, checked gets a value of false, which we use to set disabled to false. (Actually, I used trial and error to know whether I needed to reverse the value or not! I like to think of this kind of thing as being pragmatic.)

    In fact, disabled doesn’t take a value–an element either has the property or it doesn’t. But behind the scenes, jQuery correctly interprets us giving disabled a value of false and simply removes the property.

Now checking the checkbox enables the input, so it looks like this (and unchecking it disables it again):

Our form, with the phone number input now enabled

You can see it in action here.

Responsive disclosure

For responsive disclosure, we’re just going to take the previous example and change a couple of things. Here’s the HTML:
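
Here’s a sketch (the copy and stand-ins are the same as before):

<div class="row">
  <div class="small-8 small-offset-2 columns">

    <p class="placeholder">[Imagine the rest of the order form here]</p>

    <div class="row">
      <div class="small-12 columns">
        <input type="checkbox" id="cb" name="optin">
        <label for="cb">Text me about special offers</label>
      </div>
    </div>

    <div class="row" id="mobile_container">
      <div class="small-3 columns">
        <label for="mobile" id="mobilelabel" class="right align">Mobile number</label>
      </div>
      <div class="small-9 medium-6 columns end">
        <input type="tel" id="mobile">
      </div>
    </div>

  </div>
</div>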

The only changes that I’ve made here are to remove the disabled attribute from the phone number <input> and to remove the disabled class from its <label>. And I’ve added a new ID, mobile_container, to the row <div> that contains them so that I’ll have something to attach behavior to.

The CSS is a bit different too:
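
Roughly:

<style>
  .placeholder {
    color: #999;             /* stand-in paragraph styling (values approximate) */
    font-style: italic;
  }
  #mobile_container {
    display: none;           /* hide the whole phone number row to start with */
  }
</style>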

Gone is the rule for styling the “disabled” label, and in its place is a rule that hides the row <div> that contains the phone number input and its label by giving it a display of none.

As you can see, when the checkbox is unchecked, the phone number input and its label are nowhere to be seen:

Our form, with the phone number input hidden

To make them appear when you check the checkbox, we need to modify our JavaScript:
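
It boils down to this:

<script>
  $(document).ready(function() {
    $('#cb').on('change', function() {
      $('#mobile_container').toggle();   // show if hidden, hide if shown
    });
  });
</script>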

This is much simpler than it was before. All we’ve got now inside the function is one line, which toggles the visibility of the <div> that contains the phone number field. Using the toggle() function means that whenever the state of the checkbox changes, we either show or hide the phone number field. We don’t have to check the value of anything, store things in variables, or any of that stuff.

Now when we check the checkbox, the phone number field appears:

Our form, with the phone number input now shown

(You can see a live example here.)

If we had additional content below these elements, it would jump down when the phone number field appears. It doesn’t take up much space, so it doesn’t jump very much. But imagine if what we were making appear was a whole sub-section of the form that takes up half the height of the page. Pushing the following content down by so much can be very disorienting.

It’s worth taking a minute to look at how we can use animation to make changes like this more palatable. We’ll add some more content after the phone number field, then add animation and see how it looks. Add another <div> containing some text below the hidden one, so it looks like this:
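
Any filler content will do; something like:

<div class="row">
  <div class="small-12 columns">
    <p class="placeholder">[Imagine payment details, a submit button, and so on down here]</p>
  </div>
</div>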

Now let’s change our JavaScript to include animation. Luckily, in this case, it is very easy indeed. All we need to do is replace the call to toggle() with a call to slideToggle(), so it looks like this:
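
$('#mobile_container').slideToggle();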

Instead of just showing or hiding the element, slideToggle() makes it appear by sliding it down from the top, or hides it by sliding it up. Try it out here.

We can control various aspects of how this animation happens. For example, we can make it faster or slower by putting a value (in milliseconds) in the parentheses, like this:

slideToggle(1000)  

This example shows this slower version in action.

Another thing we can do is control the flow of the animation (what is referred to as its easing). By default, jQuery animations start out slow, speed up in the middle, then slow down again at the end. We can change this by doing something like this:

slideToggle({easing: 'easeInCubic'})

Note that to use additional easings like this one, you need to include the jQuery UI library by adding these two lines near the bottom of the page (before the <script> block that contains our code):
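
Something like this (the versions are just one workable choice; if your Foundation project already loads jQuery further down the page, you only need the second line):

<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<script src="https://code.jquery.com/ui/1.11.4/jquery-ui.min.js"></script>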

This starts off slow, like before, but speeds up and then doesn’t slow down–it just stops abruptly, as if it had hit a wall. Try it here.

We’re off on a bit of a tangent here, but I’ll just show you one more before we move on to the next thing. There are 32 different easings available in jQuery UI, and each one gives the animation a slightly different feel.

For example, look what happens when we change the easing to easeOutBounce:

slideToggle({easing: 'easeOutBounce'})

Take a look here. Fun, isn’t it?

More complex scenarios

Sometimes we are faced with more complex scenarios than just disabling or enabling part of a form, or showing and hiding it. Often these more complex scenarios involve replacing one set of inputs with another depending on what the user selects. The example that Luke Wroblewski uses in his book is a “contact me” form, where the user can choose to be contacted via email, phone, SMS, or IM. When the user makes a selection, the form changes to show just the relevant fields (email address for email, phone number for phone, and so on).

There are lots of different ways that a form like this could work. You could have the initial selection as tabs, with the appropriate form fields inside each tab. You could have a drop-down list for email, SMS, etc, with the form fields right below it. You could have radio buttons for the initial selection, with the relevant fields appearing either below the radio button group or below the selected option. You could even go with a progressive-enabling-style design, with a disabled set of fields after each radio button, with the appropriate set of controls being enabled when the user selects one of the options.

(And, of course, there are lots of other scenarios where one user selection changes something else. One that you see pretty often is two drop-down lists, where the first is for selecting a category and the second is for selecting a sub-category, or an item within the selected category. So on a site that lists second-hand cars, you might have one drop-down list for selecting the brand, and a second for selecting the model.)

I could show you how to prototype one of these, but that wouldn’t help at all with any of the others. And if I were to show you how to prototype them all, you would soon lapse into a boredom-induced coma. So what to do?

Is there some common thread we can draw out of all these different scenarios? Maybe there’s some technique I can show you that will be useful in prototyping all of these? Well, there isn’t really. All I can do is give you some pointers, which will hopefully help you to avoid some wasted effort:

Be lazy

There is the way that a proper programmer would do something, and then there are shortcuts. We’re not writing production code here, so it doesn’t matter if we cut corners. What matters is that it walks and quacks like the real thing. Or even just enough like the real thing to be moderately convincing. So going back to the example I just gave of a pair of drop-down lists, the proper way to build something like this would be to populate the first list dynamically using a list of car brands that you request from the server. Then, when the user selects a brand, you would make another server request for the list of models for the brand, and use that to dynamically build up the list.

But for a prototype, you might not even need all the possible car brands and their models. It might be enough to have half a dozen brands, and a few models for each. And instead of dynamically populating each <select> element with the appropriate <option>s, you can put six separate <select>s in your HTML, one for each brand, hide them all to start with, and just use JavaScript to show the right one depending on the brand that the user selects.
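
For example, here’s a minimal sketch of that shortcut (brands, models, and IDs all made up for illustration):

<select id="brand">
  <option value="">Choose a brand</option>
  <option value="ford">Ford</option>
  <option value="honda">Honda</option>
</select>

<!-- One pre-built, hidden model list per brand -->
<select id="ford" class="models" style="display: none;">
  <option>Fiesta</option>
  <option>Focus</option>
</select>
<select id="honda" class="models" style="display: none;">
  <option>Civic</option>
  <option>Accord</option>
</select>

<script>
  $(document).ready(function() {
    $('#brand').on('change', function() {
      $('.models').hide();          // hide every model list...
      var brand = $(this).val();
      if (brand) {
        $('#' + brand).show();      // ...then show the one that matches
      }
    });
  });
</script>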

Learn a few key things in JavaScript/jQuery.

You don’t need to know everything there is to know about JavaScript and jQuery to be able to do useful stuff with it. The ones I find myself using most often are:

  • Event handlers. Mostly, this means using jQuery’s on() function to detect when the user does something to some element, and performing some action when they do.
  • Chaining. Every function gives you something when it’s done. (In programming parlance, it “returns” it.) You can then do something with the thing it returns. So if you call the find() function on an element like this:
    $('#myelement').find('li');
    

    find() returns the <li> element that it found (if there was one within #myelement for it to find). So then you can do stuff to the <li> by chaining another function on the end, like this:

    $('#myelement').find('li').hide();
    

    hide() returns the element that it just hid, so you could, if you wanted, tack on yet another function to the chain to do something else to it. (But we won’t.)

  • Simple conditional branching. The if and if ... else constructs let you ask “is this thing true”, and if it is, to perform some action. (else lets you do something else if it isn’t true.)
  • Slightly less simple conditional branching. When you’ve got something that can have multiple different outcomes, each with a different action to be performed, if becomes an overly complicated way of doing things. The JavaScript switch statement is perfect in these situations. It’s ideal when you’ve got something like a drop-down list with several options and you want to do a different thing for each selected option (there’s a small sketch of this just after this list).
  • Making stuff appear and disappear. We can use show(), hide(), and toggle(), as we’ve seen, to make stuff appear and disappear. And we can either supply additional arguments to these functions to animate them (and to control things like the speed and direction of the animation) or use special functions like slideUp() and slideDown() (and slideToggle(), which we saw above) to do the same thing. There’s also the animate() function, for when we want full control over every aspect of an animation.
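
Here’s what that switch pattern might look like for the “contact me” example (a minimal sketch; the IDs and classes are invented for illustration):

$('#contact-method').on('change', function() {
  $('.contact-fields').hide();      // hide every group of fields first
  switch ($(this).val()) {          // then show the group matching the selection
    case 'email':
      $('#email-fields').show();
      break;
    case 'phone':
      $('#phone-fields').show();
      break;
    case 'sms':
      $('#sms-fields').show();
      break;
  }
});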

Look stuff up.

The jQuery documentation is pretty good. If you know the name of the function you’re trying to use, it tells you everything you need to know about how to use it. And if you don’t know which function you need, it has category pages that list all the functions for, for example, getting from one element to another, or for inserting, removing, and manipulating elements. (Note that jQuery is an addition to JavaScript. Some things, like if and switch, are part of JavaScript, not jQuery, so you won’t get very far searching for them in the jQuery docs.) And if you Google for the thing you’re wrestling with, you’ll usually end up at Stack Overflow, the Q&A site for programmers. 99.9% of the time, your question will have already been asked and answered. For example, I wanted to know the correct way to find out if a checkbox is checked or not. So I Googled jquery checkbox get value, clicked the first result, and got the information I needed from the top answer. Easy-peasy.

Don’t be afraid of doing it “wrong.”

For anything you want to do, there are going to be several (even many) different ways to do it. Does it matter which one you choose? For a (disposable) prototype, it doesn’t. The only thing that matters is: does it work or not? If it works, it’s “right”. I’m sure a “real” programmer would take one look at most of my prototype code and snicker to himself under his beard. But it serves the purpose for which it is intended, so it doesn’t matter.

Having said that, I do have a couple of examples that you can take a look at. The first shows a different form depending on what you select in a drop-down list. The second changes the contents of one drop-down list depending on what you select in another. Use the View Source option in your browser to see how they work—they don’t do anything particularly clever.

Conclusion

Changing parts of a form depending on what the user selects is pretty powerful. We can use it to hide complexity from the user and make our forms simpler to use. It means we need to learn a bit of JavaScript, but we can do a lot with a little. I think it’s well worth the effort.

Forms: The Complete Guide—Part 3

Written by: Martin Polley

Forms are important—they’re the most common way to get information from our users. But just making wireframes of a form misses a big piece of the picture—what it’s like to interact with it. An HTML prototype of a form, on the other hand, can look and behave just like the real thing.

In the first post, I showed you how to lay out a form and align the labels the way you want, using HTML and Foundation.

In the second post, I showed you all the different input types you can use.

In this post, I’ll show you how to group your inputs and how to provide help to the user while they’re filling out the form.

To make the most of these posts, I strongly encourage you to get your hands dirty. Type in the examples and then check them out in your browser. This gets it into your brain, and your fingers, more effectively than just copying and pasting (or worse, just reading).

Grouping

When you’ve got more than just a few input fields, it makes sense to organize them into logical groups, which you then separate visually. This makes the form less intimidating. It looks more like several small forms than one long one.

Let’s make a shipping details form similar to the one we created in the first post. This time, in addition to the shipping address inputs, we’ll add an extra input for the user’s email address, and we’ll split the phone number and email address out into a separate group.

The most obvious way to do this would be to add a second <fieldset> with its own legend and put the phone and email inputs in there.

Let’s try it. In the index.html file of a new Foundation project (see this post to find out how to set that up), add this:
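
Something along these lines works (the exact fields echo the shipping form from the first post, give or take; what matters here is the two <fieldset>s, each with its own <legend>):

<form>
  <div class="row">
    <div class="small-8 small-offset-2 columns">
      <fieldset>
        <legend>Shipping address</legend>
        <label for="name">Full name</label>
        <input type="text" id="name">
        <label for="address">Street address</label>
        <input type="text" id="address">
        <label for="city">City</label>
        <input type="text" id="city">
        <label for="zip">ZIP code</label>
        <input type="text" id="zip">
      </fieldset>
      <fieldset>
        <legend>Contact details</legend>
        <label for="phone">Phone number</label>
        <input type="tel" id="phone">
        <label for="email">Email address</label>
        <input type="email" id="email">
      </fieldset>
    </div>
  </div>
</form>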

It’s OK, but the result looks a bit busy (view larger):

Form groups with borders

There’s just a bit too much to process here. We don’t need quite so many visual elements to separate the two groups. We could abandon <fieldset>s and replace the <legend>s with headings, but that would be semantically less correct. It would also affect the form’s accessibility.

A better approach would be to tweak the CSS a bit to reduce the visual clutter. Let’s create a new stylesheet and link it up from index.html.

So open up a new file in your text editor and add this CSS rule to it:
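A single rule is enough. (Foundation draws a border around each <fieldset> by default; we’re just switching it off.)

    /* Remove the border Foundation draws around each group */
    fieldset {
      border: none;
    }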

Call the file form.css and save it in the css folder. Then add this line to index.html:
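(It goes in the <head>, after the Foundation stylesheets, so that our rule wins.)

    <link rel="stylesheet" href="css/form.css">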

How’s that?

Form groups: no borders

It’s certainly cleaner. We could leave it at that. Or we could do some other things to further emphasize the grouping.

For example, we could put a thin horizontal line (<hr/>) between the groups:
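In the markup, it just goes between the two groups:

    </fieldset>
    <hr/>
    <fieldset>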

Form groups: horizontal rule

Or we could give each group a background color:
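Again, one rule in form.css does it. (The exact color is up to you; this light gray is just an example.)

    fieldset {
      border: none;
      background-color: #f4f4f4;
    }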

Form groups: background color

Whatever works for you.

Inline help

Sometimes it’s not 100% clear from a field’s label what the user is meant to enter in the field. Maybe it’s something that is simply too long to explain in a short label, so we need some additional text to help the user out.

A well-known example is the credit card security code, which is often labeled with something unhelpful, like “CSC” or “CVV.” Even “security code” may not be obvious to some users, so we need to provide the user with an additional explanation somehow.

There are several approaches we can use:

  • We can have a big lump of explanatory text at the top of the form. These often get ignored though.
  • We can supplement the label with additional text (usually in a smaller font size) for fields that need it.
  • We can use the placeholder attribute to give the user an example of the type of information required and its format.
  • We can have some help text that appears automatically when a field (or one of a group of fields) gets focus.
  • We can put an icon next to the label that displays help text when the user clicks on it or hovers over it.

The first one is not usually a good idea, so we won’t bother with it. The second one is technically very easy to do, so we’ll skip that too. And we’ve already seen how to use placeholder.

So let’s tackle the last two.

Automatic inline help

Let’s say you’re designing a form that lets the user order a piece of clothing–a t-shirt, for example. They need to select a size, but you want to direct them to your sizing chart if they’re not sure. So for just this one field, we’re going to have a piece of help text that appears when the size control gets focus.

This is a bit like what eBay does on the password field on its registration form:

Inline help on eBay's registration form

(Actually, in a scenario like ours, it’s more likely that the “form” is just a couple of fields on a product page that allow the user to select size, color, and quantity.)

Our “form”: page structure

Let’s go ahead and build a prototype product page for our t-shirt. We’re going to make it look like this:

T-shirt product page sketch

It’s got a title (the name of the shirt) at the top. Under that, there’s a nice big photo, with additional photos that you can see by clicking the thumbnails on the left (though we’ll fake this part–it’s not important here). And to the right of the photo, we’ve got our form, consisting of inputs for choosing the color, size, and quantity, and an “Add to cart” button.

Only the size selector needs inline help, which we’re going to display to the right of the selector. But when do we want the help to appear? I think it’s best if we show it both when the selector gets focus and when the user moves the pointer into the general vicinity of the selector. That way, it will work well both on touch screens and when you’ve got a mouse and keyboard.

The first thing we need to do is get our basic page structure set up. Start out with an empty Foundation index.html file (refer to the instructions in this post for that).

Obviously, in a real design, you’re going to have global navigation, a footer, and all kinds of other stuff on the page. But here, we’re just prototyping the essentials.

For our title, we need a row <div> containing one column that takes up the whole of the available width. (Not the whole width of the page, but the usable area in the middle, which by default in Foundation has a maximum width of 1000 pixels, and which gets shrunk down and rearranged responsively for smaller screens).

To achieve this, we give the <div> containing the heading classes of small-12 and columns. This means that for screen sizes of small and above (i.e., all screen sizes), we want it to take up the whole of the available width (all twelve of the available columns).

So we need to add this to our index.html, right after the opening <body> tag:
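(The shirt’s name here is made up, obviously.)

    <div class="row">
      <div class="small-12 columns">
        <h1>Mighty Mouse T-Shirt</h1>
      </div>
    </div>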

Next we need another row <div> that will contain the rest of the content. Within this, we need three column <div>s: one for the thumbnails, one for the big photo, and one for the form:
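(The placeholder image sizes below are rough guesses that match the sketch; adjust them as you see fit.)

    <div class="row">
      <div class="small-1 columns">
        <img src="http://placehold.it/80x80">
        <img src="http://placehold.it/80x80">
        <img src="http://placehold.it/80x80">
      </div>
      <div class="small-5 columns">
        <img src="http://placehold.it/500x500">
      </div>
      <div class="small-6 columns">
      </div>
    </div>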

Here, I’m using placehold.it to generate placeholder images, instead of using real images. (They have instructions over there that explain how to size the image, how to change the text, and so on.)

Here, the small- classes divide up the twelve available columns into three, with widths of one, five, and six columns respectively (for all screen sizes). The <div> for the form is empty at the moment, so we can’t see anything there yet:

T-shirt product page WIP 1

Things look a bit scrunched up, so let’s add a bit of white space. In the last post, I showed you how to put CSS rules in a separate file and link to it from index.html. But for the sake of speed, this time, we’re going to put the rule in a <style> block in the page’s <head>. Within the <head> tags, add this:
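(The exact margin sizes are a matter of taste; these work for me.)

    <style>
      h1 {
        margin-top: 1rem;
        margin-bottom: 1rem;
      }
      img {
        margin-bottom: 1rem;
      }
    </style>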

This just adds top and bottom margins to the title, and gives all the images a bottom margin.

The actual form

Now for the actual form part of the page. This is also going to need some structure. Because we want to position the inline help to the right of the size selector, what we need to do is divide the form up into four row <div>s: one for each control and its label, and one for the “Add to cart” button.

Within each row, we’ll divide the space up into two equal columns, one for the control, and one for the help (where needed). So let’s add this within that third, empty <div>:
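Here I’m making each control column half the available width (small-6); the end class is my way of stopping a lone column from floating over to the right, which is what Foundation does with the last column in a row by default:

    <form>
      <div class="row field_row">
        <div class="small-6 columns end">
        </div>
      </div>
      <div class="row field_row">
        <div class="small-6 columns">
        </div>
        <div class="small-6 columns">
        </div>
      </div>
      <div class="row field_row">
        <div class="small-6 columns end">
        </div>
      </div>
      <div class="row field_row">
        <div class="small-6 columns end">
        </div>
      </div>
    </form>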

Notice how each row has only one columns <div>, except the second one. This is where the size selector will go—it’s the only one that needs a column to contain the inline help. And the whole thing is enclosed in <form> tags, because, well, it’s a form.

But until we add the actual controls, we can’t see if it’s right. So let’s do that now. Add the <label>s, <select>s, <option>s, the <input>, and the <button> so it looks like this:
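(The particular colors and sizes in the <option>s are just examples:)

    <form>
      <div class="row field_row">
        <div class="small-6 columns end">
          <label for="color">Color</label>
          <select id="color">
            <option>Black</option>
            <option>White</option>
            <option>Heather gray</option>
          </select>
        </div>
      </div>
      <div class="row field_row">
        <div class="small-6 columns">
          <label for="size">Size</label>
          <select id="size">
            <option>Small</option>
            <option>Medium</option>
            <option>Large</option>
            <option>X-Large</option>
          </select>
        </div>
        <div class="small-6 columns">
        </div>
      </div>
      <div class="row field_row">
        <div class="small-6 columns end">
          <label for="quantity">Quantity</label>
          <input type="number" id="quantity" value="1">
        </div>
      </div>
      <div class="row field_row">
        <div class="small-6 columns end">
          <button type="submit">Add to cart</button>
        </div>
      </div>
    </form>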

Nothing here that we haven’t seen before. (Except I’ve explicitly set the value of the quantity <input> to 1, a sensible default I think.)

And how does it look? So far, so good:

T-shirt product page WIP 2

This looks OK, but the form elements could use a bit of white space between them. So add this rule, again between the <style> tags in <head>:
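(Again, the exact amount of padding is up to you.)

    .field_row {
      padding-bottom: 1rem;
    }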

This just adds bottom padding to each row. Now the only thing we need to add is the inline help itself. Oh, and there’s the little matter of making it appear and disappear when we want. We’ll get to that in a minute.

First let’s add the HTML. Inside that empty <div> (the one right after the size <select>), add this:
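(The help class is what our JavaScript will use to find it later. The wording, the link target, and the aria-live value of “polite” are my choices; the compact padding is the inline styling I mentioned.)

    <div class="help panel callout" style="padding: 0.5rem;" aria-live="polite">
      Not sure which size to get? Check out our <a href="sizing-chart.html">sizing chart</a>.
    </div>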

This uses Foundation’s panel and callout classes to style it, plus some inline styling to make it a bit more compact. (Any bigger, and it’ll make the row taller, which means things will jump around when we show and hide it.) It also has an aria-live attribute so that screen readers will be aware of the content when we make it suddenly appear on the screen.

This is how it looks now:

T-shirt product page WIP 3

Now we need to hide it, so we just add display: none; to the existing style attribute of the inner <div> (the one with the panel and callout classes), so it looks like this:
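(Only the style attribute changes:)

    <div class="help panel callout" style="padding: 0.5rem; display: none;" aria-live="polite">
      Not sure which size to get? Check out our <a href="sizing-chart.html">sizing chart</a>.
    </div>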

Next, we need to show it and hide it. The only way to do this is with JavaScript. This is more like “proper programming”, but don’t be put off by that—I’ll do my best to make it simple and easy to understand.

JavaScript: show help on focus

First, we need somewhere to put our JavaScript code. Normally, you would keep it in a separate file, so you can reuse it in multiple HTML pages. But for simplicity, we’ll add it in a <script> block at the bottom of the page, a bit like what we did with the CSS rules in the <style> block earlier.

Now, jQuery is included with Foundation, so that’s what we’ll be using here. All the code we’ll be writing will go inside jQuery’s $(document).ready() function. This means that whatever we put inside it will only get run once the page has loaded.

To start with, we’ll add some code that shows our inline help <div> when the size <select> gets focus. (This happens when the user clicks or tabs to it. Or when they tap on it on a touch device.) So add this <script> block at the bottom, right before the closing </body> tag:
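(This is the whole block; the selector and the line inside it are exactly what we pick apart below.)

    <script>
      $(document).ready(function(){
        $('.field_row input, .field_row select').on('focus', function(){
          $(this).parent().parent().find('.help').show();
        });
      });
    </script>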

Do not be alarmed! This may look a bit intimidating, but if we take it piece by piece, you’ll see there’s not a lot to it.

The <script> tags tell the browser that what’s inside is JavaScript. The $(document).ready() function, as I explained, tells the browser to run the code inside it only after the page has loaded. The real meat here is what’s between the curly braces (i.e., after $(document).ready(function(){ and before });).

Let’s look at the first bit:

$('.field_row input, .field_row select')

This says “Hey, jQuery, find any <input> or <select> that is inside an element with a class of field_row.”

jQuery uses the exact same selectors (and ways of combining them) as CSS. So that’s one less thing to learn. (I covered the very basics of CSS, among other things, in this post.)

So just like in CSS:

  • A period means it’s a class name. (.field_row means any element with a class of field_row.)
  • A hash (#) means it’s an element ID. (#size_selector means the element with the ID size_selector.)
  • Two selectors separated by a space means the second one must be inside the first (a descendant, not necessarily a direct child) for the selector to apply. (div input means any <input> inside a <div>.)
  • A comma means “or”. (input, select means any <input> or <select>.)
  • You can combine selectors in other ways. For example, div.field_row means a <div> with a class of field_row.

OK, on to the next part:

.on('focus', function(){

This says “For whatever we just selected, do something when it gets focus.” on() is what’s called a function. A function is just a chunk of code that has a name that you can use to run it (to “call it”, in programming terms). (The period means “Call this function on the thing we selected.”)

on() looks out for an event, and does something when that event occurs. In this case, the event is focus. You can tell on() to look out for other events too, like click or mouseover.

Everything between the last (opening) curly brace after function() and the closing curly brace here:

});

is the code that actually gets run when on() detects that one of the selected elements has been given focus. That’s this line:

$(this).parent().parent().find('.help').show();

What this says is:

  1. Take the element that got focus (this).
  2. Find its parent element. (In this case, the parent of the <input> or the <select> is the column <div> that contains it.)
  3. Find that element’s parent. (In this case, that’s the row <div> containing the <input>/<select>.)
  4. Within this element, find any element with a class of help. (That’s the inline help <div>.)
  5. Finally, show it.

This technique, where you call one function after another like this, is called chaining. It can save you a lot of work. (The alternative would be to store the element you get in step 2, then (step 3) call parent() on what you stored, then store that, and so on and so on.)

Phew!

Now if you click on the size <select> (or tab to it), the inline help magically appears. But it doesn’t go away when the <select> loses focus.

JavaScript: hiding help when the selector loses focus

Luckily, to make this happen, we can just copy our existing code and make a couple of small changes. Copy the three lines of code (the ones that call on()), add a couple of blank lines below it, then paste them in. Now just change focus to blur and show to hide. The code you pasted in should now look like this:
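(Same three lines, with focus changed to blur and show changed to hide:)

    $('.field_row input, .field_row select').on('blur', function(){
      $(this).parent().parent().find('.help').hide();
    });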

If you try it now, you’ll see that when the <select> gets focus, the help appears, and when it loses focus, it disappears again. Great! But it’s not enough. Earlier we said we want the help to appear on hover as well. This isn’t a problem—a few more lines of code will do it. But where do we want the hover to work?

JavaScript: show help on hover too

Here, our help text contains a link (to a page that doesn’t exist, unless you feel like creating it). If the help appears when you hover over the <select>, when you go to click the link, it will disappear as soon as your pointer leaves the <select>. Which is not very nice.

We need to make the hover work for the element that contains both the <select> and the help, that is, the row <div>.

Let’s add some more code within our <script> block:
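(The two functions are empty for now; we’ll fill them in below:)

    $('.field_row').hover(function(){}, function(){});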

This time, the selector is simpler, because we’re just targeting the rows (that is, the <div>s with a class of field_row). (You may be wondering why I’m using this class, which applies to all four rows, instead of just sticking an ID on the one row that contains the help and use that here. The answer is that this way is more flexible—if I want to add help for another field later, I can just add it to the HTML and it will work, without having to change anything in the JavaScript.)

Instead of on() after the period, this time we’ve got a function called hover(). In contrast to on(), where we specified one piece of code that was run when the event was detected, here we need two: one that tells jQuery what to do when the hover begins, and one that tells it what to do when it ends. That’s why there are two function(){}s.

The code for hover start goes inside the curly braces in the first function(){}, while the hover end code goes in those of the second. To make things easier to read, let’s split it up onto separate lines, like this:
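(Same code, just reformatted:)

    $('.field_row').hover(function(){

    }, function(){

    });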

Now we can just put the code that actually does stuff in those two blank lines. This code is going to be exactly the same as for the focus and blur events, except here, we don’t need those calls to parent() to navigate up from the field to the row <div>, because we’re already there. Add the necessary lines to make your code look like this:
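(The show goes in the first function, the hide in the second:)

    $('.field_row').hover(function(){
      $(this).find('.help').show();
    }, function(){
      $(this).find('.help').hide();
    });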

If you try it now, you’ll see that hovering anywhere in the size row makes the help appear, and when you move away, it disappears. Yes!

Except what happens when the size <select> has focus, and then you hover and move away? No! The help disappears! That shouldn’t happen!

JavaScript: actually, don’t hide it when the hover ends, in this one specific case

What to do? Well, what we need to do is check if the <select> has focus before we hide the help (at the end of the hover)—and if it does have focus, we do nothing. For this, we’ll use JavaScript’s if construct. This lets us check if something is true, then do something. (And if it’s not true, we can either do nothing or do something different.)

Change your code so it looks like this:
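(Only the second function changes:)

    $('.field_row').hover(function(){
      $(this).find('.help').show();
    }, function(){
      if (!$(this).find('input, select').is(':focus')) {
        $(this).find('.help').hide();
      }
    });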

What’s going on here? The only new thing here is that we’ve wrapped the line that hides the help in an if block. The line inside the block only gets executed if the bit in the parentheses after if is true. Let’s take a look at that expression:

!$(this).find('input, select').is(':focus')  

Ignore the exclamation point for a moment. this is the row <div>. Within this <div>, we’re using find() to get either the <input> or the <select>, and then we use the is() function (which checks whether an element matches a given selector) to check if it has focus.

This gives us an answer of “true” if the element does have focus, but we want to hide the help only if it doesn’t have focus. That’s where the exclamation point comes in. It means “not”, and simply reverses the result of the expression—true becomes false and false becomes true.

And now it works. (You can see it in action here.)

User-triggered inline help

You’ll be relieved to hear that user-triggered inline help is much easier to set up. All we need to do is take the prototype we just created and make a few changes (mostly removing stuff).

Adding an icon

First off, we need to add some kind of icon that the user can click or hover over to bring up the help. There is a set of Foundation icons that you can use just by adding one line to your page’s <head>:
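(This URL points to one CDN copy of the icon font; if it ever moves, you can grab the files from the Foundation site instead and link to your local copy.)

    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/foundicons/3.0.0/foundation-icons.css">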

Let’s add an “i-in-a-circle” icon to the “Size” label. Find it in your HTML and change it so it looks like this:
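(The exact sizes in the inline styles are just what looked right to me:)

    <label for="size" style="display: inline-block;">Size</label>
    <i class="fi-info" style="font-size: 1.25rem; vertical-align: top; cursor: pointer;"></i>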

The fi-info class on the <i> element is how we specify which icon we want. (Other icons get fi-something-else, like fi-heart for a heart or fi-flag for a flag.)

The Foundation icons are an icon font. This means that you can style them just like text. You can make them bigger, change their color, add shadows, whatever. For now, we’re just making the icon a bit bigger, making sure it lines up with the label, and changing the pointer to a hand so that it looks clickable.

Note that the <i> can’t be inside the <label>, because clicking the label gives focus to the <select> (because of the for attribute that links the two). If we put the icon inside the <label>, clicking it would be the same as clicking the label.

Also, we need to set the <label>’s display to inline-block (instead of the default of block) to force the label and the icon to stay on the same line.

New JavaScript for new behavior

Now, how do we want it to behave? If we make it so that the help only appears when you hover over the icon, then we’re in the same situation we would have been in if we’d made it appear when you hover on the <select>—when you go to click on the link in the help, it disappears before you can.

There are a couple of things we can do. We could introduce a delay, so that when the pointer moves away from the icon, the help does not disappear immediately. The other alternative is to make the help appear when you click on the icon, and have some way to dismiss it. Since we’ve already seen how to make things happen on hover, let’s go for the click-triggered help this time.

Within our $(document).ready(), we can get rid of all the code and replace it with this:
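(Three lines again:)

    $('i.fi-info').on('click', function(){
      $(this).parent().parent().find('.help').toggle();
    });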

Here, we’re targeting the <i> element with a class of fi-info (our icon), and we’re telling the on() function to look out for click events on it. When it detects a click, we want it to start at this (the icon), go up two levels in the page hierarchy (by calling parent() twice), which takes us to the row <div>, then find the help <div> inside it and call toggle() on that.

toggle() is rather clever. It hides the element if it’s visible, and shows it if it’s hidden. This means you don’t need to keep track of its visibility. So that’s less work for us. And it gives us a way to dismiss the help—you just click the icon again (which is nice and symmetrical).

User-triggered inline help

And that’s it. (See it for real here.)

Bonus learnings! Or I came for the forms, but stayed for the RWD

If you look at either of the above examples on your phone, you’ll see that it’s just the same thing, but squished down and misaligned. This is a bit of a shame. Foundation is a responsive front-end framework—we can use it to change the layout on smaller screens to something that makes better use of the available screen space.

Above, we used small- column classes for everything. This means that the defined layout is used for small, medium, and large screens. All we need to do is change the classes a bit to tell Foundation how the page should be laid out on smaller screens.

Here, we’re just going to have two layouts: one for small (phone-sized) screens, and one for medium (tablets) and large (desktop) screens.

In our small layout, what we really want is to just have everything stacked vertically: first the title, then the thumbnails, then the big picture, then the form. We can get this by just changing every small- class name to medium-. (What this means is that Foundation does something with the column widths we give it for medium and large screens, and treats it as if we didn’t specify any column widths at all on small screens—when you don’t specify column widths for elements, Foundation makes them full width and stacks them vertically.)

On smaller screens, that looks like this (code):

Responsive product page layout on a small screen 1

That’s pretty good, but I think it would look better if the thumbnails were below the big photo instead of above it. Foundation has classes you can add to make this happen.

The way you do it is like this. First, you change the order of the elements so that they appear correctly at small screen sizes. Then you add classes to move them left or right on larger screen sizes.

Let’s swap those two <div>s so they look like this:
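(Photo first, thumbnails second, using the same placeholder sizes as before:)

    <div class="medium-5 columns">
      <img src="http://placehold.it/500x500">
    </div>
    <div class="medium-1 columns">
      <img src="http://placehold.it/80x80">
      <img src="http://placehold.it/80x80">
      <img src="http://placehold.it/80x80">
    </div>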

Now we need to add push and pull classes to move these two columns to the left and right so that they are in the right positions for medium and large screens. Adding a push class pushes the column to the right, while a pull class pulls the column to the left.

We need to add the medium-push-1 class to the <div> that contains the big photo to push it one place to the right. This will leave the first column free for the thumbnails. Then we add the medium-pull-5 class to the <div> containing the thumbnails to pull it five places to the left, into the first column. This part of the code should now look like this:
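(Note that the push and pull numbers always count grid columns, not pixels:)

    <div class="medium-5 medium-push-1 columns">
      <img src="http://placehold.it/500x500">
    </div>
    <div class="medium-1 medium-pull-5 columns">
      <img src="http://placehold.it/80x80">
      <img src="http://placehold.it/80x80">
      <img src="http://placehold.it/80x80">
    </div>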

Now if we look at it on a small screen (or shrink down our desktop browser window), we can see that the thumbnails are now below the big photo, just like we wanted (real example here):

Responsive product page layout on a small screen 2

And at larger screen sizes, the layout is just the same as before.

Conclusion

We’ve covered quite a lot of ground in this post. Stay tuned for the next post, where we’ll look at responsive enabling and responsive disclosure.