Observing the User Experience: A Practitioner’s Guide to User Research


“How do we go about learning who our users are and what they really need? And how do we do this in a way that helps us make a strong case for our design decisions to the people in charge?”

Design is disorienting, especially when you are designing something in a collaborative environment, with multiple stakeholders, pressing deadlines, business objectives, and budgetary constraints. We all go into design with the firm belief that the user is our pole star, but so often we lose that focus amid tossing waves, buffeting winds, and the crew screaming in our ears, never mind the dense cloud cover that always seems to obscure that trusty star just when a committee forms to gather requirements.

With all the attention to usability over the last five years or so and the wonderful swell of information-architecture-related books just since 2001, you would think we would have enough methods and advice to keep our projects on a perfect tack. But so many of these resources, excellent though they are, tend to be more about how to pilot the ship than how to find that all-important star and keep it in sight.

I promise not to drive this metaphor hard into the rocky shore, but think of the projects that could have been saved from being lost at sea if every team had a better grasp of user requirements through direct experience of users and their needs. Think also of how many projects could have stayed the course if only there had been an expert way to sell the findings from that experience to the stakeholders, who so easily forget the users for whom their project was intended.

For precisely these reasons Mike Kuniavsky’s Observing the User Experience: A Practitioner’s Guide to User Research is a welcome addition to the half dozen essential books on my cubicle shelf. This book provides lucid, personable, experienced advice that could only come from a seasoned consultant who has seen the good, bad, and ugly of web and application design. Its purpose is to give a solid foundation to any design team in the crucial beginning stages of a project by answering the questions: How do we go about learning who our users are and what they really need? And how do we do this in a way that helps us make a strong case for our design decisions to the people in charge?

Kuniavsky begins Observing with a cautionary tale about a failed corporate web project, a situation he experienced firsthand (changing identifying information to protect the innocent, of course). The situation involved the misguided good intentions of corporate management and developers: something they were sure would be just what their users wanted turned out to be a huge waste of time and money. This is just one of many real-life lessons used as background throughout the book.

Kuniavsky introduces us to web user research methodologies by showing us how they fit into an overall process and by defining various roles within a design team. These descriptions are clear and sensible, and more descriptive than prescriptive: he is not insisting that these are the roles you must use or dictating what you must call them in guru-speak; rather, he uses conventional labels to describe what tends to happen in a successful project.

The chapters are well-organized and consistent, and content is cross-referenced from chapter to chapter where appropriate. Unlike many design-related books, this one is actually fairly heavy on text and light on visuals. Where visuals are used, they are very helpful and serve to explicate the content. A good example is his spiral model of Iterative User Research, which cycles from Examination to Definition to Creation and, as it deepens, gets more granular through Contextual Inquiry, Focus Groups, Usability Tests, and so on. Kuniavsky wisely points out that many companies already have marketing research that might be expected to yield the insights necessary for web design, but explains how the tools used by conventional marketing approaches are only part of the solution for user-centered design. Focus groups and surveys can supply valuable information, but focusing on direct experience of user behavior using a combination of appropriate methods offers a stronger core for design.

Kuniavsky goes on to provide an excellent mixture of step-by-step direction and experienced advice on the practicalities of user research. Beginning with how to put together a research plan (invaluable instruction, since planning seems to be the Achilles heel of so many projects), he explains how to make sure business goals are being considered along with user goals. He admits these instructions present a somewhat idealized situation that starts as a blank slate as far as user experience product goals are concerned. However, Kuniavsky manages to keep his advice from being so lofty that no real-world team could actually follow it.

The chapter on recruiting and interviewing is especially thorough. It provides a sample phone screening script and boilerplate recruiting communication, as well as advice on how to handle no-shows, heavily biased users, and people who do not end up fitting your model. In fact, it may be the coverage of so many aberrations and anomalies that makes this book so unusually valuable. This is advice one would normally only gain on the job or by working side by side with a highly experienced researcher.

Kuniavsky devotes the bulk of the book to describing a series of proven techniques for researching user needs and behaviors, including user profiles, contextual inquiry (plus task analysis and card sorting), focus groups, usability tests, and surveys, as well as more secondary-research approaches such as diaries, log files, customer support, and competitive research. He presents each method in a separate chapter, describing when each one is most appropriate and various methods of execution. Throughout, Kuniavsky glosses his text with marginal notes, giving a reality check or bit of wisdom in each one, such as the reminder that “Focus groups uncover people’s perceptions about their needs and their values. This does not mean that they uncover what people actually need or what really is valuable to them; however, knowing perceptions of needs is as important as knowing the needs themselves.”

In his descriptions of various methods, there is surprisingly little dogma. In an industry that has spawned a thousand do’s and don’ts lists for design, it is refreshing to find so many techniques described with equal value and rationale. I personally have long held a bias against focus groups, surveys, and marketing research as being especially valuable for fully understanding users, but this book has helped me see these resources in a more positive light.

It is also a relief to read this book’s conversational and low-jargon voice. There are a number of books I find essential in my work that I still have trouble actually comprehending during a busy workday. Somehow this one cuts through the fog of design-speak to present some very sophisticated concepts and methods so clearly that a relative novice could read it and hit the ground running. Take, for example, his lucid description of the role of the information architect: “It’s the information architect’s job to make the implicit architecture explicit so that it matches what the users need, expect, and understand. The architect makes it possible for the users to navigate through the information and comprehend what they see.” I have never seen my job explained with such clarity anywhere else.

Another strength is Kuniavsky’s business-savvy approach to design. In the very first chapters he does an excellent job explaining the various tensions between different groups with their own agendas encountered in any collaborative design effort. He shows how having solid and documented user research can help to defuse these tensions and keep the user as the central focus of the work. In fact, Kuniavsky even has a chapter on Creating a User-Centered Corporate Culture, an ambitious but necessary topic for any corporation finding its business model being warped into a whole new shape by the powerful gravitational pull of the web.

So much of design involves a kind of tea-leaf reading voodoo that is hard to justify or describe to managers and stakeholders. When we do the typical routine–look at some users, have some conversations, and then come back with all these ideas on how to design an expensive project–aren’t the people paying for it fully justified in asking, “Why do you think you really know how we should build this thing?” And how can one blame them for thinking their own ideas are just as valid as ours? Observing the User Experience provides solid techniques for knowing our users from a 360-degree perspective in a way that we can document, communicate, and even sell to other team members and project owners. Think of it as a combination navigational chart, captain’s log, and sextant for web endeavors–a one-stop shop for tools that help your team stay the user-centered course.

About the book:

  • Observing the User Experience: A Practitioner’s Guide to User Research

  • Mike Kuniavsky
  • Morgan Kaufmann, 2003
  • ISBN 1-55860-923-7
  • List Price: $44.95
  • Chapters:
    • Part I: Why Research is Good and How It Fits Into Product Development
      1. Typhoon: A Fable
      2. Do A Usability Test Now!
      3. Balancing Needs Through Iterative Development
      4. The User Experience
    • Part II: User Experience Research Techniques
      1. The Research Plan
      2. Universal Tools: Recruiting and Interviewing
      3. User Profiles
      4. Contextual Inquiry, Task Analysis, Card Sorting
      5. Focus Groups
      6. Usability Tests
      7. Surveys
      8. Ongoing Relationship
      9. Log Files and Customer Support
      10. Competitive Research
      11. Others’ Hard Work: Published Information and Consultants
      12. Emerging Techniques
    • Part III: Communicating Results
      1. Reports and Presentations
      2. Creating a User-Centered Corporate Culture


Andrew Hinton is a Senior Information Architect at The Vanguard Group in Valley Forge, PA. His personal website is www.memekitchen.com.

Designing Customer-Centered Organizations

Which way?
Organizations increasingly view usability and user-centered design as key ingredients in creating high-quality products. Designing for ease of use is a well-accepted goal, even if many organizations have far to go to create user-centered products. Even with the present downturn in the economy, more companies, from new media to established banks, have larger usability and design teams than ever before. Should we be content that we have come so far?

In this context of greater corporate presence, how should user-centered design advocates evaluate whether the field has achieved success? We suggest two questions as possible criteria. First, are the products and services we build becoming more innovative in serving our customers’ needs? Second, are we, as professionals, confident that our activities are as effective as they can be?

User experience practitioners have long called for their involvement at the earliest stages of product development. If only they could be involved at the start of a project, their voices—and the customers they wish to serve—would have a greater impact on the type of products and services that ultimately get produced. This laudable goal acknowledges the often-late role of usability in the product development cycle, closer to implementation than inception. Less talked about is how even design often occurs late in business formation and product creation—after entrepreneurs and investors have created business plans and goals, and after product managers have defined product strategy and metrics.

True product innovation requires a radical rethinking of our roles as researchers and designers. Smart companies will gradually adopt early stage consideration of usage and users, with innovations in specific products and, more importantly, sets of products. To accelerate innovation, we must go beyond project-by-project improvements and employ many of our existing skills and methods to create customer-centered organizations. Rather than focusing on products, our impact can be greatest when we become advocates for developing organizational capital, the ability of companies to maximize the value of their investments in technology and ultimately their impact on the customers they wish to serve.

This article focuses on the shift from customer-centered products to customer-centered organizations through an emphasis on the why and how of creating change. Contextual research, iterative prototyping, customer frameworks and models, workflow diagramming, and building consensus are skills many interactive user experience professionals already have. Working with cross-functional teams of technologists, marketers, executives, researchers, and designers, we have begun to shift our work, in-house and as consultants, to building organizational capital.

Why now?
Companies are under more pressure than ever to achieve dramatic increases in their operational effectiveness. Despite recent times of relatively flat or even negative sales growth within many industries, expectations for ongoing improvement to the bottom line persist. As a result, reducing the costs of bringing successful products or services to market is a primary concern for even the most successful companies.

At the same time, as market competitiveness increases, the need for effective product strategies similarly becomes much more important. As the range and number of similar products offered to consumers expands, it is an ever greater challenge to define a unique and differentiated value proposition that attracts consumers’ attention and convinces them to buy your product over that of a competitor, and keeps them coming back for repeat purchases.

User-centered design (UCD) methods, now increasingly well-known, offer help on both of these fronts. By incorporating knowledge of how customers will use and react to new products, designers help reduce the risk that costs related to bringing products to market will fail to generate a return. Additionally, by putting more emphasis on the design of the product up front, development cycles can often be dramatically shortened, reducing the time and cost of developing the product. By prototyping product concepts early in the development cycle, teams have a higher likelihood of catching expensive errors earlier in the process when they are cheaper to correct.

Furthermore, early stage user research can identify strategic and tactical opportunities for product differentiation. By illuminating how different customers are likely to interact with the proposed product, research can help clarify and prioritize which features are likely to result in the most benefit and appeal. Designers can then work with researchers to prototype product designs that account for critical user tasks and contexts of use, iteratively refining the end product.

But despite the promise these user- and usage-centered design methods hold for reducing development costs and increasing product quality, companies still fail to make sufficient use of them. Why? Conventional wisdom among practitioners suggests that the failing is due, in whole or at least in significant part, to an inability to demonstrate to corporate decision makers a compelling business case for the return on investment (ROI) of employing user experience practices.

But the problem may not be, as most practitioners seem to think, so much a failure to demonstrate return as an unwillingness to approach design projects as investments at all (Merholz and Hirsch, 2003). Expenditures related to usability testing, design, prototyping, and content development are not valued as investments, but are seen simply as operational costs, akin to spending on IT support and facilities maintenance. Spending on design is necessary in this view, but it is something, ideally, to be minimized as much as possible. A return simply is not expected, and hence not calculated. Companies do not realize that by spending more on design, or by spending differently on design, they may be able to realize different returns.

While this may be true in part, the idea that a company can develop better products by listening to its customers is nevertheless fairly self-evident, and so it is difficult to accept the notion that quantitatively proving the value of design is the only recourse for convincing companies to take advantage of these methods. Instead, we argue that the challenges companies face in attempting to truly benefit from design are more systemic, embedded in the structure of traditional business practice and organization.

Beyond product strategy?
For the sake of discussion, we will assume that companies do recognize that employing user experience designers (including usability specialists, information architects, interaction designers, etc.) is worthwhile for the contribution they can make to elevate product quality. During the product development process, companies employ iterative usability testing to test for and correct critical errors in the product design, and incorporate improvements into the final version prior to release. You might think that this scenario represents the usability specialist’s utopian paradise. But what if the product fails in the marketplace nonetheless? “It’s not our fault,” some would say. “We did the best we could with improving the product design, but it was a flawed concept.”

Such a scenario is entirely imaginable, and likely one that many have experienced. But it raises the question: if it isn’t “our” fault, whose fault is it? That of product management? Of the corporate structure? Such answers feel unsatisfactory inasmuch as they deny our responsibility and accountability for product success. Producing innovative solutions for our customers’ needs requires making significant and lasting contributions to an organization’s performance over the long run. We must focus our attention beyond the incremental improvements we are able to offer during product development, and instead address strategic concerns about product and service selection, development, and evaluation. In order to substantively improve the competitiveness of the organizations we serve, we must utilize our methods to help those organizations discover how to more effectively plan for, prioritize, and invest in unique activities that create enhanced potential for identifying and taking advantage of market opportunities.

Contemporary organizations are growing in complexity. Decision-making is increasingly decentralized and distributed, technical infrastructures are many layers deep, and cross-functional teams are now de rigueur. Even if a company is able to optimize its process for managing product development projects, it still faces significant challenges in seeking the right mix of development projects within its aggregate portfolio. Most large organizations encounter difficulties forecasting and tracking resource capacity, choosing which team members and skill sets to dedicate to projects, and communicating between groups to ensure that redundant or competing projects do not simultaneously find their way onto the queue. These challenges are exacerbated by the ever-diminishing ability of any one person in the organization to hold a clear picture in their head of the present state of the company’s operations.

If we recognize these as challenges and view them as inherent to today’s complex development systems, we can start to identify ways to use our training and talents to effectively manage them. Our skills in analyzing and understanding structure, clarifying problems, and managing communication are strategic assets. These skills can be redirected to enhance an organization’s ability to thoughtfully align its endeavors and activities with its capacity, and to deliver on those initiatives effectively.

To accomplish this, we must not only recognize the need to move user research farther forward in the product design process; we must also disrupt the relegation of design within an organization to a process of making (or, worse still, of decorating) rather than one fundamentally concerned with formulating and developing strategic plans. If we accept the notion that successful companies operate from the bottom up, using customer insight and feedback to form the foundation for product strategy, which in turn aggregates to form the basis of an overall corporate strategy, then we have a useful framework for situating design activities within an overall context for creating meaningful change.

Within that framework, the value of user research early in the product development process is obvious, and in fact is necessary in the development of product strategy. Less obvious is the role research-informed-design can play beyond the formulation of product strategy.

But could the same concepts not apply to all areas of corporate planning and effectiveness? If researching customers can help a company develop better products by clearly articulating the most important aspects to consider during product design, why can’t researching a company’s effectiveness at ongoing product development help a company develop a more robust corporate strategy? Product researchers and designers bring with them a unique set of skills that have potential to contribute to effective investment prioritization, capacity planning, and organizational decision making.

To help organizations negotiate the complexity of their technical environments, researchers and designers can map key relationships within the technical infrastructure that help visualize and clarify structural components, data elements, and workflow. To help decide between sets of projects, we can analyze the organization’s available resources for delivering on those projects, and create prototypical scenarios and simulations that reveal possible outcomes of projects in different combinations.

Crafting business strategy includes “creating fit among a company’s activities” (Porter, 1996). Note the similarity of this objective to that of design, where quality is evaluated based on fitness to purpose (Alexander, 1970). Within modern-day corporations, user experience designers are in the unique position to observe, understand, and communicate the needs of a company’s customers, as well as those of a company’s internal constituencies (including marketing, engineering, product teams, etc.), since the role of design spans multiple functional groups. As such, we have an unprecedented opportunity to sit at the fulcrum of a company’s efforts to align its internal activities with value-generating activities for its customers. But to do so, we must take responsibility for driving business decision making, and stop seeing ourselves as passive victims of organizational politics run amok.

Case study: Toy manufacturer
A leading toy manufacturer (TM) approached the interactive design agency where one of us worked, asking us to improve the usability of the registration process for a web-enabled educational toy. What began as a usability study for a patch project uncovered organizational structures that produced an “each sold separately” user experience.

Baseline
TM executives and product managers expected that the installation process for the toy’s accompanying software and hardware would take about 10 minutes. By contrast, parents initially imagined the process would take 20 minutes, and reported that they postponed installing on their own because they did not think they had sufficient free time. Our initial research in customer homes with parents and children aged 7 to 12 revealed the problem to be much worse. The actual installation process required on average two hours, with many people unable to complete registration at all.

We discovered that the initial process required performing several tasks, including installing software, connecting a special module to the family PC, registering online, downloading new games, transferring games from device to toy, and remembering how to return for online purchases of new game packs.

Approach
Short-term solutions included reducing required registration entities from three to one, adding navigation and nomenclature that allowed users to understand how far they were from their primary goal, and reducing the interface complexity for first time users. But it was our mapping of the user experience and organizational obstacles to product simplicity and coherence that had lasting impact.

The root of the toy’s problems exceeded the scope of our project. The forms, functions, and packaging of the toy and connector module were developed by one set of groups, the installation CD by another, and the website design was begun only after the other artifacts were already in production.

Organizational change
Our consulting engagement helped TM change its product design process and recognize weaknesses in its existing organizational structure. Rather than rush to market toys with an ad hoc “internet enabled” gloss, the company has become more cautious about launching complex toys despite the allure of new technology and potentially more profitable distribution channels. Planning and collaboration between product and design groups are recognized as necessary for creating high quality products. And TM reacted to the effectiveness of our multi-stage research and design process by building a large in-house usability and user research department.

Conclusion
Because we served as external consultants, we cannot be sure how successful TM has been in moving toward becoming a customer-focused organization. We do know some of the measurements for evaluating how far they’ve come:

  • Are customer needs and opportunities a critical input to the development of each product?
  • Have customer models been created across product lines and used for initiating new product development and prioritizing which products receive scarce resources?
  • Have research and design practices migrated from product design to other key organizational strategy and planning areas, including future scenarios modeling, resource allocation, internal workflow modeling, and corporate culture change?

The TM study illustrates how a particular project led to changes in how customer research contributes to product design and strategy. The next case study looks at how research and design methods were used to model an organization’s internal workflow and the impact on user experience, in order to create a framework for organizing product development across distributed business units.

Case study: Financial services
At a leading financial services institution, an internal user experience research and design team was struggling to maintain a set of design guidelines for the company website. Created as a way to ensure consistency across the 8,000+ page site, the existing guidelines governed use of brand assets (e.g., the corporate logo) on web pages, specified page layout, and defined best practices for site navigation. They also included guidelines for creating and maintaining effective metadata, ensuring compliance with ADA standards, and writing effective copy.

Baseline
The team responsible for “owning” the guidelines (championing and maintaining them within the organization) often found that, rather than serving as a useful tool to promote effective online user experience design, the guidelines became a burden to keep up to date and a frequent source of frustration, as product managers, engineers, and third parties remained ignorant of the standards that had been developed or refused to comply with them. So, in addition to developing the guidelines and keeping them current, the team struggled to keep up with its policing role: attempting to convince other groups within the organization to respect the guidelines and design to them.

Approach
We decided on a two-pronged approach to clarifying the problem and attempting to generate sustainable solutions. First, we used ethnographic user research methods to study the work practices of those responsible for developing, maintaining, approving, and using the guidelines. We discovered that though the guidelines were developed as a way to govern the site user experience, in reality they were used much more as a tool for governing production-level decisions: where to place headers, how big images should be, how to label buttons, etc. As new interface questions arose, new guidelines had to be developed, documented, and added to the intranet site where the guidelines were kept and referenced. Since product managers were constantly coming up with new web product ideas that did not (or, at least in their own minds, did not) fit within the currently specified guidelines, the team responsible for maintaining the guidelines was under constant pressure to re-examine, modify, and document its stylistic decisions.

After completing the internally focused research, we planned and executed a rapid customer ethnography designed to evaluate the guidelines based on the effect they had on creating a high-quality website (which, remember, was the ultimate point of the guidelines). Participants in the study were asked to keep journals of all their financial services activities over the course of five days. They were encouraged to tell stories with words and pictures about the experiences they had with our company’s website. We also observed them using the website in their homes and offices, and interviewed them about their usage behavior. Finally, they were interviewed over the telephone about their activities over a 30-day period, recognizing that financial activities tend to recur in month-long cycles.

Data from the internally focused research and the customer research showed that the great majority of the effort spent creating and maintaining the guidelines was wasted. While the production-level concerns of the existing guidelines certainly were important, their impact on the user experience was far less significant than the more serious failings in the overall execution of the website strategy. Customers generally had little difficulty navigating the site to move between pages or log on to their accounts. But they had little motivation to use the site as it was intended: to research and buy financial products like credit cards, home loans, and small business services.

The advantage of researching and buying financial products online wasn’t evident to customers from the site design, and even when it was, customers were not sure they could trust the site to be as safe, easy, or efficient as doing it through another channel, like over the phone or in a branch. Furthermore, the website failed to communicate the company’s overall value proposition: consolidating multiple financial products at one institution, online or otherwise. Consequently, customers wondered why they wouldn’t be better off going to different specialists for each of their piecemeal needs.

The origins of these failings were evident in the findings from our internal research. Product managers were held accountable for decisions about their specific products (e.g., home loans). The user experience team was held accountable for page-level decisions about the website’s design. No one within the organization was responsible for thinking across products, from a customer’s point of view, in order to drive home the primary value proposition for banking online: to aggregate all products in one easy-to-access, always-available location.

Organizational change
What started as a project about redesigning and updating a guidelines document developed into a program for prioritizing projects, organizing business units, increasing group-level accountability, and identifying more effective governance models across the web channel. Senior executives became involved in the analysis and formulation of new action plans, and shepherded the transition to new organizational structures that built in incentives for managers and rank-and-file employees to improve their communication with other business units. Product and functional groups were encouraged to collaborate on decision making and share responsibility for the overall success of the web channel, rather than focus exclusively on individual product sets. New investments were made in the overall design of the site to align it more closely with the company’s overall communication strategy. And a content management system was installed to enable easier maintenance of routine upgrades to the site, as well as to act as an automated solution for enforcing the production-level guidelines, which were now built into the page templates.

Closely monitoring the quality of the site user experience became a key part of the company’s metrics program, along with mechanisms for identifying and implementing incremental improvements. The existing user experience team began to be recognized as a key contributor to the success of the company’s overall business strategy and was included in high-level planning meetings that determined project priorities and budgets for the next calendar year.

Even though all of these changes were relatively recent and still in progress, initial examination already suggests dramatic successes. Industry analysts rated the new site the best overall in the financial services industry. Aggressive sales goals were set for the channel and were projected to come in on target or slightly better than planned. Teams reported feeling more cohesive and integrated. Productivity increased, and complaints to customer service about the website were down.

Conclusion
Certainly not all of these successes can be solely attributed to our guidelines project. Modern day organizations are dynamic, living entities, and it is unrealistic to expect that any one project can ever completely redefine a company’s business operations. But by revealing the connection between the way our company approached creating its website (how it designed) and the quality of the site it produced (what it designed), the guidelines project catalyzed a significant cultural change within the company. That change dramatically improved the organization’s potential for recognizing and responding to the evolving needs of its customer base and the competitive marketplace, and enabled the company to integrate user experience concerns with its everyday business operations.

Future design
This article calls into question two complaints we have heard most frequently over the past several years, from the most junior practitioners to leading experience design gurus.

“It would have been great if only they had listened to us.”
“Our innovative approach would have been revolutionary if the dotcom bust had never happened.”

Our skills, methods, practices, and know-how would have made successful products if only “they” had listened to us or if the boom had continued indefinitely.

We take issue with both complaints. Declarations of righteousness may resonate with other peers who have experienced adversity despite their best efforts. However, blaming business people for not listening to our great ideas makes it seem that “their” need to listen to us exceeds our need to communicate effectively to others, a core skill most designers and consultants would claim. Do we not share responsibility for others’ failure to listen to us?

Flush times should not be a pre-condition for the success of our work. Given the inevitability of business cycles of boom and contraction, our value must be in aiding businesses even when resource constraints are greatest. This transformation signals a break from late stage design and usability work and a shift towards less familiar territory.

The case studies illustrate how skills we share with other researchers and designers have worked within real world constraints and succeeded in making user experience more central to product strategy and organizational effectiveness. Conducting research with product developers employs the same research and modeling skills we already have. Mapping the disconnected world of distributed product development extends research and design work from “patching” problems to creating a product strategy that aligns customer demands for clarity and simplicity with organizational capacity.

To create innovative and useful products, our most common approaches are insufficient. We must be ready to sacrifice traditional style guides and a fixation on formal concerns that promote surface consistency and instead strive for new frameworks that ensure consistency of user experience. Rather than see usability, design, and product management as separated by a rigid wall akin to the division of church and state, we must pool our skills in a common pursuit of outstanding products. Rather than transfer blame for failure to business people’s bad decisions and failures to listen to us, we must take responsibility for better communication to decision-makers and for the ultimate success or failure of our products and companies.

Achieving the highest levels of user experience in products and services isn’t possible with an exclusive focus on products and making; it requires engagement with organizational planning and decision-making. While the end state is still unclear to us, the methods come from our existing skill sets:

  • Using our customer research and modeling skills to understand the internal operations of organizations;
  • Creating frameworks for decision making about organizational potential rather than product trade-offs;
  • Prototyping and iteratively adjusting business strategies;
  • Visualizing existing and future processes; and
  • Supporting decision-making at the senior executive level and communication throughout an organization.

The economic downturn only accelerated the need to make a choice. We can continue with research and design as usual and seek to build bigger departments that have increasingly less impact. Or we can re-imagine our roles, using our existing skills to form the customer-centered organizations of the future.


  • John Zapolski manages multidisciplinary design teams at Yahoo. He is very active in developing the user-centered design community, serving as advisor and former director of AIfIA, and as the new chair of the AIGA’s Experience Design community.
  • Jared Braiterman is an anthropologist working in product development and organizational change (Stanford PhD, 1996). You can learn more about his consulting work at jaredRESEARCH.

Don’t Test Users, Test Hypotheses

“Observe your users.” That’s a maxim most interface designers and user experience professionals subscribe to. But how do you “observe?” What do you look for? When testing websites or applications, I’ve found that generating hypotheses about user behavior helps inform the observation process, structure data collection and analysis, and organize findings. It also keeps you honest by being explicit about what you are looking for.

User testing typically consists of a sort of fishing trip. We lower a lure (the user) into the water (the application or site) and see what critters (defects) bite. This is a valuable and time-tested approach. But when we start fishing for defects, we are left with some tough questions. For instance: When are we finished? How many defects do we need to find before we have fully tested the site or application? If we find a defect, how do we know how severe it is, and by what measure? In iterative testing, how do we compare results from the test of the current version with results from testing earlier versions?

A productive way to address these issues and to incorporate user testing into the design process is by articulating, up front, the key issues you are investigating and predicting what users will do in certain situations. Imagine that you have been asked to review the user experience of a consumer shopping website and your first reaction is “They’ll never find the pricing page.” I’m suggesting that turning that hunch into an explicit prediction or hypothesis will improve the testing and the relevance of the findings for the design team.

Here’s how to do it

Whether you are testing your own design or someone else’s, start by defining questions you want answered. Describe the assumptions implicit in the design. Make predictions about users’ behavior and develop hypotheses about what they will do. That’s the first step. Then, structure your testing to address those hypotheses. That way, whatever the result, you have specific, relevant information about the design.

My colleague Dianne Davis and I developed this approach over several years of testing websites and applications. We start with the idea that every design, implicitly or explicitly, predicts how people will respond and behave. Articulating those predictions and thinking of them as hypotheses focuses our usability research and allows us to “test the design,” not the users.

Testing the design

The first step is to generate hypotheses. We begin with an extensive review of the site or application. We use relevant heuristics in our review, of course. We ground the review in the site or application’s goals and philosophy, as defined by its designers and as documented in project literature (requirements, specifications, creative brief, etc.). This is critical because we want the review to reveal whether the site conforms to, or deviates from, the design. It’s important to engage the design team in a dialogue at this point to clarify their goals and articulate their predictions about user behavior. That way, the results of the testing will answer their questions, not ours. (If we are testing our own design, then we use this opportunity to reflect critically on it and identify its key predictions and assumptions.)

This initial review yields what you’d expect: many “defects” and questions about user behavior. It lets us spot areas that we think will cause difficulties for users, aspects of the design that may not deliver on design goals, and features or regions that may go unfound or unused. We don’t treat these “defects” as certain, but rather as potential problems and as guesses about what users will do. We recognize that each of the issues is no more than a prediction, but one that is “test-able.” Our instincts or prior experience may “tell us” that an issue is critical and will affect user experience and task completion. But we won’t really be sure until we test with users. That’s where hypotheses come into the picture.

The predictions and guesses become an informal set of hypotheses about user behavior. For example, our review of an automotive manufacturer’s site suggested that users would not see or use a link that opened a pop-up window with detailed photographs of a product, even when the user was looking for those types of images. So our hypothesis was: Most users will not find the pop-up window. If most users do indeed find and use the link, then our hypothesis is disproved, and the design prediction supported. If they don’t find it, we know what we need to fix. In another study, we tested a redesigned interface for paint colour-mixing software. We expected that users would have trouble because the application lacked indicators of the sequence involved in the task; we hypothesized that users would not know what to do next, and identified an area that needed to be examined.

Testing users testing the design

In the second step of our hypothesis-testing approach, we do research with users, employing traditional usability interviewing, observation, and measurement methods. But throughout, the drive to examine the hypotheses and answer our questions shapes the process: from sampling strategy through task selection, observation and interviewing, data analysis, and, ultimately, reporting.

If our hypotheses relate to types of users, that may affect sample selection and study design. For example, in testing an educational site we predicted that a Flash-based animated introduction would be of interest to one part of the audience (kids) but a distraction to another part (teachers). So we were sure to include sufficient users in each category and to compare how these two groups used this feature. (By the way, we were wrong. The kids all clicked “Skip Intro.”)

Hypotheses or predictions about user behavior help us develop and refine usage scenarios and tasks. Of course, core tasks are derived from the project’s use cases or scenarios (if they exist). But in addition, we include tasks and situations that will expose users to the problems and opportunities we have identified. In this way, the users are “testing the design.” The automotive site mentioned above had loads of information on vehicles, but the rich-media, featured content was a key element of the design. So we made sure that our research design included tasks that invited users to search for that sort of information. In contrast, we did not generate hypotheses or tasks to address users accessing other information that was of less significance to the marketing strategy.

Watching and listening

This approach also has an impact on how we observe users and collect data. While we note all defects or user problems that we see, we are particularly interested in user behavior around the “pinch points” and “rabbit holes” that we’ve identified. With multiple observers, the hypotheses also serve as common points of reference. If we go in looking at the same things we can more easily compare and synthesize findings.

User observation methods are based on an ethnographic methodology in which the observer works hard to bracket their biases and expectations. The method I am suggesting complements this traditional approach in two ways. First, it is not intended to replace traditional observation methods, but to supplement them. When conducting tests with users, we keep our eyes open and record relevant user behavior, whether or not it relates to our predictions. We always find previously unidentified defects and see users do unexpected things. If you watch and listen, users will reveal interesting things about the tools and devices and products they use.

A second reason that hypotheses complement observation is that being explicit about expectations helps guard against biases on critical issues. Ethnography is hard to do well because it’s difficult to be aware of your biases and easy to find evidence to support them. In the Handbook of Usability Testing, Jeffrey Rubin advises having specific testing goals in mind: “If not, there is a tremendous tendency to find whatever results one would like to find” (p. 95). Starting with hypotheses or predictions provides a framework for consciously assessing and interpreting user testing data.

Sifting and reporting

One thing about qualitative data: there is a lot of it around. Data reduction is always a challenge, and assimilating exhaustive lists of defects can be daunting for the usability person and the rest of the team. Aside from the number of issues, there is the problem of organizing them. The hypotheses you start with can provide a meaningful way to group the results as you interpret them. This way, results are organized around design goals and related user behaviors, rather than around interface features, making them more relevant and pointing out underlying design relationships. The hypothesis-testing approach also helps determine what to fix, because findings and recommendations are easier to prioritize when you are mapping them against previously set goals. If you are doing iterative testing, the hypotheses help you triangulate between phases of testing and see whether and how design changes affect user behavior.

When it comes to reporting, I have found that I get the attention of designers and developers when I present findings that are put in terms of the goals and ideas they are pursuing. Since the hypotheses are built around their design vision, I am starting in a strong position. And, when I have actually observed users’ behaviour related to those goals, I can speak with greater confidence on the impact of design decisions. If finding the pricing page is an acknowledged part of the site’s task model, and I see that users can’t find it, designers are forced to ask what went wrong and less likely to blame the user.

Not the null hypothesis

It may seem that having hypotheses will bias you towards seeing certain behaviours and not others—that the observer will try to confirm their hypothesis. But that is not how an empirical method works. In fact, we don’t go out to prove hypotheses, but to test them. We’ve got to be open to whatever we see.

This hypothesis-testing approach to usability is not a true experimental methodology. In applied usability work you typically do not have the time to develop metrics and test their reliability, the luxury to assign users randomly to conditions, or the budget to test with sufficient numbers to use statistical tests of significance. You also want to cast a wide net and not restrict your observation to the hypotheses, as you might in a controlled study.

User testing remains a naturalistic (if not “natural”) research method founded on passive observation. So, it’s important not to micro-manage the user’s experience and remain open to whatever they do. And we don’t present our findings in terms of proved or disproved hypotheses because we don’t have (or want to impose) the strict controls needed for a true experimental design. But thinking in terms of hypotheses or predictions helps bring some of the rigour of empirical methods and helps focus the usability effort.

Though it’s rarely referred to in terms of hypotheses, I expect that in practice, usability often already proceeds in this way. For example, Susan Dray and David Siegel suggest beginning usability studies with a thorough review of the system or product to aid in “prioritizing the design issues and user tasks,” and note that it’s important to have “a good idea of the key areas to be probed” (p. 28).

Design as hypothesis

I find that by basing hypotheses on a site or application’s goals, I can integrate usability testing into the design process. By thinking in terms of hypotheses based on design goals I can generate relevant, action-oriented findings. In this way, usability doesn’t stifle creativity, it focuses it.

One reason this approach works, in my view, is that every design is a hypothesis. The designer is consciously or unconsciously predicting the user’s behaviour: “If I put a button here and make it look like this, the user will see it and know that they should click it next.” Making such predictions explicit focuses your usability reviews and research, and allows you to test the design, not the users.


Dray, Susan M., and David A. Siegel (1999). “Penny-Wise, Pound-Wise: Making Smart Trade-offs in Planning Usability Studies.” Interactions, ACM, May–June 1999, pp. 25–30.

Rubin, Jeffrey (1994). Handbook of Usability Testing. John Wiley and Sons. ISBN 0471594032.

Avi Soudack does usability consulting, information and instructional design, writing and editing, and research on educational media. A former market researcher and teacher, he’s been helping producers, designers, and others improve their communications and new technology efforts for almost twenty years. He’s been designing interactive instruction and information products for the last ten. His goal is to work in a bright room, whenever possible. His website is http://www.brightroom.ca.

Searching for the center of design

Design is driven by many considerations. But on each project I’ve worked on, there seems to be a consistent center—a driver that determines priorities, direction, and the metrics used to measure success.

The most common driver I’ve encountered is “chooser-centered” design: Whoever runs the show sets the agenda. That doesn’t always mean the VP is in charge–the “chooser” might be a gung-ho lead developer. As Cooper illustrates in The Inmates Are Running the Asylum, techies who are focused on the latest whizzy server platform can be just as unbalanced as a CEO obsessed with expressing the corporate mission, or a creative director who prizes aesthetics to the detriment of all else.

The key isn’t to point fingers at the culprits—it’s to find solutions. I know I’m preaching to the converted when I point out that client-centric, portfolio-centric, technology-centric, marketing-centric, and business-centric approaches abound, and all are flawed. We see the symptoms across the Web with CEO mug shots and vision statements on the homepage, or edgy visual design that wins awards but not customers. So what’s the answer?

In my mind, I hear your response: “We know better. We have the answers!” In the user experience community, we’re valiantly fighting against the infection of chooser-centered design, and the antidote we prescribe is user involvement. It’s the common rallying cry of the UX community: “Put the user in the process! Embrace user-centered design!”

Unfortunately, it’s the wrong answer.

Let me rephrase that—it’s only a partially right answer, and not the key consideration either. User-centered design (UCD) suffers from three significant drawbacks that disqualify it as the ultimate candidate for the center of design.

  1. In placing the user at the center of the process, UCD often ignores other considerations, and the process and projects become unbalanced. In reacting to the prevalence of chooser-centric decisions, we grasp UCD with such zeal that we lose sight of the bigger picture. Kent Dahlgren’s CHI-WEB post “Usability Contributed to My Layoff” vividly illustrates the consequences of extreme user focus.
  2. Putting the user at the center of the process and setting the metrics for project success implies that user-centered design is the “right” approach. Assuming UCD is THE right approach suggests that there is a sort of moral imperative to pursue a user-centered methodology. This has a number of detriments. For people who tacitly adopt the moral imperative position, attempted evangelism can come off with a preachy “I told you so” attitude. When others don’t buy into doing things the “right” way, they are often dismissed as unenlightened luddites who don’t understand the importance of what we do. Thus, some practitioners develop a UCD inferiority complex–a resentful feeling that we’re not appreciated or understood. Often this results in the practitioner’s return to more comfortable territory–conversations within the UX community about methods or tools, or how engineers or marketing just don’t get it. And that leads us to what might be the biggest drawback of UCD.
  3. UCD information is rarely put in terms that resonate with others outside the field–the reality is that user-centered design evangelists often aren’t user-centered at all. We might address ROI, but in the same sentence we use jargon like contextual inquiry, controlled vocabulary, or experience map. While jargon is useful inside the community, and business decision makers are smart enough to pick up our vocabulary, it points to a deeper problem. We naively expect our audiences to learn our lingo, rather than understanding their needs and addressing executives and other decision makers with language and messages tailored for them. We don’t practice what we preach.

None of this means that user-centered design is wrong or worthless. But it’s only part of the picture–necessary, but not sufficient. To see the complete picture requires stepping back and developing a balanced perspective. Individually, practitioners often recognize the other factors at play, but collectively we don’t express the recognition very well.

To connect with decision makers and the people who influence them, we should treat them as “users” of the user experience message. And in this case, being user-centered means not blurting out “User-centered design is the answer!” at the first available opportunity.

While there are a number of alternatives for approaching user experience evangelism, I’m going to share one perspective that has worked for me to begin conversations with decision makers. I call it value-centered design.

(Before we go further, a caveat: value-centered design isn’t the ultimate answer either, but I hope it helps in your own efforts to connect with decision makers.)

The basic premise of value-centered design is that shared value is the center of design. This value comes from the intersection of:

  • Business Goals and Context
  • Individual Goals and Context
  • The Offering (While it sounds like the title of a low-budget horror flick, “offering” is general enough for a wide variety of situations. For a particular project, this might be a product offering, service offering, or content offering)
  • Delivery (How do we get it from the business to the individual?)

Consideration of these goals leads to a particular offering from the business to the individual, delivered through a specific channel. Together, the offering and delivery method create a solution that drives return on investment for the business, and “return on experience” for the individual—return where she gains some benefit for the time, attention, or money invested in the experience. Both parties are satisfied, and this satisfaction establishes the foundation for sustainable initiatives and an ongoing relationship. Meeting business and individual goals creates value, and that’s largely what design is about.

When value is explicitly placed at the center of design, we no longer have to explain what we mean by user-centered design. Our user experience toolbox merely becomes part of the complete picture, working to produce a great solution that meets individual and business goals. User needs are set on equal footing with business needs, and the solution is explicitly a means to achieving those ends, instead of an end itself.

While space doesn’t permit great exposition on the implications of value-centered design, I want to address some key questions I get from my peers in the user experience community.

  1. Isn’t VCD incredibly generic? Isn’t everything about creating value?

    Well, the generality of value is what lets VCD speak to a wide variety of people and situations. However, it’s important to remember that VCD isn’t about value as a mere platitude or ideal; value explicitly comes from meeting business and individual goals. Putting individual user goals on the same plane as business goals tremendously changes the conversation with business decision makers. In fact, when value is defined this way, not everything is about creating value. All those different centers we touched on–award-centric, technology-centric, self-centric approaches–fail to generate value because they don’t satisfy both individual and business goals (and sometimes satisfy neither).

  2. Isn’t VCD just a reframing of current ideas?

    The quick answer is yes. But it’s not “just” a reframing. It’s a reframing that does two key things: puts the user into a business context as an equal player with business goals, and uses language tailored to business decision makers. It also helps me get over my personal “I told you so but nobody listens to me” complex that I seem to share with many in the user experience community. Most importantly, “value” provides a common platform for taking current ideas and introducing them to a much broader audience in a way that “user” can’t.

  3. For something so broad, what real concrete tools come from value-centered design? How can I apply this in my daily work?

Well, I hope B&A will let me talk some more about this at a later date, but for now, here are three ways I apply VCD day to day:

  1. Building buy-in for user experience;
  2. Applying user experience tools to the business side of the equation (ask me sometime about business personas); and
  3. Creating a framework for looking at projects and the tools that they need to generate value.

Of course there are more questions about value-centered design. It’s not perfect, and it’s still evolving within my own practice.

Value-centered design starts a story about an ideal interaction between an individual and an organization and the benefits each realizes from that interaction. How that story ends is still being decided with every new project we pursue. I hope that VCD will spark the first chapter in some great success stories about using a balanced approach to create lasting, sustainable value for businesses and the individuals they work with.

Jess McMullin is a user experience consultant who helps companies innovate products and services. Through value-centered design, Jess works to maximize return on investment for his clients and return on experience for their users. A founder of the Asilomar Institute for Information Architecture, he runs the popular IA news site iaslash. He is based in Edmonton, Alberta, Canada. Jess can be reached at banda(at)interactionary(dot)com.

Usability Heuristics for Rich Internet Applications

by:   |  Posted on
“The key difference between a typical Flash site and an RIA is that RIAs possess the functionality to interact with and manipulate data, rather than simply visualize or present it.”

Heuristics, or “rules of thumb,” can be useful both in usability evaluations and as guidelines during design. Jakob Nielsen’s 1994 set of usability heuristics was developed with a focus on desktop applications. In 1997, Keith Instone shared his thoughts on how these heuristics apply to what was a relatively new area: websites. Today, in 2003, with Flash-enabled Rich Internet Applications (RIAs) becoming more popular, Nielsen’s heuristics still offer valuable guidelines for RIA designers and developers.

In this article, we focus on Flash because it currently dominates the RIA landscape. However, many of the lessons for Flash apply to other technologies as well.

Rich Internet Applications offer the benefits of distributed, server-based Internet applications with the rich interface and interaction capabilities of desktop applications. The key difference between a typical Flash site and an RIA is that RIAs possess the functionality to interact with and manipulate data, rather than simply visualize or present it. While RIAs hold significant promise, many in the Flash community don’t have the opportunity to work with interaction designers, information architects, or other user experience professionals. As well, user experience professionals often decry Flash or other rich technologies as “bells and whistles” that detract from user goals. We hope this article provides some common ground for discussion between the two communities.

The list below includes Nielsen’s heuristics in bold; our comments about how they apply to RIAs follow each heuristic. Since RIAs cover a broad range of applications, we know we haven’t covered everything. We’d love to hear your own thoughts and experiences in the comments.

1. Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
RIAs should leverage their rich display capabilities to provide real-time status indicators whenever background processing requires the user to wait. While progress indicators are frequently used during an extensive preload when launching an application, they should also be used throughout a user’s interaction with data, whether the wait comes from backend data processing or from preloading.

When dealing with sequential task steps, RIAs should indicate progress through the task (e.g., “Step 4 of 6”). This helps users understand the investment required to complete the activity and helps them stay oriented during the activity. Labeling task steps will provide a clearer understanding of system status than simply using numbers to indicate progress. RIAs’ ability to store client-side data can be used to allow the user to skip optional steps or to return to a previous step.
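
To make the step-tracking idea concrete, here is a minimal sketch in TypeScript (not Flash-specific; the class name, step names, and optional-step behavior are invented for illustration) of a wizard that exposes labeled progress and lets the user skip an optional step:

    // Minimal sketch of labeled step tracking for a multi-step task.
    class WizardProgress {
      private current = 0;

      constructor(
        private steps: string[],
        private optional: Set<number> = new Set()
      ) {}

      // e.g. "Step 2 of 6: Shipping details" rather than a bare number.
      status(): string {
        return `Step ${this.current + 1} of ${this.steps.length}: ${this.steps[this.current]}`;
      }

      next(): void {
        if (this.current < this.steps.length - 1) this.current++;
      }

      back(): void {
        if (this.current > 0) this.current--;
      }

      // Client-side state lets the user jump past an optional step.
      skipIfOptional(): void {
        if (this.optional.has(this.current)) this.next();
      }
    }

    const checkout = new WizardProgress(
      ["Cart", "Shipping details", "Gift options", "Payment", "Review", "Confirm"],
      new Set([2]) // "Gift options" is optional
    );
    console.log(checkout.status()); // "Step 1 of 6: Cart"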

System status should relate to the user’s goals, and not to the technical status of the application, which brings us to our next heuristic.

2. Match between system and the real world

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Understanding the user’s vocabulary, context, and expectations is key to presenting a system that matches their world. While RIAs are made possible by the functionality of Flash and other technologies, users are usually not familiar with terms like rollover, timeline, ActionScript, remoting, or CFCs – such technology-based terms should be avoided in the application. (See our sidebar for definitions if you’re not sure of them yourself.)

While RIAs can offer novel metaphors, novelty often slows usefulness and usability. When using metaphors, ensure that they act consistently with their real-world counterparts. If application functions cause the metaphor to behave in ways that don’t match the real world, the metaphor has lost its usefulness and should be discarded in favor of a different concept.

Both information and functionality should be organized to reflect the user’s primary goals and tasks supported by the application. This supports a user’s feeling of competence and confidence in the task – a key need that is also supported by letting the user stay in control.

3. User control and freedom

Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Users are familiar with browser-based controls, including the Back button and Location field. However, using browser commands within an RIA may result in data loss.

The RIA should include code that is aware of and responsive to browser history. For applications containing complex functionality that is the focus of user attention, creating a full-screen version that hides the browser controls can be appropriate, as long as there is a clearly marked exit to return to the browser.

While “undo” and “redo” are not yet well-developed in the Flash toolkit, changes to data can be stored as separate copies, allowing the application to revert to a previous version of the data. However, this becomes quite complex in multi-user environments and requires strong data modeling to support.
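
As a rough illustration of the “separate copies” approach (sketched in TypeScript rather than ActionScript; the class and data shape are invented), the application can push a copy of its data onto a history stack before each change, then pop copies to undo or redo:

    // Snapshot-based undo/redo: keep whole copies of the data, not diffs.
    class SnapshotHistory<T> {
      private past: T[] = [];
      private future: T[] = [];

      constructor(private clone: (data: T) => T) {}

      // Call just before applying a change, passing the current data.
      record(current: T): void {
        this.past.push(this.clone(current));
        this.future = []; // a new change invalidates any redo history
      }

      undo(current: T): T | undefined {
        const previous = this.past.pop();
        if (previous !== undefined) this.future.push(this.clone(current));
        return previous;
      }

      redo(current: T): T | undefined {
        const next = this.future.pop();
        if (next !== undefined) this.past.push(this.clone(current));
        return next;
      }
    }

    // Usage with a simple record type; a shallow copy stands in for a deep clone.
    type Invoice = { customer: string; total: number };
    const history = new SnapshotHistory<Invoice>((d) => ({ ...d }));
    let invoice: Invoice = { customer: "Acme", total: 100 };
    history.record(invoice);                    // snapshot before the change
    invoice = { ...invoice, total: 150 };       // apply the change
    invoice = history.undo(invoice) ?? invoice; // back to total: 100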

Many Flash projects include splash screens or other scripted presentations. These non-interactive exhibitions of technical prowess reduce the user’s feeling of control. The ubiquitous “Skip Intro” link offers little help – instead, consider how any scripted presentation benefits the user. If a scripted sequence doesn’t support user goals, skip the development time in favor of something that does. One area that may be a better investment is working to ensure consistency in the application.

4. Consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
All applications require consistency within their features, including terminology, layout, color, and behavior. Complying with interface standards can help maintain consistency. However, since RIAs are a new category of applications, standards are still being developed. The Microsoft Windows User Experience guidelines, the Apple Human Interface Guidelines, and Macromedia’s developing guidelines provide some alternative starting points for RIA standards.

Branding guidelines also often require consistency that RIA teams need to consider. RIAs are often deployed as branded solutions for a variety of customers. The application needs to be flexible in implementing custom copy, color, and logos. However, branding should not compromise good design. RIA teams may need to show that the brand will gain equity through applying useful and usable standards as well as beautiful visual design. A gorgeous, cutting edge, award-winning presentation won’t help the brand if it’s easy for users to make disastrous mistakes that prevent them from reaching their goals.

5. Error prevention

Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
In forms, indicate required fields and formats with examples. Design the system so that it recognizes various input options (780.555.1212 vs. 780-555-1212) rather than requiring the user to comply with an arbitrary format. Also consider limiting the amount of data entry required and reducing input errors by saving repetitious data and auto-filling fields throughout the application.
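
For instance, instead of rejecting a phone number because of its punctuation, the form can normalize whatever the user types into one canonical format. A minimal sketch in TypeScript (the function name and the ten-digit assumption are ours, for illustration only):

    // Accept common phone formats and normalize them, instead of forcing one.
    // Returns null only when the input genuinely can't be a ten-digit number.
    function normalizePhone(input: string): string | null {
      const digits = input.replace(/\D/g, ""); // strip dots, dashes, spaces, parens
      if (digits.length !== 10) return null;
      return `${digits.slice(0, 3)}-${digits.slice(3, 6)}-${digits.slice(6)}`;
    }

    console.log(normalizePhone("780.555.1212"));   // "780-555-1212"
    console.log(normalizePhone("(780) 555 1212")); // "780-555-1212"
    console.log(normalizePhone("5551212"));        // null - ask the user to correct it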

Avoid system functions with disastrous potential, such as “Delete All Records.” When functions with significant impact are necessary, isolate them from regular controls. Consider an “Advanced Options” area only accessible to administrators or superusers, rather than exposing dangerous functionality to all users.

With RIAs, when problems do occur, network connectivity allows for the capture and transmission of error details. Similarly, the distributed model of RIAs lets developers push minor updates that are almost immediately available to the user, correcting issues that repeatedly cause user errors. Beyond the technology, another way to prevent errors is to make currently needed information available to the user instead of making them remember things from previous screens.
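
Returning to the point about capturing error details: a small reporter can gather context the user never needs to see and send it to the team, while the interface shows a plain-language message. This sketch is in TypeScript for readability; the /errors endpoint, the report fields, and the version string are assumptions, not a real API:

    // Hypothetical error reporter: captures technical details and posts them home.
    interface ErrorReport {
      message: string;    // technical message, never shown to the user
      screen: string;     // where in the application the error occurred
      timestamp: string;
      appVersion: string;
    }

    async function reportError(error: Error, screen: string): Promise<void> {
      const report: ErrorReport = {
        message: error.message,
        screen,
        timestamp: new Date().toISOString(),
        appVersion: "1.2.0", // illustrative version string
      };
      try {
        // "/errors" is an assumed server endpoint for this sketch.
        await fetch("/errors", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(report),
        });
      } catch {
        // Reporting must never create a second error for the user.
      }
    }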

6. Recognition rather than recall

Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Too often, the rich presentation possibilities of Flash are used to play hide-and-seek with important interface elements. Don’t hide controls that are key to user tasks. Revealing application controls on rollover or with a click can create exciting visual transitions, but will slow user tasks and create significant frustration.

Since people who are engaged in a task decide where to click based on what they see, rollovers or other revealed elements can only provide secondary cues about what actions are appropriate. The interface should provide visible primary cues to guide user expectations and help users predict which controls will help them achieve their goals. While some of these cues will be basic functionality, cues should also be available for frequent users to show functions that save them time or let them work more flexibly.

7. Flexibility and efficiency of use

Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
RIAs can leverage the advanced functionality of the platform to provide accelerators such as keyboard shortcuts, type-ahead auto-completion, and automatic population of fields based on previously entered data or popularity of response.
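
A type-ahead control can be as simple as filtering known values against the characters typed so far, ranking matches that start with the text ahead of those that merely contain it. A minimal sketch (TypeScript, with invented example data):

    // Minimal type-ahead: rank known values by whether they start with,
    // then merely contain, the text typed so far.
    function suggest(input: string, known: string[], limit = 5): string[] {
      const query = input.trim().toLowerCase();
      if (!query) return [];
      const startsWith = known.filter((k) => k.toLowerCase().startsWith(query));
      const contains = known.filter(
        (k) => !k.toLowerCase().startsWith(query) && k.toLowerCase().includes(query)
      );
      return [...startsWith, ...contains].slice(0, limit);
    }

    const cities = ["Edmonton", "Calgary", "Toronto", "Montreal", "Moncton"];
    console.log(suggest("mo", cities)); // ["Montreal", "Moncton", "Edmonton"]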

Less technically sophisticated accelerators should also be available—particularly bookmarks—either by allowing bookmarking in the browser, or creating a bookmark utility within the application itself. Another option for giving quick access to a specific screen is assigning each screen a code which a user can enter in a text field to immediately access the screen without navigating to it.

RIAs also offer the opportunity for personalization of the application, through dynamic response to popularity or frequency of use, or through user customization of functionality.

Established usability metrics, such as time spent carrying out various tasks and sub-tasks, as well as the popularity of certain actions, can be logged automatically, analyzed, and acted on in a nearly real-time fashion. For example, if a user repeatedly carries out a task without using accelerators, the application could provide the option of creating a shortcut or highlight existing accelerated options for completing the same task. However, providing these options should be an exercise in elegance, instead of a display of technical prowess.
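
As a sketch of that idea (TypeScript; the threshold, task names, and messages are invented), the application could count how many times a user completes a task the long way and, past a threshold, offer the shortcut exactly once:

    // Count long-form completions of a task and suggest its accelerator
    // once the user has repeated it enough times to benefit.
    class AcceleratorCoach {
      private counts = new Map<string, number>();

      constructor(
        private threshold = 3,
        private notify: (msg: string) => void = console.log
      ) {}

      completedWithoutShortcut(task: string, shortcutHint: string): void {
        const n = (this.counts.get(task) ?? 0) + 1;
        this.counts.set(task, n);
        if (n === this.threshold) {
          this.notify(`Tip: you can ${shortcutHint} to do "${task}" faster.`);
        }
      }
    }

    const coach = new AcceleratorCoach();
    for (let i = 0; i < 3; i++) {
      coach.completedWithoutShortcut("archive message", "press the E key");
    }
    // Logs the tip exactly once, after the third long-form completion.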

8. Aesthetic and minimalist design

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
For any given feature, style, or branding element, ask two key questions: “What is the return on investment for the business?” and “What is the return on experience for the user?” What value does the element contribute? If a feature can be removed without seriously impacting ROI or ROE, the application will be better without it.

RIA design is often a balancing act between application functionality and brand awareness for the business. Limit user frustration by reducing branding emphasis in favor of functionality. While branding can and often should play an important role, the brand will best be supported by a positive user experience. Rather than creating a complicated visual style with an excess of interface “chrome,” work for simplicity and elegance.

Animation and transitions should also be used sparingly. While they can make a great demo in the boardroom, gratuitous animation will provoke user frustration. The time spent animating an element interrupts the user’s concentration and engagement in the task, and disrupting task flow significantly impacts usability and user satisfaction.

Sound can also disrupt task flow – use subtle audio cues for system actions, rather than gratuitous soundtracks that are irrelevant to the task at hand.

A further advantage of maintaining a clean, minimalist design is that it generally results in smaller file sizes and shorter load times, which is essential given the limited patience of many internet users. Another advantage is that a clean interface makes it easier for the user to recognize when things are going right and when things are going wrong.

9. Help users recognize, diagnose, and recover from errors

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

Sidebar: Flash terminology

Rollover
Changing the visual appearance of an interface element when the mouse “rolls over” it. A rollover may also trigger changes to other interface elements.

Timeline
The Flash development environment shows a timeline to organize screen elements and their interaction over time; along with ActionScript, it is the primary way of creating interactivity in Flash applications.

ActionScript
A JavaScript-based scripting language built into Flash, used to program interface actions and behaviors.

Remoting
Server technology called Flash Remoting allows the Flash client to interact with software components on a server that can contain business logic or other code. Flash Remoting can connect Flash to server programming environments like ColdFusion, .NET, Java, and PHP. This provides for a cleaner division of labor and better security, with the server-side components doing the heavy computational lifting and the Flash client focusing on user interaction.

CFCs
Acronym for ColdFusion Components – server-based software components written in Macromedia’s ColdFusion language. CFCs natively support remoting.

Error messages should hide technical information in favor of explaining in everyday language that an error occurred. References to “missing objects” or other development jargon will only frustrate users.

RIA error messages can explain complicated interactions using animation. However, animation should be used sparingly in favor of clear textual explanations. Explanations should focus on solutions as much as on causes of error. Display error messages alongside the appropriate application controls or entry fields so that the user can take corrective action while reading the message. The ability to overlay help messages or illustrations directly on the interface can be useful in explaining task flow between related screen elements.

When errors are not easily diagnosed, make solution suggestions based on probability – ask what the user is most likely trying to accomplish right now, and present those options.

RIAs also provide the opportunity to immediately connect a user experiencing major difficulties with support personnel who can guide them to a solution through text chat, video chat, or remote manipulation. These live support channels are just some of the help options available to RIA teams.

10. Help and documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
RIAs should contain simple and concise instructions, prompts, and cues embedded in the application itself. More extensive help should be available from within the RIA.
Animation or video tutorials with concise narration can often guide a user through complex tasks, while engaging individuals who learn better from visual or audio instruction than from text. Showing the required steps is often easier for the user to understand than mentally translating a text description of the steps to the appropriate interface elements. Providing immediate contextual help through tool tips and contextual help buttons allows users to complete their tasks without having to shift focus to a separate help system.

Conclusion
This take on how Jakob Nielsen’s heuristics apply to RIAs is far from definitive. Rather than accepting these examples as unquestioned rules, we hope they spark your own thinking about how to apply the heuristics in your work, whether you’re a Flash developer or an interaction designer (or both). RIAs hold considerable promise for both Flash developers and user experience practitioners, and usability best practices like Nielsen’s heuristics are essential for realizing that promise.

The key takeaway for the Flash community: RIAs aren’t about grabbing attention, they’re about getting things done. This is a different mindset than many marketing-driven Flash sites, where bells and whistles are often encouraged in an effort to hold short attention spans. With RIAs, there’s no need to shout – the user is already engaged in accomplishing a goal. The best way to rise above the crowd is to cultivate a deep understanding of who your users are, what their goals are, and then design to meet those goals as quickly and elegantly as possible.

The key takeaway for the user experience community: Flash has matured beyond bells and whistles to provide a platform that enables a far better user experience for complex interactions than regular browser technology. While it isn’t perfect, it can open new possibilities for you as a user advocate. You’ll hear less “we can’t do that” from engineering teams, and be able to create interfaces and interactions closer to your vision. Getting to know the potential of Flash and other RIA platforms will help user experience professionals take advantage of the rich interaction available.

Over the coming months and years, RIAs will move from cutting edge to mainstream. That transformation will accelerate with the Flash and user experience communities working together to understand and develop best practices and shared knowledge. We’re looking forward to great new things— if you’re already doing them, drop us a line in the comments.

Jess McMullin is a user experience consultant who helps companies innovate products and services. Through value-centered design, Jess works to maximize return on investment for his clients and return on experience for their users. A founder of the Asilomar Institute for Information Architecture, he runs the popular IA news site iaslash. He is based in Edmonton, Alberta, Canada.

Grant Skinner is on the cutting edge of Rich Internet Application conceptualization and development, fusing coding prowess with interface design, marketing, and business logic. Grant is internationally recognized for his work on gskinner.com, FlashOS2, and gModeler. As a freelance consultant, Grant works with corporate clients and leading web agencies to deliver online solutions that generate value. A frequent conference speaker, he will be speaking on usability issues specific to RIAs at SIGGRAPH at the end of July. Grant is based in Edmonton, Alberta, Canada.