Reorgs: Rocky or Righteous?

As designers, we grapple every day with challenging projects. This of course is part of what keeps us coming back. Some challenges, although not directly related to project work, can still be looked at through a UX lens. In this case, I’m talking about a phenomenon you’re likely familiar with: company reorganization.

If you’ve been through a reorg (that’s ‘reorganization’ in water cooler parlance), you’ve probably experienced your share of the whispers, closed-door meetings, and mixed messages that seem to be par for the course when an organization goes through major changes in size, scope, staffing, or management.

I’ve been through a number of these shuffled decks myself, across several companies, and for a variety of reasons. It’s fair to claim that each one is different, but there’s enough overlap to identify patterns and form some baseline recommendations.

If you’re in a role with decision-making authority, then you’re ideally positioned to ensure that the reorg will be designed as an intentional experience with its actual user base in mind.

However, if you’re like the majority of us who aren’t in a position to make decisions about the reorg, you’re probably still reasonably close to the folks who are. Why not take the initiative and lay out some scenarios and recommendations for how the reorg can be designed for optimal reception and impact on your organization?

The users

Whether it’s planned or not, the scope of the reorg will have an audience far larger than the group of people seemingly affected on paper. The experience of these groups throughout the reorg should be purposefully designed by whoever is running the change management show.

Let’s take a look at who your users are.

  1. The folks who are officially part of the reorg. Their status is changing in some way, be it their actual role, reporting structure, or the like.
  2. Coworkers/teams who have direct or dotted-line dependencies with anyone or any team directly involved in the change.
  3. Coworkers/teams whose only connection is physical or cultural proximity or who ultimately report to the same upper management.
  4. Third party vendors who communicate with or provide services to reorg-affected parties.

Here’s what you need to realize: These groups will be getting bits and pieces of news about the reorg whether or not you craft that message explicitly.

With that in mind, you should ensure the messaging supports the business strategy, is accurate, and speaks to each party’s specific concerns.

This is the difference between an unplanned, unpredictable experience and an intentional, designed experience. It’s a golden opportunity to show your stakeholders that they are a valued part of the organization and that you’ve got your arms firmly around managing the changes. If the right preparation goes into the reorg, you can nip misinformation and unnecessary stress in the bud, building confidence in your team’s leadership and capability as a whole.

The alternative is to risk spending what trust currency you’ve accrued to date.

The message

Now that you know who you’re talking to, what do you say? It’s idealistic to think that you’ll know all the details when you begin planning the reorganization–but you do need to initiate your communications plan as close to the start of planning as you can.

Start by crafting general messaging that explains the why–the logic behind the necessity and desired benefits of the reorg. This should stay high level until more details are known. If you know enough about the how to paint a low-res picture, do it.

A little bit of information that’s transparent and honest will go a long way–but take care not to make promises you can’t keep. Things can and will change, so own up to the reality that dates and other details are very much in flux to help you avoid having to take back your words when deadlines shift down the road.

As you approach major milestones in the reorg process and as the details solidify, provide appropriate communications to your audience groups–and do so again once the changes have been rolled out. This may seem like a lot of effort, but rest assured your people are asking questions. It’s up to you to address them proactively.

If a milestone date changes–and it will–the audience who’s been paying attention will still be looking to that date unless you update your wayfinding (in the form of project timeline communications). Without this careful attention to detail, you’re sharing bad information–perhaps more damaging than no information at all.

When the rubber meets the road

Inevitably, one question that will come up repeatedly throughout a reorg is “When does all this actually happen?” In other words, when do we start following the new processes, change how we route requests, start doing this and stop doing that?

For both logistical and psychological reasons, knowing how and when transitions will take place is vital. Often the difference between a stakeholder being stressed out (perhaps becoming a vocal opponent of the changes) versus being calm and confident is the company’s honest commitment to consciously bridging the transition with trained, capable support.

This could be as simple as a window of time during which existing persons or processes can continue to be called upon for support or as complex as an official schedule that shows specifically how and when both the responsibilities AND expectations of the audience segments will change.

Usability research

It’s not like you can do A/B testing with a reorg. You can, however, do some polling when the initial reorg information is shared, then midstream, and again after the reorg is complete.

Why do this research? As with any project, from your first-person perspective, reorg elements might seem obvious–or you may have overlooked some pretty big pieces. Talking with your ‘users’ can be illuminating and also sends the message that their input is desired and valued.

While some reorgs are expressly designed to reduce overhead/staff, reorgs are not always about cutting heads. Oftentimes it’s a shuffle of resources (people), and if the right discussions happen you can guide that process to a win-win.

Using a handy list written by a gentleman you may know of, here are some dimensions co-opted for our use. Employ these as you see fit to generate interview material and discover how well your company reorg experience has been crafted.

Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?

We can ask our participants what they took away from the reorg communications they were sent. This includes actual group or 1:1 meetings, formal documents, emails, etc.

Find out if the materials conveyed the message so the transition was easy to understand. Did they grasp both the high-level view and the granular details? (In other words, overall strategy and the specific impact to them.)

Efficiency: Once users have learned the design, how quickly can they perform tasks?

If the folks you’re polling have been given specific assignments in the reorg, ask early on if they fully understand their instructions and if they could have added any insight that might have decreased task costs or durations. Midstream or late in the game you can follow up to see if those instructions turned out to be clear and accurate enough for the tasks to have been carried out efficiently.

Did task instructions follow the most time-saving sequence? Were there steps left out of the tasking communications that had to be discovered and completed?

Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?

Remember the telephone game? Someone makes up a story and then each player passes the story on to the next by whispering. When the story makes it back to the author, the details have changed–it’s a different story.

When those involved in a reorg talk with others, they’ll pass along what they know. The simpler the story and the more they’ve understood it, the less you’ll lose in translation.

Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?

A successful reorg requires a lot of work and collaboration between groups. Mistakes tend to be costly and have a ripple effect, becoming harder to correct as time goes on. The critical path of these big projects is put at risk by missteps owing in large part to (wait for it) learnability and memorability, or by errors introduced by people who have been put off by the inefficiency of the reorg process and attempt to forge their own path.

Another source of error is failing to communicate enough timely information about role changes to employees and contractors. Major change breeds anxiety, and in a job market where workers have the power and employers are constantly on the prowl for good (and hard-to-find) talent, it’s a mistake to risk wholesale attrition.

Avoid this error by honestly and accurately communicating dates and the likelihood of roles continuing as is or with changes. If roles are going away, be transparent about that too. Better to maintain trust and respect with clear messaging about terminations than to leave folks in doubt and unable to plan for their future.

Satisfaction: How pleasant is it to use the design?

If the reorg does NOT leave a bad taste in everyone’s mouth, and if the stated project goals have been met, you’re doing it right. Reorgs happen for a reason, typically because something’s suboptimal or simply broken. Ultimately, everyone should pull together and work towards a positive outcome resulting in better workflow, lowered cost of doing business, increased job satisfaction, and, of course, $$$.

Moving on

Regardless of your role in the company and the reorg, consider whether or not you can use your UX superpowers to make the entire process less painful, easier to understand, and more likely to succeed.

Good luck!

Ending the UX Designer Drought

The user experience design field is booming. We’re making an impact, our community is vibrant, and everyone has a job. And that’s the problem. A quick search for “user experience” on indeed.com reveals over 5,000 jobs posted in the last 15 days (as of March 15, 2014) in the United States alone! Simple math turns that into the staggering statistic of 10,000 new UX-related jobs being created every month.
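
For the curious, the back-of-the-envelope math (assuming the posting rate holds steady): 5,000 jobs ÷ 15 days ≈ 333 new postings per day, and 333 per day × 30 days ≈ 10,000 per month.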

This amount of work going undone is going to prevent us from delivering the value that UX promises. It’s going to force businesses to look toward something more achievable to provide that value. For user experience design to remain the vibrant, innovation-driving field it is today, we need to make enough designers to fill these positions.

Fortunately, there are a tremendous number of people interested in becoming UX designers. Unfortunately, it is nearly impossible for these people to land one of these jobs. That’s because of the experience gap. These UX jobs are all for people with 2-3 years of experience–or more.

UX design is a strategic discipline in which practitioners make recommendations that can have a big impact on an organization’s revenue. Frankly, a designer isn’t qualified to make these kinds of recommendations without putting in some time doing fundamental, in-the-trenches research and design work. While this might seem like an intractable problem, the skills required to do this fundamental work can be learned!

Someone just has to teach them.

Solving the problem

There are many ways to teach fundamental UX design skills. Design schools have been doing it for years (and the new, practically-focused Unicorn Institute will start doing it soon). However, to reach the full breadth of people interested in UX design, education in UX design needs to be accessible to people at any stage of their lives. To do that, you need to make learning a job.

This is not as crazy as it sounds. Other professions have been doing this for hundreds of years in the form of apprenticeship. This model has a lot to offer the UX design field and can be adapted to meet our particular needs.

What is apprenticeship?

In the traditional model of apprenticeship, an unskilled laborer offers their labor to a master craftsman in exchange for room, board, and instruction in the master’s craft. At the end of a certain period of time, the laborer becomes a journeyman and is qualified to be employed in other workshops. To be considered a master and have their own workshop and apprentices, however, a journeyman must refine their craft until the guild determines that their skill warrants it.

While this sounds medieval–because it is–there are a few key points that are still relevant today.

First, apprenticeship is learning by observation and practice. Designing a user experience requires skills that can only be acquired through practice. Apprentices are also compensated with more than just the training they receive. Even “unskilled,” they can still provide value. A baker’s apprentice can haul sacks of flour; a UX apprentice can tame the detritus of a design workshop.

Apprenticeship is also limited to a specific duration, after which the apprentice is capable of the basics of the craft. In modern terms, apprenticeship is capable of producing junior designers who can bring fundamental, tactical value to their teams. After a few years of practicing and refining these skills, those designers will be qualified to provide the strategic UX guidance that is so sought after in the marketplace.

A new architecture for UX apprenticeship

The apprenticeship model sounds good in theory, but does it work in practice? Yes. In 2013, The Nerdery, an interactive design and development shop in Minneapolis, ran two twelve-week cohorts of four apprentices each. There are now eight more UX designers in the world. Eight designers might seem like a drop in the 10,000-jobs-per-month bucket, but if more design teams build apprenticeship programs, it will fill up very quickly.

Building an apprenticeship program might sound difficult to you. However, The Nerdery’s program was designed in such a way that it could be adapted to fit different companies of different sizes. We call this our UX Apprenticeship Architecture, and I encourage you to use it as the basis of your own apprenticeship program.

There are five components to this architecture. Addressing each of these components in a way that is appropriate for your particular organization will lead to the success of your program. This article only introduces each of these components. Further articles will discuss them in detail.

Define business value

The very first step in building any UX apprenticeship program is to define how the program will benefit your organization. Apprenticeship requires an investment of money, time, and resources, and you need to be able to articulate what value your organization can expect in return for that investment.

Exactly what this value is depends on your organization. For The Nerdery, the value is financial. We train our apprentices to become full members of our design team. Apprenticeship allows us to achieve our growth goals (and the revenue increase that accompanies growth for a client services organization). For other organizations, the value might be less tangible and direct.

Hire for traits, not talent

Once you’ve demonstrated the value of apprenticeship to your organization and you’ve got their support, the next thing to focus on is hiring.

At first, it can take a long time to narrow down what you’re looking for. Hiring apprentices is very different from hiring mid- to senior-level UX designers. You’re not looking for people who are already fantastic designers; you’re looking for people who have the potential to become fantastic designers. Identifying this potential is a matter of identifying certain specific traits within your applicants.

There are two general sets of traits to look for: traits common to good UX designers and traits that indicate someone will be a good apprentice. For example, someone who is defensive and standoffish in the face of critical feedback will not make a good apprentice. In addition to these two sets of traits, there will very likely be an additional set that is particular to your organization. At The Nerdery, we cultivate our culture very carefully, so it’s critical for us that the apprentices we hire fit our culture well.

Pedagogy

“Pedagogy” means a system of teaching. Developing the tactics for teaching UX design can take time as well, so it’s best to begin focusing on that once recruiting is underway. At The Nerdery, we found that there are four pedagogical components to learning UX design: orientation, observation, practice, and play.

Orientation refers to exposing apprentices to design methods and teaching them the very basics. In observation, apprentices watch experienced designers apply these methods and have the opportunity to ask them about what they did. Once an apprentice learns a method and observes it in use, they are ready to practice it by doing the method themselves on a real project. The final component of our pedagogy is play. Although practice allows apprentices to get a handle on the basics of a method, playing with that method in a safe environment allows them to make the method their own.

Mentorship

Observation and practice comprise the bulk of an apprentice’s experience. Both of these activities rely on close mentorship to be successful. Mentorship is the engine that makes apprenticeship go.

Although mentorship is the most critical component of apprenticeship, it’s also the most time-intensive. This is the biggest barrier an organization must overcome to implement an apprenticeship program. At The Nerdery, we’ve accomplished this by spreading the burden of mentorship across the entire 40-person design team rather than placing it full-time on the shoulders of four designers. Other teams can do this too, though the structure would be different for both smaller and larger teams.

Tracking

The final component of our apprenticeship architecture is tracking. It is largely the tracking of apprentice progress that gives apprenticeship the rigor that differentiates it from other forms of on-the-job training. We track not only the hours an apprentice spends on a given method but also qualitative feedback from their mentors on their performance. Critical feedback is key to apprentice progress.

We track other things as well, such as feedback about mentors, feedback about the program, and the apprentice’s thoughts and feelings about the program. Tracking allows the program to be flexible, nimble, and responsive to the evolving needs of the apprentices.
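
As a concrete illustration, a tracking record might look something like the following sketch. The field names are hypothetical; this article doesn’t describe The Nerdery’s actual tooling.

    # A minimal sketch of apprentice tracking: hours per method plus qualitative
    # mentor feedback. All names here are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class MethodLog:
        method: str    # e.g., "usability testing"
        hours: float   # time spent practicing the method
        mentor: str    # who observed the work
        feedback: str  # qualitative notes on the apprentice's performance

    @dataclass
    class ApprenticeRecord:
        name: str
        logs: list = field(default_factory=list)

        def hours_on(self, method: str) -> float:
            # Total hours the apprentice has spent on a given method so far.
            return sum(log.hours for log in self.logs if log.method == method)

    # Example: rec = ApprenticeRecord("Alex")
    #          rec.logs.append(MethodLog("personas", 3.5, "Sam", "Solid first pass"))
    #          rec.hours_on("personas")  # 3.5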

Business value, traits, pedagogy, mentorship, and tracking: Think about these five things in relation to your organization to build your own custom apprenticeship program. Although this article has only scratched the surface of each, subsequent articles will go into details.

Part two of this series will cover laying the foundation for apprenticeship, defining its business value and identifying who to hire.

Part three will focus on the instructional design of apprenticeship, pedagogy, mentorship, and tracking.

If you’ve got a design team and you need to grow it, apprenticeship can help you make that happen!

On A Scale of 1 to 5

Where would we be without rating and reputation systems these days? Take them away, and we wouldn’t know who to trust on eBay, what movies to pick on Netflix, or what books to buy on Amazon. Reputation systems (essentially a rating system for people) also help guide us through the labyrinth of individuals who make up our social web. Is he or she worthwhile to spend my time on? For pity’s sake, please don’t check out our reputation points before deciding whether to read this article.

Rating and reputation systems have become standard tools in our design toolbox. But sometimes they are not well understood. A recent post at the IxDA forum showed confusion about how and when to use rating systems. Much of the conversation was about whether to use stars or some other iconography. These can be important questions, but they miss the central point of rating systems: to manage risk.

So, when we think about rating and reputation systems, the first question to ask is not, “Am I using stars, bananas, or chili peppers?” but, “what risk is being managed?”

 

What Is Risk?

We desire certainty in our transactions. It’s just our nature. We want to know that the person we’re dealing with on eBay won’t cheat us. Or that Blues Brothers 2000 is a bad movie (1 star on Netflix). So risk, most simply (and broadly), arises when a transaction has a number of possible outcomes, some of which are undesirable, but the precise outcome cannot be determined in advance.

 

Where Does Risk Come From?

There are two main sources of risk that are important for rating and reputation systems: asymmetric information and uncertainty.

Asymmetric information arises when one party to a transaction cannot completely determine in advance the characteristics of the other party, and this information cannot credibly be communicated. The main question here is: can I, the buyer, trust you, the seller, to honestly complete the transaction we’re going to engage in? That means: will you take my money and run? Did you describe what you’re selling accurately? And so on.

This unequal distribution of information between buyers and sellers is a characteristic of most transactions, even in transactions where fraud is not a concern. Online transactions make asymmetric information problems worse. No longer can we look the seller in the eye and make a judgment about their honesty. Nor can we physically inspect what we’re buying and get a feel of its quality. We need other ways to manage our risk generated by asymmetric information.

The other source of risk is not knowing beforehand whether we’ll like the thing we’re buying. Here honesty and quality are not the issue, but rather our own personal tastes and the nature of the thing we’re buying. Movies, books, and wine are examples of experience goods, which we need to experience before we know their true value. For example, we’re partial to red wine from Italy, but that doesn’t mean we’ll like every bottle of Italian red wine we buy.

 

Managing Risk with Design

Among the ways to manage risk, two methods will be of interest to user experience designers:

  1. Signaling is where participants in a transaction communicate something meaningful about themselves.
  2. Reducing information costs involves reducing the time and effort it takes participants in a transaction to get meaningful information (such as: is this a good price? is this a quality good?).

Reputation systems tend to enable signaling and are best utilized in evaluating people’s historical actions. In contrast, rating systems are a way of leveraging user feedback to reduce information costs and are best utilized in evaluating standard products or services.

It is important to note that reputation systems are not the only way to signal (branding and media coverage, among others, also work), and rating systems are not the only means of reducing information costs (better search engines and product reviews also help, for example). But these two tools are becoming increasingly important, as they provide quick reference points that capture useful data.

As we review various aspects of rating and reputation systems, the key questions to keep in mind are:

  1. Who is doing the rating?
  2. What, exactly, is being rated?
  3. If people are being rated, what behaviors are we trying to encourage or discourage?

 

Who is doing the Rating?

A random poll of several friends shows about half use the Amazon rating system when buying books and the other half ignore it. Why do they ignore it? Because they don’t know whether the people doing the rating are crackpots or have tastes similar to their own.

Amazon has tried to counteract some of these issues by using features such as “Real Name” and “helpfulness” ratings of the ratings themselves (see Figure 1).

Figure 1: Amazon uses real names and helpfulness to communicate honesty of the review.

This is good, but requires time to read and evaluate the ratings and reviews. It also doesn’t answer the question, how much is this person like me?

Better is Netflix’s system (Figure 2), which is explicit about finding people like you, be they acknowledged friends or matched by algorithm.

Figure 2: Netflix lets you know what people like you thought of a movie.

Both these systems implicitly recognize that validation of the rating system itself is important. Ideally users should understand three things about the other people who are doing the rating:

  1. Are they honest and authentic?
  2. Are they like you in a way that is meaningful?
  3. Are they qualified to adequately rate the good or service in question?

The last point is important. While less meaningful for rating systems of some experience goods (we’re all movie experts, after all), it is more important for things we understand less well. For example, while we might be able to say whether a doctor is friendly or not, we may be less able to fairly evaluate a doctor’s medical skills.
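
To make the idea concrete, here is a minimal sketch of similarity-weighted rating aggregation, the general mechanism behind “people like you” predictions. It’s an illustration only; Netflix’s actual algorithm is more sophisticated and isn’t described here.

    # Predict a rating by weighting other raters' scores by how similar their
    # rating history is to mine. A generic sketch, not Netflix's algorithm.
    def predicted_rating(my_ratings, raters):
        """my_ratings: {item: score}. raters: list of ({item: score}, score_for_target)."""
        weighted_sum = 0.0
        total_weight = 0.0
        for their_ratings, their_score in raters:
            shared = set(my_ratings) & set(their_ratings)
            if not shared:
                continue  # no overlap, so we can't judge similarity
            # Similarity: inverse of the mean absolute difference on shared items.
            mean_diff = sum(abs(my_ratings[i] - their_ratings[i]) for i in shared) / len(shared)
            weight = 1.0 / (1.0 + mean_diff)
            weighted_sum += weight * their_score
            total_weight += weight
        return weighted_sum / total_weight if total_weight else None

    # Raters who scored past items the way I did pull the prediction toward
    # their score; dissimilar raters barely move it.
    mine = {"Movie A": 5, "Movie B": 1}
    raters = [({"Movie A": 5, "Movie B": 2}, 4), ({"Movie A": 1, "Movie B": 5}, 1)]
    print(predicted_rating(mine, raters))  # ~3.3, dominated by the similar rater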

 

What is being rated?

Many rating systems are binary (thumbs up, thumbs down), or scaled (5 stars, 5 chili peppers, etc.), but this uni-dimensionality is inappropriate for complicated services or products which may have many characteristics.

For example, Figure 3 depicts a rating system from the HP Activity Center and shows how not to do a rating system. Users select a project that interests them (e.g., how to make an Ireland Forever poster) and then complete it using materials they can purchase from HP (e.g., paper). A rating system is included, presumably to help you decide which project you should undertake in your valuable time.

Figure 3: The rating system on the HP Activity Center site: what not to do.

A moment’s reflection raises the following question: what is being rated? The final outcome of the project? The clarity of the instructions? How fun this project is? We honestly don’t know. Someone thoughtlessly included this rating system.

Good rating systems also don’t inappropriately “flatten” the information that they collect into a single number. Products and services can have many characteristics, and not being clear on what characteristics are being rated, or inappropriately lumping all aspects into a single rating, is misleading and makes the rating meaningless.

RateMDs, a physician rating site, uses a smiley face to tell us about how good the doctor is (Figure 4).

Figure 4: RateMDs.com rating system for doctors.

Simple? Yes. Appropriate? Perhaps not.

Better is Vitals, a physician rating site that includes information about doctors’ years of experience, any disciplinary actions they might have, their education, and a patient rating system (Figure 5).

Figure 5: The multi-dimensional rating system on Vitals.com.

While Vitals has an overall rating, more important are the components of the system. Each variable – ease of appointment, promptness, etc. – reflects a point of concern that helps to evaluate physicians.

When rating experiences, what is being rated is relatively clear. Did you enjoy the experience of consuming this good or not? Rating physical goods and products can be more complicated. An ad hoc analysis of Amazon’s rating system (Figure 6) should help explain.

Figure 6: Amazon’s rating system is not always consistent.

In this example the most helpful favorable and unfavorable reviews are highlighted. However, each review is addressing different variables. The favorable review talks about how easy it is to set up this router, while the unfavorable review talks about the lack of new features. These reviews are presented as comparable, but they are not. These raters were thinking about different characteristics of the router.

The point here is that rating systems need to be appropriate for the goods or services being rated. A rating system for books cannot easily be applied to routers, since the products are so entirely different in how we experience them. The aspects we rate need to be carefully selected based on the characteristics of the product or service being rated.
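
One way to honor that in an implementation is to keep the dimensions separate rather than collapsing them into one number. A minimal sketch, with illustrative dimension names loosely echoing the Vitals example above:

    # Report per-dimension averages instead of one flattened score, so an
    # "easy to set up" rave and a "missing features" pan stop canceling out.
    def summarize(ratings, dimensions):
        """ratings: list of {dimension: 1-5 score} dicts, one per reviewer."""
        means = {}
        for d in dimensions:
            scores = [r[d] for r in ratings if d in r]
            means[d] = sum(scores) / len(scores) if scores else None
        return means

    router_reviews = [
        {"ease_of_setup": 5, "feature_set": 2},
        {"ease_of_setup": 4, "reliability": 5},
    ]
    print(summarize(router_reviews, ["ease_of_setup", "feature_set", "reliability"]))
    # {'ease_of_setup': 4.5, 'feature_set': 2.0, 'reliability': 5.0}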

 

What behaviors are we trying to encourage?

Any rating of people is essentially a reputation system. Despite some people’s sensitivity to being rated, reputation systems are extremely valuable. Buyers need to know whom they can trust. Sellers need to be able to communicate – or signal based on their past actions – that they are trustworthy. This is particularly true online, where it’s common to do business with someone you don’t know.

But designing a good reputation system is hard. eBay’s reputation system has had some problems, such as the practice of “defensive rating” (rate me well and I’ll rate you well; rate me bad and I’ll rate you worse). This defeats the purpose of a rating system, since it undermines the honesty of the people doing the rating, and eBay has had to address this flaw in their system. What started out as an open system now needs to permit anonymous ratings in order to save the reputation (as it were) of the reputation system.

While designing a good reputation system is hard, it’s not impossible. There are five key things to keep in mind when designing a reputation system:

 

1. List the behaviors you want to encourage and those that you want to discourage

It’s obvious what eBay wants to encourage (see Figure 7). A look at a detailed ratings page shows they want sellers to describe products accurately, communicate well (and often), ship in a reasonable time, and not charge unreasonably for shipping. (Not incidentally, you could also view these dimensions as sources of risk in a transaction.)

Figure 7: eBay encourages good behavior.

 

 

2. Be transparent

Once you know the behaviors you want to encourage, you then need to be transparent about them. Your users need to know how they are being rated and on what basis. Often a reputation is distilled into a single number — say, reputation points — but it is impossible to look at a number and derive the formula that produced it. While Wikinvest (Figure 8) doesn’t show a formula (which would be ideal), it does indicate what actions you took to receive your point total.

Figure 8: Wikinvest’s reputation system.

Any clarity that is added to a reputation system will make your users happy, and it will make them more likely to behave in the manner you desire.
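
As a sketch of what that transparency can look like in practice, here is a reputation score whose action-to-points table is published and whose totals can be itemized. The actions and weights are invented for illustration; Wikinvest’s actual formula isn’t shown in the article.

    # A transparent reputation score: the action-to-points table is published,
    # and every total can be itemized. Actions and weights are hypothetical.
    from collections import Counter

    ACTION_POINTS = {"article_edited": 5, "data_corrected": 10, "comment_posted": 1}

    def reputation(action_log):
        """action_log: list of action names a user has performed."""
        return sum(ACTION_POINTS.get(action, 0) for action in action_log)

    def itemize(action_log):
        # Show which actions earned which points, so no one has to
        # reverse-engineer the formula from a single opaque number.
        return {action: count * ACTION_POINTS.get(action, 0)
                for action, count in Counter(action_log).items()}

A side benefit: keeping the weights in one visible table supports the flexibility discussed next, since retuning a behavior’s value becomes a small, announceable tweak rather than an opaque overhaul.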

 

3. Keep your reputation system flexible

Any scoring system is open to abuse, and chances are that any reputation system you design will be abused in imaginative ways that you can’t predict. Therefore it’s important to keep your system flexible. If people begin behaving in ways that enhance their reputation but don’t enhance the community, the reputation system needs to be adjusted.

Changing the weighting of certain behaviors is one way to adjust your system. Adding ratings (or points) for new behaviors is another. The difficulty here will be in keeping everything fair. People don’t like a shifting playing field, so tweaks are better than wholesale changes. And changes should also be communicated clearly.

 

4. Avoid negative reputations

When possible, reputation systems should also be non-negative towards the individual. While negative reputations are important for protecting people, they should not always be emphasized. This is specifically true on community sites where users generate much of the content and not much is at stake (except perhaps your prestige).

Looking at our example above (Figure 8), Wikinvest uses the term “Analyst” (a nice, non-offensive term … if you’re not in investment banking) to mean, “this person isn’t really contributing much.”

 

5. Reflect reality

Systems sometimes fail on community sites when people belong to multiple communities and their complete reputations are not contained within any one of them. While there are exceptions, allowing reputations earned elsewhere to be imported can be a smart way to bring your system in line with reality and increase the value of information that it provides.

 

Conclusion

Our discussion of rating and reputation systems is certainly incomplete. We do hope that we’ve given a good description of risk in online transactions, and of how understanding it can help user experience designers better manage risk via the design of more robust rating and reputation systems.

In addition, we’d like to begin a repository of rating and reputation systems. If you find any that you’d like to share, feel free to submit them at http://101ratings.com/submit.php.

Leading Designers to New Frontiers

Adaptive Path’s MX San Francisco (http://www.adaptivepath.com/events/2008/apr/): Managing Experience through Creative Leadership took place in San Francisco April 20-22. The conference focused on helping managers and designers deal with the complexity, challenges, and opportunities that make every day so entertaining.

Jeff Parks and Chris Baum sat down with several of the conference speakers and organizers to further examine the issues that the sessions revealed.

You can also follow the Boxes and Arrows podcasts on iTunes and Del.icio.us. B&A MX podcast theme music created and provided by BumperTunes™.

Creating the Next iPod – Cordell Ratzlaff
I had the pleasure of speaking with Cordell Ratzlaff about his presentation “Creating the next iPod”. Cordell is leading product design for Cisco’s voice, video, and web collaboration products. We discuss the necessity of creating a great corporate culture in order to create great products.


Download audio

Interactions and Relationships – Richard Anderson
Chris Baum, editor-in-chief of Boxes and Arrows, sits down with Richard Anderson, editor-in-chief of Interactions Magazine, at MX San Francisco to discuss the different techniques and skill sets it takes to develop and publish for the IA and UX communities.


Download audio

New Interactions: Enlightened Trial and Error – Björn Hartmann
Björn Hartmann and I discuss his presentation entitled New Interactions: Enlightened Trial and Error, and how he is leading work in design tools for pervasive computing, sensor-based interactions, and design by modifications. Björn is a PhD candidate in Human Computer Interaction at Stanford University and Editor-in-Chief of Ambidextrous magazine, Stanford’s Journal of Design.


Download audio

Chocolate and User Experience – Michael Recchiuti
Michael Recchiuti talks about the experience of making chocolate and how different flavors inspire new creations for the business and his customers. Looking at different professions outside of the web world in which most UX practitioners work can inspire innovation and creativity.


Download audio

Round Table Discussion with Adaptive Path and Boxes and Arrows – Chris Baum, Brandon Schauer, Sarah Nelson, Henning Fischer, and Ryan Freitas
We start with a mash-up of these brief interviews, followed by a round table discussion with Boxes and Arrows editor-in-chief Chris Baum and four members of the Adaptive Path team – Brandon Schauer, Henning Fischer, Sarah Nelson, and Ryan Freitas – about these comments and their own impressions of MX.


Download audio

Thanks to Adaptive Path (http://www.adaptivepath.com/) for sponsoring these podcasts.

We Tried To Warn You, Part 2

In part I of We Tried to Warn You, three themes were developed:

  • Organizations as wicked problems,
  • The differences of failure leverage in small versus large organizations, and
  • The description of failure points

These should be considered exploratory elements of organizational architecture, from a communications information architecture perspective. While the organizational studies literature has much to offer about organizational learning mechanisms, we find very little about failure from the perspective of product management, management processes, or organizational communications.

Researching failure is similar to researching the business strategies of firms that went out of business (e.g., Raynor, 2007). They are simply not available for us to analyze: they are either covered-up embarrassments, or they become transformed, over time and at much expense, into “successes.”

In The Strategy Paradox, Raynor describes the “survivor’s bias” of business research, pointing out that internal data is unavailable to researchers for the dark matter of the business universe, those that go under. Raynor shows how a large but unknowable proportion of businesses fail pursuing nearly perfect strategies. (Going concerns often survive because of their mediocre strategies, avoiding the hazards of extreme strategies).

A major difference in the current discussion is that organizational failure as defined here does not bring down the firm itself, at least not directly, as a risky strategy might. But it often leads to complete reorganization of divisions and large projects, which should be recognized as a significant failure at the organizational level.

One reason we are unlikely to assess the organization as having failed is the temporal difference between failure triggers and the shared experience of observable events. Any product failure will affect the organization, but some failures are truly organizational. They may be more difficult to observe.

If a prototype design fails quickly (within a single usability test period), and a project starts and fails within 6 months, and a product takes perhaps a year to determine its failure – what about an organization? We should expect a much longer cycle from originating failure event to general acknowledgement of failure, perhaps 2-5 years.

There are different timeframes to consider with organizational versus project or product failure. In this case study, the failure was not observable until after a year or so of unexpectedly weak sales, with managers and support dealing with customer resistance to the new product.

However, decisions made years earlier set the processes in place that eventuated as adoption failure. Tracing the propagation of decisions through resulting actions, we also find huge differences in temporal response between levels of hierarchy (found in all large organizations).

Failures can occur when a chain of related decisions, based on bad assumptions, propagates over time. These micro-failures may have appeared at the time as “mere” communication problems.

In our case study, product requirements were defined based on industry best practices, guided by experts and product buyers, but excluding user feedback on requirements. Requirements were managed by senior product managers and were maintained as frozen specifications so that development decisions could be managed. Requirements became treated as if validated by their continuing existence and support by product managers. But with no evaluation by end users of the embodied requirements – no process prototype was demonstrated – product managers and developers had no insight into the dire future consequences of product architecture decisions.

Consider the requisite timing of user research and design decisions in almost any project. A cycle of less than a month is a typical loop for integrating design recommendations from usability results into an iterative product lifecycle.

If the design process is NOT iterative, we see the biggest temporal gaps of all. There is no way to travel back in time to revise requirements unless the tester calls a “show-stopper,” and that would be an unlikely call from an internal usability evaluator.

In a waterfall or incremental development process – which remains typical for these large-scale products – usability tests often have little meaningful impact on requirements and development. This approach is merely fine-tuning foregone conclusions.

Here we find the seeds of product failure, but the organization colludes to defend the project timelines, to save face, to maintain leadership confidence. Usability colludes to ensure they have a future on the job. With massive failures, everyone is partly to blame, but nobody accepts personal responsibility.

The roles of user experience


Figure 1. Failure case study organization – Products and project timeframes.

As Figure 1 shows, UX reported to development management, and was further subjected to product and project management directives.

In many firms, UX has little independence and literally no requirements authority, and in this case was a dotted-line report under three competing authorities. That being the case, by the time formal usability tests were scheduled, requirements and development were too deeply committed to consider any significant changes from user research. With the pressures of release schedules looming, usability was both rushed and controlled to ensure user feedback was restricted to issues contained within the scope of possible change and with minor schedule impact.

By the time usability testing was conducted, the scope was too narrowly defined to admit any ecologically valid results. Usability test cases were defined by product managers to test user response to individual transactions, and not the systematic processes inherent in the everyday complexity of retail, service, or financial work.

  • Testing occurred in a rented facility, and not in the retail store itself.
  • The context of use was defined within a job role, and not in terms of productivity or throughput.
  • Individual screen views were tested in isolation, not in the context of their relationship to the demands of real work pressures – response time, database access time, ability to learn navigation and to quickly navigate between common transactions.
  • Sequences of common, everyday interactions were not evaluated.

And so on.

The product team’s enthusiasm for the new and innovative may prevent listening to the users’ authentic preferences. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable.

Many well-tested products have been released only to fail in the marketplace due to widespread user preference for maintaining their current, established, well-known system. This is especially so if the work practice requires considerable learning and use of an earlier product over time, as happened in our retail system case. Very expensive and well-documented failures abound due to user preference for a well-established installed base, with notorious examples in air traffic control, government and security, medical/patient information systems, and transportation systems.

When UX is “embedded” as part of a large team, accountable to product or project management, the natural bias is to expect the design to succeed. When UX designers must also run the usability tests (as in this case), we cannot expect the “tester” to independently evaluate the “designer’s” work. The same person in two opposing roles, the UX team reporting to product, and restricted latitude for design change (due to impossible delivery deadlines) – we should consider this a design failure in the making.

In this situation, it appears UX was not allowed to be effective, even if the usability team understood how to work around management to make a case for the impact of its discoveries. The UX team may not have understood the possible impact at the time, recognizing it only in retrospect, after the product failed adoption.

We have no analytical or qualitative tools for predicting the degree of market adoption based on even well-designed usability evaluations. Determining the likelihood of future product adoption failure across nationwide or international markets is a judgment call, even with survey data of sufficient power to estimate the population. Because of the show-stopping impact of advancing such a judgment, it’s unlikely the low-status user experience role will push the case, even if such a case is clearly warranted from user research.

The racket: The organization as self-protection system

Modern organizations are designed to not fail. But they will fail at times when pursuing their mission in a competitive marketplace. Most large organizations that endure become resilient in their adaptation to changing market conditions. They have plenty of early warning systems built into their processes – hierarchical management, financial reports, project management and stage-gate processes. The risk of failure becomes distributed across an ever-larger number of employees, reducing risk through assumed due diligence in execution.

The social networks of people working in large companies often prevent the worst decisions from gaining traction. But the same networks also maintain poor decisions if they are big enough, are supported by management, and cannot be directly challenged. Groupthink prevails when people conspire to maintain silence about bad decisions. We then convince ourselves that leadership will win out over the risks; the strategy will work if we give it time.

Argyris’ organizational learning theory shows people in large organizations are often unable to acknowledge the long-term implications of learning situations. While people are very good at learning from everyday mistakes, they don’t connect the dots back to the larger failure that everyone is accommodating.

Called “double-loop learning,” the goal is to learn from an outcome and reconfigure the governing variables of the situation’s pattern to avoid the problem in the future. (Single-loop learning is merely changing one’s actions in response to the outcome.) Argyris’ research suggests all organizations have difficulties with double-loop learning; organizations build defenses against this learning because it requires confrontation, reflection, and change of governance, decision processes, and values-in-use. It’s much easier to just change one’s behavior.
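
Schematically, the distinction might be sketched like this; a loose illustration of the concept, not working organizational machinery, with all names invented:

    # Single-loop learning: a bad outcome changes the action, nothing else.
    # Double-loop learning: a bad outcome also forces a rethink of the
    # governing variables (goals, norms, assumptions) that framed the action.
    def single_loop(action, outcome_ok, adjust):
        return action if outcome_ok else adjust(action)

    def double_loop(action, outcome_ok, governing_variables, reframe, derive_action):
        if outcome_ok:
            return action, governing_variables
        # First question the frame that produced the action...
        governing_variables = reframe(governing_variables)
        # ...then derive a new action under the revised frame.
        return derive_action(governing_variables), governing_variables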

What can UX do about it?

User experience/IA clearly plays a significant role as an early warning system for market failure. Context-sensitive user research is perhaps the best tool available for informed judgment of potential user adoption issues.

Several common barriers to communicating this informed judgment have been discussed:

  • Organizational defenses prevent anyone from advancing theories of failure before failure happens.
  • UX is positioned in large organizations in a subordinate role, and may have difficulty planning and conducting the appropriate research.
  • UX, reporting to product management, will have difficulty advancing cases with strategic implications, especially involving product failure.
  • Groupthink – people on teams protect each other and become convinced everything will work out.
  • Timing – by the time such judgments may be formed, the timeframes for realistic responsive action have disappeared.

Given the history of organizations and the typical situating of user experience roles in large organizations, what advice can we glean from the case study?

Let’s consider leveraging the implicit roles of UX, rather than the mainstream dimensions of skill and practice development.

UX serves an influencing role – so let’s influence

User experience has the privilege of being available on the front lines of product design, research, and testing. But it does not carry substantial organizational authority. In a showdown between product management and UX, product wins every time. Product is responsible for revenue, and must live or die by the calls they make.

So UX should look to their direct internal client’s needs. UX should fit research and recommendations to the context of product requirements, adapting to the goals and language of requirements management. We (UX) must design sufficient variability into prototypes to be able to effectively test expected variances in preference and work practice differences. We must design our test practices to enable determinations from user data as to whether the product requirements fit the context of the user’s work and needs.

We should be able to determine, in effect, whether we are designing for a product, or designing the right product in the first place. Designing the right product means getting the requirements right.

Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence and well-timed communications.

UX practice must continue to develop user/field research methods sensitive to detecting nascent problems with product requirements and strategy.

UX is a recursive process – let’s make recursive organizations as well

User experience is highly iterative, or it fails as well. We always get more than one chance to fail, and we’ve built that into practices and standards.

Practices and processes are repeated and improved over time. But organizations are not flexible with respect to failure. They are competitive and defensive networks of people, often with multiple conflicting agendas. Our challenge is to encourage organizations to recurse (recourse?) more.

We should do this by creating a better organizational user experience. We should follow our own observations and learning of the organization as a system of internal users. Within this recursive system (in which we participate as a user), we can start by moving observations up the circle of care (or the management hierarchy if you will).

I like to think our managers do care about the organization and their shared goals. But our challenge here is to learn from and perform double-loop learning ourselves, addressing root causes and “governing variables” of issues we encounter in organizational user research. We do this by systematically reflecting on patterns and improving processes incrementally, not just “fixing things” (single-loop learning).

We can adopt a process of socialization (Jones, 2007), rather than institutionalization, of user experience. Process socialization was developed as a more productive alternative to top-down institutionalization for organizations introducing UX into an intact product development process.

While there is strong theoretical support for this approach (from organizational structuration and social networks), socialization is recommended because it works better than the alternatives. Institutionalization demands that an organization establish a formal set of roles, relationships, training, and management added to the hierarchy to coordinate the new practices.

Socialization instead affirms that a longer-term, better understood, and organizationally resilient adoption of the UX process occurs when people in roles lateral to UX learn the practices through participation and gradual progression of sophistication. The practices employed in a socialization approach are nearly the opposite (in temporal order) of the institutionalization approach:

  • Find a significant UX need among projects and bring rapid, lightweight methods to solve obvious problems
  • Have management present the success and lessons learned
  • Do not hire a senior manager for UX yet; lateral roles should come to accept and integrate the value first
  • Determine UX need and applications in other projects. Provide tactical UX services as necessary, as an internal consulting function.
  • Develop practices within the scope of product needs. Engage customers in field and develop user and work domain models in participatory processes with other roles.
  • Build an organic demand and interest in UX. Provide consulting and usability work to projects as capability expands. Demonstrate wins and lessons from field work and usability research.
  • Collaborate with requirements owners (product managers) to develop user-centered requirements approach. Integrate usability interview and personas into requirements management.
  • Integrate with product development. Determine development lifecycle decision points and user information required.
  • Establish user experience as process and organizational function
  • Provide awareness training, discussion sessions, and formal education as needed to fit UX process.
  • Assessment and renewal, staffing, building competency

We should create more opportunities to challenge failure points and process breakdowns. Use requirements reviews to challenge the fit to user needs. Use a heuristic evaluation to bring a customer service perspective on board. In each of those opportunities, articulate the double-loop learning point. “Yes, we’ll fix the design, but our process for reporting user feedback limits us to tactical fixes like these. Let’s report the implications of user feedback to management as well.”

We can create these opportunities by looking for issues and presenting them as UX points but in business terms, such as market dynamics, competitive landscape, feature priority (and overload), and user adoption. This will take time and patience, but then, it’s recursive. In the long run we’ll have made our case without major confrontations.

Conclusions

Scott Cook, Intuit’s founder, famously said at CHI 2006: “The best we can hope to bat is .500. If you’re getting better than that, you’re not swinging for the fences. Even Barry Bonds, steroids or not, is not getting that. We need to celebrate failure.”

Intelligent managers actually celebrate failures – that’s how we learn. If we aren’t failing at anything, how do we know we’re trying? The problem is recognizing when failure is indeed an option.

How do we know when a project so large – an organizational-level project – will go belly-up? How can something so huge and spectacular in its impact be so hard to call, especially at the time decisions are being made that could change the priorities and prevent an eventual massive flop? The problem with massive failure is that there’s very little early warning in the development system, and almost none at the user or market level.

When product development fails to respect the user, or even the messenger of user feedback, bad decisions about interface architecture compound and push the product toward an uncertain reception in the marketplace. Early design decisions compound by determining architectures, affecting later design decisions, and so on through the lifecycle of development.

These problems can be compounded even when good usability research is performed. When user research is conducted too late in the product development cycle, and is driven by usability questions related to the product and not the work domain, development teams are fooled into believing their design will generalize to user needs across a large market in that domain. But at this point in product development, the fundamental platform, process, and design decisions have been made, constraining user research from revisiting questions that have been settled in earlier phases by marketing and product management.

References

Argyris, C. (1992). On organizational learning. London: Blackwell.

Howard, R. (1992). The CEO as organizational architect: an interview with Xerox’s Paul Allaire. Harvard Business Review, 70 (5), 106-121.

Jones, P.H. (2007). Socializing a Knowledge Strategy. In E. Abou-Zeid (Ed.) Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research, pp. 134-164. Hershey, PA: Idea Group.

Raynor, M.E. (2007). The strategy paradox: Why committing to success leads to failure (and what to do about it). New York: Currency Doubleday.

Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.