We Tried To Warn You, Part 2

In Part 1 of We Tried to Warn You, three themes were developed:

  • Organizations as wicked problems,
  • The differences in failure leverage between small and large organizations, and
  • A description of failure points.

These should be considered exploratory elements of organizational architecture, from a communications information architecture perspective. While the organizational studies literature has much to offer about organizational learning mechanisms, we find very little about failure from the perspective of product management, management processes, or organizational communications.

Researching failure is similar to researching the business strategies of firms that went out of business (e.g., Raynor, 2007). Those cases are simply not available for us to analyze: they are either covered-up embarrassments, or they are transformed, over time and at great expense, into “successes.”

In The Strategy Paradox, Raynor describes the “survivor’s bias” of business research, pointing out that internal data is unavailable to researchers for the dark matter of the business universe: the firms that go under. Raynor shows how a large but unknowable proportion of businesses fail pursuing nearly perfect strategies. (Going concerns often survive because of their mediocre strategies, which avoid the hazards of extreme strategies.)

A major difference in the current discussion is that organizational failure as defined here does not bring down the firm itself, at least not directly, as a risky strategy might. But it often leads to complete reorganization of divisions and large projects, which should be recognized as a significant failure at the organizational level.

One reason we are unlikely to assess the organization as having failed is the temporal difference between failure triggers and the shared experience of observable events. Any product failure will affect the organization, but some failures are truly organizational. They may be more difficult to observe.

If a prototype design fails quickly (within a single usability test period), and a project starts and fails within 6 months, and a product takes perhaps a year to determine its failure – what about an organization? We should expect a much longer cycle from originating failure event to general acknowledgement of failure, perhaps 2-5 years.

There are different timeframes to consider with organizational versus project or product failure. In this case study, the failure was not observable until after a year or so of unexpectedly weak sales, with managers and support dealing with customer resistance to the new product.

However, decisions made years earlier set the processes in place that eventuated as adoption failure. Tracing the propagation of decisions through resulting actions, we also find huge differences in temporal response between levels of hierarchy (found in all large organizations).

Failures can occur when a chain of related decisions, based on bad assumptions, propagates over time. These micro-failures may have appeared at the time to be “mere” communication problems.

In our case study, product requirements were defined based on industry best practices, guided by experts and product buyers, but excluding user feedback on requirements. Requirements were managed by senior product managers and were maintained as frozen specifications so that development decisions could be managed. Requirements came to be treated as if validated by their continued existence and by product managers’ support. But with no evaluation by end users of the embodied requirements – no process prototype was demonstrated – product managers and developers had no insight into the dire future consequences of product architecture decisions.

Consider the requisite timing of user research and design decisions in almost any project. A cycle of less than a month is a typical loop for integrating design recommendations from usability results into an iterative product lifecycle.

If the design process is NOT iterative, we see the biggest temporal gaps of all. There is no way to travel back in time to revise requirements unless the tester calls a “show-stopper,” and that would be an unlikely call from an internal usability evaluator.

In a waterfall or incremental development process, which remains typical for these large-scale products, usability tests often have little meaningful impact on requirements and development. This approach merely fine-tunes foregone conclusions.

Here we find the seeds of product failure, but the organization colludes to defend the project timelines, to save face, and to maintain leadership confidence. Usability colludes to ensure it has a future in the organization. With massive failures, everyone is partly to blame, but nobody accepts personal responsibility.

The roles of user experience


Figure 1. Failure case study organization – Products and project timeframes.

As Figure 1 shows, UX reported to development management, and was further subjected to product and project management directives.

In many firms, UX has little independence and no requirements authority at all, and in this case it was a dotted-line report under three competing authorities. That being the case, by the time formal usability tests were scheduled, requirements and development were too deeply committed to consider any significant changes from user research. With the pressures of release schedules looming, usability was both rushed and controlled to ensure that user feedback was restricted to issues within the scope of possible change and of minor schedule impact.

By the time usability testing was conducted, the scope was too narrowly defined to admit any ecologically valid results. Usability test cases were defined by product managers to test user response to individual transactions, and not the systematic processes inherent in the everyday complexity of retail, service, or financial work.

  • Testing occurred in a rented facility, and not in the retail store itself.
  • The context of use was defined within a job role, and not in terms of productivity or throughput.
  • Individual screen views were tested in isolation, not in the context of their relationship to the demands of real work pressures – response time, database access time, ability to learn navigation and to quickly navigate between common transactions.
  • Sequences of common, everyday interactions were not evaluated.

And so on.

The product team’s enthusiasm for the new and innovative may prevent listening to the users’ authentic preferences. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable.

Many well-tested products have been released only to fail in the marketplace due to widespread user preference for maintaining their current, established, well-known system. This is especially so if the work practice requires considerable learning and use of an earlier product over time, as happened in our retail system case. Very expensive and well-documented failures abound due to user preference for a well-established installed base, with notorious examples in air traffic control, government and security, medical/patient information systems, and transportation systems.

When UX is “embedded” as part of a large team, accountable to product or project management, the natural bias is to expect the design to succeed. When UX designers must also run the usability tests (as in this case), we cannot expect the “tester” to independently evaluate the “designer’s” work. With the same person in two opposing roles, the UX team reporting to product, and restricted latitude for design change (due to impossible delivery deadlines), we should consider this a design failure in the making.

In this situation, it appears UX was not allowed to be effective, even if the usability team understood how to work around management to make a case for the impact of its discoveries. The UX team may not even have understood the possible impact at the time, but only in retrospect, after the product failed adoption.

We have no analytical or qualitative tools for predicting the degree of market adoption based on even well-designed usability evaluations. Determining the likelihood of future product adoption failure across nationwide or international markets is a judgment call, even with survey data of sufficient power to estimate the population. Because of the show-stopping impact of advancing such a judgment, it’s unlikely the low-status user experience role will push the case, even if such a case is clearly warranted from user research.

The racket: The organization as self-protection system

Modern organizations are designed to not fail. But they will fail at times when pursuing their mission in a competitive marketplace. Most large organizations that endure become resilient in their adaptation to changing market conditions. They have plenty of early warning systems built into their processes – hierarchical management, financial reports, project management and stage-gate processes. The risk of failure becomes distributed across an ever-larger number of employees, reducing risk through assumed due diligence in execution.

The social networks of people working in large companies often prevent the worst decisions from gaining traction. But the same networks also maintain poor decisions if they are big enough, are supported by management, and cannot be directly challenged. Groupthink prevails when people conspire to maintain silence about bad decisions. We then convince ourselves that leadership will win out over the risks; the strategy will work if we give it time.

Argyris’ organizational learning theory shows that people in large organizations are often unable to acknowledge the long-term implications of learning situations. While people are very good at learning from everyday mistakes, they don’t connect the dots back to the larger failure that everyone is accommodating.

In what Argyris calls “double-loop learning,” the goal is to learn from an outcome and reconfigure the governing variables of the situation’s pattern to avoid the problem in the future. (Single-loop learning is merely changing one’s actions in response to the outcome.) Argyris’ research suggests all organizations have difficulty with double-loop learning; organizations build defenses against this learning because it requires confrontation, reflection, and change of governance, decision processes, and values-in-use. It’s much easier to just change one’s behavior.

What can UX do about it?

User experience/IA clearly plays a significant role as an early warning system for market failure. Context-sensitive user research is perhaps the best tool available for informed judgment of potential user adoption issues.

Several common barriers to communicating this informed judgment have been discussed:

  • Organizational defenses prevent anyone from advancing theories of failure before failure happens.
  • UX is positioned in large organizations in a subordinate role, and may have difficulty planning and conducting the appropriate research.
  • UX, reporting to product management, will have difficulty advancing cases with strategic implications, especially involving product failure.
  • Groupthink – people on teams protect each other and become convinced everything will work out.
  • Timing – by the time such judgments may be formed, the timeframes for realistic responsive action have disappeared.

Given the history of organizations and the typical situating of user experience roles in large organizations, what advice can we glean from the case study?

Let’s consider leveraging the implicit roles of UX, rather than the mainstream dimensions of skill and practice development.

UX serves an influencing role – so let’s influence

User experience has the privilege of being available on the front lines of product design, research, and testing. But it does not carry substantial organizational authority. In a showdown between product management and UX, product wins every time. Product is responsible for revenue and must live or die by the calls it makes.

So UX should look to its direct internal client’s needs. UX should fit research and recommendations to the context of product requirements, adapting to the goals and language of requirements management. We (UX) must design sufficient variability into prototypes to be able to effectively test expected variations in preference and differences in work practice. We must design our test practices to enable determinations from user data as to whether the product requirements fit the context of the user’s work and needs.

We should be able to determine, in effect, whether we are merely designing the product right, or designing the right product in the first place. Designing the right product means getting the requirements right.

Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence and well-timed communications.

UX practice must continue to develop user/field research methods sensitive to detecting nascent problems with product requirements and strategy.

UX is a recursive process – let’s make recursive organizations as well

User experience is highly iterative, or it fails as well. We always get more than one chance to fail, and we’ve built that into practices and standards.

Practices and processes are repeated and improved over time. But organizations are not flexible with respect to failure. They are competitive and defensive networks of people, often with multiple conflicting agendas. Our challenge is to encourage organizations to recurse (recourse?) more.

We should do this by creating a better organizational user experience. We should apply our own practices of observation and learning to the organization itself, as a system of internal users. Within this recursive system (in which we participate as users), we can start by moving observations up the circle of care (or the management hierarchy, if you will).

I like to think our managers do care about the organization and their shared goals. But our challenge here is to practice double-loop learning ourselves, addressing the root causes and “governing variables” of the issues we encounter in organizational user research. We do this through systematic reflection on patterns and incremental process improvement, not just by “fixing things” (single-loop learning).

We can adopt a process of socialization (Jones, 2007), rather than institutionalization, of user experience. Process socialization was developed as a more productive alternative to top-down institutionalization for organizations introducing UX practices into an intact product development process.

While there is strong theoretical support for this approach (from organizational structuration and social networks), socialization is recommended because it works better than the alternatives. Institutionalization demands that an organization establish a formal set of roles, relationships, training, and management added to the hierarchy to coordinate the new practices.

Socialization instead affirms that a longer-term, better understood, and organizationally resilient adoption of the UX process occurs when people in roles lateral to UX learn the practices through participation and gradual progression of sophistication. The practices employed in a socialization approach are nearly the opposite (in temporal order) of the institutionalization approach:

  • Find a significant UX need among projects and bring rapid, lightweight methods to solve obvious problems.
  • Have management present the success and lessons learned.
  • Do not hire a senior manager for UX yet; lateral roles should come to accept and integrate the value first.
  • Determine UX needs and applications in other projects. Provide tactical UX services as necessary, as an internal consulting function.
  • Develop practices within the scope of product needs. Engage customers in the field and develop user and work domain models in participatory processes with other roles.
  • Build organic demand for and interest in UX. Provide consulting and usability work to projects as capability expands. Demonstrate wins and lessons from field work and usability research.
  • Collaborate with requirements owners (product managers) to develop a user-centered requirements approach. Integrate usability interviews and personas into requirements management.
  • Integrate with product development. Determine development lifecycle decision points and the user information required at each.
  • Establish user experience as a process and organizational function.
  • Provide awareness training, discussion sessions, and formal education as needed to fit the UX process.
  • Assess and renew: continue staffing and building competency.

We should create more opportunities to challenge failure points and process breakdowns. Use requirements reviews to challenge the fit to user needs. Use a heuristic evaluation to bring a customer service perspective on board. In each of those opportunities, articulate the double-loop learning point. “Yes, we’ll fix the design, but our process for reporting user feedback limits us to tactical fixes like these. Let’s report the implications of user feedback to management as well.”

We can create these opportunities by looking for issues and presenting them as UX points, but in business terms such as market dynamics, competitive landscape, feature priority (and overload), and user adoption. This will take time and patience, but then, it’s recursive. In the long run we’ll have made our case without major confrontations.

Conclusions

Scott Cook, Intuit’s founder, famously said at CHI 2006: “The best we can hope to bat is .500. If you’re getting better than that, you’re not swinging for the fences. Even Barry Bonds, steroids or not, is not getting that. We need to celebrate failure.”

Intelligent managers actually celebrate failures – that’s how we learn. If we aren’t failing at anything, how do we know we’re trying? The problem is recognizing when failure is indeed an option.

How do we know when a project so large – an organizational-level project – will go belly-up? How can something so huge and spectacular in its impact be so hard to call, especially at the time decisions are being made that could change the priorities and prevent an eventual massive flop? The problem with massive failure is that there’s very little early warning in the development system, and almost none at the user or market level.

When product development fails to respect the user, or even the messenger of user feedback, bad decisions about interface architecture compound and push the product toward an uncertain reception in the marketplace. Early design decisions compound by determining architectures, affecting later design decisions, and so on through the lifecycle of development.

These problems can be compounded even when good usability research is performed. When user research is conducted too late in the product development cycle, and is driven by usability questions related to the product and not the work domain, development teams are fooled into believing their design will generalize to user needs across a large market in that domain. But at this point in product development, the fundamental platform, process, and design decisions have been made, constraining user research from revisiting questions that have been settled in earlier phases by marketing and product management.

References

Argyris, C. (1992). On organizational learning. London: Blackwell.

Howard, R. (1992). The CEO as organizational architect: an interview with Xerox’s Paul Allaire. Harvard Business Review, 70 (5), 106-121.

Jones, P.H. (2007). Socializing a Knowledge Strategy. In E. Abou-Zeid (Ed.) Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research, pp. 134-164. Hershey, PA: Idea Group.

Raynor, M.E. (2007). The strategy paradox: Why committing to success leads to failure (and what to do about it). New York: Currency Doubleday.

Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Taleb, N.N. (2007). The black swan: The impact of the highly improbable. New York: Random House.

10 comments

  1. Peter,

    Thanks for a great article. Really brilliant stuff. For me, this is one of the most significant stories ever written for B&A to date. That may just be my personal reaction because it felt like you were talking about my organization directly. (Maybe you were, in part: I work for LexisNexis).

    Many UX practitioners in large companies are indeed at the bottom of the food chain, with little or no prospect of changing anything outside of their immediate project. So, I was hoping for even more practical advice on how to get the message out. I understand the difference between socialization and institutionalization, but some of your recommendations are still high level. Take #6, for instance: Build an organic demand for UX. Can you be more specific? How? With what activities?

    Anyway, great story, and I appreciate your taking the time to share experience and knowledge in this area with the UX community at large.

    Cheers,
    James

  2. Brilliant. Agree with James that it is one of the best in B&A, ever. This is the same struggle that I faced while working for most organizations. The need of the hour is an avalanche of user experience leadership, a transition from the doers to the thinkers and doers, and, as you rightly said, being the influencers.

    Once again, thank you and this is an article that the team has to read, read, read and internalize

    Cheers
    Masood
    http://masoodnasser.blogspot.com

  3. James, as an attempt to answer your question, methinks that we can go the roundabout :-) way. What I have tried successfully in the past is to get buy-in from the most passionate of your dev/UX guys and prototype. In the usual waste-of-time meetings, show the concept and get interest. This will tremendously increase our chances of demand.

    Cheers
    Masood
    http://masoodnasser.blogspot.com

  4. James, Masood – Thanks for your comments. There’s a lot more to tell, but we were unsure about an article as long as this in the first place. And as you’ve intuited, creating an organic demand for UX is the key thing in a socialization approach. You know it’s working when you find others around the organization seeking you out and trying to work it out with your boss to get you on their team! I am familiar with a number of large organizations, and the socialization takes off when UX people are given the organizational support to diffuse the practice and their value to other product lines and new projects in their planning phases. Even giving informal presentations outside of your reporting line helps spread the value of UX practices. But in practical terms, we need to find ways to be advocates and part-time advisors to other projects, which will create demand back to the UX organization. I’ve seen this work with only 2 UX people in a large organization. Gotta go now, I’ll try to say some more later!

  5. Spot-on article! Like walking down the street and stepping in a puddle, only to discover it’s 20 feet deep!

    You’ve so accurately found all the usual suspects of failure and pointed out that UX is often tacked on to major product efforts like some kind of optional shiny coating (the often implied premise that after all, something can still WORK if it doesn’t shine)!

    The management behavior of hiding from accountability for bad decisions is at a chronic level in most large organizations and I think this is due to the popular practice of managing upward, also known as CYA and kissing up. After all, this is where bonuses and promotions come from. There is no incentive to find and stave off failure. There is no appetite to hear about problems and attack them ruthlessly. For this to really work, it has to be part of the process.

    One statement stood out for me… “Complete alignment to strategy is expected across the entire team. Late-arriving user experience “findings” that could conflict with internal strategy will be treated as threatening, not helpful.”

    I think we’ve all seen this, which drives to the real point. UX must be part of the strategy. If it’s treated on an equal footing with “requirements” which are functional, it would not be ignored, so therefore it would not be optional or threatening.

    You had mentioned in one of your responses a reference to the Total Quality Management movement in the early ’90s, which I remember also. Part of the tenets of continuous improvement was to look for mistakes and reward finding them. They often pointed to Japanese car makers and the Deming principles that actually celebrated mistakes and defects. Institutionalizing this across processes brought Japan from the lowest quality of product to eventually compete on the world stage as arguably the highest mark of quality over a couple of decades.

    I guess what I’m saying is until that kind of upper management buy-in about quality comes back into favor and those goals become part of a strategy, UX will remain in a “nice to have” but not critical place. If we look at a customer-favored product like the iPhone, it’s very clear that user experience was a core strategy for the product. It could not have hoped for success under the Apple brand without the disciplined philosophy of UX that Apple demands. Failure to get the user experience nailed, would have sunk the product.

    So to be on equal footing or integrated with requirements, UX must be present much further upstream, at the concept stage of any project – and must be written into the strategy brief along with everything else strategic to success criteria.

    You’ve actually framed this up by saying: “Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence and well-timed communications.”

    I would suggest that along with the early warning of failure, UX has to be in an explicit role. The implicit role feels familiar and is very prevalent out there; however, it feels like a victim role. It’s true that through communications, credibility, and influence we remain productive, but for success we have to be at the table with executive sanction from the top of the house. I also agree with Masood that THE most powerful tool we possess – to cross all of these organizational barriers – is the ability to prototype and show a vision of end state. Thanks again for the great article Peter!

  6. Looking at Jim’s comments above, one of the tools that the Japanese car makers used was Quality Function Deployment (QFD). QFD was developed as long ago as the 60s by Dr. Akao as one method of ensuring that user requirements (‘voice of the customer’) were considered and transmuted into system requirements.
    Because it has a long history of success and, although simple in concept, looks quite fancy on paper, I wonder if UX could use this as a selling point to clients and the like, thus ensuring that user research is built into the requirements at an early stage in the project (I guess JJG’s Scope plane).
    Dr. Akao also has some interesting thoughts and approaches involving the spoken and unspoken needs of customers and ‘expected’ and ‘exciting’ quality. Well worth a look if nothing else.

    http://www.qfdi.org/

  7. Peter,
    Thanks for a nice article once again (as part I).
    This article reveals a great combination of various sources of information and comments that form a frame for UX. It is clear to the reader what UX is, in which steps of product development UX is involved, where it would better fit, and how important UX is for avoiding future failures. Again, this article clearly refers to bigger organisation models, but user experience can sometimes meet barriers in smaller organisations as well, despite the fact that product development members work more closely.
    I totally agree with the following statement, as it is usually one of the most common reasons why a product fails to meet the users’ expectations.
    “The product team’s enthusiasm for the new and innovative may prevent listening to the users’ authentic preferences. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable.”

    Joe, thanks for the ‘organizational architecture’ SlideShare group; I have already joined.

  8. Fantastic article! Dead on.

    But how to sell the notion of integrated UX throughout large corporations? I once was asked to figure out why customer satisfaction ratings were dropping for a particular product. What became clear was that 2 things were happening: 1) there was inconsistent support for various user personas (the client could fix that pretty easily by developing new tools and processes); 2) users viewed the full company (not just her division) as “those guys”. So even if she fixed her tools, if there was a billing error or a customer service error (things she didn’t control), customer sat would go down. She needed to share the personas and the findings of the research across the full customer experience life cycle, and get her management peers to buy into the notion that customer satisfaction is a shared responsibility. She did not. She hoarded the research findings, and ultimately she failed, and customer sat continued to fall.

    Any suggestions on how, as a consultant to one division, to take findings across divisions and share them with others?
