Leading Designers to New Frontiers

Written by: Jeff Parks



Adaptive Path’s “MX San Francisco: Managing Experience through Creative Leadership”:http://www.adaptivepath.com/events/2008/apr/ took place in San Francisco, April 20-22, 2008. The conference focused on helping managers and designers deal with the complexity, challenges, and opportunities that make every day so entertaining.

Jeff Parks and Chris Baum sat down with several of the conference speakers and organizers to further examine the issues that the sessions revealed.

You can also follow the Boxes and Arrows podcasts on iTunes and Del.icio.us.

B&A MX podcast theme music created and provided by BumperTunes™.

Creating the Next iPod
Cordell Ratzlaff

I had the pleasure of speaking with Cordell Ratzlaff about his presentation, “Creating the Next iPod.” Cordell is leading product design for Cisco’s voice, video, and web collaboration products. We discuss the necessity of creating a great corporate culture in order to create great products.


Download audio

Interactions and Relationships
Richard Anderson

Chris Baum, editor-in-chief of Boxes and Arrows, sits down with Richard Anderson, editor-in-chief of Interactions magazine, at MX San Francisco to discuss the different techniques and skill sets it takes to develop and publish for the IA and UX communities.


Download audio

New Interactions: Enlightened Trial and Error
Björn Hartmann

Björn Hartmann and I discuss his presentation, “New Interactions: Enlightened Trial and Error,” and how he is leading work in design tools for pervasive computing, sensor-based interactions, and design by modification. Björn is a PhD candidate in Human-Computer Interaction at Stanford University and editor-in-chief of Ambidextrous magazine, Stanford’s journal of design.


Download audio

Chocolate and User Experience
Michael Recchiuti

Michael Recchiuti talks about the experience of making chocolate and how different flavors inspire new creations for the business and his customers. Looking at professions outside the web world in which most UX practitioners work can inspire innovation and creativity.


Download audio

Round Table Discussion with Adaptive Path and Boxes and Arrows
Chris Baum, Brandon Schauer, Sarah Nelson, Henning Fischer, and Ryan Freitas

We start with a mash-up of these brief interviews, followed by a round table discussion with Boxes and Arrows editor-in-chief Chris Baum and four members of the Adaptive Path team, Brandon Schauer, Henning Fischer, Sarah Nelson, and Ryan Freitas, about these comments and their own impressions of MX.


Download audio

Thanks to “Adaptive Path”:http://www.adaptivepath.com/ for sponsoring these podcasts.

We Tried To Warn You, Part 2

Written by: Peter Jones

A large but unknowable proportion of businesses fail pursuing nearly perfect strategies.

In Part I of We Tried to Warn You, three themes were developed:

# Organizations as wicked problems,
# The differences of failure leverage in small versus large organizations, and
# The description of failure points

These should be considered exploratory elements of organizational architecture, from a communications information architecture perspective. While the organizational studies literature has much to offer about organizational learning mechanisms, we find very little about failure from the perspective of product management, management processes, or organizational communications.

Researching failure is similar to researching the business strategies of firms that went out of business (e.g., Raynor, 2007). They are simply not available for us to analyze: they are either covered-up embarrassments, or they become transformed, over time and at much expense, into “successes.”

In The Strategy Paradox, Raynor describes the “survivor’s bias” of business research, pointing out that internal data is unavailable to researchers for the dark matter of the business universe, the firms that go under. Raynor shows how a large but unknowable proportion of businesses fail pursuing nearly perfect strategies. (Going concerns often survive because of their mediocre strategies, avoiding the hazards of extreme strategies.)

A major difference in the current discussion is that organizational failure as defined here does not bring down the firm itself, at least not directly, as a risky strategy might. But it often leads to complete reorganization of divisions and large projects, which should be recognized as a significant failure at the organizational level.

One reason we are unlikely to assess the organization as having failed is the temporal difference between failure triggers and the shared experience of observable events. Any product failure will affect the organization, but some failures are truly organizational. They may be more difficult to observe.

If a prototype design fails quickly (within a single usability test period), and a project starts and fails within 6 months, and a product takes perhaps a year to determine its failure – what about an organization? We should expect a much longer cycle from originating failure event to general acknowledgement of failure, perhaps 2-5 years.

There are different timeframes to consider with organizational versus project or product failure. In this case study, the failure was not observable until after a year or so of unexpectedly weak sales, with managers and support dealing with customer resistance to the new product.

However, decisions made years earlier set the processes in place that eventuated as adoption failure. Tracing the propagation of decisions through resulting actions, we also find huge differences in temporal response between levels of hierarchy (found in all large organizations).

Failures can occur when a chain of related decisions, based on bad assumptions, propagates over time. These micro-failures may have appeared at the time to be “mere” communication problems.

In our case study, product requirements were defined based on industry best practices, guided by experts and product buyers, but excluding user feedback on requirements. Requirements were managed by senior product managers and maintained as frozen specifications so that development decisions could be managed. Requirements came to be treated as if validated, by virtue of their continuing existence and the support of product managers. But with no end-user evaluation of the embodied requirements (no process prototype was demonstrated), product managers and developers had no insight into the dire future consequences of product architecture decisions.

Consider the requisite timing of user research and design decisions in almost any project. A cycle of less than a month is a typical loop for integrating design recommendations from usability results into an iterative product lifecycle.

If the design process is NOT iterative, we see the biggest temporal gaps of all. There is no way to travel back in time to revise requirements unless the tester calls a “show-stopper,” and that would be an unlikely call from an internal usability evaluator.

In a waterfall or incremental development process, which remains typical for these large-scale products, usability tests often have little meaningful impact on requirements and development. This approach merely fine-tunes foregone conclusions.

Here we find the seeds of product failure, but the organization colludes to defend the project timelines, to save face, to maintain leadership confidence. Usability colludes to ensure they have a future on the job. With massive failures, everyone is partly to blame, but nobody accepts personal responsibility.

The Roles of User Experience


Figure 1. Failure case study organization – products and project timeframes.

As Figure 1 shows, UX reported to development management, and was further subjected to product and project management directives.

In many firms, UX has little independence and literally no requirements authority, and in this case was a dotted-line report under three competing authorities. That being the case, by the time formal usability tests were scheduled, requirements and development were too deeply committed to consider any significant changes from user research. With the pressures of release schedules looming, usability was both rushed and controlled to ensure user feedback was restricted to issues contained within the scope of possible change and with minor schedule impact.

By the time usability testing was conducted, the scope was too narrowly defined to admit any ecologically valid results. Usability test cases were defined by product managers to test user response to individual transactions, and not the systematic processes inherent in the everyday complexity of retail, service, or financial work.

* Testing occurred in a rented facility, and not in the retail store itself.
* The context of use was defined within a job role, and not in terms of productivity or throughput.
* Individual screen views were tested in isolation, not in the context of their relationship to the demands of real work pressures – response time, database access time, ability to learn navigation and to quickly navigate between common transactions.
* Sequences of common, everyday interactions were not evaluated.

And so on.

The product team’s enthusiasm for the new and innovative may prevent listening to the users’ authentic preferences. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable.

Many well-tested products have been released only to fail in the marketplace due to widespread user preference for maintaining the current, established, well-known system. This is especially so when the work practice has required considerable learning and use of an earlier product over time, as happened in our retail system case. Very expensive and well-documented failures abound due to user preference for a well-established installed base, with notorious examples in air traffic control, government and security, medical/patient information systems, and transportation systems.

When UX is “embedded” as part of a large team, accountable to product or project management, the natural bias is to expect the design to succeed. When UX designers must also run the usability tests (as in this case), we cannot expect the “tester” to independently evaluate the “designer’s” work. With the same person in two opposing roles, the UX team reporting to product, and restricted latitude for design change (due to impossible delivery deadlines), we should consider this a design failure in the making.

In this situation, it appears UX was not allowed to be effective, even if the usability team understood how to work around management to make a case for the impact of its discoveries. In fact, the UX team may not have understood the possible impact at the time, but only in retrospect, after the product failed to win adoption.

We have no analytical or qualitative tools for predicting the degree of market adoption based on even well-designed usability evaluations. Determining the likelihood of future product adoption failure across nationwide or international markets is a judgment call, even with survey data of sufficient power to estimate the population. Because of the show-stopping impact of advancing such a judgment, it’s unlikely the low-status user experience role will push the case, even if such a case is clearly warranted from user research.

The Racket: The Organization as Self-Protection System

Modern organizations are designed to not fail. But they will fail at times when pursuing their mission in a competitive marketplace. Most large organizations that endure become resilient in their adaptation to changing market conditions. They have plenty of early warning systems built into their processes – hierarchical management, financial reports, project management and stage-gate processes. The risk of failure becomes distributed across an ever-larger number of employees, reducing risk through assumed due diligence in execution.

The social networks of people working in large companies often prevent the worst decisions from gaining traction. But the same networks also maintain poor decisions if they are big enough, are supported by management, and cannot be directly challenged.

Groupthink prevails when people conspire to maintain silence about bad decisions. We then convince ourselves that leadership will win out over the risks; the strategy will work if we give it time.

Argyris’ organizational learning theory shows that people in large organizations are often unable to acknowledge the long-term implications of learning situations. While people are very good at learning from everyday mistakes, they don’t connect the dots back to the larger failure that everyone is accommodating.

Called “double-loop learning,” the goal is to learn from an outcome and reconfigure the governing variables of the situation’s pattern to avoid the problem in the future. (Single-loop learning is merely changing one’s actions in response to the outcome.) Argyris’ research suggests all organizations have difficulties with double-loop learning; organizations build defenses against it because it requires confrontation, reflection, and change of governance, decision processes, and values-in-use. It’s much easier to just change one’s behavior.

What can UX do about it?

User experience/IA clearly plays a significant role as an early warning system for market failure. Context-sensitive user research is perhaps the best tool available for informed judgment of potential user adoption issues.

Several common barriers to communicating this informed judgment have been discussed:

* Organizational defenses prevent anyone from advancing theories of failure before failure happens.
* UX is positioned in large organizations in a subordinate role, and may have difficulty planning and conducting the appropriate research.
* UX, reporting to product management, will have difficulty advancing cases with strategic implications, especially involving product failure.
* Groupthink – people on teams protect each other and become convinced everything will work out.
* Timing – by the time such judgments may be formed, the timeframes for realistic responsive action have disappeared.

Given the history of organizations and the typical situating of user experience roles in large organizations, what advice can we glean from the case study?

Let’s consider leveraging the implicit roles of UX, rather than the mainstream dimensions of skill and practice development.

UX serves an Influencing role – so let’s influence

UX practice must continue to develop user/field research methods sensitive to detecting nascent problems with product requirements and strategy.

User experience has the privilege of being available on the front lines of product design, research, and testing. But it does not carry substantial organizational authority. In a showdown between product management and UX, product wins every time. Product is responsible for revenue, and must live or die by the calls they make.

So UX should look to their direct internal client’s needs. UX should fit research and recommendations to the context of product requirements, adapting to the goals and language of requirements management. We (UX) must design sufficient variability into prototypes to be able to effectively test expected variances in preference and work practice differences. We must design our test practices to enable determinations from user data as to whether the product requirements fit the context of the user’s work and needs.

We should be able to determine, in effect, whether we are designing for a product, or designing the right product in the first place. Designing the right product means getting the requirements right.

Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence and well-timed communications.

UX is a recursive process – let’s make recursive organizations as well

User experience is highly iterative, or it fails as well. We always get more than one chance to fail, and we’ve built that into practices and standards.

Practices and processes are repeated and improved over time. But organizations are not flexible with respect to failure. They are competitive and defensive networks of people, often with multiple conflicting agendas. Our challenge is to encourage organizations to recurse (recourse?) more.

We should do this by creating a better organizational user experience. We should follow our own observations and learning of the organization as a system of internal users. Within this recursive system (in which we participate as a user), we can start by moving observations up the circle of care (or the management hierarchy if you will).

I like to think our managers do care about the organization and their shared goals. But our challenge here is to learn and perform from double-loop learning ourselves, addressing root causes and “governing variables” of issues we encounter in organizational user research. We do this by systematic reflection on patterns, and improving processes incrementally, and not just “fixing things” (single-loop learning).

We can adopt a process of socialization (Jones, 2007), rather than institutionalization, of user experience. Process socialization was developed as a more productive alternative to top-down institutionalization when introducing UX practices into an intact product development process.

While there is strong theoretical support for this approach (from organizational structuration and social networks), socialization is recommended because it works better than the alternatives. Institutionalization demands that an organization establish a formal set of roles, relationships, training, and management added to the hierarchy to coordinate the new practices.

Socialization instead affirms that a longer-term, better understood, and organizationally resilient adoption of the UX process occurs when people in roles lateral to UX learn the practices through participation and gradual progression of sophistication. The practices employed in a socialization approach are nearly the opposite (in temporal order) of the institutionalization approach:

# Find a significant UX need among projects and bring rapid, lightweight methods to solve obvious problems.
# Have management present the success and lessons learned.
# Do not hire a senior manager for UX yet; lateral roles should come to accept and integrate the value first.
# Determine UX needs and applications in other projects. Provide tactical UX services as necessary, as an internal consulting function.
# Develop practices within the scope of product needs. Engage customers in the field and develop user and work domain models in participatory processes with other roles.
# Build organic demand and interest in UX. Provide consulting and usability work to projects as capability expands. Demonstrate wins and lessons from field work and usability research.
# Collaborate with requirements owners (product managers) to develop a user-centered requirements approach. Integrate usability interviews and personas into requirements management.
# Integrate with product development. Determine development lifecycle decision points and the user information required at each.
# Establish user experience as a process and organizational function.
# Provide awareness training, discussion sessions, and formal education as needed to fit the UX process.
# Assess and renew: staffing and building competency.

We should create more opportunities to challenge failure points and process breakdowns. Use requirements reviews to challenge the fit to user needs. Use a heuristic evaluation to bring a customer service perspective on board. In each of those opportunities, articulate the double-loop learning point. “Yes, we’ll fix the design, but our process for reporting user feedback limits us to tactical fixes like these. Let’s report the implications of user feedback to management as well.”

We can create these opportunities by looking for issues and presenting them as UX points, but in business terms, such as market dynamics, competitive landscape, feature priority (and overload), and user adoption. This will take time and patience, but then, it’s recursive. In the long run we’ll have made our case without major confrontations.

Conclusions

Scott Cook, Intuit’s founder, famously said at CHI 2006: “The best we can hope to bat is .500. If you’re getting better than that, you’re not swinging for the fences. Even Barry Bonds, steroids or not, is not getting that. We need to celebrate failure.”

Intelligent managers actually celebrate failures – that’s how we learn. If we aren’t failing at anything, how do we know we’re trying? The problem is recognizing when failure is indeed an option.

How do we know when a project so large – an organizational-level project – will go belly-up? How can something so huge and spectacular in its impact be so hard to call, especially at the time decisions are being made that could change the priorities and prevent an eventual massive flop? The problem with massive failure is that there’s very little early warning in the development system, and almost none at the user or market level.

When product development fails to respect the user, or even the messenger of user feedback, bad decisions about interface architecture compound and push the product toward an uncertain reception in the marketplace. Early design decisions compound by determining architectures, affecting later design decisions, and so on through the lifecycle of development.

These problems can be compounded even when good usability research is performed. When user research is conducted too late in the product development cycle, and is driven by usability questions related to the product and not the work domain, development teams are fooled into believing their design will generalize to user needs across a large market in that domain. But at this point in product development, the fundamental platform, process, and design decisions have been made, constraining user research from revisiting questions that have been settled in earlier phases by marketing and product management.

References

Argyris, C. (1992). On organizational learning. London: Blackwell.

Howard, R. (1992). The CEO as organizational architect: an interview with Xerox’s Paul Allaire. Harvard Business Review, 70 (5), 106-121.

Jones, P.H. (2007). Socializing a Knowledge Strategy. In E. Abou-Zeid (Ed.) Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research, pp. 134-164. Hershey, PA: Idea Group.

Raynor, M.E. (2007). The strategy paradox: Why committing to success leads to failure (and what to do about it). New York: Currency Doubleday.

Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.

We Tried To Warn You, Part 1

Written by: Peter Jones

There are many kinds of failure in large, complex organizations – breakdowns occur at every level of interaction, from interpersonal communication to enterprise finance. Some of these failures are everyday and even helpful, allowing us to safely and iteratively learn and improve communications and practices. Other failures – what I call large-scale – result from accumulated bad decisions, organizational defensiveness, and embedded organizational values that prevent people from confronting these issues in real time as they occur.

So while it may be difficult to acknowledge your own personal responsibility for an everyday screw-up, it’s impossible to get in front of the train of massive organizational failure once it’s gained momentum and the whole company is riding it straight over the cliff. There is no accountability for these types of failures, and usually no learning either. Leaders do not often reveal their “integrity moment” for these breakdowns. Similar failures could happen again to the same firm.

I believe we all have a role to play in detecting, anticipating, and confronting the decisions that lead to breakdowns that threaten the organization’s very existence. In fact, the user experience function works closer to the real world of the customer than any other organizational role. We have a unique responsibility to detect and assess the potential for product and strategic failure. We must try to stop the train, even if we are many steps removed from the larger decision making process at the root of these failures.

h2. Organizations as Wicked Problems

Consider the following scenario: A $2B computer systems provider spends most of a decade developing its next-generation platform and product, and spends untold amounts on labor, licenses, contracting, testing, sales and marketing, and facilities. Due to the extreme complexity of the application (user) domain, the project takes much longer than planned. Three technology waves come and go, but are accommodated in the development strategy: proprietary client-server, Windows NT application, and Internet plus rich client.

By the time Web Services technologies matured, the product was finally released as a server-based, rich-client application. However, the application was designed too rigidly for the flexible configurations necessary for the customer base, and the platform’s performance compared poorly to that of the current product it was designed to replace. Customers failed to adopt the product, and it became a huge write-off of most of a decade’s worth of investment.

The company recovered by facelifting its existing flagship product to embrace contemporary user interface design standards, but never developed a replacement product. A similar situation occurred with the CAD systems house SDRC, whose story ended as part two of an EDS fire-sale acquisition of SDRC and Metaphase. These failures may be more common than we care to admit.

From a business and design perspective, several questions come to mind:
* What were the triggering mistakes that led to the failure?
* At what point in such a project could anyone in the organization have predicted an adoption failure?
* What did designers do that contributed to the problem? What could IA/designers have done instead?
* Were IA/designers able to detect the problems that led to failure? Were they able to effectively project this and make a case based on foreseen risks?
* If people act rationally and make apparently sound decisions, where did failures actually happen?

This situation was not an application design failure; it was a total organizational failure. In fact, it’s a fairly common type of failure, and a preventable one. Obviously the market outcome was not the actual failure point. But the market is the product’s judgment day, and the organization must recognize failure when goals utterly fail with customers. So if this is the case, where did the failures occur?

It may be impossible to see whether and where failures will occur, for many reasons. People are generally bad at predicting the systemic outcomes of situational actions – product managers cannot see how an interface design issue could lead to market failure. People are also very bad at predicting improbable events, and failure especially, due to the organizational bias against recognizing failures.

Organizational actors are unwilling to acknowledge small failures when they have occurred, let alone large failures. Business participants have unreasonably optimistic expectations for market performance, clouding their willingness to deal with emergent risks. We generally have strong biases toward attributing our skills when things go well, and toward assigning external contingencies when things go badly. As Taleb (2007) says in The Black Swan:

bq. “We humans are the victims of an asymmetry in the perception of random events. We attribute our success to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living. Ninety-four percent of Swedes believe that their driving skills put them in the top 50 percent of Swedish drivers; 84 percent of Frenchmen feel that their lovemaking abilities put them in the top half of French lovers.” (p. 152).

Organizations are complex, self-organizing, socio-technical systems. Furthermore, they can be considered “wicked problems,” as defined by Rittel and Webber (1973). Wicked problems require design thinking; they can be designed-to, but not necessarily designed. They cannot be “solved,” at least not in the analytical approaches of so-called rational decision makers. Rittel and Webber identify 10 characteristics of a wicked problem, most of which apply to large organizations as they exist, without even identifying an initial problem to be considered:

# There is no definite formulation of a wicked problem.
# Wicked problems have no stopping rules (you don’t know when you’re done).
# Solutions to wicked problems are not true-or-false, but better or worse.
# There is no immediate and no ultimate test of a solution to a wicked problem.
# Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
# Wicked problems do not have an enumerable set of potential solutions.
# Every wicked problem is essentially unique.
# Every wicked problem can be considered to be a symptom of another [wicked] problem.
# The causes of a wicked problem can be explained in numerous ways.
# The planner has no right to be wrong.

These are attributes of the well-functioning organization, and apply as well to one pitched in the chaos of product or planning failure. The wicked problem frame also helps explain why we cannot trace a series of decisions to the outcomes of failure – there are too many alternative options or explanations within such a complex field. Considering failure as a wicked problem may offer a way out of the mess (as a design problem). But there will be no way to trace back or even learn from the originating events that the organization might have caught early enough to prevent the massive failure chain.

So we should view failure as an organizational dynamic, not as an event. By the time the signal failure event occurs (product adoption failure in the intended market), the organizational failure is ancient history. Given the inherent complexity of large organizations, the dynamics of markets and of timing products to market needs, and the interactions of hundreds of people in large projects, where do we start to look for the first cracks of large-scale failure?

h2. Types of Organizational Failure

How do we even know when an organization fails? What are the differences between a major product failure (involving function or adoption) and a business failure that threatens the organization?

An organizational-level failure is a recognizable event, one which typically follows a series of antecedent events or decisions that led to the large-scale breakdown. My working definition:

“When significant initiatives critical to business strategy fail to meet their highest-priority stated goals.”

When the breakdown affects everyone in the organization, we might say the organization has failed as a whole, even if only a small number of actors are to blame. When this happens with small companies, such as the start-up I worked with early in my career as a human factors engineer, the source and the impact are obvious.

Our company of 10 people grew to nearly 20 in a month to scale up for a large IBM contract. All resources were brought into alignment to serve this contract, but after about 6 months, IBM cut the contract – a manager senior to our project lead hired a truck and carted away all our work product and computers, leaving us literally sitting at empty desks. We discovered that IBM had 3 internal projects working on the same product, and they selected the internal team that had finished first.

That team performed quickly, but their poor quality led to the product’s miserable failure in the marketplace. IBM suffered a major product failure, but not organizational failure. In Dayton, meanwhile, all of us except the company principals were out of work, and their firm folded within a year.

Small organizations have little resilience to protect them when mistakes happen. The demise of our start-up was caused by a direct external decision, and no amount of risk management planning would have landed us softly.

I also consulted with a rapidly growing technology company in California (Invisible Worlds) which landed hard in late 2000, along with many other tech firms and start-ups. Risk planning, or its equivalent, kept the product alive – but this start-up, along with firms large and small, disappeared during the dot-bomb year.

To what extent were internal dynamics to blame for these organizational failures? In retrospect, many of the dot-bombs had terrible business plans, no sustainable business models, and even less organic demand for their services. Most would have failed in a normal business climate. They floated up with the rise of investor sentiment, and crashed to reality as a class of enterprises, all of them able to save face by blaming external forces for organizational failure.

h2. Organizational Architecture and Failure Points

Recognizing that this is a journal for designers, I’d like to extend our architectural model to include organizational structures and dynamics. Organizational architecture may have first been conceived in R. Howard’s 1992 HBR article “The CEO as organizational architect.” (The phrase has seen some academic treatment, but is not found to a great extent in the organizational science literature or in MBA courses.)

Organizations are “chaordic” as Dee Hock termed it, teetering between chaotic movement and ordered structures, never staying put long enough to have an enduring architectural mapping. However, structural metaphors are useful for planning, and good planning keeps organizations from failing. So let’s consider the term organizational architecture metaphorical, but valuable – giving us a consistent way of teasing apart the different components of a large organization related to decision, action, and role definition in large project teams.

Let’s start with organizational architecture and consider its relationships to information architecture. The continuity of control and information exchange between the macro (enterprise) and micro (product and information) architectures can be observed in intra-organizational communications. We could honestly state that all such failures originate as failures in communications. Organizational structure and processes are major components, but the idea of “an architecture,” as we should well know from IA, is not merely structural. An architectural approach to organizational design involves at least:

  • *Structures*: Enterprise, organizational, departmental, networks
  • *Business processes*: Product fulfillment, Product development, Customer service
  • *Products*: Structures and processes associated with products sold to markets
  • *Practices*: User Experience, Project management, Software design
  • *People and roles*: Titles, positions, assigned and informal roles
  • *Finance*: Accounting and financial rules that embed priorities and values
  • *Communication rules*: Explicit and implicit rules of communication and coordination
  • *Styles of interaction*: How work gets done, how people work together, formal behaviors
  • *Values*: Explicit and tacit values, priorities in decision making

Since we would need a book to describe the function and relationships within and between these dimensions, let’s see if the whole view suffices.

Each of these components is a significant function in the organizational mix, and each relies on communication to maintain its role and position in the internal architecture. While we may find a single communication point (a leader) in structures and people, most organizational functions are largely self-organizing, continuously reified through self-managing communication. They will not have a single failure point identifiable in a communication chain, because nearly all organizational conversations are redundant and will be propagated by other voices and in other formats.

Really bad decisions are caught in their early stages of communication, and become less bad through mediation by other players. So organizations persist largely because they have lots of backup. In the process of backup, we also see a lot of cover-up, a significant amount of consensus denial around the biggest failures. The stories people want to hear get repeated. You can see why everyday failures are easy to catch compared to royal breakdowns.

So are we even capable of discerning when a large-scale failure of the organizational system is imminent? Organizational failure is not a popular meme; employees can handle a project failure, but to acknowledge that the firm broke down – as a system – is another matter.

According to Chris Argyris (1992), organizational defensive routines are “any routine policies or actions that are intended to circumvent the experience of embarrassment or threat by bypassing the situations that may trigger these responses. Organizational defensive routines make it unlikely that the organization will address the factors that caused the embarrassment or threat in the first place. (p. 164)” Due to organizational defenses, most managers will place the blame for such failures on individuals rather than on poor decisions or other root causes, and will deflect critique of general management or decision-making processes.

Figure 1 shows a pertinent view of the case organization, simplifying the architecture (to People, Process, Product, and Project) so that differences in structure, process, and timing can be drawn.

Projects are not considered part of architecture, but they reveal time dynamics and mobilize all the constituents of architecture. Projects are also where failures originate.

The timeline labeled “Feedback cycle” shows how smaller projects cycled user and market feedback quickly enough to impact product decisions and design, usually before initial release. Due to the significant scale, major rollout, and long sales cycle of the Retail Store Management product, the market feedback (sales) took most of a year to reach executives. By then, the trail’s gone cold.


Figure 1. Failure case study organization – Products and project timeframes. (View figure 1 at full-size.)

Over the project lifespan of Retail Store Management, the organization:

  • Planned a “revolutionary” not evolutionary product
  • Spun off and even sequestered the development team – to “innovate” undisturbed by the pedestrian projects of the going concern
  • Spent years developing “best practices” for technology, development, and the retail practices embodied in the product
  • Kept the project a relative secret from the rest of the company until close to initial release
  • Evolved the technology significantly over time as paradigms changed, starting as an NT client-server application, then a distributed database, and finally a Web-enabled rich client interface.

Large-scale failures can occur when the work domain and potential user acceptance (motivations and constraints) are not well understood. When a new product cannot fail, organizations will prohibit acknowledging even minor failures, and cumulative failures to learn build up from small mistakes. This can lead to one very big failure at the product or organizational level.

We can see this kind of situation (as shown in Figure 1) generates many opportunities for communications to fail, leading to decisions based on biased information, and so on. From an abstract perspective, modeling the inter-organizational interactions as “boxes and arrows,” we may find it a simple exercise to “fix” these problems.

We can recommend (in this organization) actions such as educating project managers about UX, creating marketing-friendly usability sessions to enlist support from internal competitors, making well-timed pitches to senior management with line management support, et cetera.

But in reality, it usually does not work out this way. From a macro perspective, when large projects that “cannot fail” are managed aggressively in large organizations, the user experience function is typically subordinated to project management, product management, and development. User experience – whether expressing its user-centered design or usability roles – can be perceived as introducing new variables to a set of baselined requirements, regardless of lifecycle model (waterfall, incremental, or even Agile).

To make it worse (from the viewpoint of product or requirements management), we promote requirements changes from the high-authority position conferred by the reliance on user data. Under the organizational pressures of executing a top-down managed product strategy, leadership often closes ranks around the objectives. Complete alignment to strategy is expected across the entire team. Late-arriving user experience “findings” that could conflict with internal strategy will be treated as threatening, not helpful.

With such large, cross-departmental projects, signs of warning drawn from user data can be simply disregarded, as not fitting the current organizational frame. And if user studies are performed, significant conflicts with strategy can be discounted as the analyst’s interpretation.

There are battles we sometimes cannot win. In such plights, user experience professionals must draw on inner resources of experience, intuition, and common sense and develop alternatives to standard methods and processes. The quality of interpersonal communications may make more of a difference than any user data.

In Part II, we will explore the factors of user experience role, the timing dynamics of large projects, and several alternatives to the framing of UX roles and organizations today.

Enterprise IA Methodologies

Written by: James Robertson

Information architects working within enterprises are confronted by unique challenges relating to organisational culture, business processes, and internal politics. Compared to public website or interface design projects, the application of the IA discipline differs in key ways, chiefly in the uncertainty around the exact nature of the business problems being solved.

In a typical web or design project, the information architect is given a task, such as:

  • Improve the design of the website for consumers
  • Develop a user interface for a new business application
  • Make it easier for staff to find information on the intranet

In all these cases, the problem is known, and the challenge is to work out the best way to design the solution. User-centered design methodologies then provide a rich toolbox for delivering an effective solution.

However, within the enterprise space, the problem to be solved is often not well understood. For example, information architects may be approached with ill-defined “problems” such as:

  • Improve the effectiveness of the intranet
  • Help call center staff to access required information
  • Increase the uptake of the document management system
  • Support sales staff with better online resources

The first task for the information architect in this context is to better understand the problem. Only then can an overall approach be defined, and the normal user-centered design process initiated.

In practice, this means that enterprise IAs often start two steps earlier, focusing first on analyzing needs, and then defining a strategy and scope to meet those needs.

Traditional IA methodologies

EIA.0407.diagram1.jpg
Diagram 1: Illustrates a typical IA approach

While there are many valid ways to redesign a website or intranet, most projects start with user research to identify user tasks and goals. Then, the IA uses these results to develop a draft IA, which is tested in an iterative manner. Wireframes detail the user interface, and usability testing or similar techniques are used to refine it.

The overall goal of this approach is to clearly understand what the user is trying to achieve when using the site or system, allowing the IA to develop a solution that is both effective and satisfying.

This much is well understood, and well documented elsewhere.

Enterprise IA approaches

Within the enterprise, the core user-centered design methodology remains just as valid. However, to be effective, the process must start two steps earlier.

EIA.0407.diagram2.jpg
Diagram 2: Illustrates an Enterprise IA approach

Step 1: Needs analysis
The first step now becomes needs analysis, which uses the same user research techniques as in typical user-centered design (interviews, contextual inquiry, observation), but to different ends. This time, we don’t ask questions about the system, but instead focus on obtaining a more complete and holistic picture of what staff do and the environment in which they work.

This might include questions such as:

  • What activities make up your job?
  • What information do you need to do these activities?
  • Where do you currently get this information?
  • How do you find out what’s happening in the organisation?
  • What is the most frustrating task you had to complete in the last month?

Rather than supporting the design process, this research helps the IA understand the nature of the problem. Open-ended and ethnographic, this research will undoubtedly highlight the unexpected and the unknown, both of which radically shape the approach going forward.

(For more on needs analysis in the context of intranets, see my earlier article on this topic, “Succeeding at IA in the Enterprise”)

Step 2: Strategy and scope
The needs analysis then informs the creation of an overall strategy, scope, and direction. This clear framework for the IA work allows a comprehensive roadmap of the required activities to emerge. The strategy also identifies the most critical issues to be solved, along with the activities with the potential to deliver the greatest business benefits. In this way, the IA work can be targeted for the greatest impact.

In many cases, needs analysis helps the team discover underlying issues which need to be addressed before any IA or design activity can succeed. (Cultural and business process issues are common examples.)

Together, the strategy and scope define the “problem” and provide a concrete context for the user-centered design process. Along with the illumination of the practical aspects of the work to come, the strategy also builds the business case for change and creates a sense of urgency.

A real-life case study drawn from a number of different intranet projects in call center environments illustrates the effects of the enterprise approach.

Case Study: Call Centers

In many organisations, call centers now serve as the primary point of contact with customers or the public. Whether in the insurance industry or within a government agency, call centers handle a huge volume of queries and transactions.

Within the call center, staff work in a high-pressure environment. They are expected to answer questions correctly within 30 seconds. Failure to deliver the right information leads to customer complaints or legal liability (organisations are directly responsible for every piece of information given out by a call center). Slow response times can create long queues, more complaints, and customer attrition.

To meet these expectations, call center staff require an effective and well-designed set of information resources. The typical call center intranet contains a large number of documents and news items for staff.

As information architects, we are often brought into this environment to “redesign the call center intranet” and make it an effective resource for staff. Based on experiences in other environments, it is natural to make a number of assumptions about where efforts should be focused:

  • The most common questions or transactions handled by staff should be identified
  • Effort should then be focused on providing resources to answer these key tasks
  • Paper resources should be migrated onto the call center intranet where possible
  • A user-centered redesign should be conducted of the call center intranet, to ensure it is effective

Spend a day or two in the call center, and the gap between these assumptions and reality will quickly become apparent. Let’s look at a call center in the insurance and investment industry.

In this particular case, the most obvious non-technical artifact in the call center was the book of photocopied notes that most staff had sitting beside them, scribbled on and annotated with sticky notes. Then there were the sheets pinned to the cubicle walls, similarly inscribed.

Additionally, product brochures adorned everyone’s desk to cover the 30-40 products sold at any given point. A deeper look uncovered the huge amount of email filed away in folders within Outlook.

There were good reasons why the call center worked in this manner:

  • Customers rang up asking, “On page 54 it says this, but what does it mean?” The call center staff needed to quickly access the same page to walk them through the details.
  • Key information (such as system codes) needed to be instantly available. Pieces of paper pinned to the wall would always be quicker than looking up an electronic system.
  • All important communications were broadcast via email, and maybe (only maybe) added to the intranet. Since the details probably were not needed at that point, it was necessary to file away the emails for later use.

In this situation, we drew some unexpected conclusions from these (and other) observations, even having been primed by previous call center projects.

In the end, our efforts focused on:

  • Managing uncommon rather than common details: The information related to the most frequent queries or transactions resided in the heads of staff. The complex issue that only came up every 6-12 months posed the real challenge.
  • Capturing old rather than new information: The brochures for the current products worked perfectly well. The real problem came from investment products that could go back decades and were still covered by the original terms of the contract. Finding a 20-year-old printed policy was not easy!
  • Eliminating email as the distribution mechanism: As long as email remained the primary way to deliver critical information, the intranet could never succeed. There were also clear productivity benefits in eliminating the duplicated information management conducted by every individual staff member.

The net result was that the call center intranet still needed redesigning, but the needs analysis gave a very clear idea of where to focus efforts, and identified the unique environmental aspects of call centers to be taken into account when conducting any work.

Solving the Wrong Problem

This case study highlights the importance of conducting the needs analysis process before embarking on any design or development activities. Failure to do so exposes the organisation to the risk of solving the wrong problem—putting significant effort into developing a solution that fails to work in practice.

Always assess the issue at hand to work out whether it is the cause or merely the symptom. For example, the intranet may be very poorly structured, with considerable usability problems. This naturally calls for a user-centered design project delivering a new IA and page layouts.

However, the underlying causes of the problem may be the disorganised publishing model, the lack of resources, or key cultural problems. If only the symptom (the design of the site) is tackled, the site will immediately start to slide back into disrepair the day it is re-launched.

A small piece of initial needs analysis work and strategy and scope planning allows identification of these underlying problems. Address them, if possible, before (or during) the project, improving the chances that the site continues to prosper post go-live.

Summary

Within the enterprise environment, our methodologies must start two steps earlier than the typical user-centered design process.

  1. Make use of holistic needs analysis techniques to build a clear picture of the real needs and issues of staff, along with an understanding of the environment in which they work.
  2. Then create a meaningful strategy and scope that identifies the symptoms and the causes. This information allows us to correctly target our work and ensure that we deliver solutions that actually work for staff.

 

(All of this is not to say that it’s easy to get the opportunity or the mandate to conduct this initial work, before being forced to jump straight into the design process. But it is possible—we’ve done it many times—and a discussion of how to tackle the broader positioning of enterprise IA will have to wait until another article.)

About the author
James Robertson is apparently the “intranet guy,” or so he was told at the IA Summit in Vancouver. He runs Step Two Designs (www.steptwo.com.au), a consultancy based in Sydney, Australia, and has written over 150 articles on intranets and content management, which can be found on his site. He also has a blog, the writing of which gives him something to do each morning while his brain warms up.

Change Architecture: Bringing IA to the Business Domain

Written by: Bob Goodman

“Information architects hold the potential to become master Bead Game players who help companies play the right music to succeed. But gaining a seat at the business table requires that we change aspects of our usual perspective.”

In Herman Hesse’s Nobel-prize winning novel, The Glass Bead Game, skilled players tap into a symbolic language that encodes all of human knowledge into a kind of music to be played and shared: [1]

These rules, the sign language and grammar of the Game, constitute a kind of highly developed secret language drawing upon several sciences and arts, but especially mathematics and music (and/or musicology), and capable of expressing and establishing interrelationships between the content and conclusions of nearly all scholarly disciplines. … on all this immense body of intellectual values the Glass Bead Game player plays like the organist on an organ. [2]

Today, as the world of knowledge increasingly resides encoded in digital form, stored in databases, and accessed through the web, information architects hold the potential to become master Bead Game players who help companies play the right music to succeed. But gaining a seat at the business table requires that we change aspects of our usual perspective.

As IAs, we are not just architecting information; we are using information to architect change. In “traditional” information architecture, the target of work is usually a website or a web-based application. Change architecture steps outside of these bounds. The domain is not limited to a web team; it expands to include today’s dynamic business environment and the way people, processes, and tools interact and interoperate. The target is no longer limited to web browsers; rather, it is the minds of those people charged with understanding the broader business landscape and contributing to better business decisions.

When seen from a change architecture perspective, the IA’s existing toolkit—normally used to discover and capture information, re-categorize content for easier consumption, and visualize ideas for shared understanding and action—naturally supports this expanded business domain. IAs can help companies reap the benefits of positive change by reducing fear of change, creating hope for the future, enhancing adaptivity to change, and architecting applications and processes that enable business success.

Thinking about change architecture raises new questions:

  • How can we clearly communicate with clients about the ways information architecture paves the way for positive change?
  • What role do digital (or even physical) assets—including site maps, work flows, and visual explanations—play in helping a team and a company share a vision for change?
  • How do we help our clients, and their employees or customers, adapt to and embrace change?
  • How can we change the perception that IA is just a step in a website production process?

Not just for websites anymore

In fact, a number of information architects are already applying IA methods to business problems beyond the web. A few recent cases in point:

  • At Vanguard, the mutual fund firm, information architects Richard Dalton and Rob Weening stepped into the company’s strategic planning process to synthesize and visualize findings from extensive client interviews and make recommendations to internal business decision-makers about solving key pain points. Dalton and Weening faced initial skepticism about the ability of IA to overcome what they call the “web design” stereotype in the strategic planning arena. [3]
  • At Dynamic Diagrams, a Rhode Island-based information architecture firm, the company’s “visual explanation” services are often employed by companies with complex business processes or products to help put an internal team on the same page. The Dynamic Diagrams team advocates the use of “isometric” illustrations that bring perspective—in both the literal and figurative sense—to large-scale information and process issues. [4] “When applied correctly,” note several members of the firm in the Interaction Design Journal, “the introduction of depth makes the information easier to grasp by appealing to our intuitive understanding of space.”
  • At EZgov, which helps bring government services online, information architect Peter Boersma and other internal team members convinced decision makers to incorporate user-centered practices into the company’s software development process. Their persuasion tools include workflows and process maps overlaid by visual design. [5]

Commenting on his experience, Boersma notes that visuals, when converted into life-size objects such as posters, can help convert an abstract realm into something tangible that the team can talk about: “Visual explanations, when designed well, are the proverbial pictures that are worth more than 1000 words; they make lengthy explanations unnecessary. But, more importantly, they allow for discussion by pointing at things and indicating relationships by drawing lines in the air, when the visual is projected or hung on the wall.” [6]

The physical form, scale, and transmission of visual explanations can become extremely important as the medium for “spreading the news.” Dalton and Weening created one large-scale information map, and then hundreds of smaller “placemat” versions that were distributed to business units. [7] Depending on a particular IA’s skill set, these visual assets may be developed directly by him or her, or they may be developed in close collaboration with a visual designer.

Anecdotal evidence points to an evolution of IA as a unique approach to business consulting that combines analysis with tangible digital assets and actions. While business consulting comes in many flavors, information architects bring a particular set of top-down and bottom-up tools and capabilities to the table. IA practitioners may not necessarily think of themselves as change architects or persons engaged in change architecture, but there is a common thread of working to make changes in the process and/or perceptions of a collaborative team.

Learning more about change

As IAs, we know a lot about working with information. However, we need to learn more about attitudes toward change. Areas of knowledge that could be incorporated into change architecture include business strategy, business process intelligence, and cultural psychology. Change architecture could also benefit from the lessons of change management, a business consulting approach with roots that pre-date the emergence of the web.

One of the key models in change management comes from Kurt Lewin, one of the founders of modern social psychology. Lewin suggested a three-phase model of change, which has been distilled into the following framework: Unfreeze, Transition, and Refreeze. Here’s a quick look at each phase:

Unfreeze: People tend to create a comfort zone where habits, patterns, and processes repeat in a somewhat static, fixed way. This gives them a sense of familiarity, control, and purpose. As Charles Handy writes in an essay, Unimagined Future: “Most of us prefer to walk backward into the future, a posture which may be uncomfortable but which at least allows us to keep on looking at familiar things as long as we can.” [8] There is an instinctive and understandable resistance to change. Old patterns have a powerful ability to propagate across a culture, achieving a kind of cultural lock-in and monopolizing the way people think about possibilities. Before someone becomes change-ready, they often need to be “unfrozen” from their static environment.

Transition: Transitioning marks the journey across the chasm of change. People and organizations reconfigure themselves from an old formation to a new one (“re-form”), through many different and often difficult realignment steps and stages. The first step is often the hardest, and leaders need tools to help people to avoid “change shock”, feel hopeful about change, and acclimate to the new possibilities.

The writings of creativity expert Edward de Bono are an excellent source of transitioning tools. He draws the following analogy in his book, Parallel Thinking: “Your existing cooking-pots may allow you to cook all the meals you have always cooked, but if one day you want to cook dim sum, then you may need to get a proper steamer system.” [9]

The practice of IA provides transitioning tools that can help people limber up their thinking and explore new structures, new terminology, and new approaches. For example, card-sorting sessions, interactive prototypes, and visual explanations safely simulate change in advance and let people “try it on for size” before the full change arrives.
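Card-sort sessions of the kind mentioned above are often summarized by counting how frequently participants group the same pair of cards together. A minimal sketch, with hypothetical cards, piles, and function names (none of these come from the article):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical card-sort data: each participant sorts cards into piles.
sorts = [
    [{"login", "password"}, {"pricing", "plans"}],
    [{"login", "password", "plans"}, {"pricing"}],
    [{"login", "password"}, {"pricing", "plans"}],
]

def cooccurrence(sorts):
    """Fraction of participants who placed each pair of cards in the same pile."""
    counts = defaultdict(int)
    for piles in sorts:
        for pile in piles:
            # combinations of a sorted pile gives alphabetically ordered pairs
            for pair in combinations(sorted(pile), 2):
                counts[pair] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

sim = cooccurrence(sorts)
# All three participants paired "login" with "password", so that
# pair's similarity is 1.0; weaker pairings score lower.
```

High-scoring pairs suggest categories that will feel natural to users; low scores flag terms whose placement should be explored further before the full change arrives.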

Refreeze: This phase aims to bring a renewed sense of confidence and comfort to the person’s or organization’s changed environment. Refreezing also helps bolster the changes, so the organization avoids falling back into the earlier frozen patterns. (Alas, refreezing is perhaps not the best word choice. In today’s constantly changing environment, one shouldn’t strive to achieve another frozen state, but rather an integration of stability and dynamism.)

Big change, small change, and loose change

Lewin’s change model brings to mind major top-down changes. But what about the smaller-scale everyday decisions that drive the tempo and tenor of business? In their article, “Who Has The D? How Clear Decision Roles Enhance Organizational Performance” in the January 2006 issue of the Harvard Business Review, Bain & Company consultants Paul Rogers and Marcia Blenko offer a compelling framework for clearing decision bottlenecks. [10]

They call it “RAPID,” for the sake of a catchy acronym (even though the letters don’t follow the order of the steps), and define five key roles in the decision-making process: those who Recommend a course of action based on discovery and analysis, those who offer Input on the recommendation, those who review and Agree to the recommendation, those who ultimately Decide, and those who Perform the decided action.
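One way to make the five roles actionable is to record, for each pending decision, exactly who holds each letter. The sketch below (a hypothetical illustration; the decision and owner names are invented, not taken from Rogers and Blenko) models a single decision with its RAPID assignments:

```python
# Illustrative sketch: a RAPID role assignment for one decision.
# The decision, teams, and titles below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Decision:
    question: str
    recommend: list  # those who recommend a course of action
    input: list      # those who offer input on the recommendation
    agree: list      # those who review and agree to it
    decide: str      # the single owner of the "D"
    perform: list    # those who carry out the decided action

    def has_single_decider(self) -> bool:
        # RAPID's core discipline: every decision has exactly one clear owner.
        return bool(self.decide)

nav_redesign = Decision(
    question="Adopt the proposed global navigation model?",
    recommend=["IA team"],
    input=["Customer support", "Sales"],
    agree=["Legal"],
    decide="VP of Product",
    perform=["Engineering", "Content team"],
)

print(nav_redesign.decide)  # naming the "D" is what clears the bottleneck
```

Even a simple table of decisions and role owners like this one makes the path from recommendation to action visible, which is precisely the bottleneck the framework targets.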

Although the Rogers and Blenko article focuses on role-definition, not information architecture, it has a number of implications for our field. For one, information architects are often asked to perform a recommendation role. The often-hazy path leading from recommendation to decision and action has historically been a source of great professional frustration to many IAs, who may chalk up shortcomings in that terrain to “politics.” From a change architecture perspective, the conflict inherent in this decision-making process may be seen instead as an opportunity. While people often disagree over the possible outcomes and pace of change, we need to understand that conflict is an attribute—not a side effect—of the decision-making process. In addition, business decisions increasingly play out across a distributed team that never actually converges face-to-face. Decisions hang in the ether and, in the words of Rogers and Blenko, “get stuck inside the organization like loose change.”

If we IAs become attuned to this situation, we’ll come to understand that the assets we create for fostering understanding are well-suited to helping clear these decision-making bottlenecks and improving the decision “throughput” across the company. Teams that are divided by office, country, continent, and culture can be placed on the same page. IAs are in a position to not only inform the situation, but also proactively propose a workflow to define the path leading from a recommendation to “performing” that recommendation. With these approaches, an information architect can become a kind of Black Belt in architecting and navigating big, small, and even loose change within an organization.

Is change architecture worth changing for?

Through the paradigm of change architecture, IAs can recognize that when we step onto the business stage of a project, we will first need to unfreeze aspects of the situation and the environment, and ultimately make the path from recommendation to action visible to the participants.

Change architecture could even be applied to the trade of information architecture itself. When I began as an information architect 10 years ago, such matters were outside my field of vision; I thought of my role only in terms of providing information and documentation. Today, I recognize that practicing information architecture in an organization—either as an employee or as a consultant—requires intervention, persuasion, and leadership.

For many IAs, even the idea that the first phase of a new project engagement requires unfreezing to create a change-ready state would itself represent a major change. But information architecture may be a domain that is ready for a sea change. The signs are there: the internal soul-searching that has taken place on IA mailing lists and conferences, the seeming confusion about the overlap or gap between IA and design, and the struggle to find a shared language. Could it be we are unfreezing, heading toward transition?

Learn More

Podcast with Bob Goodman on Change Architecture “Bob also shares his thoughts about Web 2.0 and the value add this new approach to the web will bring to organizations. As well, we discuss different approaches to IA and Usability including card sorting and Bob’s experiences with Listening Labs.”

Footnotes:

[1] The connection of the “Glass Bead Game” and its players to the domain of “information visualization” was recently noted by Aaron Marcus, “Visualizing the Future of Information Visualization”, Interactions, (March/April 2006): 42-43.

[2] Hermann Hesse, The Glass Bead Game, (New York: Picador, 2002), 15.

[3] Robert Dalton and Rob Weening, “A Foray Across Boundaries: Applying IA to Business Strategy and Planning”, (PowerPoint presentation).

[4] Paul Kahn, Piotr Kaczmarek, and Krzysztof Lenk, “Applications of Isometric Projection for Visualizing Web Sites”, Information Design Journal, (Volume 10, No. 3, 2000): 221-229.

[5] Peter Boersma, “Integrating IA Deliverables in a Web application methodology”, paper adapted from ASIS&T Bulletin publication, February/March 2005.

[6] Peter Boersma, e-mail exchange, March 2006.

[7] Dalton and Weening, “A Foray Across Boundaries.”

[8] Charles Handy, “Unimagined Future,” in The Drucker Foundation: The Organization of the Future, ed. Marshall Goldsmith (San Francisco: Jossey-Bass, 1996): 377.

[9] Edward de Bono, Parallel Thinking, (London: Viking, 1995).

[10] Paul Rogers and Marcia Blenko, “Who Has The D?,” Harvard Business Review (January 2006): 53-61.