We Tried To Warn You, Part 1

Written by: Peter Jones

There are many kinds of failure in large, complex organizations – breakdowns occur at every level of interaction, from interpersonal communication to enterprise finance. Some of these failures are everyday and even helpful, allowing us to safely and iteratively learn and improve communications and practices. Other failures – what I call large-scale – result from accumulated bad decisions, organizational defensiveness, and embedded organizational values that prevent people from confronting these issues in real time as they occur.

So while it may be difficult to acknowledge your own personal responsibility for an everyday screw-up, it’s impossible to get in front of the train of massive organizational failure once it’s gained momentum and the whole company is riding it straight over the cliff. There is no accountability for these types of failures, and usually no learning either. Leaders rarely reveal their “integrity moment” for these breakdowns, so similar failures could happen again to the same firm.

I believe we all have a role to play in detecting, anticipating, and confronting the decisions that lead to breakdowns that threaten the organization’s very existence. In fact, the user experience function works closer to the real world of the customer than any other organizational role. We have a unique responsibility to detect and assess the potential for product and strategic failure. We must try to stop the train, even if we are many steps removed from the larger decision making process at the root of these failures.

h2. Organizations as Wicked Problems

Consider the following scenario: A $2B computer systems integration provider spends most of a decade developing its next-generation platform and product, spending untold amounts on labor, licenses, contracting, testing, sales and marketing, and facilities. Due to the extreme complexity of the application (user) domain, the project takes much longer than planned. Three technology waves come and go, but are accommodated in the development strategy: proprietary client-server, Windows NT application, and Internet + rich client.

By the time Web Services technologies matured, the product was finally released as a server-based, rich client application. However, the application was designed too rigidly for flexible configurations necessary for the customer base, and the platform performance compared poorly to the current product for which the project was designed as a replacement. Customers failed to adopt the product, and it was a huge write-off of most of a decade’s worth of investment.

The company recovered by facelifting its existing flagship product to embrace contemporary user interface design standards, but never developed a replacement product. A similar situation occurred with the CAD systems house SDRC, whose story ended as part two of an EDS fire-sale acquisition of SDRC and Metaphase. These failures may be more common than we care to admit.

From a business and design perspective, several questions come to mind:
* What were the triggering mistakes that led to the failure?
* At what point in such a project could anyone in the organization have predicted an adoption failure?
* What did designers do that contributed to the problem? What could IA/designers have done instead?
* Were IA/designers able to detect the problems that led to failure? Were they able to effectively project this and make a case based on foreseen risks?
* If people act rationally and make apparently sound decisions, where did failures actually happen?

This situation was not an application design failure; it was a total organizational failure. In fact, it’s a fairly common type of failure, and a preventable one. Obviously the market outcome was not the actual failure point, but it was the product’s judgment day: the organization must recognize failure when its goals utterly fail with customers. So if this is the case, where did the failures occur?

It may be impossible to see whether and where failures will occur, for many reasons. People are generally bad at predicting the systemic outcomes of situational actions – product managers cannot see how an interface design issue could lead to market failure. People are also very bad at predicting improbable events, and failure especially, due to the organizational bias against recognizing failures.

Organizational actors are unwilling to acknowledge small failures when they have occurred, let alone large ones. Business participants have unreasonably optimistic expectations for market performance, clouding their willingness to deal with emergent risks. We generally have strong biases toward crediting our own skills when things go well, and blaming external contingencies when things go badly. As Taleb (2007)1 says in The Black Swan:

bq. “We humans are the victims of an asymmetry in the perception of random events. We attribute our success to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living. Ninety-four percent of Swedes believe that their driving skills put them in the top 50 percent of Swedish drivers; 84 percent of Frenchmen feel that their lovemaking abilities put them in the top half of French lovers.” (p. 152).

Organizations are complex, self-organizing, socio-technical systems. Furthermore, they can be considered “wicked problems,” as defined by Rittel and Webber (1973)2. Wicked problems require design thinking; they can be designed-to, but not necessarily designed. They cannot be “solved,” at least not in the analytical approaches of so-called rational decision makers. Rittel and Webber identify 10 characteristics of a wicked problem, most of which apply to large organizations as they exist, without even identifying an initial problem to be considered:

# There is no definite formulation of a wicked problem.
# Wicked problems have no stopping rules (you don’t know when you’re done).
# Solutions to wicked problems are not true-or-false, but better or worse.
# There is no immediate and no ultimate test of a solution to a wicked problem.
# Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
# Wicked problems do not have an enumerable set of potential solutions.
# Every wicked problem is essentially unique.
# Every wicked problem can be considered to be a symptom of another [wicked] problem.
# The causes of a wicked problem can be explained in numerous ways.
# The planner has no right to be wrong.

These are attributes of the well-functioning organization, and they apply just as well to one pitched into the chaos of product or planning failure. The wicked problem frame also helps explain why we cannot trace a series of decisions to the outcomes of failure – there are too many alternative options and explanations within such a complex field. Considering failure as a wicked problem may offer a way out of the mess (as a design problem). But there will be no way to trace back to – or even learn from – the originating events that the organization might have caught early enough to prevent the massive failure chain.

So we should view failure as an organizational dynamic, not as an event. By the time the signal failure event occurs (product adoption failure in the intended market), the organizational failure is ancient history. Given the inherent complexity of large organizations, the dynamics of markets and timing products to market needs, and the interactions of hundreds of people in large projects, where do we start to look for the first cracks of large-scale failure?

h2. Types of Organizational Failure

How do we even know when an organization fails? What are the differences between a major product failure (involving function or adoption) and a business failure that threatens the organization?

An organizational-level failure is a recognizable event, one which typically follows a series of antecedent events or decisions that led to the large-scale breakdown. My working definition:

“When significant initiatives critical to business strategy fail to meet their highest-priority stated goals.”

When the breakdown affects everyone in the organization, we might say the organization has failed as a whole, even if only a small number of actors are to blame. When this happens with small companies, such as the start-up I worked with early in my career as a human factors engineer, the source and the impact are obvious.

Our company of 10 people grew to nearly 20 in a month to scale up for a large IBM contract. All resources were brought into alignment to serve this contract, but after about 6 months, IBM cut the contract – a manager senior to our project lead hired a truck and carted away all our work product and computers, leaving us literally sitting at empty desks. We discovered that IBM had 3 internal projects working on the same product, and they selected the internal team that had finished first.

That team performed quickly, but their poor quality led to the product’s miserable failure in the marketplace. IBM suffered a major product failure, but not organizational failure. In Dayton, meanwhile, all of us except the company principals were out of work, and their firm folded within a year.

Small organizations have little resilience to protect them when mistakes happen. The demise of our start-up was caused by a direct external decision, and no amount of risk management planning would have landed us softly.

I also consulted with a rapidly growing technology company in California (Invisible Worlds) which landed hard in late 2000, along with many other tech firms and start-ups. Risk planning, or its equivalent, kept the product alive – but this start-up, along with firms large and small, disappeared during the dot-bomb year.

To what extent were internal dynamics to blame for these organizational failures? In retrospect, many of the dot-bombs had terrible business plans, no sustainable business models, and even less organic demand for their services. Most would have failed in a normal business climate. They floated up with the rise of investor sentiment, and crashed to reality as a class of enterprises, all of them able to save face by blaming external forces for organizational failure.

h2. Organizational Architecture and Failure Points

Recognizing this is a journal for designers, I’d like to extend our architectural model to include organizational structures and dynamics. Organizational architecture may have been first conceived in R. Howard’s 1992 HBR article “The CEO as organizational architect.” (The phrase has seen some academic treatment, but is not found in organizational science literature or MBA courses to a great extent.)

Organizations are “chaordic” as Dee Hock termed it, teetering between chaotic movement and ordered structures, never staying put long enough to have an enduring architectural mapping. However, structural metaphors are useful for planning, and good planning keeps organizations from failing. So let’s consider the term organizational architecture metaphorical, but valuable – giving us a consistent way of teasing apart the different components of a large organization related to decision, action, and role definition in large project teams.

Let’s start with organizational architecture and consider its relationships to information architecture. The continuity of control and information exchange between the macro (enterprise) and micro (product and information) architectures can be observed in intra-organizational communications. We could honestly state that all such failures originate as failures in communications. Organizational structure and processes are major components, but the idea of “an architecture,” as we should well know from IA, is not merely structural. An architectural approach to organizational design involves at least:

* *Structures*: Enterprise, organizational, departmental, networks
* *Business processes*: Product fulfillment, product development, customer service
* *Products*: Structures and processes associated with products sold to markets
* *Practices*: User experience, project management, software design
* *People and roles*: Titles, positions, assigned and informal roles
* *Finance*: Accounting and financial rules that embed priorities and values
* *Communication rules*: Explicit and implicit rules of communication and coordination
* *Styles of interaction*: How work gets done, how people work together, formal behaviors
* *Values*: Explicit and tacit values, priorities in decision making

Since we would need a book to describe the function and relationships within and between these dimensions, let’s see if the whole view suffices.

Each of these components is a significant function in the organizational mix, all reliant on communication to maintain their role and position in the internal architecture. While we may find a single communication point (a leader) in structures and people, most organizational functions are largely self-organizing, continuously reified through self-managing communication. They will not have a single identifiable failure point in a communication chain, because nearly all organizational conversations are redundant and will be propagated by other voices and in other formats.

Really bad decisions are caught in their early stages of communication, and become less bad through mediation by other players. So organizations persist largely because they have lots of backup. In the process of backup, we also see a lot of cover-up, a significant amount of consensus denial around the biggest failures. The stories people want to hear get repeated. You can see why everyday failures are easy to catch compared to royal breakdowns.

So are we even capable of discerning when a large-scale failure of the organizational system is imminent? Organizational failure is not a popular meme; employees can handle a project failure, but acknowledging that the firm broke down – as a system – is another matter.

According to Chris Argyris (1992), organizational defensive routines are “any routine policies or actions that are intended to circumvent the experience of embarrassment or threat by bypassing the situations that may trigger these responses. Organizational defensive routines make it unlikely that the organization will address the factors that caused the embarrassment or threat in the first place” (p. 164). Due to these organizational defenses, most managers will place the blame for such failures on individuals rather than on poor decisions or other root causes, and will deflect critique of general management or decision-making processes.

Figure 1 shows a pertinent view of the case organization, simplifying the architecture (to People, Process, Product, and Project) so that differences in structure, process, and timing can be drawn.

Projects are not considered part of architecture, but they reveal time dynamics and mobilize all the constituents of architecture. Projects are also where failures originate.

The timeline labeled “Feedback cycle” shows how smaller projects cycled user and market feedback quickly enough to impact product decisions and design, usually before initial release. Due to the significant scale, major rollout, and long sales cycle of the Retail Store Management product, the market feedback (sales) took most of a year to reach executives. By then, the trail’s gone cold.


Figure 1. Failure case study organization – Products and project timeframes.

Over the project lifespan of Retail Store Management, the organization:

* Planned a “revolutionary,” not evolutionary, product
* Spun off and even sequestered the development team – to “innovate” undisturbed by the pedestrian projects of the going concern
* Spent years developing “best practices” for technology, development, and the retail practices embodied in the product
* Kept the project a relative secret from the rest of the company until close to initial release
* Evolved the technology significantly as paradigms changed, starting as an NT client-server application, then a distributed database, and finally a Web-enabled rich client interface

Large-scale failures can occur when the work domain and potential user acceptance (motivations and constraints) are not well understood. When a new product cannot fail, organizations will prohibit acknowledging even minor failures, with cumulative failures to learn building from small mistakes. This can lead to one very big failure at the product or organizational level.

We can see this kind of situation (as shown in Figure 1) generates many opportunities for communications to fail, leading to decisions based on biased information, and so on. From an abstract perspective, modeling the inter-organizational interactions as “boxes and arrows,” we may find it a simple exercise to “fix” these problems.

We can recommend (in this organization) actions such as educating project managers about UX, creating marketing-friendly usability sessions to enlist support from internal competitors, making well-timed pitches to senior management with line management support, et cetera.

But in reality, it usually does not work out this way. From a macro perspective, when large projects that “cannot fail” are managed aggressively in large organizations, the user experience function is typically subordinated to project management, product management, and development. User experience – whether expressing its user-centered design or usability roles – can be perceived as introducing new variables to a set of baselined requirements, regardless of lifecycle model (waterfall, incremental, or even Agile).

To make it worse (from the viewpoint of product or requirements management), we promote requirements changes from the high-authority position conferred by the reliance on user data. Under the organizational pressures of executing a top-down managed product strategy, leadership often closes ranks around the objectives. Complete alignment to strategy is expected across the entire team. Late-arriving user experience “findings” that could conflict with internal strategy will be treated as threatening, not helpful.

With such large, cross-departmental projects, signs of warning drawn from user data can be simply disregarded, as not fitting the current organizational frame. And if user studies are performed, significant conflicts with strategy can be discounted as the analyst’s interpretation.

There are battles we sometimes cannot win. In such plights, user experience professionals must draw on inner resources of experience, intuition, and common sense and develop alternatives to standard methods and processes. The quality of interpersonal communications may make more of a difference than any user data.

In Part II, we will explore the factors of user experience role, the timing dynamics of large projects, and several alternatives to the framing of UX roles and organizations today.

The Trouble With Web 2.0

Written by: Alexander Wilms

Today, there is a lot of buzz around a number of topics labeled as “Web 2.0”. Consultants jump on the “Web 2.0” bandwagon, and IT vendors desperately struggle to add “Web 2.0” features to their products. But the term is still unclear, and nobody has a good definition of what “Web 2.0” is and what it is not. The term was originally coined by Tim O’Reilly in an article describing the changes in business processes and models that have been triggered by new and creative combinations of already existing technologies.

Other “social networking” services (like wikis and blogs) have been added to the Web 2.0 “genre”, generating a new end user experience on the Internet. Many large enterprises are now starting “Web 2.0” projects, their IT departments seemingly eager to show their technological abilities. The “new” is strangely attractive and everyone wants to be on board.

But what is the real core of this new phenomenon? According to Tim O’Reilly’s conclusions, the core is not the technology (which has been there for some time) but the emergence of new “patterns” – new or changed business processes and a new concept of the user. These patterns have been put to good use on the World Wide Web. But can they be transferred to a corporate environment, at the enterprise level, as well? Let’s have a closer look at them.

* The Web 2.0 platform breaks down borders between services
* Web 2.0 utilizes the collective intelligence of its users
* Web 2.0 cannot control the process of knowledge creation
* Web 2.0 is constantly linking knowledge, thereby not protecting intellectual property

h2. The Web as Platform

Or: the Web 2.0 platform breaks down borders between services

One pattern identified by Tim O’Reilly is a change in the business model of software suppliers. In “Web 1.0” times the web was used only as a transport medium, delivering predefined information (e.g., static HTML pages) to client-based software products (e.g., the Netscape browser). In Web 2.0, companies use the web as a platform, using the enhanced technology on offer to create web-based applications or services that do not require any installation on the user’s machine (e.g., the Google search engine). If the client does not install a program that allows tracking of usage, then charging for usage becomes difficult.

Business models that rely on the sale of personalized or concurrent software licenses will not work anymore. Suppliers will be forced to find another way to charge for their services. To generate income, they may use advertisements (like Google) or additional service or content offerings (like “land” purchases in Second Life). Internal IT departments, however, may have to rethink their funding models, especially if they are funded by various company departments for their services. They will not be able to allocate operating costs per license if they want to be Web 2.0.

Let’s assume a departmentally funded IT department offers a blog or wiki as a new service. No client installation is necessary, but access can be restricted to the members of those departments who pay for the service. Restricted access, as we will see later, will affect the overall value of a collaboration service and yet is not a fair sharing model – it opens up all sorts of claims for charge reduction on the grounds that the other departments use the service more intensively or in a different manner. A per-capita cost allocation may be a solution, but then the departments might try to overextend service usage as they don’t pay extra for more intensive use. The means of measuring usage and paying for it will have to change in order for the IT department to get paid for its services.
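The allocation trade-off described above can be made concrete with a small sketch. This is purely illustrative – the department names, headcounts, edit counts, and service cost below are invented, not drawn from any real chargeback model:

```python
# Hypothetical chargeback sketch: three ways an IT department might
# allocate the cost of a shared wiki service across funding departments.
# All figures and department names are invented for illustration.

def per_license(total_cost, licenses):
    """Classic model: each department pays per licensed seat."""
    per_seat = total_cost / sum(licenses.values())
    return {d: per_seat * n for d, n in licenses.items()}

def per_capita(total_cost, headcount):
    """Flat allocation by headcount: simple, but heavy users pay no more."""
    per_head = total_cost / sum(headcount.values())
    return {d: per_head * n for d, n in headcount.items()}

def by_usage(total_cost, page_edits):
    """Usage-metered allocation: arguably fairer, but requires measuring activity."""
    total = sum(page_edits.values())
    return {d: total_cost * n / total for d, n in page_edits.items()}

if __name__ == "__main__":
    cost = 90_000  # annual service cost
    heads = {"Sales": 40, "HR": 10, "R&D": 50}       # who funds the service
    edits = {"Sales": 500, "HR": 2000, "R&D": 6500}  # who actually uses it
    print(per_capita(cost, heads))
    print(by_usage(cost, edits))
```

Run side by side, the per-capita and usage-based splits expose the conflict: a small department that contributes heavily (HR in this invented example) pays little under per-capita allocation but a much larger share under usage metering – exactly the kind of dispute the funding model has to anticipate.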

But this new platform pattern implies more than just a change in software distribution mechanisms. Not being bound to a client means these new web services typically do not rely on a single source of data, like a classical application with its own database, but combine data from different sources, enabled by open protocols and standards. The large number of mash-ups (data combined and displayed in a new and usually creative manner) that have emerged around Google Maps are a good example. With these new service offerings, the core competency of the service provider shifts from software development to database management and the ability to combine data available on the Web into meaningful information.

Many internal IT departments face a different situation, where access to data and the means to combine them are restricted by departmental boundaries, technological incompatibilities, or data security and protection rules. Person-related HR or sales data are among the most protected in a corporate environment, and their usage is highly restricted for good reason. Data security and protection rules will also be a showstopper for externally hosted services. According to Article 25 of the European Data Protection Directive, personal data may only be transferred to third countries (all non-EU countries) if the receiving country provides an adequate level of protection. US organizations have to qualify under the “safe harbor” program to be able to receive data from a European organization.

But even then, the idea of having your company’s most valuable data in an externally hosted application (e.g., Salesforce CRM), with data transferred over the Internet, will be the ultimate nightmare of every company’s IT security officer. The required security prohibits the clever combination and reinterpretation of data that we see in Google Maps mash-ups and elsewhere.

h2. Web 2.0 Utilizes the Collective Intelligence of Its Users

Or: utilizing and enabling collective intelligence

We have seen that the new Web 2.0 services are powered by the ability to combine data from different sources. But where does the data come from, and who creates it? The Web 2.0 pattern of “collective intelligence” shifts the task of creating and maintaining data and content from centralized resources to a dispersed user community. The eBay selling platform would be useless without the activities of the millions of sellers and buyers who create the content and a critical mass of offerings that attract other users into using the service. Wikipedia would be a completely empty shell without its users creating and maintaining the content. In his article, Tim O’Reilly states that the value of the service is related to the scale and dynamism of the data it provides: the larger the number of articles in Wikipedia, the more users will use it as a reference, so the service gets better the more people use it and contribute to it. Tim O’Reilly calls this the “LOW” principle – “let others do the work”. This works perfectly on the Web, with its huge user base (already more than one billion people, according to the statistics), but will this pattern also work in a corporate environment?

The corporate user base, even in the largest enterprises, is smaller than the user base of the Web, which limits the number of potential contributors. If the quality of the service depends on the number of users, then companies are at a disadvantage compared to Web-based services. In many interviews, end users said that they often prefer to research on the Web instead of using their corporate knowledge resources, because they are confused by the complexity of internal search and because the information they find on the Web seems more recent. However, the reliability of Web-based information that can be edited by everybody is in doubt when it comes to hard legal or medical use. Would you be willing to bet your career on a Wikipedia article?

If the quality of the service is improved by the amount of user commitment, it will only be successful if a critical mass of users can be attracted. While Web services like eBay, Wikipedia, or Flickr have been soaring over the last years, driven by user commitment, corporate services often have a contribution pattern like this:

Figure: Typical contribution curve for a corporate knowledge service.

But what attracts users to donate their time and energy to Web services like Wikipedia or Flickr, while not doing so for corporate services? Psychology and economics teach us that there is no such thing as altruism – whatever people do creates a personal return of value for them. This personal value is measured by individual criteria: respect and prestige, personal reputation, political beliefs or desires, and of course monetary incentives all influence the decision as to whether a contribution creates this value. People write an article in Wikipedia because they believe the topic is interesting or important, or because they want to see their name in print, and put pictures on Flickr because they want to share them with others, thereby influencing how they are perceived. The value of contributing must be higher than that of doing something else (e.g., watching a sports game on TV or adding to the corporate knowledge base).

In a corporate environment this might be different, as a different set of values becomes dominant. Company vision, goals, or instructions are added on top of the personal value criteria, together with given priorities, changing the decision of what creates value. One big driver for such decisions is the need for an employee to be externally “chargeable,” a typical situation in the consultancy business. If a consultant has to choose between generating direct revenue for the company (and therefore for himself) by working for a client, or explaining to a supervisor why he or she chose to improve the internal knowledge base instead, he or she will opt for the first alternative. As long as contribution is regarded as a less important topic in the corporate hierarchy, the priority of knowledge initiatives will decline over time, simply because people come to value other things as more important.

So contribution is rational for people if there is a reward. But there is another rationale, especially in large organisations. Cooperation within Web communities is mainly driven by non-monetary values, as the contributors don’t receive any money for their input. These communities are networks of specialists who rely on each other’s knowledge. Investing time to create and contribute knowledge pays off here because there is no direct competition, and other people’s knowledge might help you tomorrow. Small consulting companies may be examples of such tight communities. Where there is direct competition, people normally change their behaviour: they try to acquire and protect their special knowledge. The German language even has a term for this kind of behaviour: “Herrschaftswissen” – superiority through withheld or uncommunicated information or knowledge.

Many corporations have answered these issues by centralizing knowledge management into special departments. But Web 2.0 requires involving the largest number of users possible, and a centralized approach might not be the right answer anymore. Corporate projects that focus on providing Web 2.0 technologies alone will fail if the companies do not change their reward schemes and knowledge management processes. Such projects will need to rethink incentives for participants and to create the time slots necessary for people to contribute. If a corporation wants the genuine benefit of Web 2.0, it must not underestimate the effort it takes to produce it. Finding a balance between corporate or proprietary knowledge and the free-for-all idea exchange of Web 2.0 services is critical if a corporate IT department wants to benefit from Web 2.0–style services.

h2. Web 2.0 Cannot Control the Process of Knowledge Creation

Or: the uncontrollable wisdom of the “blogosphere”

Judging by the hype they have created, wikis and web logs (blogs) seem to be an important part of the Web 2.0 patterns. As blogs have spread through the web like wildfire, vendors of content management systems are struggling to add this functionality to their applications. Already the term “blogosphere” has emerged as a collective term encompassing all blogs and their interconnections – the perception that blogs exist together as an extended connected community, a complex social system. It is common knowledge, however, that a system composed of several parts acts differently than its individual, isolated components. An ant colony develops complex social behaviour and erects structures – a task a single ant could never perform. While a single nerve cell is only able to transfer electrical impulses, the enormous network of synaptic connections in the human forebrain enables conscious thought.

Tim O’Reilly states that the blogosphere creates a structure resembling the human brain. Expressing an idea in a single blog might not change the world, but if this idea is picked up, discussed, and commented on in a large number of blogs, it not only gets the attention of many people – it might be enhanced, developed, refined, challenged, and eventually transformed into something that influences the way of the world. As in the anthill or the human brain, this process is not controlled by any single authority – it is driven by the participation and cooperation of many individuals, each with their own motives. This absence of central control allows for creativity, the progress of ideas, and the expression of individual opinions. The old saying that the whole is more than the sum of its parts holds true here. However, it is a self-organized process that follows its own rules – forcing or guaranteeing it is not currently possible, nor probably desirable.

The various debates over attempts by some nation states to restrict Internet access within their borders show that organizations that rely on their members to keep within existing structures will only tolerate an “uncontrollable” environment up to a certain level, and will try to erect restrictions if that environment starts to threaten organizational foundations. Using discussions within a blogosphere to enable the development of new consulting solutions might be welcome to a corporation; critical comments on the latest corporate policies and procedures might not. In some corporate areas where following predefined procedures and processes (e.g. accounting standards) is necessary, or where secrets have to be protected, an open exchange among employees might not be allowed at all. And companies that start allowing employees to blog will find that attempts to control blog or wiki content are quickly detected and create strong opposition. Companies have to be aware that open service offerings like blogs and wikis cannot later be removed or restricted in scope without losing employee loyalty or looking like fools in the process. Worse, since the Internet provides a means around whatever blockages the enterprise finds comforting, people might simply take their internal discussions public.

One other aspect: the strength of a wiki is the presentation of content in a loosely structured, intuitive way, creating a hypertext structure that resembles creative human thought processes. As there is no visible hierarchical structure in a wiki, retrieving content relies mainly on search – that is why Wikipedia and Google show a large search field on their main pages. This unstructured organization of content fails if the content itself is highly structured. A law commentary contains the text of the law organized by paragraphs, with comments or additional materials for each paragraph, sometimes for each sentence. Search might help when materials on a particular topic are needed, but an experienced lawyer who needs the latest comment on a certain paragraph needs different navigation: he will prefer to select the paragraph directly from a list, browsing through the hierarchy of paragraphs and comments. So wikis are a great tool, but not a cure-all.

Web 2.0 constantly links knowledge, and thereby does not protect intellectual property.

Or the perpetual beta, mash-ups and new intellectual property

Another pattern Tim O’Reilly points to: most of the emerging 2.0 services will (or should) forever wear a “beta” sticker on their homepages. As the role of the user moves from passive consumer to active participant, the quick and continuous implementation of user-driven enhancements becomes a driver for the service provider, especially in a competitive market environment.

Release cycles tend to shrink because deployment costs next to nothing for Web-based services – users always get the latest version of the service when (re-)loading the site. While development cycles shrink to days or hours and the pressure to continuously implement new features rises, quality assurance procedures become less important: bug fixes can be deployed instantly, and as long as users don’t pay for the service, they will tolerate errors and be willing to learn new features by themselves. In a corporate environment this is different. Deployment delays do play a role, especially when permanent online access is not a given or network traffic is limited. Quality of service and at least a certain operational stability will matter more than speed of delivery, especially if software errors cause financial damage or threaten the company in other ways. And as corporate applications tend to be more complex than Web services, training becomes an important consideration. So corporate applications cannot be developed and deployed in a “perpetual beta” mode.

One other difference is focus: while most Web 2.0 companies concentrate on a single product or a small suite of similar services, an internal corporate IT service provider has hundreds, sometimes thousands, of services and applications to provide. Release management and portfolio management are needed to ensure maximum value, which means that development resources might not be able to work on one service all the time.

Another pattern that follows from this development mode is the emergence of so-called mash-ups. Because rapid development relies on “lightweight programming models” (another pattern described by Tim O’Reilly), such as scripting languages, the code offers little protection: users are able to use, or even “hijack,” the exposed interfaces to create their own solutions and mash-ups on the existing platforms, combining data from different sources to create new information and knowledge. In some cases (e.g. Google Maps) this is welcome to the service provider, as it increases the spread of the service and the quality of the data it provides – the users of Google Maps supply Google with an enormous amount of location data, enabling Google to create the most detailed worldwide Yellow Pages ever seen.

However, if access to or re-use of the data should be limited (e.g. to paying customers), Web 2.0 technology might not be safe enough. In business-to-business relationships, and also in a corporate environment, data protection, security, and the protection of intellectual property are issues of huge importance, so a fully open technology platform will be out of scope. On the other hand, this limits companies in leveraging the know-how and creativity of their users. Even the internal use of existing Internet web-based services can cause issues, as the company cannot control the service. What if the provider decides to change, charge for, or even discontinue an external service the company has come to rely on? Replacing the service will create additional effort to adapt the internal applications, which might outweigh the savings from the free use of the service.
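To make the mash-up idea concrete, here is a minimal sketch, with two invented data feeds (the store IDs, fields, and values are all hypothetical, and the feeds are simulated as local dictionaries rather than live network calls): it joins records from two independent sources on a shared key to produce information neither source holds alone.

```python
# Minimal mash-up sketch: two independent "services" (simulated here as
# local dictionaries) joined on a shared key to create new information.
# All names and values are invented for illustration.

# Feed A: store locations keyed by store ID
locations = {
    "s1": {"city": "Berlin", "lat": 52.52, "lon": 13.40},
    "s2": {"city": "Hamburg", "lat": 53.55, "lon": 9.99},
}

# Feed B: average customer ratings keyed by the same store ID
ratings = {"s1": 4.6, "s2": 3.9}

def mash_up(locations, ratings):
    """Join the two feeds into a rated map layer neither service offers alone."""
    combined = []
    for store_id, loc in locations.items():
        if store_id in ratings:  # keep only stores present in both sources
            combined.append({"store": store_id,
                             "city": loc["city"],
                             "rating": ratings[store_id]})
    return combined

annotated = mash_up(locations, ratings)
print(annotated[0])  # {'store': 's1', 'city': 'Berlin', 'rating': 4.6}
```

A real mash-up performs the same join across exposed HTTP interfaces; the article’s point stands either way: once an interface is open, any user can combine it with other data.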

What now?

We have seen that there are differences between the Web and corporate environments. While the Web is a deregulated environment with millions of contributing users and easy access to data, corporations have to restrict their users for many reasons, thereby limiting the potential of the Web 2.0 patterns. While these patterns work well on the Web, there might be obstacles and issues when they are implemented in a corporate environment without adaptation. “Might,” because every company is an individual organization and there are no easy, one-size-fits-all solutions. On the other hand, the Web 2.0 patterns have proven too successful to be ignored.

There is no ready-made solution, only some good advice. The most important and simplest: corporate behaviors and processes are not changed just by implementing a new IT service. Installing a blog in a formal and hierarchical corporate culture will not turn the company into an open and informal community. Web 2.0 patterns will only work if the corporate (and even national) culture is already responsive to more collaboration and participation, or if the implementation is accompanied by other measures that support cultural change. Creating and sustaining users’ motivation to contribute – seemingly no problem on a Web with a billion users – will be one of the success factors. So corporate Web 2.0 implementation projects have to broaden their scope, adding structural and cultural change or process redesign to their charter. And those “soft” topics tend not to have easy solutions. So when your IT department or an external consultant excitedly tells you they are adding “Web 2.0” to the corporate computing environment: be prepared for a difficult birthing process, and adjust your expectations.

I would be happy to hear about your experiences.

Building the UX Dreamteam

Written by: Anthony Colfelt

Part one of a two-part article.

Finding the right person to complement your User Experience team is part art and part luck. Though good interviewing can limit the risk of a bad hire, you need to carefully analyze your current organizational context before you can know what you need. Herein lies the art. Since you can’t truly know a candidate from an interview, you gamble that their personality and skills are what they seem. Aimed at managers and those involved in the hiring decision process, this article looks at the facets of UX staff and offers ways to identify the skills and influence that will tune your team to deliver winning results.

The Art

There are many pieces to the User Experience puzzle. The art of fitting the roles together to complement each other and your particular situation requires a bit of luck and intuition. Try as we might, it is nearly impossible to find someone who is highly skilled in all areas, so you will want to find either a "Jack of all trades" or a specialist. First, let’s explore some loose definitions of the various skills that make up the User Experience team.

Skills are measurable. Anybody can learn new skills through education or apprenticeship. They are the capital built over the course of a career, making the applicant more saleable. Categories of research, information architecture, interaction design, graphic design and writing help us communicate and understand the part each skill plays in defining user experience. Not to be confused with roles – which define the activities of any member on the team – staff employ skills to do the work.

Let’s look at skills in a sequential order, as they’re typically utilized when practicing User Centered Design. We’ll begin with research.

Research Skills

Research is interwoven into all user experience roles – the inspiration and validation of ideas and designs greatly enhances the chance of success in meeting your design objectives.

This skill, as it relates to UX, is about asking questions and illuminating a subject area in unobvious ways. Knowledge of psychology, sociology, and anthropology is used to tease out intelligence from users, market data, and academia. In this regard, Interaction Designers and Information Architects must use research skills to inform the strategic aspects of their jobs. Even a cursory study of a potential product’s competitive landscape requires an essential research component.

The researcher in us takes a scientific approach to the study of humanity and uses quantitative and qualitative studies to inform the design process. Roles on the UX Dreamteam employ techniques such as:

  • Contextual inquiry – field research that involves interviewing users “in context” i.e. as they perform familiar tasks in their normal environment
  • Surveys – one questionnaire answered by many respondents, statistically analyzed for trends that direct us toward users’ requirements
  • Usability testing – key for highlighting UI and system design flaws as well as opportunities
  • Card sorting – used by IAs to test categorization ideas
  • Emotional response testing – great value to graphic designers seeking direction on the impact of their visuals
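As a toy illustration of the “statistically analyzed for trends” step in the survey bullet above, the sketch below tallies hypothetical five-point ratings for a single questionnaire item (all numbers are invented); real studies would use proper statistical tooling, but the shape of the analysis is the same.

```python
from collections import Counter

# Hypothetical responses to one survey item on a 1-5 scale (invented data)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3]

counts = Counter(responses)             # frequency of each rating
mean = sum(responses) / len(responses)  # average rating
top_box = sum(1 for r in responses if r >= 4) / len(responses)  # share rating 4+

print(counts)    # Counter({4: 4, 5: 3, 3: 2, 2: 1})
print(mean)      # 3.9
print(top_box)   # 0.7
```

Frequencies show the spread of opinion, the mean summarizes it, and the “top box” share is a common trend indicator across repeated surveys.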

Research skills punctuate the UX professionals’ work agenda.

Being good at research is key, but disseminating the results for maximum impact, so that findings are actually used, is equally important. A lack of attention to this can undermine valuable work. Good communicators reap the benefits of presenting facts and theories clearly and pointedly.

A researcher, whether dedicated to this role or filling it temporarily, needs to be pragmatic. Remaining objective – interpreting findings only from collected data – is often a challenge when we are invested in a particular idea or direction. Researchers should be inquisitive and analytical with an empathetic instinct to dig beneath the surface of things.

Screening tips: Look for some evidence that a candidate understands scientific method with regard to research. They should also be able to separate themselves from an emotional attachment to their own ideas. Not to say they should be dispassionate about finding the right answer, but their personal biases should not taint this effort. Probe their ability to analyze data. Test to see if their nature is exploratory (good) or if they are just as happy to make general assumptions (not so much). See how they have creatively engaged the team with research findings by threading them in to the day’s work.

Information Architecture Skills

Information Architecture entails designing an information system and the users’ pathways through it. The IA’s goal is to create a system that will provide useful information to suit the user’s context. System structure, inputs and outputs of information, semantic analysis and accommodating changes in the user’s context are in the information architect’s domain.

Frequently, Information Architecture (IA) and Interaction Design (IxD) skills are confused. Job titles of one or the other do not neatly describe the skills at work, and it’s common for an “IA” to use IxD skills and vice versa. Jesse James Garrett, in his book The Elements of User Experience, differentiates IA and IxD by the type of system being designed. He asserts that Information Architecture fits a model of the web as a hypertext system, rather than a software interface. Jonathan Korman from Cooper delves into the distinction in his article The Web, Information Architecture and Interaction Design: “IA means defining information structures to answer the question ‘how does a user find the information they want?’ … IxD means defining system behaviors to answer the question ‘how does a user take the action they want?’”

IA and IxD roles can work in tandem. The IA defines what data needs to appear and the IxD crafts the UI and user flow. Primarily IxDs in this setup are focused on the nuances of the functionality of the system, and IAs on the data that drives it or is manipulated through it. This is a good strategy for large scale, data-centric projects such as defining a content management system. For smaller projects, one person may perform both roles more efficiently. What type of systems does your team work on? How much of your work is about “content” and searching and how much is about software UI?

IA activities fall into two categories. Big IA includes creating ontology, categorization and metadata design. Little IA is labeling, auditing content, creating sitemaps and wireframes. Do you know which of these you really need?

Richard Saul Wurman – an architect and graphic designer – coined the term “Information Architecture” about 30 years ago. He laid out the domain of what’s now more commonly thought of as broadly “information design” with an emphasis on systemic design. The practice of IA we see today was matured by those in the field of information and library sciences, such as Peter Morville. An IA is an analytical, left-brained beast with a detailed eye for modeling content, metadata and information retrieval systems. They are tireless completers, auditing seemingly endless quantities of information, carefully filtering it and finding the patterns within.

Screening tips: Look for patience, attention to detail, and a comfort with language, particularly vocabulary, synonyms, and definitions. Pattern analysis and a capacity for cataloging and organizing information such as content types, article topics, genres, authors, dates, etc., is essential. Conclusions should not all be derived from their own organizational prowess – are they inclined to gain inspiration or test ideas with users? The difference between administrative, intrinsic, and descriptive metadata should not be foreign; after all, they revel in semantics!

Interaction Design Skills

The Interaction Designer is a story-weaver – scripting the narrative between man and machine – the dialogue of system response to user action. Goals, behavior and flow are significant strategic concerns, but this skill goes beyond making interfaces relevant and usable. IxD marries personality with each interaction story, creating a system with which users make an emotional connection. Interaction Design and Visual Communication work together to breathe life into software UI. IxD defines the way the user manipulates the interface and Visual Communication determines how that looks in concert with all the other visual elements on screen. Blending analysis and creativity – working between artistry and engineering – Interaction Design concepts muster team consensus around what to build via the user interface layer.

Scenarios, flow diagrams, interaction models, prototypes and wireframes are typical deliverables of interaction design. They capture the desired user experience that is translated into a functional specification.

Because interaction design is primarily about creating intuitive interfaces, a measure of empathy produces the best results. This skill is not a precise science, so humility and resilience in the face of criticism (or sometimes failure) are also valuable.

Screening tips: Look for an interest in and aptitude for psychology; passion for making things work intuitively; enthusiasm for the difference between good and great interactions. Do they understand how to brand an interaction? Good IxDs make stories; can they hold your interest? The world is full of interaction – they should draw their inspiration widely. They must be comfortable with research and usability concepts too.

Graphic Design Skills

Graphic design (also known as Visual Communication, Information Design, or Visual Design) is primarily concerned with clearly communicating the aesthetic, personality, and function of a system, and with evoking feeling. Strategically, an understanding of branding deeper than visual identity, delving into messaging, semiotics, and interaction, is important. It is here that graphic designers work closely with writers and Interaction Designers on software, or with an IA on hypertext systems. Tactically, Visual Communicators ensure that the UI layer is lucid, communicates visual hierarchy, and represents the brand in ways that appeal to the end user. Inherently creative, Visual Communicators demonstrate a passion for the marriage of beauty and function.

In collaboration with other disciplines, graphic designers translate concepts visually to persuade stakeholders. They produce ‘comps’ (short for composite or comprehensive) of the UI, advertisements, illustrations, and corporate identity treatments. Some companies like their graphic designers to produce CSS, thereby ensuring that every detail is captured in the finished product. When a graphic designer must compromise a design for technical reasons, an acceptable solution is reached more quickly, with less friction between development and design. It’s helpful if your graphic designer can converse in the terms of your technology.

The wider field of graphic design has its share of passionate folk. However, most who have moved to the technology sector have since outgrown the “artist’s ego.” Crafting the surface layer of technology typically involves a lot of compromise, so only those who are flexible survive. Evoking emotional response, passion, flair, and patience for refining details are hallmarks of the graphic designer.

Screening tips: Test for an understanding of branding beyond the visual, moving into interaction and messaging. Be sure they embrace usability concepts and processes and are as concerned with user comprehension as beauty. Gather evidence of “willingness to compromise”. Do they value what other UX disciplines bring to the team? Ensure they understand CSS or the constraints of your particular interface technology. How concerned are they with engaging the emotions of the user?

Writing Skills

Good writers can effortlessly guide users through an interface with concise instructional copy. They can create memorable taglines, distill complex concepts into layman’s terms, and author well-researched, thoughtful articles. Great writers have honed their skills well beyond what we learned in high-school English.

As Steve Calde from Cooper says in his article Technical Writing and Interaction Design, writers have a pivotal role to play in the interaction design process: “As the first people actually trying to explain how the product works in users’ terms, technical writers are in a unique position to spot problems.” He is speaking from the technical writing perspective.

When we talk about writing to express a brand, there is a synergy between all disciplines committed to creating a strong voice. A writer’s ability to express the brand through phraseology is key not only for creating associative messages for the customer, but also for driving home a subtle Interaction or Visual Design personality.

Other than manuals or help files, instructions, labels, advertising headlines and copy, a deliverable missing from many UX teams is a style guide that details how concepts are to be expressed. Do you currently have a clearly articulated and documented voice and style?

Writing requires patience. Language allows us to express ourselves in many different ways and it can be a contentious area for stakeholders concerned with the message sent to readers. Therefore, subjective rework can happen, especially with highly visible projects. Empathetic people make good technical writers since they can quickly learn to speak the language of an audience who needs them to be clear. Equally, those exhibiting flair and wit often craft great marketing material.

Screening tips: Are they comfortable with language? Can they demonstrate a command of the language to explain or sell ideas? Can they demonstrate how to create a ‘Brand Voice’ and keep it consistent?

While skills are important, less tangible qualities are arguably more so. With time skills are developed, but people who are creative or analytical, strategic or tactical, directive or hands-on are like this by nature. It behooves the hiring manager to identify which of these qualities are needed. In the next part of this article, we will look at some of the less tangible qualities of UX Dreamteam members and organizational contexts that determine which skills you really need.

Getting Hired

Written by: Olga Sanchez-Howard

During a heated discussion on the difference between an Information Architect (IA) and an Interaction Designer (IxD), I suggested that what we do is more important than what we call ourselves. The response was that a label is an alias that carries a set of meanings. Yes, but what happens when there are two aliases that are very closely aligned? We can choose the alias we feel fits us best, but what do employers think?

As the User Experience Network (UXnet) local ambassador for the D.C. Metro area, one of my responsibilities is supporting local UX-related groups. Austin Govella, an IA colleague, thought UXnet should help get some answers to the question of what matters to employers, so we began to work on an event to gather professionals and employers to help us figure this out.

The ensuing event, titled IA Round-up, was a discussion panel and workshop where IAs, IxDs, usability professionals, and their employers came together to discuss what employers care about and what the perfect resume should look like.

The panel included three individuals representing three different types of employers: the agency, the corporation, and the small business. On the agency side, Dan Brown, principal of EightShapes, gave us a clear understanding of the agency perspective. On the corporate side, Livia Labate, senior manager of information architecture and usability at Comcast, outlined the best strategy to get a job with a large corporation. On the small business side, Michele Marut, human factors specialist at Respironics, Inc., described what she looks for. And I, Olga Howard, MC’d the event.

At the IA Round-up we found two reasons why employees and their potential employers may not find the right match:

  1. The terms used by professionals and employers sometimes mean different things.
  2. Resumes and portfolios may not sufficiently explain the work involved, or there may not be enough samples of work–wireframes, taxonomies, etc.

What Employers Care About

Employers have very specific needs and won’t spend much time trying to figure out the difference between an IA and an IxD. They just want their position filled. So while IAs and IxDs are having heated debates, employers pay attention to our resumes – that’s where semantics matter. The following key areas show how we can improve our resumes.

Paint a picture with your documentation:
Accurately describing documentation is difficult, if not impossible. It’s simpler just to show the documents themselves–they tell the story of where we started, where we ended, and how we got there. Unfortunately, we live in a Non-Disclosure Agreement (NDA) world that usually prevents us from showing our documentation. Regardless, according to our panelists, they’d rather see a highly censored document than no document at all.

Include only what employers ask for:
This is a tricky one. Most resumes tend to include what employers ask for, but some of us add other qualifications because we’re concerned the employer won’t see the breadth of our experience.

Present a sense of purpose:
This is the number one issue we heard from our panelists. When we put everything on the resume, the perspective on what’s important is lost.

Include a job history:
Every employer wants to know what jobs we’ve had, what we’ve accomplished, and how we accomplished it. Employers are also looking for employment gaps: if there are any, say why.

Be truthful and promote yourself:
A truthful resume is not the same thing as a factual resume. When we are part of a team we should say which areas of the project we were responsible for.

Create a straightforward resume:
Personality should not be part of the resume. Instead, focus on factual information. If our experience describes the kind of skills and knowledge the employer is looking for, they will want to see examples of our work—our portfolios.

Have a portfolio online:
Although we are bound by NDA rules, we can censor as much as necessary. As our panelists said, they’d rather see a highly censored document than no document at all.

Formalize your UX portfolio:
Lack of formality in presenting a portfolio is like a photographer showing you her photographs in a pile rather than neatly stored, each in a plastic sheet, ready for easy viewing.

What employers are looking for in portfolios is HOW we like to do our work. This is really where your personality shows.

  • Are you attentive to detail?
  • Do you communicate clearly?
  • Do you spend time only on the important aspects of the job?

Unfortunately, the portfolio is where most of us lack clarity. In your portfolio, you should include scans of sketches, drawings, and anything else you use to do your job.

Some people include odes to their heroes, and that’s ok in the portfolio. It speaks to their work and values.

Changing Careers

UX is so new that universities have just begun to offer degree programs. Although many of us actually started in another line of work, there are established communities of practice that new UX professionals should turn to, get involved in, and learn from.

Transferable skills:
A number of skills from other fields transfer to IA, but the only clear way to understand what these skills are is to read about IA, IxD, and usability and start volunteering to do projects. The IA Institute offers a mentorship program, and UXnet is always looking for volunteers.

Once you begin working in the field, you’ll know what strengths you can present to employers. Being new sometimes makes it difficult to have an opinion about the UX conversation going on, but you have a unique perspective and that’s what matters, so have an opinion.

The question of age:
Competing for a job against younger UX peers can be a nerve-wracking experience. But if you are this person, you have years of experience behind you, and strengths younger UXers probably don’t have, so pay close attention to the job description and play to those strengths. One example is the person who has been a manager for many years: they can play to their managerial strengths and speak to supporting the UX team in its work. Employers are usually willing to build roles around your strengths.

One issue raised is that some older people are set in their ways. That is to say, set in the ways and processes that were in place during their tenure. These days, things change so fast that it’s hard to keep up with new thoughts and ideas, so older folks looking to work in UX need to be extremely flexible and adaptable to different processes and cultures.

Two questions you can ask yourself before moving to UX are:

  1. Why are you interested?
  2. Given that culture is a large aspect of work, will you add to the culture?

Next Steps

How much are you worth?
Find out how much other UX professionals are getting paid. This will give you a good idea of what salary you should ask for. The IA Institute Salary Survey and the Aquent Survey of Design Salaries will be helpful.

Where can you find job listings?
You can find great job listings on several websites including here in the Boxes and Arrows jobs section, the IA Institute job board, and the IxDA jobs section.

How can you get help with your resume?
If you need more help, the IA Institute’s mentoring program is a good place to start. Even if you don’t find a mentor in your area, you’ll find very friendly IAI members who will help you out. You can also contact your UXnet Local Ambassador and host your own IA Round-up. This will help give you context as to what local UX employers are looking for.

For formatting direction, try using Livia’s resume template below.

First Last
123 Name St, City, ST

(000) 000-0000 | first.last@firstlast.com | http://firstlast.com/portfolio

High-Level Summary/Goals as an IA: where you see yourself as an IA, what you like to do

Month YY to Month YY: My Title, Company Name, Location
– Two or three sentences describing responsibilities go here.
– Your favorite, proudest accomplishment goes here
– Your second greatest accomplishment goes here
– Your third relevant accomplishment goes here

(Repeat for as many relevant jobs as you want to show.)

Degree Title, YYYY, Institution
Degree Title, YYYY, Institution

You can find Livia’s direction and template at Livlab.com.

Designing for Nonprofits

Written by: Olga Sanchez-Howard

We all find ourselves looking in the mirror at one time or another and asking ourselves if we’re doing all we can for the good of society. What’s it all for?

Those of us in the user experience (UX) profession can actually do something about it. As information architects, interaction designers, usability consultants, and developers, we don’t have to change our careers to do something good for society. All we have to do is connect with the right nonprofit: One that shares our goals and whose mission we support.

Once I asked myself that question, I decided to take a sabbatical from the commercial field and devote my time entirely to nonprofit entities. During my two-year nonprofit experience, I found that working with nonprofit organizations involves differences that can pose monumental challenges.

The most important difference between nonprofits and commercial or government entities is how they do business. This trickles down to every aspect of working with nonprofits and will ultimately affect anyone’s decisions to work or not work with them. The following are some of the challenges I faced in my two-year commitment to only work with nonprofits.

Requests for Proposals (RFPs) are Creatively Divided

A nonprofit’s cash reality—the uncertainty of income—is one perspective not shared by government or commercial entities, at least not to the same degree.

Nonprofits depend on income from government grants or the public at large, so an inconsistent cash flow might make them want to scrimp and save. For this reason, many nonprofits tend to break a project into its parts and bid out the work to a variety of companies in an attempt to obtain the most inexpensive solution.

The bidding situations I’ve encountered in this fragmented approach have divided the project into the following parts.

a) Marketing/Campaign management: Most of the time, this is the highest priority and the conversation revolves around how to get donors, volunteers, or activists. Naturally, the conversation then moves to the campaign tool.

b) Design: As of late, nonprofit organizations have begun to pay close attention to the user experience and are actively sending their employees to information architecture, interaction design, and usability conferences. This is a big step in the right direction. If anyone needs UX work, it’s nonprofits since their mission relies on the public’s money, volunteer efforts, and activism. In this case, the user truly is king.

c) Technology: Is it a content management system (CMS) or a campaign management tool? I’ve done a ton of research on this and found no good answer. Large nonprofits almost always buy big CMS tools that they don’t need, many times as a result of politics but also under a false impression of perceived value. I’ve been surprised that, given the option to choose a smaller, more effective tool, most nonprofits choose to go with the big CMS because they think they’ll need those extra features in the future. But that future rarely comes, because the site design and—most of the time—the back end change about every five years.

d) Implementation: This generally goes to the company that wins the technology part of the project, unless the tool is SharePoint or something else that comes from a large corporation. In that scenario, there may be an intermediate company that does implementation, or the project-management or design vendor will have a group of developers who can implement.

e) Maintenance: This will most likely fall to the internal development team because the organization is looking to spend little money.

So, although in a commercial project I may win the entire project, with a nonprofit I would most likely be one of three or four partners in the project. If that isn’t enough of a challenge, I found that in many nonprofits, stakeholders differ greatly depending on the stakeholder’s position and department.

Stakeholder Expectations May Differ From One Person to the Next

Unlike most commercial projects, where I usually work closely with the marketing team, in nonprofits I worked with all the directors of the entire organization…and the expectations from each stakeholder are entirely unique.

I once found myself in a room with stakeholders who requested very different information. One stakeholder requested a chart of “quantified” user statistics from their current site; another requested “qualified” data. Yet a third wanted to see none of that…”too much information for me.” Managing those kinds of expectations can be challenging.

A worst-case scenario came when I was working on the Big Brothers Big Sisters design and found myself in a conference room with the directors and CEOs of the federation’s organizations throughout the country. My challenge was to get all the stakeholders on the same page and comfortable enough to allow a handful of the federation agencies to represent the entire country. With my microphone clipped, a projector, and an amazing presentation assistant, I was able to walk them through design elements as they asked questions. By the end of the conference, I had met my challenge, with seven agencies representing the entire country.

Focus on the Mission Can Leave Details Dangling

For a nonprofit, the mission is amplified a hundredfold compared to a commercial entity selling products. A nonprofit, by definition, IS its mission. Without the mission, the organization doesn’t exist. So, while the commercial sector is asking us how they can sell widgets using the web site, the nonprofit is asking how our work is helping the mission.

At first glance I thought this was great; this is what I want commercial companies to do since they’re so often focused on the widget. But it’s not that simple. In order to get buy-in on the big picture, I need consensus on the smaller pieces that make the big picture—usually from a large number of stakeholders. And if the organization is not paying attention to the smaller pieces, getting to the big picture can be difficult.

Creating Emotion in Design

Look and feel is extremely important for nonprofits because emotion is so intertwined with connecting the user to a specific issue or cause. Emotionally compelling creative connects design and the mission. The challenge here is in balancing appropriate design with the emotion necessary to inspire the user to become a volunteer, donate, or call their congressperson.

So how can a balance between design, good usability, and emotion be achieved? It all comes down to the designer. The trick is to find designers who can evoke emotion with their work. Once you have, directing good usability and strong design will create the balance necessary to inspire users to act.

One important lesson I’ve learned is that an appropriate design does not translate into a snazzy site with the latest gizmos or the latest in Flash. There are nonprofits who don’t want to look like they’re rolling in money; in fact, their goal is to look like they’re doing their job despite the budget. So, my job is to help them present a lot of information and make the user experience enjoyable. Information architecture professionals are very valuable to nonprofits because we tend to think about how people will find the content rather than how cool the site will look.

Our Work is As Worthwhile as Our Cause

In an ever-changing world, there is one thing that can’t be taken away from us—our conviction. In the past few years, nonprofits have begun to realize that good user experience design is one of the most effective ways they can achieve their goals, and they are beginning to set high standards for their cause. Despite the challenges peculiar to nonprofits, we should help them step up by adopting a cause and competing for the work—because we know we can do better.