Google, Stanford, and The Government Fight Swine Flu

Written by: Nate Bolt

Bolt | Peters recently collaborated with a team at Stanford University on designing a Google Sites template that local governments can use as a backup for delivering information about the H1N1 outbreak, as well as about disasters and emergencies in general. The goal was to create a template that was well laid out, easy for non-techie local governments to edit and update with content, and that conveyed the most important information to different audiences.

Swine Flu info template

 

How It Started: The Quick Fix

With the recent outbreak of H1N1, Santa Clara County’s official public flu information site was taken down by the surge in web traffic. To help relieve the demand, the Stanford SIE Program, a Stanford University group that develops technology for social change, stepped in literally within hours of the interruption to create an ad hoc backup site using Google Sites, so people could still access the critical info.

This is the version of the site they originally posted, using Google Sites’ standard WYSIWYG editing tools:

Stanford's original stopgap design
Stanford’s original stopgap design

After the site went live, Stanford trained the Santa Clara County team to maintain it and add its own information. Santa Clara County needed a site that could handle the traffic and get the information out as quickly as possible—which is to say that there wasn’t a whole lot of time to think about design.

This experience made it clear that it would be valuable to create a well-designed, easy-to-edit template for local governments to distribute information in case of emergencies—not just H1N1, but any public hazard, including floods, earthquakes, wildfires, and so on.

The team contacted us in late October with the original draft of the website. Since it was important to make the site available as soon as possible to deal with the ongoing H1N1 outbreak, the timeline we had for design recommendations was really brief—just a few days. With that in mind, we got to work.

Spotting the Problems

Because of the layout restrictions, our design evaluation focused primarily on information design. We had four main issues with the early design, along with a handful of usability tweaks here and there.

First draft of Google template

1. Lack of visual hierarchy.

With two columns of equal width and mostly indistinguishable boxes filled with text, it was hard to tell at a glance what information was urgent, time-sensitive, or recently added.

2. Big chunks of info, no organization.

The info wasn’t grouped into meaningful categories—there wasn’t much visual or spatial distinction between contact info, prevention info, calls to action, and so on, making it difficult to zero in on information even if you knew what you were looking for. Also, the info came in big blocks of unscannable prose, and deep reading is the last thing you want to do when you’re trying to learn about the tsunami headed your way.

3. It didn’t anticipate the distinct needs of the most critical user segments.

There was plenty of good info on the site, but it was never clear who a given piece of info was for—you couldn’t scan the page headers and think, “Yeah, there’s what I’m looking for”. Instead it had a “general audience” feel to it; the info didn’t take into account that different groups might have different needs and different levels of urgency.

4. Buried info.

Vital info on vaccines, symptoms, and SMS / Twitter updates was absent from the front page entirely, lurking deep in the navigation.

Our Recommendations

To keep editing simple for the local government end users who would maintain the template, we had to stick to the WYSIWYG Google Sites editor, which meant no custom CSS and very little control over layout. We also had literally a single day to make our recommendations and synthesize them into a first-draft mockup—the result wasn’t pretty, but it got our main info design recommendations across loud and clear.

First revision of template
Our first stab at redesigning the H1N1 template

Redesign Goal #1: Prioritize information according to the urgency of visitor need.

Our design takes into account that there are different “general public” user segments with different goals, and that there are tiers of urgency and priority. From most-to-least urgent, we identified these segments:
* People infected with the flu: Immediate help / contact info
* People worried that they might have the flu: Self-diagnosis
* People concerned about catching and/or spreading the flu: Preventative measures and vaccine info
* People just curious, staying informed: Information about travel restrictions, public response, news updates, deep flu info

The structure of the site was altered to serve each of these segments:
# We added a page-width alert box to convey urgent, time-sensitive info—this is a common feature on many of Google’s sites, as well as CNN.com.
# We added a yellow-shaded box to give the highest-priority group, currently affected individuals, clear instructions on what to do.
# The left column contains info on self-diagnostic and preventative measures for at-risk or concerned individuals.
# The top right contains info on vaccines; below it are links to deep info, research, and update notifications. Though the Google Sites template didn’t allow us to resize the right column, we noted that it should be made smaller than the left column.
# The left sidebar navigation was reserved for redundant quick links to the most important info, as well as links to pages for specially affected individuals and organizations.

Redesign Goal #2: Reduce block text down to easy-to-scan lists and chunks. Cut the bureaucratic BS.

We broke down the block text to keep from overwhelming users with too much difficult-to-scan information upfront. Lists were kept to under 8 items, unless they were broken down into categorized sub-lists; text was kept under five lines. All information that couldn’t be condensed in this way was moved to lower-level pages and linked from higher-level pages.
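For illustration only, the chunking rules above (lists under eight items, prose under five lines) could be expressed as a simple editorial check. This sketch is our own invention, not part of the actual template or its tooling:

```python
# Hypothetical content-linting sketch for the chunking rules described above.
MAX_LIST_ITEMS = 8   # lists were kept to under 8 items (else use sub-lists)
MAX_TEXT_LINES = 5   # prose chunks were kept under five lines

def check_chunk(lines):
    """Flag a block of content that violates the scanning rules.

    Returns a list of warnings; an empty list means the block passes.
    """
    warnings = []
    bullets = [l for l in lines if l.lstrip().startswith(("*", "-"))]
    prose = [l for l in lines
             if l.strip() and not l.lstrip().startswith(("*", "-"))]
    if len(bullets) >= MAX_LIST_ITEMS:
        warnings.append("list too long: break into categorized sub-lists")
    if len(prose) >= MAX_TEXT_LINES:
        warnings.append("text too long: move detail to a lower-level page")
    return warnings
```

A nine-item bullet list would be flagged for sub-categorization, while a three-line paragraph would pass untouched.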

 

Users don’t need to know what the mission statement and goals of the organization are; they just want to know if and how they are personally affected, and what they can do in case they are affected.

Redesign Goal #3: Use informative headers that directly address user goals, and which give all users clear next steps.

All types of visitors (e.g. directly affected, at risk, concerned, just curious, administrative, medical, etc.) should be able to tell from the header whether that information is “for them”. We tweaked the headers to make it clear what kind of info you could find in each section. We also made it clear what, if anything, each user segment needed to do:
* Immediately affected individuals are given step-by-step instructions on how to deal with their emergency situation(s).
* At-risk individuals are given step-by-step information on preventative, precautionary, and self-diagnostic measures.
* Individuals seeking non-urgent information can be given supplementary external information resources (by phone, online, and downloadable / printable) and means to stay updated (by email, text, RSS, Twitter).
* Urgent contact info is immediately visible in the right sidebar.

The First Revision

After we sent over our recommendations and mockup, a member of the team sent us a nice note a day or two later:

You’ve convinced us that we have to completely rethink and redesign the site from scratch, so your style guidelines come at a very good time. I can’t thank you enough for opening our eyes to how we can do this in a much better way. I think we can create a site that works much better than the original site.

…and then a few days after that, Stanford sent a revised version over to Google, who worked with the design firm OTT Enterprises to create two new designs with some added visual design flourishes.

Unfortunately we don’t have permission to show you this revision yet (working on it), but it wasn’t bad—certainly cleaner and better organized, easier to scan, less dense. There was, however, a large distracting green gradient background, some empty space in the sidebar columns, a few stock photos, and a muddled color palette (green, blue, yellow, and gray).

Our second batch of suggestions focused mostly on visual design and layout. Just a few of them:

Visual Design

* Get rid of the gradient background; it’s distracting and confuses the emphasis on different parts of the site, since it runs behind multiple different elements.
* Get rid of the green coloring, which is tertiary to the blue and yellow. Instead, use several shades of blue as the primary color and a little light beige or light grey as a secondary trim. Blue signifies authority, calmness, trustworthiness, etc., which are of course appropriate here. Notice how almost every major government agency’s website (including the CDC) uses dark blue and gray as the main colors.
* Remove the stock images, including the CDC and Flu.gov widget images, which look like ads. The phone and email icons are fine, however.
* Rather than images, make the content headers stand out with size and strong typography—“make the content the focus” is an old web design adage, and the content, in this case, is the flu information. Here are a bunch of sites that use typography, font size, whitespace, and bold layout to create emphasis, using few images [list of a bunch of websites].

Layout

* Header and upper-page content takes up way too much space—note that the important info (“If you or your child…”) doesn’t begin until more than halfway down the screen. Compress.
* I like how Design #2 places the alert and contact info in the sidebar; in Design #1 the sidebar is wasted space. This frees up space to move important info (Vaccine and How to Care for Someone With The Flu) up to the right side.
* This design consists mostly of links to deeper pages, but there’s definitely room to put more specific, useful info right on the homepage: symptoms, preventative measures, vaccine info (see our original design).
* Get rid of the Contents box.

The Results

Stanford started over once again, aided by our style guide and input from OTT Enterprises. After further iterations in Google Sites, they handed it over to the Google visual designers, and here’s the before-and-after:

Before
Google Sites template, super rush draft

After
Google Sites Public Health Template 1.0

Can you do better?

As with all things on the web, the template is a design-in-progress, and will be improved as time goes on. Stanford SIE is looking for feedback on the design, so here’s where you can send feedback for the Public Health template and the All Hazards template. Since these Google Sites templates are available to everyone, you can even make your own design edits and mock up improvements.

Or if you just think it’s great and you just want to use it yourself, here’s the complete list of links:

Google Sites Templates blog post

Public health sites:

Template
Example site
User guide

All hazard sites:

Template
Example
User guide
Stanford SIE site (we’re credited here!)

Note: Nate and Tony’s book on remote testing, “Remote Research”:http://www.rosenfeldmedia.com/books/remote-research/, will be published by Rosenfeld Media in 2010.

Control and Community: A Case Study of Enterprise Wiki Usage

Written by: Matthew C. Clarke

The Balance of Power

There is a wide variety of uses for Wikis, and the level of interest in using them is matched by an extensive range of Wiki software. Wikis introduce to the Internet a collaborative model that not only allows, but explicitly encourages, broad and open participation. The idea that anyone can contribute reflects an assumption that both content quantity and quality will arise out of the ‘wisdom of the crowd.’

There are, however, negative effects of this extreme openness. One problem is the deliberate vandalism of Wiki pages. Another is that even those with no destructive intent may yet degrade the quality of a Wiki’s content through lack of knowledge or skill. Anyone can write nonsense as though it were fact. Anyone can accidentally delete useful information. Someone with half-baked knowledge of grammar may change all the “its” to “it’s.” Of course, someone more knowledgeable may notice the problem and fix it … but then again maybe they won’t.

Wikis can impose various forms of control to protect against these risks, including user registration, moderation, enforced stylistic rules, and imposing prescribed topic structures and page layouts. These types of control, however, are typically seen as contrary to the basic Wiki concept.

Consequently, one of the central tensions when managing a Wiki is between centralized control and anarchy. In the public arena, the balance of power tends towards anarchy, but in a corporate environment a more centralized approach is often required.

In this article I describe one application of the Wiki way to a common corporate process and extract some guidelines for the effective use of Wikis in that context. In particular, I am seeking insight from this case study into the “balance of power” tension.

The example on which these reflections are based is a project within the software company CorVu [1] to improve the technical knowledge base related to the products we sell. Like many companies, CorVu has extensive knowledge of its own products and a desire to make that knowledge available to customers. A major block to achieving that desire has been a lack of people with the time to either record the internal knowledge or to fashion the knowledge into a customer-ready format. We needed to spread the load so that a broad range of developers, tech writers, professional service consultants and others could all contribute what time and knowledge they had to a shared goal. Our hope was that a process built around several Wiki sites would facilitate this collaborative approach.

There’s no guarantee, of course, that lessons learned in that context will transfer to others. But without documented cases such as this one, any theorizing about the balance of power issue is just speculation.

Three contexts for a Wiki

To start with, it is important to clarify the key differences between three contexts in which Wikis are used: public, team, and enterprise Wikis.[2]

Public Wikis

By a “public Wiki,” I mean one where any Internet user can read and contribute to the collaborative effort. It may be that editing content is restricted to a registered user group (as is the case with Wikipedia), but anyone can register. Consequently, the size of the contributing community is potentially huge, there is a high level of anonymity, and the contributors do not typically relate to each other outside the confines of the Wiki.

In this context, very little centralized control is evident. You typically find some explicit guidelines for contributors, either formulated by the founders/hosts, or as an evolving page edited by the contributors themselves. There is also an implicit understanding of etiquette and an implied social contract that comes with joining the “community.” But in the end, anyone can edit anything … and anyone else can un-edit it. This is the essence of anarchy: not that anything goes, but that what goes depends on peer acceptance. In an anarchy, it is not the case that there is no control; rather, the control is exerted by peers (around the edges) rather than by an authority (in the centre).

Requiring registration prior to participation does not alter the anarchistic nature of the process. Registration has numerous benefits, not least of which is that contributors can be recognized and gain respect for their contributions. Registration may also increase the sense of belonging because it reflects each contributor’s conscious choice to join the community. That sense of belonging is essential to any viable anarchy.[3]

Moderation, on the other hand, inevitably moves the balance of power towards the centre. Moderation invests some users with the power to limit the contributions of other users. While moderation is sometimes seen as necessary in order to combat vandalism and dissension, this imposition of authority denies the libertarian aspirations of most public Wikis.

Team Wikis

A “team Wiki” is one where the people who read and contribute all belong to the same team or work-group. Perhaps the R&D team uses the Wiki to record evolving product specifications; or the members of a local church collaboratively document its history; or a class of students collates the results of a research project. Membership of the team predates and takes precedence over membership of the Wiki community. A person joins the team and as a by-product may be requested or required to use the Wiki. The number of people participating tends to be small and the contributors are likely to relate to each other outside the context of the Wiki.

In contrast to public Wikis, where self-selection guarantees that the vast majority of users are technically savvy and keen to be involved, the people contributing to a team Wiki may not be doing so voluntarily or with much enthusiasm. It may well be a required part of their work that they would prefer to avoid. The need to make the Wiki as easy as possible to use becomes even more important in this context. This includes clear navigation and an effective search function, but more than anything else it means a simple, familiar user interface for editing text. Many team Wikis fail simply because the potential contributors refuse to learn Wiki markup or to use a non-WYSIWYG editor.

In this context, registration is essential, but moderation is not. The restrictions on who can contribute protect against vandalism and, because the collaborators have pre-existing relationships and a common commitment to a higher cause, the community operates with a trust model. In fact, apart from the restrictions on membership, a team Wiki is unlikely to impose much control at all over contributions. Standards, structures, and conflicts will be resolved using the organization’s normal processes outside the Wiki. The collaborators will discuss and vote, or demand and threaten, or just do what the boss says, without that process being explicitly controlled by mechanisms within the Wiki.

Enterprise Wikis [4]

When it comes to implementing Wikis across a large enterprise such as a global corporation, a new set of concerns affect the balance of power. Management wisdom is required to maximize participation while keeping business objectives clearly in sight.

In my experience, it is rare that a single Wiki site within an enterprise is open to contributions by any employee. Where this is the case, moderation is likely to be required because of the large numbers of contributors who have no direct accountability to each other. The concerns at the enterprise level relate to how numerous organizational Wikis within the enterprise can be integrated into the IT infrastructure and how the use of Wikis can most effectively support corporate goals.

Rather than allow the proliferation of diverse Wiki projects throughout the enterprise, IT management is more likely to select the Wiki software that everyone is to use and perhaps host all instances centrally. It may be that some IT managers are “control freaks,” but there are good reasons for standardizing on Wiki software:

  • Risk. If many work groups host their own Wiki using their own choice of software, there is a significant risk of knowledge loss. It is hard to guarantee that each work group will secure the Wiki adequately or ensure appropriate disaster recovery. What happens if the work group’s server dies? Will they have an adequate backup procedure? What happens if the work group’s IT expertise leaves the company? Will the knowledge of how to run the Wiki be passed on to the remaining team? What happens if the Wiki software no longer operates when the server’s operating system is upgraded? Centralized Wiki management can avoid such problems.
     
  • Support. Most Wiki software is easy to learn (at least to us!), but some packages are certainly easier to learn than others. In a context where many employees participate in multiple Wikis within the enterprise, training and user frustration can be reduced by using the same software for all the Wikis.
     
  • Cost. Centralized IT management can also reduce the total cost of ownership of Wiki projects. That may be counter-intuitive given that most Wiki software is free. But the costs of running a Wiki include the cost of the hardware that hosts the Wiki, the time it takes to manage the Wiki (installation, user admin and support, backup, etc.) and the time it takes to teach people how to use the system. Although these costs may be small for each work group, the total across the enterprise can be substantial, and can be reduced by standardization and centralization.

In this context, the balance of power swings inevitably towards centralized control. The challenge is how to do so without stifling the free and creative contributions that are essential to a Wiki’s success.

The CorVu case study

The company I work for, CorVu, started using Wikis within its R&D group back in 2000 using the original WikiWikiWeb software. The project described below was based on MoinMoin, but we have also used DokuWiki and have since standardized on Confluence.

CorVu produces software that assists other enterprises to implement their strategy and to track their performance against that strategy over time. CorVu has a variety of channels for making its internal product knowledge available to its customers, but the product functionality grows at a faster rate than the Tech Writers can keep up with. Apart from the fundamental description of each feature, a complex assortment of configuration details needs to be documented – performance optimization, best-practice implementation techniques, interactions with third-party software, etc. A lot of knowledge at that level resides with the Professional Services team rather than the Product Development team. Often, the people with the knowledge have neither the time nor the writing skills to record it, and the people with the responsibility to deliver documentation to the customers do not have the knowledge. There’s nothing uncommon about that problem!

Since the goal of capturing and disseminating quality technical documentation requires collaboration, I thought that a Wiki might help. So we set up two independent Wikis to capture knowledge from two different groups of employees, and a third so that customers could access a sanitized version of that knowledge.

I’m not putting my own case forward as the paradigm of success. In fact, although the project yielded a significant improvement in capturing internal knowledge, we have not yet achieved the final goal of effectively disseminating that knowledge to our customers.

Wiki Workflow Diagram

Figure 1. Knowledge capture and dissemination using three Wikis


R&D Wiki
This Team Wiki is the home of internal coding standards, design documents, etc. Anyone on the product development team can contribute, while employees in other departments can only view.


Professional Services Wiki

The Professional Services Wiki (actually called the ‘Internal Technical Knowledge Base’) is a Team Wiki for recording how the product is used in practice, for instance: internal discussion about bugs, compatibility with third-party software, implementation tips and techniques, performance optimization, etc.

Anyone in the organization can edit this Wiki, but the primary contributors are Professional Service staff (consultants and help desk). This Wiki has two intentions: to be the primary location for recording and accessing internal product knowledge, and to be the staging ground for knowledge that can later be released to customers.

We centrally imposed the top level of structure and navigation here, based on product modules. This makes it easier for contributors to know where new content should be added. Specific pages enable FAQs to be built over time. Where it is relevant, information from the R&D Wiki is incorporated into this Wiki.

We scrapped a commonly used set of email distribution lists in favor of a process whereby questions and answers are posted to this Wiki site. This means that problem solving previously lost in email trails is now captured and searchable.


Customer Wiki
The Customer Wiki has the same basic structure as the Professional Services Wiki. That is, nearly all of the pages in the Professional Services Wiki have a matching page in the Customer Wiki. The difference is that the content in the Customer Wiki is edited by professional technical writers.

Each page of the Professional Services Wiki includes a status block indicating who the primary author was, who has checked the accuracy of the technical content, and who has checked spelling, grammar and adherence to the corporate documentation style. Only when those steps have been completed can the page be copied over to the Customer Wiki. An important part of that process is to make judgments about what information should be kept internal and what the company wants to reveal to its customers.
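The review gate described above can be sketched as a small check. The field names and structure here are hypothetical; the article does not specify how the status block is actually stored:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the status block on each Professional Services page.
@dataclass
class PageStatus:
    primary_author: str
    technical_reviewer: Optional[str] = None  # checked technical accuracy
    copy_editor: Optional[str] = None         # checked spelling, grammar, style
    internal_only: bool = False               # judged unsuitable for customers

def ready_for_customer_wiki(status: PageStatus) -> bool:
    """A page may be copied to the Customer Wiki only after both review
    steps are complete and the page is cleared for external release."""
    return (status.technical_reviewer is not None
            and status.copy_editor is not None
            and not status.internal_only)
```

A freshly drafted page (author only) would fail the check, while a page with both sign-offs and no internal-only flag would pass and could be copied across.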

The Documentation Department is the only group who can edit the Customer Wiki. Although customers can leave comments, they cannot modify the published content.

In this project, there was a clear business goal and a centrally-driven process to attain that goal. The Professional Services and Customer Wikis were seeded with pages that provided a structure for delivering accurate and accessible content to customers. While the ability to contribute was widespread, there were explicit “rules of engagement” around user registration, topic naming, page layout templates, content categorization, and navigation.

Although there was a degree of central control, we tried to balance that with encouragement for broad-based collaboration–otherwise, why use a Wiki? The distinction that guides this balance is between structure and content. Although the structure is imposed centrally, content is generated by a diverse range of people in a way that promotes openness, the recognition of contributors, editing of any content without fear of criticism, and shared responsibility for quality.

Since the quality of the documentation exposed to our customers is crucial, the process includes a QA step that is uncommon for Wikis. We did not want to constrain all contributors to adhere to strict grammar, spelling and style rules. Instead we left the knowledge capture stage free from those restrictions and used technical writers to edit the content before its dissemination to customers.

It may seem strange that we would use a Wiki to publish non-editable information, but this is a testament to the versatility of the software. Wikis provide a very fast means of building a web site, whether collaboration is the intention or not. In our case, we use one Wiki site to capture knowledge from one group of people and another Wiki site to disseminate the information to a different group of people. With regard to my categorization of Public, Team and Enterprise Wikis, the “Customer Wiki” is a hybrid: it is built by a specific team and hosted within an enterprise infrastructure in order to publish in the public arena. A more traditional approach to software documentation would have been to repackage the knowledge into some other HTML or PDF format for customer consumption. But the maintenance of that dichotomy would have been far more onerous than copying between two parallel Wikis.

Managing an Enterprise Wiki project

Embedding Wiki tools across an enterprise is an organizational change project and as such requires appropriate planning and project management, along both technical and cultural dimensions. I won’t go over those generic processes, nor repeat suggestions for Wiki adoption that are documented in places like WikiPatterns. But drawing from CorVu’s experience, I will highlight some advice for project managers in the enterprise Wiki context.

People

  1. Seek patronage at the highest possible level. That is, find a person with as much power within the enterprise as possible who will sponsor the project. The sponsor may do no more than ‘give the nod’ to your work, but that invests you with the authority to draw on other people’s time. In CorVu’s case, the CEO himself was a key supporter.
  2. Enthuse a champion. This needs to be a person who is well respected, who will lead by example, and in doing so enthuse others. The champion will need to be able to put a lot of time into the project and will often be the primary contributor to the Wiki, especially at the beginning. In our case, that turned out to be me.
  3. Identify the group of people who can be expected to generate the majority of the Wiki content. These are typically subject matter experts. Discuss with them the value of writing down what they know or Wiki-izing what they have already written.
  4. Identify anyone whose participation is mandatory. Is there a key political player or subject matter expert whose absence from the project will cause others to think, “Well, if she’s not involved, I’m certainly not going to waste my time”?
  5. Since our goal was to create a knowledge base for external consumption, it was important that the content generated by subject matter experts was checked for both accuracy and readability in the same way as other customer documentation. Consequently, the people involved in the project needed to include professional technical writers.

Tools

There are many different Wiki software tools on the market (Wiki Matrix lists over 100) but most are not adequate for an enterprise rollout. CorVu’s experience suggests that an enterprise Wiki requires at least the following:

  1. Administration tools to manage a large number of users, with integration to enterprise security mechanisms (e.g. LDAP and single sign-on).
  2. Separately secured spaces for different knowledge areas.
  3. Effective management of attachments that includes versioning and a built-in search function that indexes the attachments.
  4. Integration with other enterprise software such as portals, business intelligence, and content management systems.
  5. Many contributors in an enterprise context will be non-technical. This makes it essential that the Wiki has a familiar, WYSIWYG editing mode rather than forcing users to learn some Wiki markup language.
  6. An assortment of non-functional requirements such as good reputation, reference sites, some assurance of product longevity, and the availability of support.

Generating participation

All Wikis stand or fall based on whether an active community is formed. You can’t achieve the ‘wisdom of the crowd’ unless you have an active crowd. The means of achieving that across an enterprise are somewhat different from public Wikis.

  1. Build a critical mass of contributors. Since the contributors are employed by the enterprise, it is possible to make the Wiki part of people’s responsibilities. At CorVu we found this to be imperative. Unlike on a public Wiki (where many people contribute huge amounts of time as a hobby), in a work context (where everyone is probably too busy already) voluntary contribution isn’t going to happen. So write it into job descriptions. Get managers to send emails to their staff saying that one hour a week should be spent writing up their knowledge on the Wiki. Arrange a seminar on how to use the system. Use the company newsletter to promote the value of the project.
  2. Build a critical mass of topics. To be used, the site must be useful. To generate traffic to the site, make the most frequently required information available on the Wiki first, and make the Wiki the only source for that information. In CorVu’s case, for example, one significant page stored the latest product release information. When any software version was moved from internal QA to Beta, or from Beta to General Release, this page was updated. Once people learn that the Wiki contains a lot of useful information they will look there for answers to start with rather than wasting someone else’s time by phoning or emailing questions.
  3. Send links rather than information. Set an expectation that when anyone is asked for some detailed information, the response should be a link to a Wiki page. If the information has not yet been Wiki-ized, don’t type a lengthy answer in an email; instead, spend an extra minute typing it into a Wiki page.
  4. Provide recognition and rewards. As with most Wikis, the best way to encourage participation in the long term is to ensure that the efforts of the contributors are valued. This is easier in team and enterprise Wikis than in public Wikis because the contributors are known. Wiki pages can indicate explicitly who the primary authors were. There can also be rewards within the enterprise beyond the boundaries of the Wiki. For instance, some employees may have components of their annual review linked to their involvement in Wikis.

The future of enterprise Wikis

Our experience with Wikis at CorVu has been very positive and gives encouraging signs about the future potential of this approach to shared document workspaces. There are multiple offerings that meet enterprise IT standards, and the tools currently available are robust, simple to administer, simple to use, and inexpensive. The CorVu case also shows that enterprise Wikis can be used not only for internal purposes, but also as a means of publishing information to external stakeholders.

By putting minimal central control in place an enterprise can gain significant benefit from this simple technology, including improved knowledge capture, reduced time to build complex knowledge-based web sites, and increased collaboration. Although enterprise Wiki use requires a greater degree of centralized control than public Wikis, this need not impinge on the freedom to contribute that is the hallmark of a Wiki approach. The balance of power is different in an enterprise context, but fear of anarchy should not prohibit Wiki adoption.

Nevertheless, I predict that Wikis will disappear over the next 5 to 10 years. This is not because they will fail but precisely because they will succeed. The best technologies disappear from view because they become so commonplace that nobody notices them. Wiki-style functionality will become embedded within other software – within portals, web design tools, word processors, and content management systems. Our children may not learn the word “Wiki,” but they will be surprised when we tell them that there was a time when you couldn’t just edit a web page to build the content collaboratively.


[1] CorVu is now a subsidiary of Rocket Software, but this case study pre-dates that acquisition.

[2] There is another form of Wiki that I have ignored here – the personal Wiki – but in that case, questions about the balance of control do not arise.

[3] In an editorial comment, Christina Wodtke offered the insight that if identity is essentially disposable, then registration does very little. Perhaps it is only when the link between registration and identity is persistent that protecting one’s reputation becomes an important motivation towards good behavior.

[4] What I call an ‘Enterprise Wiki’ others have called a ‘Corporate Wiki’. I prefer the former because it is not restricted to corporations in the business world, but also applies to government agencies, churches, and large not-for-profit organizations.


Researching Video Games the UX Way

Written by: Nate Bolt

Video games are often overlooked in the scope of usability testing simply because, in a broad sense, their raison d’etre is so different from that of a typical functional interface: fun, engagement, and immersion, rather than usability and efficiency. Players are supposed to get a feeling of satisfaction and control from the interface itself, and in that sense, interaction is both a means and an end. The novelty and whimsy of the design occasionally come at the expense of usability, which isn’t always a bad thing—that said, video games still have interfaces in their own right, and designing one that is easy to use and intuitive is critical for players to enjoy the game.

Consider how video games are currently researched: market research-based focus groups and surveys dominate the landscape, measuring opinion and taste in a controlled lab environment, and largely ignoring players’ actual in-game behaviors. Behavior is obviously the most direct and unbiased source of understanding how players interact with the game—where they make errors, where they become irritated, where they feel most engaged. When Electronic Arts engaged Bolt|Peters to lead the player research project for Spore, we set out to do one better than the usual focus group dreck by coming at it from a UX research perspective.

SIMULATED NATIVE ENVIRONMENT RESEARCH

One overarching principle guided the design of this study: we would let the users play the game in a natural environment, without the interference of other players, research moderators, or arbitrary tasks. This took a good bit of planning. Usually, we prefer to use remote research methods, which allow us to talk to our users in the comfort of their own homes. Spore, however, was a top-secret hush-hush project; we couldn’t very well send out disks for just anybody to get their hands on. Instead, CEO Nate Bolt came up with what we call a “Simulated Native Environment.” For each of the ten research sessions, we invited six participants to our loft office, where they were seated at a desk with a laptop, a microphone headset, and a webcam. We told them to play the game as if they were at home, with only one difference: they should think aloud, saying whatever was going through their minds as they played. When they reached certain milestones in the game, they would fill out a quick touchscreen survey at their side, answering a few questions about their impressions of the game.

Elsewhere, Nate, the clients from EA, and I were stationed in an observation room, where we set up projectors to display the players’ gameplay, the webcam video, and the survey feedback on the wall, which let us see the players’ facial expressions alongside their in-game behaviors. Using the microphone headset and the free game chat app TeamSpeak, we were able to speak with players one-on-one, occasionally asking them what they were trying to do or to go a little more in depth about something they’d said or done in the game.

Doesn’t that sound simple? Actually, the setup was a little brain-hurting: we had six stations; each station needed to have webcam, gameplay, survey, and TeamSpeak media feeds broadcast live to the observation room – that’s 18 video feeds and 6 audio feeds, and not only did the two (that’s right, two!) moderators have to be able to hear the participants’ comments, but so did the dozen or so EA members. On top of that, everything was recorded for later analysis.

“The feedback we received from users wasn’t based on tasks we’d ordered them to do, but rather on self-directed gameplay tasks the users performed on their own initiative”

The important thing about this approach is that the feedback we received from players wasn’t based on tasks we’d ordered them to do, but rather on self-directed gameplay tasks the players performed on their own initiative. We didn’t tell players outright what to do or how to do things in the game, unless they were critically stuck (which was useful to know in itself). The observed behavior and comments were more stream-of-consciousness and less calculated in nature.

The prime benefits of our approach were the absence of moderators, which mitigated the Hawthorne effect, and the absence of other participants, which eliminated groupthink. Additionally, the players were more at ease: it’s hard to imagine these video outtakes (see below) being replicated in a focus group setting. Most importantly, they weren’t responding to focus questions – they were just voicing their thoughts aloud, unprompted, which gave us insight into the things they noticed most about the game, rather than what we just assumed were the most important elements.

OOPS, WE MESSED UP

Over the year-long course of the project, there was one incident that proved to us just how important it was to preserve the self-directed task structure of our research. Because of the multiphase progression of Spore, we believed it was important to carefully structure the sessions to give players a chance to play each phase for a predetermined amount of time, and in a set order, as if they were experiencing the game normally.

Partway through the second session, we started having doubts: even though we weren’t telling players what to do within each phase, what if our rigid timing and sequencing was affecting the players’ engagement and involvement with the game?

To minimize this, between sessions, we made a significant change to the study design: instead of telling users to stop at predetermined intervals and proceed to the next phase of the game, we threw out timing altogether and allowed users to play any part of the game they wanted, for as long as they wanted, in whatever order they wanted. The only stipulation was that they should try each phase at least once. Each session lasted six hours spread over two nights, so there was more than enough time to cover all five phases, even without prompting users to do so.

Sure enough, we saw major differences in player feedback. We are unable to provide specific findings for legal reasons, but we can say that the ratings for certain phases consistently improved (as compared with previous sessions). Additionally, a few of the lukewarm comments players had made about certain aspects of the game seemed to stem from the limiting research format, rather than the game itself.

It became clear that when conducting game research, it was vitally important to stick to the actual realities of natural gameplay as much as possible, even at the expense of precisely structured research formatting. You have to loosen up the control a little bit; video games are, after all, interactive and fun. It makes no more sense to formalize a gameplay experience than it does to add flashy buttons and animated graphics to a spreadsheet application.

BRINGING GAME RESEARCH INTO THE HOME

There are a lot of ways to go with the native environment approach. Even with all efforts to keep the process as natural and unobtrusive as possible, there are still lots of opportunities to bring the experience even closer to players’ typical behaviors. The most obvious improvement is the promise of doing remote game research–allowing participants to play right at home, without even getting up.

Let’s consider what a hypothetical in-home game research session might look like: a player logs into XBox Live, and is greeted with a pop-up inviting him to participate in a one-hour user research study, to earn 8000 XBox Live points. (The pop-up is configured to appear only to players whose accounts are listed as 18 or older, to avoid issues of consent with minors.) The player agrees, and is automatically connected by voice chat to a research moderator, who is standing by. While the game is being securely delivered and installed to the player’s XBox, the moderator introduces the player to the study, and gets consent to record the session. Once the game is finished installing, the player tests the game for an hour, giving his think-aloud feedback the entire time, while the moderator takes notes and records the session. At the end of the session, the game is automatically and completely uninstalled from the player’s XBox, and the XBox Live points are instantly awarded to the player’s account.

Naturally, there are lots of basic infrastructure advances and logistical challenges to overcome before this kind of research becomes viable:

  • Broadband penetration
  • Participant access to voice chat equipment
  • Online recruiting for games, preferably integrated into an online gaming framework
  • Secure digital delivery of prototype or test build content
  • Gameplay screensharing or mirroring

For many PC users, these requirements are already feasible, and for games with built-in chat and/or replay functionality, the logistics should already be much easier to meet. Remote research on PCs is already viable (and, in fact, happens to be Bolt|Peters’s specialty). Console game research, on the other hand, would likely require a substantial investment by console developers to make this possible; handheld consoles present even more challenges.

We expect that allowing players to give feedback at home, the most natural environment for gameplay, would yield the most natural feedback, bringing game evaluation and gameplay testing further into the domain of good UX research.


Spore Research: Outtakes from bolt peters on Vimeo.


Science of Fun from bolt peters on Vimeo.

Comics for Consumer Communication

Written by: Rahel Anne Bailie

The rising popularity of the comic as an internal communication device for designers has increased our ability to engage our stakeholders as we build interfaces. Yet social service agencies looking to provide services to hard-to-reach groups like immigrants, cultural minorities, and the poor have long taken pride in innovative outreach methods. In situations where traditional printed matter is a barrier, graphical methods can be used very effectively to communicate with audiences.

From guerilla theatre to testimonials, posters to graphic instructions, users have benefited from alternative communication methods, particularly in situations where education or cultural barriers make it difficult for people to access services important to their well-being and safety. In some cases, the comic book format has been used as a way to help people get access to critical legal help. This case study from my time as a Publication Manager at the Legal Services Society (LSS) of British Columbia (BC) could inspire the use of comics outside the development process.

The Situation

BC has over 253 First Nations tribes (known as “Native Americans” in the United States), which is about one-third of all First Nations in Canada. Seven of Canada’s eleven unique native language families are located exclusively in BC. When BC joined Confederation (Canada) in 1871, the provincial policy of the day did not recognize aboriginal title to the land, so no treaties were signed with the First Nations, unlike in other provinces.

Instead, the federal government made it a criminal offence for a First Nation to hire a lawyer to pursue land claims settlements, and removed a generation of children to residential schools, where many were abused and traumatized. As a result, many tribes were left in an ongoing state of economic and social upheaval, with rampant unemployment, social problems, and poverty.

The Legal Services Society (LSS) in BC is the provincial agency that provides legal aid to poor and marginalized residents of the province. Prior to the crippling budget cuts the government imposed in the late 1990s, LSS also provided public legal education material to people who didn’t quite qualify for legal aid but certainly needed it: they may not have been quite poor enough to qualify, or they were poor enough but legal aid didn’t cover their particular problem.

LSS knew that solving some of the smaller problems up front would keep them from escalating into larger problems – problems that would then qualify people for legal aid, but that would also be devastating to their lives.

In 1995, LSS asked its Publishing Program, where I was the manager, to collaborate with its Native Services Department on some self-help material for First Nations women. The Native Services Department wanted to help these women understand their rights in the area of family law, specifically around the issue of spousal violence. Based on the number of women who came to social service agencies for help, LSS recognized that there were a number of issues that were not well understood and that, if caught early, could be addressed to prevent larger legal problems.

The agency decided that it was within its mandate to produce a publication for this population segment, and the two departments began the process of creating the publication that would eventually be called Getting Out: Escaping Family Violence.

Why the Comics Format?

LSS produced all publications collaboratively. In this case, the two departments explored different formats, and ultimately chose the comic form. Comics’ graphical format could draw low-literacy women to pick up information off a publication rack. LSS had previously done one other publication in comic book format, which had worked for that audience.

The issue of family violence was a sensitive one, and the LSS had to be sure that the audience would not consider the graphical format of the publication condescending. To take the pulse of those who would use the publication, we conducted several focus groups in places where women would gather for learning (e.g. literacy, friendship, and women’s centers).

We used an approach that combined outreach, usability testing, literacy skills improvement, immigration intake, and legal education. We’d bring food and beverages, humbly ask questions, and be the learners instead of the teachers. Particularly with an all-women’s group, it was important to do something based around food. Participants would often bring their children, and they would ask us questions and giggle over our perspectives.

For this publication, a comic book format seemed a natural fit. Literacy levels in First Nations communities have been cited as significantly lower than those in the general population, particularly in rural areas. Conveying the nuances of family law to a low-literacy population segment was one challenge; another was understanding specific cultural references that could be missed or become “localization” barriers.

Considerations similar to those for producing publications in different languages apply to material translated from “majority culture English” to “minority culture English” – same-language localization, so to speak. There may not be a language difference, strictly speaking, but significant dialectal differences apply, graphics are very culturally specific, and emphasis differs between cultures. In this instance, we had to localize our content to make it relevant to our First Nations audience, without concerning ourselves with whether the publication resonated with other people sitting in a legal aid waiting room.

Elements of Development

The commitment of the LSS to create effective material for our users extended to all aspects of the publication process.

The publication process included iterations of oversight, content creation, production, and user input.

Authoring–The content creation was undertaken by seeking out a subject matter expert in the topic area, usually a lawyer or case worker in one of the field offices. The author gathered profiles, based on cases from offices around the province, and distilled the important legal information that went into the publication. For this publication, I hired a television screenwriter named “Candis Callison”:http://www.cwy-jcm.org/en/aboutus/board/callison who was from the Tahltan band of First Nations to provide an authentic voice for the comic book.

Editorial–The editorial process was done in-house. For this project, the process included editing the script to fit the comic book genre. I also worked with the artist to ensure that the number of panels would fit the booklet format, and that the dialog would fit the panels. Once the substantive edit was done, in-house staff did the copy edit. Then the Native Services lawyer, also First Nations, reviewed the publication for legal accuracy.

Production–As positions opened in the department, I was able to hire more culturally and ethnically diverse employees so that, eventually, we were able to produce and proofread material for diverse cultures and languages. (We produced material for recent immigrants, as well, in Chinese, Farsi, Spanish, Punjabi, and Vietnamese.) The new staff helped greatly during the back-translation, where a publication is translated back into English to ensure translation integrity. In this case, the back-translation was not for language, but to ensure that cultural references were effective.

Art, A Critical Element

An LSS employee was friends with a budding artist named “Brian Jungen”:http://en.wikipedia.org/wiki/Brian_Jungen who was of Dunne-za (a First Nations tribe) and Swiss background. His artwork provided authentic visuals for the initial book. His work now hangs in the Vancouver Art Gallery, amongst other places, and I like to think he looks back fondly on the project.

How The Book Came About

The structure of the book took shape as the artist and I divided the script into chunks to fit the drawings, and then adjusted the drawings as necessary. As the Publishing Program manager, I took on the role of substantive editor for the writing and graphics. I also worked with the artist to figure out how to get exactly enough panels to fit the amount of print space allotted.

The structure of the book needed to be in multiples of four pages — minus both covers, the copyright page, and the title pages — and couldn’t exceed 8 pages of actual panels, to control costs. The story had to stay coherent within these constraints and couldn’t focus on the local color at the expense of delivering the legal message. All of that took quite a bit of balancing to keep the interest, use the right level of language, and keep the key legal phrases that would be important for someone to know. In the end, it worked.
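The page arithmetic above can be sketched as a quick check. This is a hypothetical helper for illustration only, not part of the LSS process; the four reserved pages assumed here are the two covers, the copyright page, and the title page.

```python
def panel_pages(total_pages: int, reserved: int = 4) -> int:
    """Pages left for comic panels in a saddle-stitched booklet.

    Saddle-stitched booklets are printed on folded sheets, so the
    total page count must be a multiple of four.
    """
    if total_pages % 4 != 0:
        raise ValueError("saddle-stitched booklets need a multiple of 4 pages")
    return total_pages - reserved

# A 16-page booklet leaves 12 panel pages, over the 8-page panel budget;
# a 12-page booklet leaves exactly 8, the largest size that fits.
```

Under these assumptions, a 12-page booklet is the largest size that stays within the 8-page panel budget, which is why the script and drawings had to be balanced so carefully.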

As with any other localized material, we had the material checked by a lawyer to ensure that no legal concepts were compromised during the “translation,” and then the material was tested with audiences to determine effectiveness. The Native Services Department fieldworker took copies of the storyboards out on a road trip to band offices and friendship centers.

We held our breath until word came back through the “moccasin grapevine” that the storyboards had been well received. This feedback loop was critical because it provided the opportunity to incorporate any changes that came up from the testing.

In the End…

The publication story line opens with a guy in a plaid shirt (it has to be a plaid shirt) having an altercation with his wife. Then they’re at a pow-wow in a truck (it has to be a pick-up truck) where she warmly greets an old (male) friend. Then they’re at a party where he’s being abusive to his wife. By the end of the publication, the wife has identified that his verbal and physical violence is not acceptable, gotten a restraining order by following a few simple steps, and taken basic legal action without incurring huge legal costs.

Click for cover detail of Getting Out. Click for panel detail of Getting Out.

One of the dialog bubbles states “… If he tries to do any of these things, we will arrest him again for breach of bail,” and then explains what the term “breach of bail” means. Another panel explains that, “If you live in a rural community far from the cities, Crown counsel [a prosecuting attorney] travels from community to community. You may have a different lawyer at the trial.”

The “insider” cultural perspectives made me feel a bit of a voyeur, but that very characteristic was what made it so effective. The 8.5” x 11” saddle-stitched booklet was immediately identifiable on the publication rack by its distinctive graphics. Also, the title, Getting Out, reflected the vernacular used by women in the community caught in situations of domestic violence so it was an instantly recognizable phrase.

The agency ran a modest print run of the publication, partly to contain printing costs in case of waste, and partly to gauge reaction to the publication. The booklets were distributed to legal aid offices, band offices, and other social service agencies where women were likely to go when they found themselves in marital distress. Offices and agencies were notified of its availability, and I mentioned it in passing during a radio interview.

The demand for the publication soon depleted the initial print run, and another was requested. The frontline workers liked the format, and handed it out to the women who didn’t quite qualify for legal aid but who clearly wouldn’t be able to afford a lawyer. Gathering post-production metrics was not a strong point at LSS, but by the measure of popular opinion, it was a winner, and the exercise was repeated with a companion publication entitled The Ministry Took My Kids, about parental rights when children are apprehended by social services.

Click for cover detail of The Ministry Took My Kids. Click for panel detail of Getting Out.

We Tried To Warn You, Part 2

Written by: Peter Jones

A large but unknowable proportion of businesses fail pursuing nearly perfect strategies.

In Part I of We Tried to Warn You, three themes were developed:

# Organizations as wicked problems,
# The differences of failure leverage in small versus large organizations, and
# The description of failure points

These should be considered exploratory elements of organizational architecture, from a communications information architecture perspective. While the organizational studies literature has much to offer about organizational learning mechanisms, we find very little about failure from the perspective of product management, management processes, or organizational communications.

Researching failure is similar to researching the business strategies of firms that went out of business (e.g., Raynor, 2007). Failures are simply not available for us to analyze: they are either covered-up embarrassments, or they are transformed, over time and at much expense, into “successes.”

In The Strategy Paradox, Raynor describes the “survivor’s bias” of business research, pointing out that internal data is unavailable to researchers for the dark matter of the business universe, those that go under. Raynor shows how a large but unknowable proportion of businesses fail pursuing nearly perfect strategies. (Going concerns often survive because of their mediocre strategies, avoiding the hazards of extreme strategies).

A major difference in the current discussion is that organizational failure as defined here does not bring down the firm itself, at least not directly, as a risky strategy might. But it often leads to complete reorganization of divisions and large projects, which should be recognized as a significant failure at the organizational level.

One reason we are unlikely to assess the organization as having failed is the temporal difference between failure triggers and the shared experience of observable events. Any product failure will affect the organization, but some failures are truly organizational. They may be more difficult to observe.

If a prototype design fails quickly (within a single usability test period), and a project starts and fails within 6 months, and a product takes perhaps a year to determine its failure – what about an organization? We should expect a much longer cycle from originating failure event to general acknowledgement of failure, perhaps 2-5 years.

There are different timeframes to consider with organizational versus project or product failure. In this case study, the failure was not observable until after a year or so of unexpectedly weak sales, with managers and support dealing with customer resistance to the new product.

However, decisions made years earlier set the processes in place that eventuated as adoption failure. Tracing the propagation of decisions through resulting actions, we also find huge differences in temporal response between levels of hierarchy (found in all large organizations).

Failures can occur when a chain of related decisions, based on bad assumptions, propagates over time. These micro-failures may have appeared at the time to be “mere” communication problems.

In our case study, product requirements were defined based on industry best practices, guided by experts and product buyers, but excluding user feedback on requirements. Requirements were managed by senior product managers and were maintained as frozen specifications so that development decisions could be managed. Requirements came to be treated as if validated, by virtue of their continued existence and the support of product managers. But with no evaluation by end users of the embodied requirements – no process prototype was demonstrated – product managers and developers had no insight into the dire future consequences of product architecture decisions.

Consider the requisite timing of user research and design decisions in almost any project. A cycle of less than a month is a typical loop for integrating design recommendations from usability results into an iterative product lifecycle.

If the design process is NOT iterative, we see the biggest temporal gaps of all. There is no way to travel back in time to revise requirements unless the tester calls a “show-stopper,” and that would be an unlikely call from an internal usability evaluator.

In a waterfall or incremental development process, which remains typical for these large-scale products – usability tests often have little meaningful impact on requirements and development. This approach is merely fine-tuning foregone conclusions.


Here we find the seeds of product failure, but the organization colludes to defend the project timelines, to save face, to maintain leadership confidence. Usability colludes to ensure it has a future on the job. With massive failures, everyone is partly to blame, but nobody accepts personal responsibility.

The Roles of User Experience


Figure 1. Failure case study organization – Products and project timeframes. (View figure 1 at full-size.)

As Figure 1 shows, UX reported to development management, and was further subjected to product and project management directives.

In many firms, UX has little independence and literally no requirements authority, and in this case was a dotted-line report under three competing authorities. That being the case, by the time formal usability tests were scheduled, requirements and development were too deeply committed to consider any significant changes from user research. With the pressures of release schedules looming, usability was both rushed and controlled to ensure user feedback was restricted to issues contained within the scope of possible change and with minor schedule impact.

By the time usability testing was conducted, the scope was too narrowly defined to admit any ecologically valid results. Usability test cases were defined by product managers to test user response to individual transactions, and not the systematic processes inherent in the everyday complexity of retail, service, or financial work.

* Testing occurred in a rented facility, and not in the retail store itself.
* The context of use was defined within a job role, and not in terms of productivity or throughput.
* Individual screen views were tested in isolation, not in the context of their relationship to the demands of real work pressures – response time, database access time, ability to learn navigation and to quickly navigate between common transactions.
* Sequences of common, everyday interactions were not evaluated.

And so on.

The product team’s enthusiasm for the new and innovative may prevent listening to the users’ authentic preferences. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable.

Many well-tested products have been released only to fail in the marketplace due to widespread user preference for maintaining their current, established, well-known system. This is especially so if the work practice requires considerable learning and use of an earlier product over time, as happened in our retail system case. Very expensive and well-documented failures abound due to user preference for a well-established installed base, with notorious examples in air traffic control, government and security, medical/patient information systems, and transportation systems.

When UX is “embedded” as part of a large team, accountable to product or project management, the natural bias is to expect the design to succeed. When UX designers must also run the usability tests (as in this case), we cannot expect the “tester” to independently evaluate the “designer’s” work. The same person in two opposing roles, a UX team reporting to product, and restricted latitude for design change (due to impossible delivery deadlines) – we should consider this a design failure in the making.

In this situation, it appears UX was not allowed to be effective, even if the usability team knew how to work around management to make a case for the impact of its discoveries. And the UX team may not have understood the possible impact at the time, recognizing it only in retrospect, after the product failed adoption.

We have no analytical or qualitative tools for predicting the degree of market adoption based on even well-designed usability evaluations. Determining the likelihood of future product adoption failure across nationwide or international markets is a judgment call, even with survey data of sufficient power to estimate the population. Because of the show-stopping impact of advancing such a judgment, it’s unlikely the low-status user experience role will push the case, even if such a case is clearly warranted from user research.

The Racket: The Organization as Self-Protection System

Modern organizations are designed not to fail. But they will fail at times when pursuing their mission in a competitive marketplace. Most large organizations that endure become resilient in their adaptation to changing market conditions. They have plenty of early warning systems built into their processes – hierarchical management, financial reports, project management and stage-gate processes. The risk of failure becomes distributed across an ever-larger number of employees, reducing risk through assumed due diligence in execution.

The social networks of people working in large companies often prevent the worst decisions from gaining traction. But the same networks also maintain poor decisions if they are big enough, are supported by management, and cannot be directly challenged. Groupthink prevails when people conspire to maintain silence about bad decisions. We then convince ourselves that leadership will win out over the risks; the strategy will work if we give it time.

Argyris’ organizational learning theory shows that people in large organizations are often unable to acknowledge the long-term implications of learning situations. While people are very good at learning from everyday mistakes, they don’t connect the dots back to the larger failure that everyone is accommodating.

In what Argyris calls “double-loop learning,” the goal is to learn from an outcome and reconfigure the governing variables of the situation’s pattern to avoid the problem in the future. (Single-loop learning is merely changing one’s actions in response to the outcome.) Argyris’ research suggests all organizations have difficulty with double-loop learning; they build defenses against it because it requires confrontation, reflection, and change of governance, decision processes, and values-in-use. It’s much easier to just change one’s behavior.

What can UX do about it?

User experience/IA clearly plays a significant role as an early warning system for market failure. Context-sensitive user research is perhaps the best tool available for informed judgment of potential user adoption issues.

Several common barriers to communicating this informed judgment have been discussed:

* Organizational defenses prevent anyone from advancing theories of failure before failure happens.
* UX is positioned in large organizations in a subordinate role, and may have difficulty planning and conducting the appropriate research.
* UX, reporting to product management, will have difficulty advancing cases with strategic implications, especially involving product failure.
* Groupthink – people on teams protect each other and become convinced everything will work out.
* Timing – by the time such judgments may be formed, the timeframes for realistic responsive action have disappeared.

Given the history of organizations and the typical situating of user experience roles in large organizations, what advice can we glean from the case study?

Let’s consider leveraging the implicit roles of UX, rather than the mainstream dimensions of skill and practice development.

UX serves an Influencing role – so let’s influence

UX practice must continue to develop user/field research methods sensitive to detecting nascent problems with product requirements and strategy.

User experience has the privilege of being on the front lines of product design, research, and testing. But it does not carry substantial organizational authority. In a showdown between product management and UX, product wins every time. Product is responsible for revenue, and must live or die by the calls it makes.

So UX should look to its direct internal client’s needs. UX should fit research and recommendations to the context of product requirements, adapting to the goals and language of requirements management. We (UX) must design sufficient variability into prototypes to test effectively for expected variances in preference and differences in work practice. We must design our test practices to enable determinations from user data as to whether the product requirements fit the context of the user’s work and needs.

We should be able to determine, in effect, whether we are designing for a product, or designing the right product in the first place. Designing the right product means getting the requirements right.

Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence and well-timed communications.

UX is a recursive process – let’s make recursive organizations as well

User experience practice is highly iterative; without iteration, it fails too. We always get more than one chance to fail, and we’ve built that into our practices and standards.

Practices and processes are repeated and improved over time. But organizations are not flexible with respect to failure. They are competitive and defensive networks of people, often with multiple conflicting agendas. Our challenge is to encourage organizations to recurse more.

We should do this by creating a better organizational user experience, following our own observations and treating the organization as a system of internal users. Within this recursive system (in which we participate as users), we can start by moving observations up the circle of care (or the management hierarchy, if you will).

I like to think our managers do care about the organization and its shared goals. But our challenge here is to practice double-loop learning ourselves, addressing root causes and “governing variables” of issues we encounter in organizational user research. We do this by systematically reflecting on patterns and improving processes incrementally, not just “fixing things” (single-loop learning).

We can adopt a process of socialization (Jones, 2007), rather than institutionalization, of user experience. Process socialization was developed as a more productive alternative to top-down institutionalization for introducing UX practices into an intact product development process.

While there is strong theoretical support for this approach (from organizational structuration and social network theory), socialization is recommended simply because it works better than the alternatives. Institutionalization demands that an organization establish a formal set of roles, relationships, training, and management added to the hierarchy to coordinate the new practices.

Socialization instead affirms that a longer-term, better understood, and organizationally resilient adoption of the UX process occurs when people in roles lateral to UX learn the practices through participation and gradual progression of sophistication. The practices employed in a socialization approach are nearly the opposite (in temporal order) of the institutionalization approach:

# Find a significant UX need among projects and bring rapid, lightweight methods to solve obvious problems.
# Have management present the success and lessons learned.
# Do not hire a senior manager for UX yet; lateral roles should come to accept and integrate the value first.
# Determine UX need and applications in other projects. Provide tactical UX services as necessary, as an internal consulting function.
# Develop practices within the scope of product needs. Engage customers in the field and develop user and work domain models in participatory processes with other roles.
# Build organic demand and interest in UX. Provide consulting and usability work to projects as capability expands. Demonstrate wins and lessons from field work and usability research.
# Collaborate with requirements owners (product managers) to develop a user-centered requirements approach. Integrate usability interviews and personas into requirements management.
# Integrate with product development. Determine development lifecycle decision points and the user information required.
# Establish user experience as a process and organizational function.
# Provide awareness training, discussion sessions, and formal education as needed to fit the UX process.
# Assess and renew: staffing, building competency.

We should create more opportunities to challenge failure points and process breakdowns. Use requirements reviews to challenge the fit to user needs. Use a heuristic evaluation to bring a customer service perspective on board. In each of those opportunities, articulate the double-loop learning point. “Yes, we’ll fix the design, but our process for reporting user feedback limits us to tactical fixes like these. Let’s report the implications of user feedback to management as well.”

We can create these opportunities by looking for issues and presenting them as UX points but in business terms, such as market dynamics, competitive landscape, feature priority (and overload), and user adoption. This will take time and patience, but then, it’s recursive. In the long run we’ll have made our case without major confrontations.

Conclusions

Scott Cook, Intuit’s founder, famously said at CHI 2006: “The best we can hope to bat is .500. If you’re getting better than that, you’re not swinging for the fences. Even Barry Bonds, steroids or not, is not getting that. We need to celebrate failure.”

Intelligent managers actually celebrate failures – that’s how we learn. If we aren’t failing at anything, how do we know we’re trying? The problem is recognizing when failure is indeed an option.

How do we know when a project so large – an organizational-level project – will go belly-up? How can something so huge and spectacular in its impact be so hard to call, especially at the time decisions are being made that could change the priorities and prevent an eventual massive flop? The problem with massive failure is that there’s very little early warning in the development system, and almost none at the user or market level.

When product development fails to respect the user, or even the messenger of user feedback, bad decisions about interface architecture compound and push the product toward an uncertain reception in the marketplace. Early design decisions compound by determining architectures, affecting later design decisions, and so on through the lifecycle of development.

These problems can be compounded even when good usability research is performed. When user research is conducted too late in the product development cycle, and is driven by usability questions related to the product and not the work domain, development teams are fooled into believing their design will generalize to user needs across a large market in that domain. But at this point in product development, the fundamental platform, process, and design decisions have been made, constraining user research from revisiting questions that have been settled in earlier phases by marketing and product management.

References

Argyris, C. (1992). On organizational learning. London: Blackwell.

Howard, R. (1992). The CEO as organizational architect: an interview with Xerox’s Paul Allaire. Harvard Business Review, 70 (5), 106-121.

Jones, P.H. (2007). Socializing a Knowledge Strategy. In E. Abou-Zeid (Ed.) Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research, pp. 134-164. Hershey, PA: Idea Group.

Raynor, M.E. (2007). The strategy paradox: Why committing to success leads to failure (and what to do about it). New York: Currency Doubleday.

Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Taleb, N.N. (2007). The Black Swan: The impact of the highly improbable. New York: Random House.