Control and Community: A Case Study of Enterprise Wiki Usage


The Balance of Power

There is a wide variety of uses for Wikis, and the level of interest in using them is matched by an extensive range of Wiki software. Wikis introduce to the Internet a collaborative model that not only allows, but explicitly encourages, broad and open participation. The idea that anyone can contribute reflects an assumption that both content quantity and quality will arise out of the ‘wisdom of the crowd.’

There are, however, negative effects of this extreme openness. One problem is the deliberate vandalism of Wiki pages. Another is that even those with no destructive intent may yet degrade the quality of a Wiki’s content through lack of knowledge or skill. Anyone can write nonsense as though it were fact. Anyone can accidentally delete useful information. Someone with half-baked knowledge of grammar may change all the “its” to “it’s.” Of course, someone more knowledgeable may notice the problem and fix it … but then again maybe they won’t.

Wikis can impose various forms of control to protect against these risks, including user registration, moderation, enforced stylistic rules, and imposing prescribed topic structures and page layouts. These types of control, however, are typically seen as contrary to the basic Wiki concept.

Consequently, one of the central tensions when managing a Wiki is between centralized control and anarchy. In the public arena, the balance of power tends towards anarchy, but in a corporate environment a more centralized approach is often required.

In this article I describe one application of the Wiki way to a common corporate process and extract some guidelines for the effective use of Wikis in that context. In particular, I am seeking insight from this case study into the “balance of power” tension.

The example on which these reflections are based is a project within the software company CorVu [1] to improve the technical knowledge base related to the products we sell. Like many companies, CorVu has extensive knowledge of its own products and a desire to make that knowledge available to customers. A major block to achieving that desire has been a lack of people with the time to either record the internal knowledge or to fashion the knowledge into a customer-ready format. We needed to spread the load so that a broad range of developers, tech writers, professional service consultants and others could all contribute what time and knowledge they had to a shared goal. Our hope was that a process built around several Wiki sites would facilitate this collaborative approach.

There’s no guarantee, of course, that lessons learned in that context will transfer to others. But without documented cases such as this one, any theorizing about the balance of power issue is just speculation.

Three contexts for a Wiki

To start with, it is important to clarify the key differences between three contexts in which Wikis are used: public, team, and enterprise Wikis.[2]

Public Wikis

By a “public Wiki,” I mean one where any Internet user can read and contribute to the collaborative effort. It may be that editing content is restricted to a registered user group (as is the case with Wikipedia), but anyone can register. Consequently, the size of the contributing community is potentially huge, there is a high level of anonymity, and the contributors do not typically relate to each other outside the confines of the Wiki.

In this context, very little centralized control is evident. You typically find some explicit guidelines for contributors, either formulated by the founders/hosts, or as an evolving page edited by the contributors themselves. There is also an implicit understanding of etiquette and an implied social contract that comes with joining the “community.” But in the end, anyone can edit anything … and anyone else can un-edit it. This is the essence of anarchy: not that anything goes, but that what goes depends on peer acceptance. In an anarchy, it is not the case that there is no control; rather, the control is exerted by peers (around the edges) rather than by an authority (in the centre).

Requiring registration prior to participation does not alter the anarchistic nature of the process. Registration has numerous benefits, not least of which is that contributors can be recognized and gain respect for their contributions. Registration may also increase the sense of belonging because it reflects each contributor’s conscious choice to join the community. That sense of belonging is essential to any viable anarchy.[3]

Moderation, on the other hand, inevitably moves the balance of power towards the centre. Moderation invests some users with the power to limit the contributions of other users. While moderation is sometimes seen as necessary in order to combat vandalism and dissension, this imposition of authority denies the libertarian aspirations of most public Wikis.

Team Wikis

A “team Wiki” is one where the people who read and contribute all belong to the same team or work-group. Perhaps the R&D team uses the Wiki to record evolving product specifications; or the members of a local church collaboratively document its history; or a class of students collates the results of a research project. Membership of the team predates and takes precedence over membership of the Wiki community. A person joins the team and as a by-product may be requested or required to use the Wiki. The number of people participating tends to be small and the contributors are likely to relate to each other outside the context of the Wiki.

In contrast to public Wikis, where self-selection guarantees that the vast majority of users are technically savvy and keen to be involved, the people contributing to a team Wiki may not be doing so voluntarily or with much enthusiasm. It may well be a required part of their work that they would prefer to avoid. The need to make the Wiki as easy as possible to use becomes even more important in this context. This includes clear navigation and an effective search function, but more than anything else it means a simple, familiar user interface for editing text. Many team Wikis fail simply because the potential contributors refuse to learn Wiki markup or to use a non-wysiwyg editor.

In this context, registration is essential, but moderation is not. The restrictions on who can contribute protect against vandalism and, because the collaborators have pre-existing relationships and a common commitment to a higher cause, the community operates with a trust model. In fact, apart from the restrictions on membership, a team Wiki is unlikely to impose much control at all over contributions. Standards, structures, and conflicts will be resolved using the organization’s normal processes outside the Wiki. The collaborators will discuss and vote, or demand and threaten, or just do what the boss says, without that process being explicitly controlled by mechanisms within the Wiki.

Enterprise Wikis [4]

When it comes to implementing Wikis across a large enterprise such as a global corporation, a new set of concerns affects the balance of power. Management wisdom is required to maximize participation while keeping business objectives clearly in sight.

In my experience, it is rare that a single Wiki site within an enterprise is open to contributions by any employee. Where this is the case, moderation is likely to be required because of the large numbers of contributors who have no direct accountability to each other. The concerns at the enterprise level relate to how numerous organizational Wikis within the enterprise can be integrated into the IT infrastructure and how the use of Wikis can most effectively support corporate goals.

Rather than allow the proliferation of diverse Wiki projects throughout the enterprise, IT management is more likely to select the Wiki software that everyone is to use and perhaps host all instances centrally. It may be that some IT managers are “control freaks,” but there are good reasons for standardizing on Wiki software:

  • Risk. If many work groups host their own Wiki using their own choice of software, there is a significant risk of knowledge loss. It is hard to guarantee that each work group will secure the Wiki adequately or ensure appropriate disaster recovery. What happens if the work group’s server dies? Will they have an adequate backup procedure? What happens if the work group’s IT expertise leaves the company? Will the knowledge of how to run the Wiki be passed on to the remaining team? What happens if the Wiki software no longer operates when the server’s operating system is upgraded? Centralized Wiki management can avoid such problems.
  • Support. Most Wiki software is easy to learn (at least for those of us already immersed in it!), but some packages are certainly easier to learn than others. In a context where many employees participate in multiple Wikis within the enterprise, training costs and user frustration can be reduced by using the same software for all the Wikis.
  • Cost. Centralized IT management can also reduce the total cost of ownership of Wiki projects. That may be counter-intuitive given that most Wiki software is free. But the costs of running a Wiki include the cost of the hardware that hosts the Wiki, the time it takes to manage the Wiki (installation, user admin and support, backup, etc.) and the time it takes to teach people how to use the system. Although these costs may be small for each work group, the total across the enterprise can be substantial, and can be reduced by standardization and centralization.
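
The cost argument above can be made concrete with a back-of-the-envelope calculation. The figures and function below are purely illustrative assumptions, not CorVu data; the point is only that per-team hardware and administration costs multiply across work groups, while centralized hosting pays them once.

```python
# Illustrative only: all figures are invented assumptions, not CorVu data.
# Rough annual total-cost-of-ownership comparison between per-team and
# centralized hosting of Wikis across an enterprise.

def tco(teams, hardware, admin_hours, training_hours, rate, shared=False):
    """Annual cost: hardware plus admin/training time at an hourly rate.
    If hosting is shared, hardware and admin are paid once, not per team."""
    infra = (hardware + admin_hours * rate) * (1 if shared else teams)
    training = training_hours * rate * teams  # users still need training
    return infra + training

decentralized = tco(teams=10, hardware=2000, admin_hours=100,
                    training_hours=20, rate=50)
# One shared platform also means a single familiar tool, cutting training.
centralized = tco(teams=10, hardware=2000, admin_hours=100,
                  training_hours=5, rate=50, shared=True)

print(decentralized, centralized)  # → 80000 9500
```

Even though the software itself may be free, under these assumptions the decentralized approach costs several times more per year.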

In this context, the balance of power swings inevitably towards centralized control. The challenge is how to do so without stifling the free and creative contributions that are essential to a Wiki’s success.

The CorVu case study

The company I work for, CorVu, started using Wikis within its R&D group back in 2000 using the original WikiWikiWeb software. The project described below was based on MoinMoin, but we have also used DokuWiki and have since standardized on Confluence.

CorVu produces software that assists other enterprises to implement their strategy and to track their performance against that strategy over time. CorVu has a variety of channels for making its internal product knowledge available to its customers, but the product functionality grows at a faster rate than the Tech Writers can keep up with. Apart from the fundamental description of each feature, a complex assortment of configuration details needs to be documented – performance optimization, best-practice implementation techniques, interactions with third-party software, etc. A lot of knowledge at that level resides with the Professional Services team rather than the Product Development team. Often, the people with the knowledge have neither the time nor the writing skills to record it, and the people with the responsibility to deliver documentation to the customers do not have the knowledge. There’s nothing uncommon about that problem!

Since the goal of capturing and disseminating quality technical documentation requires collaboration, I thought that a Wiki might help. So we set up two independent Wikis to capture knowledge from two different groups of employees, and a third so that customers could access a sanitized version of that knowledge.

I’m not putting my own case forward as the paradigm of success. In fact, although the project yielded a significant improvement in capturing internal knowledge, we have not yet achieved the final goal of effectively disseminating that knowledge to our customers.


Figure 1. Knowledge capture and dissemination using three Wikis

R&D Wiki
This Team Wiki is the home of internal coding standards, design documents, etc. Anyone on the product development team can contribute, while employees in other departments can only view.

Services Wiki

The Professional Services Wiki (actually called the ‘Internal Technical Knowledge Base’) is a Team Wiki for recording how the product is used in practice, for instance: internal discussion about bugs, compatibility with third-party software, implementation tips and techniques, performance optimization, etc.

Anyone in the organization can edit this Wiki, but the primary contributors are Professional Service staff (consultants and help desk). This Wiki has two intentions: to be the primary location for recording and accessing internal product knowledge, and to be the staging ground for knowledge that can later be released to customers.

We centrally imposed the top level of structure and navigation here, based on product modules. This makes it easier for contributors to know where new content should be added. Specific pages enable FAQs to be built over time. Where it is relevant, information from the R&D Wiki is incorporated into this Wiki.

We scrapped a commonly used set of email distribution lists in favor of a process whereby questions and answers are posted to this Wiki site. This means that problem solving previously lost in email trails is now captured and searchable.

Customer Wiki
The Customer Wiki has the same basic structure as the Professional Services Wiki. That is, nearly all of the pages in the Professional Services Wiki have a matching page in the Customer Wiki. The difference is that the content in the Customer Wiki is edited by professional technical writers.

Each page of the Professional Services Wiki includes a status block indicating who the primary author was, who has checked the accuracy of the technical content, and who has checked spelling, grammar and adherence to the corporate documentation style. Only when those steps have been completed can the page be copied over to the Customer Wiki. An important part of that process is to make judgments about what information should be kept internal and what the company wants to reveal to its customers.
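
The status block described above acts as a gate on promotion to the Customer Wiki. As a minimal sketch, with hypothetical field names (this is not CorVu's actual implementation), the promotion rule can be expressed as:

```python
# A sketch of the status-block gate described above. Field names are
# hypothetical; the rule is simply that both reviews must be complete
# and the page must not be flagged as internal-only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PageStatus:
    primary_author: str
    technical_reviewer: Optional[str] = None   # accuracy check
    style_reviewer: Optional[str] = None       # spelling/grammar/style check
    internal_only: bool = False                # judged unfit for customers

def ready_for_customer_wiki(status: PageStatus) -> bool:
    """A page may be copied to the Customer Wiki only after both reviews,
    and only if the content has not been kept internal."""
    return (status.technical_reviewer is not None
            and status.style_reviewer is not None
            and not status.internal_only)

draft = PageStatus(primary_author="consultant")
assert not ready_for_customer_wiki(draft)      # reviews still pending
draft.technical_reviewer = "dev-lead"
draft.style_reviewer = "tech-writer"
assert ready_for_customer_wiki(draft)
```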

The Documentation Department is the only group who can edit the Customer Wiki. Although customers can leave comments, they cannot modify the published content.

In this project, there was a clear business goal and a centrally-driven process to attain that goal. The Professional Services and Customer Wikis were seeded with pages that provided a structure for delivering accurate and accessible content to customers. While the ability to contribute was widespread, there were explicit “rules of engagement” around user registration, topic naming, page layout templates, content categorization, and navigation.

Although there was a degree of central control, we tried to balance that with encouragement for broad-based collaboration; otherwise, why use a Wiki? The distinction that guides this balance is between structure and content. Although the structure is imposed centrally, content is generated by a diverse range of people in a way that promotes openness, the recognition of contributors, editing of any content without fear of criticism, and shared responsibility for quality.

Since the quality of the documentation exposed to our customers is crucial, the process includes a QA step that is uncommon for Wikis. We did not want to constrain all contributors to adhere to strict grammar, spelling and style rules. Instead we left the knowledge capture stage free from those restrictions and used technical writers to edit the content before its dissemination to customers.

It may seem strange that we would use a Wiki to publish non-editable information, but this is a testament to the versatility of the software. Wikis provide a very fast means of building a web site, whether collaboration is the intention or not. In our case, we use one Wiki site to capture knowledge from one group of people and another Wiki site to disseminate the information to a different group of people. With regard to my categorization of Public, Team and Enterprise Wikis, the “Customer Wiki” is a hybrid: it is built by a specific team and hosted within an enterprise infrastructure in order to publish in the public arena. A more traditional approach to software documentation would have been to repackage the knowledge into some other HTML or PDF format for customer consumption. But the maintenance of that dichotomy would have been far more onerous than copying between two parallel Wikis.

Managing an Enterprise Wiki project

Embedding Wiki tools across an enterprise is an organizational change project and as such requires appropriate planning and project management, along both technical and cultural dimensions. I won’t go over those generic processes, nor repeat suggestions for Wiki adoption that are documented in places like WikiPatterns. But drawing from CorVu’s experience, I will highlight some advice for project managers in the enterprise Wiki context.


  1. Seek patronage at the highest possible level. That is, find a person with as much power within the enterprise as possible who will sponsor the project. The sponsor may do no more than ‘give the nod’ to your work, but that invests you with the authority to draw on other people’s time. In CorVu’s case, the CEO himself was a key supporter.
  2. Enthuse a champion. This needs to be a person who is well respected, who will lead by example and, in doing so, enthuse others. The champion will need to put a lot of time into the project and will often be the primary contributor to the Wiki, especially at the beginning. In our case, that turned out to be me.
  3. Identify the group of people who can be expected to generate the majority of the Wiki content. These are typically subject matter experts. Discuss with them the value of writing down what they know or Wiki-izing what they have already written.
  4. Identify anyone whose participation is mandatory. Is there a key political player or subject matter expert whose absence from the project will cause others to think, “Well, if she’s not involved, I’m certainly not going to waste my time”?
  5. Involve professional technical writers whenever the content is destined for external consumption. Since our goal was to create a knowledge base for customers, the content generated by subject matter experts had to be checked for both accuracy and readability in the same way as any other customer documentation.


There are many different Wiki software tools on the market (Wiki Matrix lists over 100), but most are not adequate for an enterprise rollout. CorVu’s experience suggests that an enterprise Wiki requires at least the following:

  1. Administration tools to manage a large number of users, with integration to enterprise security mechanisms (e.g. LDAP and single sign-on).
  2. Separately secured spaces for different knowledge areas.
  3. Effective management of attachments that includes versioning and a built-in search function that indexes the attachments.
  4. Integration with other enterprise software such as portals, business intelligence, and content management systems.
  5. A familiar, WYSIWYG editing mode. Many contributors in an enterprise context will be non-technical, so the Wiki cannot force users to learn a Wiki markup language.
  6. An assortment of non-functional requirements such as good reputation, reference sites, some assurance of product longevity, and the availability of support.

Generating participation

All Wikis stand or fall based on whether an active community is formed. You can’t achieve the ‘wisdom of the crowd’ unless you have an active crowd. The means of achieving that across an enterprise are somewhat different from those used in public Wikis.

  1. Build a critical mass of contributors. Since the contributors are employed by the enterprise, it is possible to make the Wiki part of people’s responsibilities. At CorVu we found this to be imperative. Unlike a public Wiki, where many people contribute huge amounts of time as a hobby, in a work context (where everyone is probably already too busy) voluntary contribution simply isn’t going to happen. So write it into job descriptions. Get managers to send emails to their staff saying that one hour a week should be spent writing up their knowledge on the Wiki. Arrange a seminar on how to use the system. Use the company newsletter to promote the value of the project.
  2. Build a critical mass of topics. To be used, the site must be useful. To generate traffic to the site, make the most frequently required information available on the Wiki first, and make the Wiki the only source for that information. In CorVu’s case, for example, one significant page stored the latest product release information. When any software version was moved from internal QA to Beta, or from Beta to General Release, this page was updated. Once people learn that the Wiki contains a lot of useful information they will look there for answers to start with rather than wasting someone else’s time by phoning or emailing questions.
  3. Send links rather than information. Set an expectation that when anyone is asked for some detailed information, the response should be a link to a Wiki page. If the information has not yet been Wiki-ized, don’t type a lengthy answer in an email; instead, spend an extra minute typing it into a Wiki page.
  4. Provide recognition and rewards. As with most Wikis, the best way to encourage participation in the long term is to ensure that the efforts of the contributors are valued. This is easier in team and enterprise Wikis than in public Wikis because the contributors are known. Wiki pages can indicate explicitly who the primary authors were. There can also be rewards within the enterprise beyond the boundaries of the Wiki. For instance, some employees may have components of their annual review linked to their involvement in Wikis.

The future of enterprise Wikis

Our experience with Wikis at CorVu has been very positive and gives encouraging signs about the future potential of this approach to shared document workspaces. There are multiple offerings that meet enterprise IT standards, and the tools currently available are robust, simple to administer, simple to use, and inexpensive. The CorVu case also shows that enterprise Wikis can be used not only for internal purposes, but also as a means of publishing information to external stakeholders.

By putting minimal central control in place an enterprise can gain significant benefit from this simple technology, including improved knowledge capture, reduced time to build complex knowledge-based web sites, and increased collaboration. Although enterprise Wiki use requires a greater degree of centralized control than public Wikis, this need not impinge on the freedom to contribute that is the hallmark of a Wiki approach. The balance of power is different in an enterprise context, but fear of anarchy should not prohibit Wiki adoption.

Nevertheless, I predict that Wikis will disappear over the next 5 to 10 years. This is not because they will fail but precisely because they will succeed. The best technologies disappear from view because they become so commonplace that nobody notices them. Wiki-style functionality will become embedded within other software – within portals, web design tools, word processors, and content management systems. Our children may not learn the word “Wiki,” but they will be surprised when we tell them that there was a time when you couldn’t just edit a web page to build the content collaboratively.

[1] CorVu is now a subsidiary of Rocket Software, but this case study pre-dates that acquisition.

[2] There is another form of Wiki that I have ignored here – the personal Wiki – but in that case, questions about the balance of control do not arise.

[3] In an editorial comment, Christina Wodtke offered the insight that if identity is essentially disposable, then registration does very little. Perhaps it is only when the link between registration and identity is persistent that protecting one’s reputation becomes an important motivation towards good behavior.

[4] What I call an ‘Enterprise Wiki’ others have called a ‘Corporate Wiki’. I prefer the former because it is not restricted to corporations in the business world, but also applies to government agencies, churches, and large not-for-profit organizations.

Wanted/Needed: UX Design for Collaboration 2.0


No current software supports the full process of collaboration.

That’s a bold claim, and I hope that someone can prove me wrong.

This article is more of a “Working Towards …” position paper than the final word; written in the hope that the ensuing discussion will either bring to light some software of which I’m not aware, or motivate the right people to develop what’s needed.

There is plenty of hype about “Collaboration 2.0” at the moment, but the bugle is being blown too loudly, too soon. Take, for instance, the Enterprise Collaboration Panel at last year’s Office 2.0 Conference. Most of the discussion was really about communication rather than collaboration, with only a hint that beyond forming a social network (“putting the water cooler inside the computer”) there was still a lack of software that actually helped groups of people get the work done. What’s missing from the discussion is any formulation of what the process of collaboration entails; there’s no model from which collaborative applications could arise. If we can figure out a model then we in the UX community should be able to make a significant contribution to it.

I want to start this discussion by proposing a model for collaboration1 that links the various elements of collaboration, comment on the so-called “collaboration software” currently available, and make some tentative suggestions about IA and UX requirements for a real collaboration platform.

A proposed model


Collaboration is a co-ordinated sequence of actions performed by members of a team in order to achieve a shared goal.

The main concepts in this definition are:

  1. Collaboration is action-oriented. People must do something to collaborate. They may exchange ideas, arrange an event, write a report, lay bricks, or design some software. To collaborate is to act together and it is the combined set of actions that constitutes collaboration.
  2. Collaboration is goal-oriented. The reason for working together is to achieve something. There is some purpose behind the actions: to create a web site, to build an office block, to support each other through grief, or some other human goal. The collaborators may have varying motivations, but the collaboration per se focuses on a goal that is shared.
  3. Collaboration involves a team. No-one can collaborate alone. Collaboration requires a group of people working together. The team may be any size, may be geographically co-located or dispersed, membership may be voluntary or imposed, but there is at least some essence of being part of the team.
  4. Collaboration is co-ordinated. That is, the team is working together in some sense. The co-ordination may follow some formal methodology, but can equally well be implicit and informal. There needs to be some sense at least that there are a number of things to be done, some sequences of actions, some allocation of tasks within the group, and some way to combine the contributions of different team members.

Components of collaboration

Any collaboration process involves interactions between six elements, as shown in the following diagram:


Figure 1. The basic components of collaboration


The Artifacts are the tangible objects relating to the collaboration. They include the outcomes of the process – the office block that progressively gets built, the web site that finally gets commissioned – as well as a variety of objects that were used along the way to promote, direct and record collaboration – such as design documents, project schedules, and meeting agendas.


The Team element includes the collaborators and the interactions between them: Team membership and authorization, inter-personal dynamics, personal identity, decision making processes, and communication.


The Tasks element includes the list of things to be done in order to reach the goal, along with all the processes necessary to manage that list. How do tasks get formulated? How is their status recorded and tracked over time? How is the list prioritized and scheduled? How are tasks assigned to team members and how are personal ‘To Do’ lists presented?


The Calendar element reflects the fact that most collaboration extends across time, and consequently requires some degree of time-management: setting deadlines, milestones and task completion dates; scheduling team meetings; and keeping an historical record of events.


Team members perform Actions based on the Tasks assigned to them. The Actions might just involve searching or viewing the Artifacts, but more typically mean modifying the Artifacts in some way. There might also be some meta-Actions such as maintaining the Artifact repository, keeping a log of Actions and commenting on the Artifacts.


Resources enable the Team members to perform the Actions. They include physical equipment, money, external advice, and all manner of software (project management, Wiki, spreadsheet, and content management systems, among others).
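
To make the relationships concrete, here is one possible rendering of the six components as data structures. All names are hypothetical; the sketch only illustrates how an Action ties a Team member, a Task, an Artifact, and a Resource together, with the Collaboration keeping the historical log that the Calendar element calls for.

```python
# A sketch of the six-component model in Figure 1. Names are hypothetical;
# the point is the relationships between components, not any implementation.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Artifact:          # tangible objects: outcomes and working documents
    name: str
    versions: list = field(default_factory=list)

@dataclass
class Task:              # things to be done in order to reach the goal
    description: str
    assignee: Optional[str] = None
    due: Optional[date] = None       # the Calendar element: deadlines
    done: bool = False

@dataclass
class Action:            # what a member actually does to an Artifact
    member: str
    task: Task
    artifact: Artifact
    resource: str        # the software/equipment that enabled the Action

@dataclass
class Collaboration:     # the team and its shared goal tie it all together
    goal: str
    team: list
    tasks: list = field(default_factory=list)
    artifacts: list = field(default_factory=list)
    log: list = field(default_factory=list)   # historical record of Actions

    def perform(self, action: Action) -> None:
        """Record an Action: a member modifies an Artifact to advance a Task."""
        action.artifact.versions.append(action.member)
        self.log.append(action)

faq = Artifact(name="faq-page")
draft = Task(description="draft the FAQ", assignee="ana")
collab = Collaboration(goal="publish the FAQ", team=["ana", "ben"],
                       tasks=[draft], artifacts=[faq])
collab.perform(Action(member="ana", task=draft, artifact=faq, resource="Wiki"))
```

Even this toy version makes the later point visible: current tools each model one of these classes well, but few model the `perform` relationship that links them.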

The current state of collaborative software

There are three primary ways in which humans interact: conversations, transactions, and collaborations. There is plenty of software that enables conversation–email, VOIP, chat, IM, forums–and plenty of software for transactions–eBay, PayPal, internet banking, shopping carts. But what is available for collaboration?

There are many software applications that seek to enable collaboration2. But let’s see what happens when they are evaluated under these three categories:

  • The extent to which the software provides the required functional components (i.e. the boxes in Figure 1)
  • The extent to which the software supports the interaction between those components (i.e. the lines in Figure 1)
  • The usual criteria that apply to all software, such as ease of interaction, security, integration with other applications, and so on.

It is true that there are software packages for most of the individual components of collaboration:

  1. Artifacts: we have software for maintaining and accessing a repository of digital Artifacts (e.g. any number of CMS applications–well-established ones like Documentum or Stellent, more recent ones like Joomla! or any of the 925 others listed at The CMS Matrix), and we can easily construct databases for tracking the status of non-digital Artifacts.
  2. Team: software for maintaining team membership, facilitating group-based decision support, and managing remote meetings (e.g. WebEx) and video conferencing. There is even some possibility that virtual worlds like Second Life may provide an effective environment for team interaction. In Growing Pains: Can Web 2.0 Evolve Into An Enterprise Technology?, Andy Dornan quotes a business manager as saying that “Second Life allows more user engagement than traditional video or phone conferencing.” I know of one company whose preliminary experiments with Second Life found that there was a more relaxed and open interaction via avatars than when a team interacted face-to-face.
  3. Tasks: software for maintaining task lists (e.g. Jira, ScrumWorks); task dependencies and scheduling, Gantt Charts (Microsoft Project, @task); brainstorming; workflow and process modeling; and others.
  4. Calendar: Microsoft Outlook (along with Microsoft Exchange Server so that the calendar is shared), Google Calendar, among other similar software.
  5. Resources and Actions: Many software applications act as Resources for implementing diverse Actions. For instance, Wikis enable editing of shared documents, and there are any number of calculators, electronic dictionaries, encyclopedias, search engines, web design tools – software that team members might use as they do their work.

These ‘point’ solutions may address their targeted functions effectively and even showcase the core ideals of Web 2.0 – user-generated content and taxonomies, broad-based participation, software-as-a-service (SaaS), and rich user interfaces within a web browser. But they can’t just be lumped together and called “Collaboration” (with or without the 2.0 suffix). If you buy into the definition and model described above, it should be clear that true collaboration software must go beyond a set of disconnected point solutions and reach for the broader goal of enabling the whole collaboration process.

A key shortcoming of current so-called “collaborative software” is that there is no compelling metaphor or unifying vocabulary. We have many of the necessary pieces, but they do not interact at either the backend or user interface levels.

Some major contenders

Computer-Supported Co-operative Work (CSCW) and Computer-Supported Collaboration (CSC)

CSCW and CSC both promised such systems, but where are the practical results? While these research areas from previous decades generated many novel and hopeful ideas, there seems to have been an overly academic orientation rather than much focus on software design. Although the theory made useful distinctions, such as the categorization of collaboration by time and space, the software that resulted from these efforts dealt more with communication and co-ordination than with real collaboration.


Google

Google offers an assortment of products that promote collaboration: Google Calendar, Google Apps, and more are promised. I was hoping that their acquisition of JotSpot in 2006 might result in a broader Wiki-based collaboration platform that unified those other offerings. But to date JotSpot has been silent. At this stage, Google’s offering is still an “assortment” rather than a clearly conceived package.


Zoho

The Zoho suite encapsulates virtually all the point solutions mentioned above. It includes the standard office tools (word processing, spreadsheet, presentations, email), remote conferencing, chat, meeting organizer, calendar, project management and a Wiki. All of that and more is delivered via a SaaS model through your web browser. Zoho is way ahead of any competition because of its unified user interface. However, there are still important aspects lacking in Zoho: not primarily additional modules but some key IA and UX characteristics that I outline below.


Microsoft

Perhaps the closest we have today is from Microsoft. Combined, SharePoint, Outlook, and the Office suite provide remarkably effective functionality for team management, scheduling meetings, communication, and shared workspaces. Our organization makes heavy use of this combination, and it pushes teamwork and information sharing a long way ahead of where we once were. On the down-side, the Task management in that environment is quite simplistic, with little support for maintaining a complex task list, prioritizing, or producing comprehensive status reports. The Wiki facility shipped with SharePoint is very primitive.3 Microsoft has implemented a “Collaboration 1.0” approach rather than “Collaboration 2.0”, by which I mean it requires a large degree of centralized control rather than drawing on the power of social networking. Of course, the content of email, announcements, uploaded documents, and so on is completely open to freedom of expression, but the constrained environment and heavy IT infrastructure make the system as a whole feel complex and unwieldy.

Multi-user editing

Perhaps something specific needs to be said about one type of so-called collaborative software – the type that enables multi-user editing of electronic documents. Most of these applications are primarily interested in version control: they maintain a repository of documents and control access to that repository. Authorized people can view documents and a subset of those can edit the documents. The software provides some process for giving each editor a copy of the document and, when the changes have been made, the software merges the changes back into the master copy, while keeping some form of historical change log. Examples are Clearspace and the various text-based code-management tools such as Subversion.

While revision control has an important role, it is a meager offering in terms of the extent of collaboration that it enables. In most cases, such applications assume that individuals work independently of each other. One user edits this part of the document and, as a quite separate task, another user may edit another part of the same document. Two people editing the same part of the document is treated as a problem, and typically the last person to submit changes trumps any previous changes.
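The last-writer-wins behavior described above can be sketched in a few lines of Python. The `Repository` class and its methods are invented purely for illustration; they are not any particular product’s API:

```python
# Minimal sketch of last-writer-wins revision control: each editor works
# on a private copy, and whoever submits last silently overwrites any
# earlier change to the same section. All names here are illustrative.

class Repository:
    def __init__(self, text):
        self.master = text          # the shared master copy
        self.history = [text]       # a simple change log

    def checkout(self):
        return self.master          # each editor gets an independent copy

    def submit(self, edited_copy):
        self.master = edited_copy   # the last submission wins outright
        self.history.append(edited_copy)

repo = Repository("Section 1: draft")
alice = repo.checkout()
bob = repo.checkout()

repo.submit(alice.replace("draft", "Alice's edit"))
repo.submit(bob.replace("draft", "Bob's edit"))  # Alice's edit is lost

print(repo.master)  # Bob's version trumps Alice's
```

Alice’s change is silently lost: nothing in this model even notices the conflict, which is exactly the limitation being described.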

A more significant level of collaboration requires the assumption that multiple people will be working together to edit the document simultaneously. That requires a single shared document rather than separate copies of a master document for each editor. See the Wikipedia article on collaborative real-time editors for a list of such tools.

XMPP (the Extensible Messaging and Presence Protocol) has extensions for both multi-user text editing and multi-user whiteboarding, so there is at least some discussion of how such interaction can be standardized. But tools that use that protocol are few and far between.

The Challenge for IA and UX

There are many human and business activities mediated by computer systems where IA and UX practitioners have provided design guidance to make the interaction more effective. Given that collaboration is fundamentally about interacting effectively to jointly achieve some goal, IA and UX can play an even more substantial role than usual.

So, what principles would you apply to collaboration software? Here are my suggestions:

1.      Build the user interface around a consistent, unifying metaphor.

  • The metaphor should be goal-oriented. That is, a stated goal should take center-stage, with the Team, Tasks, Calendar, Resources, and Artifacts being other players in the drama.
  • The user interface needs to enable and encourage interactions between collaborators. Perhaps the metaphor of a sports team would be effective.
  • A “portal”/dashboard pattern allows simple movement between team management, task list, calendar, documentation management and the like. That approach can collate the answers to core concerns like: What collaboration projects am I part of? What’s the current status of each? What’s on my To Do list?
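The core dashboard concerns listed above amount to simple collations over a shared data model. A minimal sketch, assuming invented in-memory project records and field names:

```python
# Sketch of the dashboard queries suggested above, run over hypothetical
# in-memory project records. The data structures and field names are
# invented purely for illustration.

projects = [
    {"name": "Intranet redesign", "status": "on track",
     "team": ["ana", "raj"], "tasks": {"ana": ["card sort"], "raj": []}},
    {"name": "KB migration", "status": "at risk",
     "team": ["raj", "mei"], "tasks": {"raj": ["tag audit"], "mei": ["import"]}},
]

def my_dashboard(user):
    mine = [p for p in projects if user in p["team"]]
    return {
        # What collaboration projects am I part of?
        "projects": [p["name"] for p in mine],
        # What's the current status of each?
        "statuses": {p["name"]: p["status"] for p in mine},
        # What's on my To Do list?
        "todo": [t for p in mine for t in p["tasks"].get(user, [])],
    }

print(my_dashboard("raj"))
```

The point of the pattern is that one view answers all three questions at once, rather than forcing the user to visit each tool separately.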

2.      Build an open, extensible, modular framework: a collaboration platform rather than a single application.

  • The scope of collaboration is too extensive to expect that a single vendor will be able to provide all the pieces. It is important to allow modules to be gathered from multiple sources and plugged into a shared framework.
  • For instance, Jira might be the first choice for maintaining the Task list, but the framework should allow that to be substituted with alternatives. Similarly, in a basic system there may be a limited reporting feature (e.g. to view the change history for an Artifact), but it should be possible to plug in a more substantial reporting application later on.
  • Most importantly, it will be important to provide a standard API to the Artifact repository, so that any number of applications can view, add and modify Artifacts.
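The standard Artifact-repository API argued for above might look something like the following sketch. This is a hypothetical interface with invented method names, not an existing product:

```python
# Hypothetical sketch of a minimal Artifact-repository API that any number
# of plug-in applications could program against. Class and method names
# are invented for illustration.

import datetime

class ArtifactRepository:
    def __init__(self):
        self._store = {}
        self._next_id = 1

    def add(self, name, content):
        """Store a new Artifact and return its identifier."""
        artifact_id = self._next_id
        self._next_id += 1
        self._store[artifact_id] = {
            "name": name,
            "content": content,
            "modified": datetime.datetime.now(),
        }
        return artifact_id

    def view(self, artifact_id):
        """Return the stored record for an Artifact."""
        return self._store[artifact_id]

    def modify(self, artifact_id, content):
        """Replace an Artifact's content and update its timestamp."""
        record = self._store[artifact_id]
        record["content"] = content
        record["modified"] = datetime.datetime.now()

repo = ArtifactRepository()
doc = repo.add("spec.md", "v1")
repo.modify(doc, "v2")
print(repo.view(doc)["content"])  # the latest content
```

Any plug-in module (a reporting tool, a Wiki, a workflow engine) could then view, add, and modify Artifacts through the same small surface.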

3.      Include at least the following functions “out of the box:”

  • Team management: functions to define and authorize team members, and for individuals to update their personal profiles
  • Task management: functions to add and prioritize tasks, allocate responsibilities to team members, and maintain current status
  • Calendar management: all team members can add events to a single shared calendar
  • Communication: integration with email, IM, and other technologies
  • Meetings: ability to schedule a meeting and invite specific team members, publish an agenda, record notes and decisions from the meeting.

4.      The platform itself should maintain a collaboration history rather than leave that function to the plug-in components. All meetings, decisions, changes to Artifacts, Task status changes and other events are recorded in that history. The history should be displayed as a journal along a time-line as well as being exposed as a life-stream via RSS/Atom.
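The platform-level history described above can be sketched as a single append-only event journal, rendered either as a time-line or as feed entries. All names and event types here are invented for illustration:

```python
# Sketch of a platform-level collaboration history: every event (a meeting,
# a decision, an Artifact change, a Task status change) is appended to one
# shared journal, which can be shown as a time-line or exposed as a feed.

from datetime import datetime, timezone

journal = []

def record(event_type, summary):
    """Append one collaboration event to the shared history."""
    journal.append({
        "type": event_type,
        "summary": summary,
        "when": datetime.now(timezone.utc),
    })

record("meeting", "Kick-off meeting held; decision: ship in Q3")
record("artifact", "spec.md revised")
record("task", "Task 'draft agenda' moved to Done")

def timeline(events):
    # Journal view: one line per event, oldest first.
    return ["{:%Y-%m-%d} [{}] {}".format(e["when"], e["type"], e["summary"])
            for e in events]

def as_feed_entries(events):
    # Minimal Atom-like entries; a real life-stream would use a feed library.
    return ["<entry><title>{}</title></entry>".format(e["summary"])
            for e in events]

for line in timeline(journal):
    print(line)
```

Because the history lives in the platform rather than in each plug-in, a Task status change and an Artifact revision end up in the same journal.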

5.      Connect to other enterprise applications and data stores. A collaboration application will gain significant value if it can interact with existing databases, content management systems, security mechanisms, and if it can exchange data with other applications via some standard like Web Services.

6.      Implement all this as a Rich Internet Application. The complexity of interactions between team members who are potentially geographically scattered indicates the platform needs to be web-based. The complexity of interactions between users and the system indicates that the user interface needs to be very dynamic, with near-real-time synchronization between all concurrent users and a shared Artifact repository.


Maybe all I’ve done here is scratch an itch. But I hope that the itch is contagious.

Collaboration is an essential part of human endeavor, and information technology is at a stage where it should be able to add value to collaboration in more ways than just connecting people in a social network. We have many web-based applications that address parts of the process, but who’s going to create the framework to bring it all together?


1 This model was first presented at BarCamp Sydney in August 2007.

2 Capterra’s Web Collaboration Software Directory lists “174 Solutions”. See also the Wikipedia article on collaborative software.

3 Lawrence Liu comments that the SharePoint Wiki is not intended to be best-of-breed, just something that “is sufficient for a very large percentage of our customer base”. Even that is wishful thinking, but fortunately, the guys at Atlassian have made a SharePoint Connector for Confluence that can easily replace the default SharePoint Wiki.

The Information Architect as Change Agent


Some years ago I designed an expert system to advise cotton farmers about the appropriate choice of pesticides. We spent a lot of effort dealing with some major technical challenges to turn research techniques into a commercial product. Unfortunately, we didn’t spend as much effort dealing with how it would be deployed to the real target audience: farm managers with little experience of computers. It’s not (just) that we didn’t think enough about the software’s user interface, but we didn’t consider how the farmers would need to change their behavior to make effective use of the expertise that the software made available to them. As far as I can tell, this project became one of the 19% of IT projects that were never used.1

Several past articles on Boxes and Arrows have mentioned the idea that an IA is often an agent of change. It’s worth reading those previous articles in full, but here’s a summary:
* In “Succeeding at IA in the enterprise,” James Robertson writes that, ideally, information architects would be part of a team in which someone else is responsible for change management, but that in practice the IA often does not have the support of such a team and needs some proficiency in organizational change.
* In “Enterprise Information Architecture: A Semantic and Organizational Foundation,” Tom Reamy accepts that IAs are often agents of change, but points out that so are many other people, and that the role ought not be seen as essential to the definition of IA.
* In “Change Architecture: Bringing IA to the Business Domain,” Bob Goodman introduces the term “change architecture” and neatly summarizes Kurt Lewin’s three-phase approach of Unfreeze, Transition, and Refreeze.

In this article I argue, with a bit of logic and a bit of experience, that IAs can do their jobs better if they understand organizational change management, even if they don’t need to be change management specialists. I’ll also suggest a variety of concepts and practices that can (hopefully) help IAs in their change agent role, and I promise to throw in something entertaining as well.

Speaking logically…

Premise: Information architects frequently introduce new technology into organizations.
Premise: Technological change inevitably causes behavioral change.
Premise: Organizations are systems that seek equilibrium and resist change.
Conclusion: A necessary condition for the successful implementation of new technology is the successful navigation of organizational change, and the information architect is often required to act as an agent of change within this context.

There’s Often No Choice

The kind of work IAs do leads to changes in the way people behave. We are in the business of providing tools and structures designed to allow people to do something in a different way (hopefully a better way!) than how they did it before. As Goodman wrote in the article cited above, “As IAs, we are not just architecting information; we are using information to architect change.”

Yet for all our concern about accessibility, usability and the user experience, we seem to think very little about the nature of change. How many projects have you worked on where the implementation team gave any consideration to the way people would be affected by the changes the new system would impose on them? If your experience is anything like mine, then the answer would be “bugger all”, to use a raw but expressive Australianism.

A software company I once worked for employed many outstanding people: a team of excellent programmers with a genius leader, hard-working and intelligent people in QA, dedicated and professional consultants, and productive and dependable technical writers. Nevertheless, good IA was always crippled by non-technical, organizational factors: inadequate communications processes, inadequate specifications leading to frequent re-work, the wrong person doing the job (for instance, at one point the Vice President of Marketing was personally doing the software’s graphic design), and scope creep caused by revenue imperatives.

This business context, in which organizational factors contribute more to the success or failure of projects than technical factors, is far from unique. In such a context it is insufficient for the IA to contribute just their technical input to the system design: the effective IA must also play a role as an agent of change. Sometimes this role is within the product development team: educating and channeling the team to “take on board” good IA practices. At other times this role is oriented towards the customer: educating the end users and preparing the soil in which the new system will be planted.

Primer on Change Management

There is a large body of theory and expertise in change management and I don’t mean to suggest that IAs need to master that whole discipline. What’s important is to be sufficiently aware of the dynamics of change that you can work alongside other players to support organizational acceptance of new IT systems. On that basis, here’s a list of some core change management ideas as they relate to the role of an IA.

1. All change is stressful

Every change brings with it some balance of costs and benefits, but even when a change is entirely positive, at least two factors cause stress. Firstly, introducing a new IT system will require the users to learn something: perhaps a new user interface component, a new range of configuration options or a new workflow. It might mean a change in responsibilities that affects the way they relate to co-workers. Because of these effects, a software change often results in a short-term loss of productivity. Secondly, a transition to something new almost invariably necessitates that something is left behind. People undergoing change often experience a grief process, the extent of which depends on the size of the change, the length of time the person has been using the previous system, the level of personal comfort with the previous system, the individual’s social support network and probably a bunch of other psycho-social factors.
The stress of change is exacerbated when the change is involuntary. For most people, a change imposed by external forces is a source of disempowerment, reducing their feeling of control and increasing their stress.

2. Systems resist change

The stress of change is evident just as much in organizations as in individuals.2 An organization is a complex system, and like all complex systems it seeks equilibrium. Organizational behavior tends towards a point where inputs, outputs, and internal processes are all stable. Such systems react to change as a threat and act to restore equilibrium.

In some cases change is resisted and sabotaged so that the organization reverts to the known equilibrium of the past. In other cases, change is accepted and the organization moves on to a new equilibrium. What guides an organization towards the second scenario is effective change management. This is where Lewin’s “Unfreeze, Transition, and Refreeze” approach can provide a useful framework.

In “Why resistance matters,” Rick Maurer notes that “Resistance is not the primary reason why changes fail. It is the reaction to resistance that creates the problems.” The professional IA will understand that resistance to change is inevitable and should use some of the techniques below to pre-empt and respond to that resistance.

3. Communicate

The people who will be affected by an IT change are unlikely to be impressed if the change is just sprung on them without warning. IAs can reduce resistance by ensuring that the nature of the technological change and expectations of behavioral change are communicated ahead of time. Concerns to be addressed include “Why do we need to change?”, “How will the future state differ from the current state?”, “When will the change occur?”, “Will it happen all at once, or gradually?”, “Will I receive the training that I need to make the changeover?” and “How will the change benefit me?” The last question is perhaps the most important, because people who can see the benefits of a change are far more likely to support that change.
Even if you believe the change will benefit the users, they may still have their own reasons for subverting the process. There’s a saying that goes “You can lead a horse to water, but you can’t make it drink”. I once heard a psychologist add to that saying: “But you can put salt in the oats!” He meant that you might be able to make the horse thirsty enough that it will welcome the water.
The next few points suggest ways to “salt the oats”.

4. Use participative design to foster “ownership”

There are many forms of communication, and not all of them will avert resistance to change. If communication is one-way – from the people imposing change down to the users – resistance is virtually guaranteed. And it’s no good faking two-way communication with a couple of open question-and-answer sessions and a suggestion box. What you want is real involvement throughout the process by the people who will be affected by the change.

In a First Monday article, Marty and Twidale repeat a common claim that “the best way to evaluate an interface for usability is to test that interface with representative users”. I don’t disagree, but I believe the greatest benefit of user testing is not the feedback it provides about usability but the opportunity it provides to involve the users in the IA process. User testing is an important means by which the voice of the user can influence design decisions. The more participation there is by the user community, the more that community feels some control over the change.

This is a basic principle of participative design.3 When the people affected by a change feel ownership of the change because they were part of its design and development, they will more readily support the behavioral changes necessary to make the system a success.
Associated with this sense of ownership is the value of a shared vision. If the body of people who will be affected by a change understand the intended future state and are convinced of its benefits, then the energy and excitement within the group can drive the transition forward. This is even more so, of course, if the users created the vision in the first place.

5. Build relationships

The IA who is just a technical resource is far less valuable than one who can listen, build trust, and facilitate group interaction. The effective change agent is adept at forming relationships with business management, other technical contributors, and users. The IA is typically not the head of this team, but can be central to it, playing an empathetic and facilitative role as a conduit between the various stakeholders.
The IA can make a big difference to the outcome of a project by relating to users in a way that acknowledges the value of their contribution. That can be done by taking their opinions seriously (which is what user-centric design is all about), by personally thanking them, by giving public recognition of their ideas and by engendering a collaborative environment that encourages honesty.

6. Find a sponsor and a champion

In the team responsible for implementing a new system, two particular roles are worth special mention: the Sponsor and the Champion.
Some writers confuse or conflate these two roles. In “Think like a consultant”, for instance, George Olsen considers the need for an IA to be an agent of change and suggests enlisting the CEO as a champion, but I think he means a sponsor. A Sponsor (or Patron) is a high-ranking person whose support for the project will guarantee that others will co-operate. The Sponsor just needs to “give the nod” occasionally to vest the IA with authority.
In most cases, however, the IA will not be senior enough to call the shots. Even with the Sponsor’s blessing, the IA will need the support of other significant change agents. In many cases, it is an effective partnership between the IA and the Champion that drives change. A Champion is the one who will push the project forward; ensure that the right people attend meetings, hire the necessary consultants, talk to everyone about how important the project is, inspire the team, push aside the barriers etc. Whereas the IA is often an outsider, the Champion is a respected and trusted leader within the organization.
The Sponsor and the Champion are not always two separate people: one person may fill both roles, or there may be many such people enlisted to help others to change.

7. The objective side of change management

Not all change management is as “soft” or subjective as the previous suggestions might imply. Insights from Enterprise Performance Management approaches, such as the Balanced Scorecard Methodology, can add elements of objectivity to change implementation.
* Document a set of clear goals, for example, “Decrease data entry error rates”. If an IA project doesn’t have goals, how will anyone know whether it succeeded?
* Define a set of measures that indicate the extent to which the goals are being met. In many cases, these need to be tracked over time or at least measured before and after the change. For example, “Number of times data validation errors are displayed” and “Percentage of transactions that are edited after initial submission”.
* Identify project risks–that is, internal or external threats to the stated goals. Categorize the risks according to their estimated likelihood and potential impact and then plan how to either avoid them or mitigate their effects. For the IA, one significant risk will always be lack of user acceptance of the new system.
* Reward behavior that supports the goals. That’s pretty obvious really but often overlooked. What do data entry operators gain from making fewer errors?
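The before-and-after measurement suggested above is straightforward to compute. The figures in this sketch are invented purely to show the calculation:

```python
# Sketch: tracking one of the measures suggested above ("Percentage of
# transactions that are edited after initial submission") before and after
# the change. All numbers are invented for illustration.

before = {"transactions": 2000, "edited_after_submission": 340}
after  = {"transactions": 2100, "edited_after_submission": 168}

def edit_rate(sample):
    # The measure: fraction of transactions needing a later edit.
    return sample["edited_after_submission"] / sample["transactions"]

improvement = edit_rate(before) - edit_rate(after)
print(f"Edit rate fell from {edit_rate(before):.1%} to {edit_rate(after):.1%}")
print(f"Movement toward the goal: {improvement:.1%}")
```

A measure like this, captured before the change and periodically afterwards, turns “did the project succeed?” from a matter of opinion into a number.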

To understand how these suggestions can be quantified and systematized, see one of the many good overviews of the Balanced Scorecard Methodology available online. While these techniques are totally ineffective without the inter-personal dimension, they can add depth to the IA’s toolkit and help to position IA within the larger domain of organizational strategy.

Conclusion: Encourage Authentic Participation

Throughout this article I’ve focused on the aspect of the IA’s role by which they contribute to change management. This may not be their primary role, nor even an essential one, but being an effective agent of change can often mean the difference between a successful IA project and a failure. An agent of change needs to understand organizational dynamics and use their inter-personal skills to facilitate, motivate and empower behavioral change. I believe the most important principle in this process is to encourage the authentic participation of the people affected by a technological change in the design and implementation of that change.
Oh, and here’s the promised entertainment … Change is inevitable, except from vending machines.


Further Reading

Some useful starting points for studying change management further might be:
* Fred Nickols’ “Change Management 101: A Primer” and his lengthy change management bibliography
* The white paper “Organizational Change: Managing the Human Side”
* Dagmar Recklies’ article “What makes a good change agent?”
* Enid Mumford’s online book “Designing Human Systems: The ETHICS Method”


1. See “How to Spot a Failing Project” by Rick Cook for a discussion of IT failure rates.
2. Some consultants, such as Sandy Fekete, push the analogy even further, evaluating “corporate personality” using psychological instruments like the Myers-Briggs Type Indicator.
3. The term “participative design” can be understood intuitively to mean “involving the users in the design process”, but I’m actually referring to the technical use of this term as it is employed by Enid Mumford and others who follow the socio-technical approach to systems design.