The Trouble With Web 2.0

Written by: Alexander Wilms

Today, there is a lot of buzz around a number of topics labeled as "Web 2.0". Consultants jump on the "Web 2.0" bandwagon and IT vendors desperately struggle to add "Web 2.0" features to their products. But the term is still unclear and nobody has a good definition of what "Web 2.0" is and what it is not. The term was originally coined by Tim O'Reilly in an article describing the changes in business processes and models that have been triggered by new and creative combinations of already existing technologies.

Other “social networking” services (like wikis and blogs) have been added to the Web 2.0 “genre”, generating a new end user experience on the Internet. Many large enterprises are now starting “Web 2.0” projects, their IT departments seemingly eager to show their technological abilities. The “new” is strangely attractive and everyone wants to be on board.

But what is the real core of this new phenomenon? According to the conclusions of Tim O'Reilly, the core is not the technology (which has been there for some time) but the emergence of new "patterns" – new or changed business processes and a new concept of the user. These patterns have been put to good use in the World Wide Web. But can they be transferred to a corporate environment, at the enterprise level, as well? Let's have a closer look at them.

  • The Web 2.0 platform breaks down borders between services
  • Web 2.0 utilizes the collective intelligence of its users
  • Web 2.0 cannot control the process of knowledge creation
  • Web 2.0 is constantly linking knowledge, thereby not protecting intellectual property

The web as platform.

Or the Web 2.0 platform breaks down borders between services

One pattern identified by Tim O'Reilly is a change in the business model of software suppliers. In "Web 1.0" times the web was used only as a transport medium, delivering predefined information (e.g. static HTML pages) to client-based software products (e.g. the Netscape browser). In Web 2.0, companies use the web as a platform, using the enhanced technology offering to create web-based applications or services that do not require any installation on the user's machine (e.g. the Google search engine). If the client does not install a program that allows tracking of usage, then charging for usage becomes difficult.

Business models that rely on the sale of personalized or concurrent software licenses will not work anymore. Suppliers will be forced to find another way to charge for their services. To generate income they may use advertisements (like Google) or additional service or content offerings (like "land" purchases in Second Life). Internal IT departments, however, may have to rethink their funding models, especially if they are funded by various company departments for their services. They will not be able to allocate operating costs per license if they want to be Web 2.0.

Let's assume a departmentally funded IT department offers a blog or wiki as a new service. No client installation is necessary, but access can be restricted to the members of those departments who pay for the service. Restricted access, as we will see later, reduces the overall value of a collaboration service, and it is not a fair sharing model either – it opens up all sorts of claims for charge reductions on the grounds that other departments use the service more intensively or in a different manner. A per-capita cost allocation may be a solution, but then the departments might try to maximize service usage, as they don't pay extra for more intensive use. The means of measuring usage and paying for it will have to change in order for the IT department to get paid for its services.
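The funding dilemma above can be illustrated with a small sketch. The numbers, department names and usage metric below are hypothetical, chosen only to show how a per-capita split ignores how intensively each department actually uses the service:

```python
def allocate_per_capita(total_cost, headcounts):
    """Split the operating cost by department headcount."""
    total_heads = sum(headcounts.values())
    return {dept: total_cost * n / total_heads for dept, n in headcounts.items()}

def allocate_per_usage(total_cost, usage):
    """Split the operating cost by a measured usage metric (e.g. wiki edits)."""
    total_usage = sum(usage.values())
    return {dept: total_cost * u / total_usage for dept, u in usage.items()}

# Hypothetical figures: two departments of equal size, very unequal use.
headcounts = {"sales": 50, "engineering": 50}
usage = {"sales": 100, "engineering": 900}

print(allocate_per_capita(10000, headcounts))  # equal shares despite unequal use
print(allocate_per_usage(10000, usage))        # shares track actual use
```

Neither model resolves the conflict by itself: the per-capita split invites overuse, while the per-usage split requires exactly the kind of usage tracking the article says Web 2.0 services avoid.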

But this new platform pattern implies more than just a change in software distribution mechanisms. Not being bound to a client means these new web services typically do not rely on a single source of data, like a classical application with its own database, but combine data from different sources, enabled by open protocols and standards. The large number of mash-ups (data combined and displayed in a new and usually creative manner) that have emerged around the Google Maps service are a good example. With these new service offerings the core competency of the service provider shifts from software development to database management and the ability to combine data that is available on the Web to create meaningful information.
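The mash-up idea reduces to a simple join over independently maintained data sources. The sketch below uses hypothetical in-memory feeds in place of what would normally be two open web APIs (say, a map provider's location feed and a separate review service):

```python
# Feed 1: store locations, as a map provider's API might return them
# (names and coordinates are invented for illustration).
locations = [
    {"store": "Cafe Nord", "lat": 53.55, "lon": 9.99},
    {"store": "Cafe Sued", "lat": 48.14, "lon": 11.58},
]

# Feed 2: user-contributed ratings from an unrelated review service.
ratings = {"Cafe Nord": 4.5, "Cafe Sued": 3.8}

# The mash-up: join both feeds on the store name, producing information
# (rated cafes on a map) that neither source offers on its own.
mashup = [{**loc, "rating": ratings.get(loc["store"])} for loc in locations]

for entry in mashup:
    print(f'{entry["store"]}: {entry["rating"]} stars '
          f'at ({entry["lat"]}, {entry["lon"]})')
```

The value lies entirely in the combination step; the mash-up author wrote almost no software and owns none of the data, which is exactly why the provider's core competency shifts to data management.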

Many internal IT departments face a different situation, where access to data and the means to combine it are restricted by departmental boundaries, technological incompatibilities or data security and protection rules. Person-related HR or sales data are among the most protected in a corporate environment, and their usage is highly restricted for good reason. Data security and protection rules will also be a showstopper for externally hosted services. According to Article 25 of the European Data Protection Directive (95/46/EC), personal data may only be transferred to third countries (all non-EU countries) if the receiving country provides an adequate level of protection. US organizations have to qualify under the "Safe Harbor" program to be able to receive data from a European organization.

But even then, the idea of having your company's most valuable data in an externally hosted application (e.g. the Salesforce CRM), with data transferred over the Internet, will be the ultimate nightmare of every company's IT security officer. The required security precludes the clever combination and reinterpretation of data that we see on Google Maps or elsewhere.

Web 2.0 utilizes the collective intelligence of its users.

Or utilizing and enabling collective intelligence

We have seen that the new Web 2.0 services are powered by the ability to combine data from different sources. But where does the data come from and who creates it? The Web 2.0 pattern of "collective intelligence" shifts the task of creating and maintaining data and content from centralized resources to a dispersed user community. The eBay selling platform would be useless without the activities of the millions of sellers and buyers who create the content and a critical mass of offerings that attract other users into using the service. Wikipedia would be a completely empty shell without its users creating and maintaining the content. In his article Tim O'Reilly states that the value of the service is related to the scale and dynamism of the data it provides. The larger the number of articles in Wikipedia, the more users will use it as a reference; the service gets better the more people use it and contribute to it. Tim O'Reilly calls this the "LOW" principle – "let others do the work". This works perfectly on the Web with its huge user base (according to the statistics, already more than one billion people), but will this pattern also work in a corporate environment?

The corporate user base, even in the largest enterprises, is smaller than the user base on the Web, which limits the number of potential contributors. If the quality of the service depends on the number of users, then companies are at a disadvantage compared to Web-based services. In many interviews end users said that they often prefer to research on the Web instead of using their corporate knowledge resources, because they are confused by the complexity of internal search and because the information they can find on the Web seems to be more recent. However, the reliability of Web-based information that can be edited by everybody is in doubt when it comes to hard legal or medical use. Would you be willing to bet your career on a Wikipedia article?

If the quality of the service is improved by the amount of user commitment, it will only be successful if a critical mass of users can be attracted. While Web services like eBay, Wikipedia or Flickr have been soaring over recent years, driven by user commitment, corporate services often have a contribution pattern like this:

[Figure: web2_knowledgecurve.gif – typical corporate knowledge-contribution curve]

But what attracts users to donate their time and energy to Web services like Wikipedia or Flickr while not doing so to corporate services? Psychology and economics teach us that there is no such thing as altruism – whatever people do creates a personal return of value for them. This personal value is measured by individual criteria. Respect and prestige, personal reputation, political beliefs or desires, and of course monetary incentives influence the decision as to whether a contribution creates this value. People create an article in Wikipedia because they believe that the topic is interesting or important, or because they want to see their name in print, and put pictures on Flickr because they want to share them with others, thereby influencing how they are perceived. The value of contributing must be higher than that of doing something else (e.g. watching a sports game on TV or adding to the corporate knowledge base).

In a corporate environment this might be different, as a different set of values becomes dominant. Company vision, goals or instructions are added on top of the personal value criteria, together with given priorities, changing the decision about what creates value. Of course one big driver for such decisions is the need for an employee to be externally "chargeable", a typical situation in the consultancy business. If a consultant has to choose between generating direct revenue for the company (and therefore for himself or herself) by working for a client, and explaining to a supervisor why he or she chose to improve the internal knowledge base instead, the consultant will opt for the first alternative. As long as contribution is regarded as a less important topic in the corporate hierarchy, the priority of knowledge initiatives will go down over time, simply because people start to value other things as more important.

So contribution is rational for people if there is a reward. But there is another rationale, especially in large organisations. Cooperation within Web communities is mainly driven by non-monetary values, as the contributors don't receive any money for their input. These communities are networks of specialists who rely on each other's knowledge. Investing time to create and contribute knowledge pays off here because there is no direct competition, and other people's knowledge might help you tomorrow. Small consulting companies may be examples of such tight communities. Where there is direct competition, people tend to change their behaviour: they try to acquire and protect their special knowledge. The German language even has a special term for this kind of behaviour: "Herrschaftswissen" – superiority through withheld or uncommunicated information or knowledge.

Many corporations have answered these issues by centralizing knowledge management into special departments. But Web 2.0 requires involving the largest number of users possible, and a centralized approach might not be the right answer anymore. Corporate projects that focus on providing Web 2.0 technologies alone will fail if the companies do not change their reward schemes and knowledge management processes. Such projects will need to rethink incentives for participants and to create the time slots necessary for people to contribute. If a corporation wants the genuine benefit of Web 2.0, it must not underestimate the effort it takes to produce it. Finding a balance between corporate or proprietary knowledge and the free-for-all idea exchange of Web 2.0 services is critical if a corporate IT department wants to benefit from Web 2.0 style services.

Web 2.0 cannot control the process of knowledge creation.

Or the uncontrollable wisdom of the “blogosphere”

Judging by the hype they have created, wikis and web logs (blogs) seem to be an important part of the Web 2.0 patterns. As blogs have spread through the web like wildfire, vendors of content management systems are struggling to add this functionality to their applications. The term "blogosphere" has already emerged as a collective term encompassing all blogs and their interconnections – the perception that blogs exist together as an extended connected community or a complex social system. It is common knowledge that a system composed of several components acts differently than its individual isolated parts. An ant colony develops complex social behaviour and erects structures – a task a single ant could never perform. While a single nerve cell is only able to transfer electrical impulses, the enormous network of synaptic connections in the human forebrain enables conscious thought.

Tim O'Reilly states that the blogosphere creates a structure that resembles the human brain. Expressing an idea in a single blog might not change the world, but if this idea is picked up, discussed and commented on in a large number of blogs, it not only gets the attention of many people – it might get enhanced, developed, refined, challenged and eventually transformed, in an evolutionary manner, into something that might influence the way of the world. As in the anthill or the human brain, this process is not controlled by a single instance – it is driven by the participation and cooperation of many individuals with their individual motives. This absence of a controlling instance allows for creativity, progress of ideas and the expression of individual opinions. The old saying that the whole is more than the sum of its parts is true here. However, it is a self-organized process that follows its own rules – forcing or steering it is currently neither possible nor probably desirable.

The various discussions on the attempts of some nation states to restrict Internet access within their borders show that organizations relying on their members to keep within existing structures will only tolerate an "uncontrollable" environment up to a certain level, and will try to erect restrictions if this environment starts to threaten organizational foundations. Using the discussions within a blogosphere to enable the development of new consulting solutions might be welcome to a corporation, while critical comments on the latest corporate policies and procedures might not. In some corporate areas, where following predefined procedures and processes (e.g. accounting standards) is necessary or where secrets have to be protected, an open exchange among employees might not be allowed at all, to protect the company's interests. Companies who start allowing employees to blog will find that any attempt to control blog or wiki content will be detected and will create strong opposition. Companies have to be aware that open service offerings like blogs and wikis cannot be removed afterwards or restricted in scope without losing employee loyalty or making themselves look like fools in the process. And worse – as the Internet provides a means to get around those blockages the enterprise might find comforting, people might take their internal discussions public.

One other aspect: the strength of a wiki is the presentation of content in a loosely structured, intuitive way, creating a hypertext structure resembling creative human thought processes. As there is no visible hierarchical structure in a wiki, retrieving content mainly relies on search. That is why Wikipedia and Google show a large search field on their main pages. This unstructured organisation of content fails if the content itself is highly structured. A law commentary contains the text of the law structured by paragraphs, with comments or additional materials for each paragraph, sometimes for each sentence. A search might help if materials related to a specific topic are needed, but an experienced lawyer who needs the latest comment on a certain paragraph needs a different navigation – he or she will prefer to select the paragraph directly from a list, browsing through the hierarchy of paragraphs and comments. So wikis are a great tool, but not a cure-all.
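The contrast between the two retrieval styles can be sketched in a few lines. The statute, paragraph numbers and comments below are invented placeholders for a real law commentary:

```python
# A toy law commentary: content with a fixed hierarchical structure.
commentary = {
    "§ 1": {"text": "Scope of the act ...",
            "comments": ["Applies to all contracts concluded after 2006."]},
    "§ 2": {"text": "Definitions ...",
            "comments": ["'Consumer' follows the usual definition.",
                         "Updated comment after the 2007 amendment."]},
}

def browse(paragraph):
    """The lawyer's route: jump straight to a known paragraph
    and read the latest comment on it."""
    return commentary[paragraph]["comments"][-1]

def search(term):
    """The wiki-style route: scan all text and comments for a keyword."""
    return [p for p, e in commentary.items()
            if term.lower() in e["text"].lower()
            or any(term.lower() in c.lower() for c in e["comments"])]

print(browse("§ 2"))        # direct hierarchical navigation
print(search("contracts"))  # keyword search across everything
```

When the reader already knows the paragraph number, `browse` is exact and instant, while `search` returns whatever happens to mention the term – which is the point the paragraph above makes about highly structured content.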

Web 2.0 is constantly linking knowledge, thereby not protecting intellectual property.

Or the perpetual beta, mash-ups and new intellectual property

Another pattern Tim O'Reilly points to: most of the emerging Web 2.0 services will (or should) forever wear a "beta" sticker on their homepages. As the role of the user moves from passive consumer to active participant, the quick and continuous implementation of user-driven enhancements becomes a driver for the service provider, especially in a competitive market environment.

Release cycles tend to shrink, as deployment does not count for Web-based services – users always get the latest version of the service when (re-)loading the site. While development cycles shrink to days or hours and pressure to continuously implement new features rises, quality assurance procedures become less important – bug fixes can be deployed instantly, and as long as the users don't pay for the service they will tolerate errors or will be willing to learn new features by themselves. In a corporate environment this is different. Deployment delays do play a role, especially when permanent online access is not given or network traffic is limited. Quality of service, and at least a certain operational stability, will be more important than speed of delivery, especially if software errors cause financial damage or threaten the company in other ways. And as corporate applications tend to be more complex than Web services, training effort becomes an important factor. So corporate applications cannot be developed and deployed in a "perpetual beta" mode.

Another difference is focus: while most Web 2.0 companies concentrate on a single product or a small suite of similar services, an internal corporate IT service provider will have hundreds, sometimes thousands of services (and applications) to provide. Release management and portfolio management will be needed to ensure maximum value, which means that development resources might not be able to work on one service the whole time.

Another pattern that follows from this development mode is the emergence of so-called mash-ups. As rapid development relies on "lightweight programming models" (another pattern described by Tim O'Reilly), such as scripting languages, the code offers little protection, exposing the software to users who are able to use or even "hijack" the existing interfaces to create their own solutions and mash-ups on the existing platforms, combining data from different sources to create new information and knowledge. In some cases (e.g. Google Maps) this is welcome to the service provider, as it increases the spread of the service and the quality of the data it provides – the users of Google Maps provide Google with an enormous amount of location data, enabling Google to create the most detailed worldwide Yellow Pages ever seen.

However, if access to or re-use of the data should be limited (e.g. to paying customers), the Web 2.0 technology might not be safe enough. In business-to-business relations, and also in a corporate environment, data protection, security and the protection of intellectual property are issues of huge importance, so an open technology platform will be out of scope. On the other hand this limits companies in leveraging the know-how and creativity of their users. Even the internal use of existing Web-based services might cause issues, as the company cannot control the service. What if the service provider decides to change, charge for or even discontinue an external service the company has come to rely on? Replacing the service will create additional effort to adapt the internal applications, which might outweigh the savings created by the free use of the service.

What now?

We have seen that there are differences between the Web and corporate environments. While the Web is a deregulated environment, with millions of users contributing and easy access to data, corporations have to restrict their users for many reasons, thereby limiting the potential of the Web 2.0 patterns. While the Web 2.0 patterns work well on the Web, there might be obstacles and issues when they are implemented in a corporate environment without adaptation. "Might", because every company is an individual organization and there are no easy, "one-size-fits-all" solutions. On the other hand, the Web 2.0 patterns have proven too successful to be ignored.

There is no ready-made solution, only some good advice. The most important and simplest: corporate behaviours and processes are not changed just by implementing a new IT service. Installing a blog in a formal and hierarchical corporate culture will not change the company into an open and informal community. Web 2.0 patterns will only work if the corporate and even national culture is already responsive to more collaboration and participation, or if the implementation is accompanied by other measures to support cultural change. Creating and sustaining users' motivation to contribute – seemingly no problem on the Web with its billion users – will be one of the success factors. So corporate Web 2.0 implementation projects have to broaden their scope and add structural and cultural change or process redesign to their charter. And those "soft" topics tend not to have easy solutions. So when your IT department or an external consultant excitedly tells you how they are adding "Web 2.0" to the corporate computing environment: be prepared for a difficult birthing process and for adjusting your expectations.

I would be happy to hear about your experiences.