We Tried To Warn You, Part 1

Written by: Peter Jones

There are many kinds of failure in large, complex organizations – breakdowns occur at every level of interaction, from interpersonal communication to enterprise finance. Some of these failures are everyday and even helpful, allowing us to safely and iteratively learn and improve communications and practices. Other failures – what I call large-scale – result from accumulated bad decisions, organizational defensiveness, and embedded organizational values that prevent people from confronting these issues in real time as they occur.

So while it may be difficult to acknowledge your own personal responsibility for an everyday screw-up, it’s impossible to get in front of the train of massive organizational failure once it has gained momentum and the whole company is riding it straight over the cliff. There is no accountability for these types of failures, and usually no learning either. Leaders do not often reveal their “integrity moment” for these breakdowns. Similar failures could happen again to the same firm.

I believe we all have a role to play in detecting, anticipating, and confronting the decisions that lead to breakdowns that threaten the organization’s very existence. In fact, the user experience function works closer to the real world of the customer than any other organizational role. We have a unique responsibility to detect and assess the potential for product and strategic failure. We must try to stop the train, even if we are many steps removed from the larger decision making process at the root of these failures.

h2. Organizations as Wicked Problems

Consider the following scenario: A $2B computer systems provider spends most of a decade developing its next-generation platform and product, spending untold amounts on labor, licenses, contracting, testing, sales and marketing, and facilities. Due to the extreme complexity of the application (user) domain, the project takes much longer than planned. Three technology waves come and go, each accommodated in the development strategy: proprietary client-server, Windows NT application, Internet + rich client.

By the time Web Services technologies had matured, the product was finally released as a server-based, rich-client application. However, the application was designed too rigidly for the flexible configurations the customer base required, and the platform’s performance compared poorly to the existing product it was designed to replace. Customers failed to adopt the product, and most of a decade’s worth of investment was written off.

The company recovered by facelifting its existing flagship product to embrace contemporary user interface design standards, but never developed a replacement product. A similar situation occurred with the CAD systems house SDRC, whose story ended as part two of an EDS fire-sale acquisition of SDRC and Metaphase. These failures may be more common than we care to admit.

From a business and design perspective, several questions come to mind:
* What were the triggering mistakes that led to the failure?
* At what point in such a project could anyone in the organization have predicted an adoption failure?
* What did designers do that contributed to the problem? What could IA/designers have done instead?
* Were IA/designers able to detect the problems that led to failure? Were they able to effectively project this and make a case based on foreseen risks?
* If people act rationally and make apparently sound decisions, where did failures actually happen?

This situation was not an application design failure; it was a total organizational failure. In fact, it’s a fairly common type of failure, and a preventable one. Obviously the market outcome was not the actual failure point, but it is the product’s judgment day: the organization must recognize failure when its goals utterly fail with customers. So if this is the case, where did the failures occur?

It may be impossible to see whether and where failures will occur, for many reasons. People are generally bad at predicting the systemic outcomes of situational actions – product managers cannot see how an interface design issue could lead to market failure. People are also very bad at predicting improbable events, and failure especially, due to the organizational bias against recognizing failures.

Organizational actors are unwilling to acknowledge small failures when they have occurred, let alone large failures. Business participants have unreasonably optimistic expectations for market performance, clouding their willingness to deal with emergent risks. We generally have strong biases toward crediting our skills when things go well and blaming external contingencies when things go badly. As Taleb (2007)1 says in The Black Swan:

bq. “We humans are the victims of an asymmetry in the perception of random events. We attribute our success to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living. Ninety-four percent of Swedes believe that their driving skills put them in the top 50 percent of Swedish drivers; 84 percent of Frenchmen feel that their lovemaking abilities put them in the top half of French lovers.” (p. 152).

Organizations are complex, self-organizing, socio-technical systems. Furthermore, they can be considered “wicked problems,” as defined by Rittel and Webber (1973)2. Wicked problems require design thinking; they can be designed-to, but not necessarily designed. They cannot be “solved,” at least not in the analytical approaches of so-called rational decision makers. Rittel and Webber identify 10 characteristics of a wicked problem, most of which apply to large organizations as they exist, without even identifying an initial problem to be considered:

# There is no definite formulation of a wicked problem.
# Wicked problems have no stopping rules (you don’t know when you’re done).
# Solutions to wicked problems are not true-or-false, but better or worse.
# There is no immediate and no ultimate test of a solution to a wicked problem.
# Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
# Wicked problems do not have an enumerable set of potential solutions.
# Every wicked problem is essentially unique.
# Every wicked problem can be considered to be a symptom of another [wicked] problem.
# The causes of a wicked problem can be explained in numerous ways.
# The planner has no right to be wrong.

These are attributes of the well-functioning organization, and apply as well to one pitched in the chaos of product or planning failure. The wicked problem frame also helps explain why we cannot trace a series of decisions to the outcomes of failure – there are too many alternative options or explanations within such a complex field. Considering failure as a wicked problem may offer a way out of the mess (as a design problem). But there will be no way to trace back or even learn from the originating events that the organization might have caught early enough to prevent the massive failure chain.

So we should view failure as an organizational dynamic, not as an event. By the time the signal failure event occurs (product adoption failure in the intended market), the organizational failure is ancient history. Given the inherent complexity of large organizations, the dynamics of markets and timing products to market needs, and the interactions of hundreds of people in large projects, where do we start to look for the first cracks of large-scale failure?

h2. Types of Organizational Failure

How do we even know when an organization fails? What are the differences between a major product failure (involving function or adoption) and a business failure that threatens the organization?

An organizational-level failure is a recognizable event, one which typically follows a series of antecedent events or decisions that led to the large-scale breakdown. My working definition:

“When significant initiatives critical to business strategy fail to meet their highest-priority stated goals.”

When the breakdown affects everyone in the organization, we might say the organization has failed as a whole, even if only a small number of actors are to blame. When this happens with small companies, such as the start-up I worked with early in my career as a human factors engineer, the source and the impact are obvious.

Our company of 10 people grew to nearly 20 in a month to scale up for a large IBM contract. All resources were brought into alignment to serve this contract, but after about 6 months, IBM cut the contract – a manager senior to our project lead hired a truck and carted away all our work product and computers, leaving us literally sitting at empty desks. We discovered that IBM had 3 internal projects working on the same product, and they selected the internal team that had finished first.

That team performed quickly, but their poor quality led to the product’s miserable failure in the marketplace. IBM suffered a major product failure, but not organizational failure. In Dayton, meanwhile, all of us except the company principals were out of work, and their firm folded within a year.

Small organizations have little resilience to protect them when mistakes happen. The demise of our start-up was caused by a direct external decision, and no amount of risk management planning would have landed us softly.

I also consulted with a rapidly growing technology company in California (Invisible Worlds) which landed hard in late 2000, along with many other tech firms and start-ups. Risk planning, or its equivalent, kept the product alive – but this start-up, along with firms large and small, disappeared during the dot-bomb year.

To what extent were internal dynamics to blame for these organizational failures? In retrospect, many of the dot-bombs had terrible business plans, no sustainable business models, and even less organic demand for their services. Most would have failed in a normal business climate. They floated up with the rise of investor sentiment, and crashed to reality as a class of enterprises, all of them able to save face by blaming external forces for organizational failure.

h2. Organizational Architecture and Failure Points

Recognizing this is a journal for designers, I’d like to extend our architectural model to include organizational structures and dynamics. Organizational architecture may have been first conceived in R. Howard’s 1992 HBR article “The CEO as organizational architect.” (The phrase has seen some academic treatment, but is not found in organizational science literature or MBA courses to a great extent.)

Organizations are “chaordic” as Dee Hock termed it, teetering between chaotic movement and ordered structures, never staying put long enough to have an enduring architectural mapping. However, structural metaphors are useful for planning, and good planning keeps organizations from failing. So let’s consider the term organizational architecture metaphorical, but valuable – giving us a consistent way of teasing apart the different components of a large organization related to decision, action, and role definition in large project teams.

Let’s start with organizational architecture and consider its relationships to information architecture. The continuity of control and information exchange between the macro (enterprise) and micro (product and information) architectures can be observed in intra-organizational communications. We could honestly state that all such failures originate as failures in communications. Organizational structure and processes are major components, but the idea of “an architecture,” as we should well know from IA, is not merely structural. An architectural approach to organizational design involves at least:

  • *Structures*: Enterprise, organizational, departmental, networks
  • *Business processes*: Product fulfillment, Product development, Customer service
  • *Products*: Structures and processes associated with products sold to markets
  • *Practices*: User Experience, Project management, Software design
  • *People and roles*: Titles, positions, assigned and informal roles
  • *Finance*: Accounting and financial rules that embed priorities and values
  • *Communication rules*: Explicit and implicit rules of communication and coordination
  • *Styles of interaction*: How work gets done, how people work together, formal behaviors
  • *Values*: Explicit and tacit values, priorities in decision making

Since we would need a book to describe the function and relationships within and between these dimensions, let’s see if the whole view suffices.

Each of these components is a significant function in the organizational mix, and all rely on communication to maintain their role and position in the internal architecture. While we may find a single communication point (a leader) in structures and people, most organizational functions are largely self-organizing, continuously reified through self-managing communication. They will not have a single failure point identifiable in a communication chain, because nearly all organizational conversations are redundant and will be propagated by other voices and in other formats.

Really bad decisions are caught in their early stages of communication, and become less bad through mediation by other players. So organizations persist largely because they have lots of backup. In the process of backup, we also see a lot of cover-up, a significant amount of consensus denial around the biggest failures. The stories people want to hear get repeated. You can see why everyday failures are easy to catch compared to royal breakdowns.

So are we even capable of discerning when a large-scale failure of the organizational system is imminent? Organizational failure is not a popular meme; employees can handle a project failure, but to acknowledge that the firm broke down – as a system – is another matter.

According to Chris Argyris (1992), organizational defensive routines are “any routine policies or actions that are intended to circumvent the experience of embarrassment or threat by bypassing the situations that may trigger these responses. Organizational defensive routines make it unlikely that the organization will address the factors that caused the embarrassment or threat in the first place” (p. 164). Because of these organizational defenses, most managers will place the blame for such failures on individuals rather than on poor decisions or other root causes, and will deflect critique of general management or decision-making processes.

Figure 1 shows a pertinent view of the case organization, simplifying the architecture (to People, Process, Product, and Project) so that differences in structure, process, and timing can be drawn.

Projects are not considered part of architecture, but they reveal time dynamics and mobilize all the constituents of architecture. Projects are also where failures originate.

The timeline labeled “Feedback cycle” shows how smaller projects cycled user and market feedback quickly enough to impact product decisions and design, usually before initial release. Due to the significant scale, major rollout, and long sales cycle of the Retail Store Management product, the market feedback (sales) took most of a year to reach executives. By then, the trail’s gone cold.


Figure 1. Failure case study organization – Products and project timeframes.

Over the project lifespan of Retail Store Management, the organization:

  • Planned a “revolutionary” not evolutionary product
  • Spun off and even sequestered the development team – to “innovate” undisturbed by the pedestrian projects of the going concern
  • Spent years developing “best practices” for technology, development, and the retail practices embodied in the product
  • Kept the project a relative secret from the rest of the company until close to initial release
  • Evolved technology significantly over time as paradigms changed, starting as an NT client-server application, then distributed database, finally a Web-enabled rich client interface.

Large-scale failures can occur when the work domain and potential user acceptance (motivations and constraints) are not well understood. When a new product “cannot fail,” organizations prohibit acknowledging even minor failures, and the failure to learn from small mistakes accumulates. This can lead to one very big failure at the product or organizational level.

We can see this kind of situation (as shown in Figure 1) generates many opportunities for communications to fail, leading to decisions based on biased information, and so on. From an abstract perspective, modeling the inter-organizational interactions as “boxes and arrows,” we may find it a simple exercise to “fix” these problems.

We can recommend (in this organization) actions such as educating project managers about UX, creating marketing-friendly usability sessions to enlist support from internal competitors, making well-timed pitches to senior management with line management support, et cetera.

But in reality, it usually does not work out this way. From a macro perspective, when large projects that “cannot fail” are managed aggressively in large organizations, the user experience function is typically subordinated to project management, product management, and development. User experience – whether expressing its user-centered design or usability roles – can be perceived as introducing new variables to a set of baselined requirements, regardless of lifecycle model (waterfall, incremental, or even Agile).

To make it worse (from the viewpoint of product or requirements management), we promote requirements changes from the high-authority position conferred by the reliance on user data. Under the organizational pressures of executing a top-down managed product strategy, leadership often closes ranks around the objectives. Complete alignment to strategy is expected across the entire team. Late-arriving user experience “findings” that could conflict with internal strategy will be treated as threatening, not helpful.

With such large, cross-departmental projects, signs of warning drawn from user data can be simply disregarded, as not fitting the current organizational frame. And if user studies are performed, significant conflicts with strategy can be discounted as the analyst’s interpretation.

There are battles we sometimes cannot win. In such plights, user experience professionals must draw on inner resources of experience, intuition, and common sense and develop alternatives to standard methods and processes. The quality of interpersonal communications may make more of a difference than any user data.

In Part II, we will explore the factors of user experience role, the timing dynamics of large projects, and several alternatives to the framing of UX roles and organizations today.

The Limitations of Server Log Files for Usability Analysis

Written by: Karl Groves

Introduction

One of the challenges faced most often by those of us in the field of usability is finding good data about user behavior quickly, accurately, and, in most cases, cheaply. In an environment where many stakeholders question the return on investment in usability, some in the industry have developed interesting ideas aimed at gathering user data. One such idea is the analysis of server log files to gather information about user behavior. On the surface, it is easy to understand the gravitation towards server logs: They’re supposedly a data source which portrays what people are doing on a site. Server logs supposedly show what people click on, which pages they view, and how they get from page to page.

Unfortunately, practitioners who espouse such methods seem to lack important technical knowledge regarding the nature of the web, the Hypertext Transfer Protocol (HTTP) and the process of caching within networks, proxies, ISPs, and browsers. These technical details greatly limit the types and quality of information that can be retrieved from server logs.

In addition to the technical limitations of server log file analysis, without information regarding exactly what the user expects to find and why he makes the choices he makes, there’s no way for us to know whether he was successful in his quest and whether that quest was satisfying. Ultimately that is the usability information we seek.

Server log files are inappropriate for gathering usability data. They are meant to provide server administrators with data about the behavior of the server, not the behavior of the user. The log file is a flat file containing technical information about requests for files on the server. Log file analysis tools merely assemble them in a conjecture-based format aimed at providing insight into user behavior. In the commentary below, I will explain why the nature of the web, the HTTP Protocol, the browser, and human behavior make it impossible to derive meaningful usability data from server logs.

First, some technical background information is needed.

What is a Server Log File?

Server traffic logs are files generated by the server in order to provide information about requests to the server for data. When a computer connects to a site, the computer, browser, and network deliver some data to the site’s server, which creates a record that a file was requested. Here’s what an entry in a log file looks like:

86.42.132.114 - - [31/Oct/2005:18:15:16 -0500] "GET /styles/style.css HTTP/1.1" 200 5194 "http://www.example.com/links/links.php?cat=css" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.12) Gecko/20050915 Firefox/1.0.7"

The format above is from an Apache log. Depending on the type of server the site is on, the log entries may look different. Thousands (or even hundreds of thousands) of entries such as the one above are placed into a plain text file, called the server log.

The above log entry includes the following information:

  1. IP address of the requesting computer: 86.42.132.114. This is not the user's IP address, but rather the address of the host machine they've connected to.
  2. Date and time of the request: [31/Oct/2005:18:15:16 -0500]. That's October 31, 2005 at 6:15:16pm and the time zone is 5 hours behind GMT, which is Eastern Standard Time in the USA (this is because the server is in that time zone, not the user.)
  3. The full HTTP request: "GET /styles/style.css HTTP/1.1"
    1. Request method: GET
    2. Requested file: /styles/style.css
    3. HTTP Protocol version: HTTP/1.1
  4. HTTP Response Code: 200. This particular code means the request was ok.
  5. Response size: 5194 bytes. This is the size of the file that was returned.
  6. Referring document: http://www.example.com/links/links.php?cat=css. The links.php file is referring to its embedded style sheet.
  7. User-Agent String (Browser & Operating system information):"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.12) Gecko/20050915 Firefox/1.0.7". The user’s computer is using Firefox browser on Windows XP and the language is set to English – US.

Of all of the information in the log entry, only the time and date, the HTTP request, and the response information should be regarded as accurate. The IP address, referrer, and user-agent string should be regarded as unreliable, as they can be faked in some way by the user. For example, Netscape 8 gives the user the option during setup to publicly identify the browser as Internet Explorer, and many other browsers offer this option in their “Options” or “Preferences” menus as well.
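
To make the anatomy of an entry concrete, here is a minimal sketch in Python that splits the example entry above into the fields just described. The regular expression and field names are illustrative only; they assume the Apache "combined" format shown above and are not taken from any particular analytics package.

import re

# A pattern for the Apache "combined" log format, matching the example entry above.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

entry = ('86.42.132.114 - - [31/Oct/2005:18:15:16 -0500] '
         '"GET /styles/style.css HTTP/1.1" 200 5194 '
         '"http://www.example.com/links/links.php?cat=css" '
         '"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.12) '
         'Gecko/20050915 Firefox/1.0.7"')

fields = LOG_PATTERN.match(entry).groupdict()
# Only the time, the request line, the status, and the size describe what the server
# actually did; ip, referrer, and agent are supplied by the client and can be faked.
print(fields['time'], fields['method'], fields['path'], fields['status'], fields['size'])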

Analysis Tools

Many organizations use an analysis program to parse server log files so that they’re much easier to understand. Imagine trying to cull anything of value from a huge text file of entries like the one above when hundreds of thousands (or even millions) of entries are present in the log file! Essentially, the analysis tool treats the log file as a flat-file database, processes it, and generates the “statistics” that are discussed throughout the rest of this commentary. In other words, “Web Analytics” software does little more than provide its own interpretation of data contained in the log file which, as stated above, could be at least partially faked.

[Note: I realize that some analytics software gathers data by means other than parsing log files and may in fact contain features meant to overcome one or more of the criticisms I outline throughout this article. I do not discuss such programs, primarily because there is little consistency between them and ultimately they are just as poor at gathering real usability data as analytics tools which parse log files.]

How the Web Works

In order to understand how Web server log files work and why they are considered inappropriate for measuring usability, it is important to get a brief background on how the web itself works.

The HTTP Protocol

HTTP, Hypertext Transfer Protocol, is “A protocol used to request and transmit files, especially webpages and webpage components, over the Internet or other computer network.” [1] HTTP dictates the manner in which computers, servers, and browsers transfer data over the Web. HTTP is known as a “stateless” protocol, meaning that an HTTP connection to a site is not continuous. The steps involved in requesting a Web page are:

  1. The user requests a page by following a link or typing an address in the browser’s address bar.
  2. The browser requests a page.
  3. The server responds by delivering the page, which is displayed in the user's browser window.
  4. The connection between the user’s computer and the Web server is severed. At this point, the transaction between the user and the website is considered "done" as far as the server is concerned.

Every request for a new page on the server initiates these steps. In fact, this process occurs for every element requested on that page. Therefore, if there is a page with 10 images and a style sheet, the above routine will repeat 12 times (1 page + 1 style sheet + 10 images = 12 requests).
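
To ground those four steps, here is a minimal sketch using Python's standard http.client module: one connection, one GET, one response, and then the connection is closed. A page with ten images and a style sheet would repeat an exchange like this for each requested file (www.example.com stands in for any site).

import http.client

# One complete request/response exchange for a single resource.
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/")              # step 2: the browser requests a page
response = conn.getresponse()         # step 3: the server responds
print(response.status, response.reason)
body = response.read()                # the bytes that make up the page
conn.close()                          # step 4: the connection is severed;
                                      # the server keeps no memory of this visitor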

The reasoning behind all this is important to keep in mind: in the early days of the Web, computers were slow, and so were servers and networks. This slowness was compensated for in two ways: by opening and closing connections for each request as described above (rather than holding one big stream open), and by “caching” the data.

Caching

Caching is defined as “local storage of remote data designed to reduce network transfers and therefore increase speed of download. The cache is a ‘storeroom’ where the data is kept.” [2] To put it more simply: Computers store data in a cache on the user’s computer so that they can get to that data again without downloading it each time it’s needed. For the Web this is important because, as mentioned above, the historical slowness of the Web connections and computers was a severe limitation on the overall usability of the Web.

Caching, even with the speed of today’s Web connections and the power of modern personal computers, enhances performance. For example, in a very simple site where most pages have only two images and a style sheet, each page generates only four requests – again, 1 page + 1 style sheet + 2 images = 4 requests. Without caching, every page viewed on the site would require the server to send four files to the computer when only one is really necessary: the new document. This not only goes for the individual elements embedded into the page, but also the page itself. If a user visits a page, then another, then wants to return to that first page, the computer should not have to download the same page again when it has already been downloaded once. Even with the increasing number of broadband users, if everything had to be downloaded repeatedly for every page viewed, the computer, the server, and the Internet would get bogged down transferring and receiving this enormous volume of redundant data.

Who caches?

Everyone caches. Every browser of every user throughout the world has a cache generated while browsing. Default settings for the cache are set rather high by browser manufacturers in an attempt to increase the performance (and therefore usability) for the user. The screen capture below shows the default cache size for Internet Explorer 7 as 124 Megabytes:

Default cache settings

As a browser’s cache fills up, more and more pages which the user views frequently (such as on their favorite sites) will be retrieved right from cache, rather than arriving via new requests to the server.

In addition, almost all corporate and institutional networks have caches (most users would experience a cache like this at their job), as do almost all Internet Service Providers (ISP), such as AOL or MSN. The larger the network or ISP, the larger the cache.

A quote from Stephen Turner, developer of Analog Stats, explains it this way: “This means that if I try to look at one of your pages and anyone else from the same ISP has looked at that page, the cache will have saved it, and will give it [the page] out to me without ever telling you [the source of the web page] about it. (This applies whatever my browser settings). So hundreds of people could read your pages, even though you’d only sent it out once.”

Why cache?

Caching saves on bandwidth and hardware needs, and improves overall usability for everyone on the web. It also means HTTP is really more complicated than the four-step description above. The workflow now becomes:

Caching workflow
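
One part of that workflow can be sketched directly: when a browser or proxy already holds a copy of a page, it can revalidate it with a conditional request instead of downloading it again. The example below uses Python's http.client and an illustrative date; a 304 "Not Modified" response means the cached copy is served and no page body is transferred.

import http.client

conn = http.client.HTTPConnection("www.example.com")
# Ask for the page only if it has changed since the date of our cached copy.
conn.request("GET", "/", headers={
    "If-Modified-Since": "Mon, 31 Oct 2005 18:15:16 GMT"
})
response = conn.getresponse()
if response.status == 304:   # Not Modified
    print("Serve the stored copy; no page body crosses the network")
else:
    print("Page has changed; downloaded again with status", response.status)
conn.close()

And when the cached copy is still considered fresh, the browser or proxy skips even this conditional request, which is exactly the case where the origin server's log records nothing at all.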

Caching Wreaks Havoc on Statistics

From the image displayed above, we begin to realize the problems with looking at server logs. There may never be a connection between the user’s computer and the site’s server in order to fulfill the user’s request to see the page.

Caching is definitely a good thing. Its importance for the overall usability of the Web cannot be overstated. But for the purpose of this article, understanding log file analysis, caching makes all site traffic stats completely inaccurate. To further invalidate the stats, the most popular pages of a site are likely to be cached most often, creating even more problems in getting accurate log data.

Caching’s Effect on Margin-of-Error

It’s reasonable to take the position that we don’t need 100% accurate logs to get reliable data; we just need an acceptable margin of error. If the stats in question come from a small sample (read: a less popular site with little traffic), then the margin of error in those statistics will be very high. To overcome the problem of a large margin of error, a larger sample size (a more popular site) might seem appropriate. Unfortunately, the more popular the site, the more likely it is that the site will be cached by networks and proxies, and the more likely it is that AOL users will be swapping Internet Protocol (IP) addresses with each other (more on that later). Sites with large numbers of visitors may never achieve an acceptable margin of error because so much caching occurs that the data are simply not accurate.

Usability Data Cannot Be Found In Log Files

It is helpful to remember that the ONLY thing log files can report is requests to the server. If the user’s network or ISP has a cached copy of your page, then there is no request. If there is no request, then no log entry is created. If no log entry is created, the user’s desire for that page is not accounted for in the stats.

Usability Data: Who Is Coming To Your Site?

As demonstrated in the beginning of this article, log entries do not include any demographic data about a user. The log entry cannot tell the user’s age, sex, race, education, experience with computers and the Internet, experience with the organization’s product or service, or any of the other information usability practitioners typically include when generating a “persona.” The closest thing to “who” information is the logged IP address.

In the log entry example provided at the beginning of this article, 86.42.132.114 is an address owned by Eircom IP Networks Group of Dublin, Ireland. But even this does not equate to definitive information about the identity of the user. Eircom’s own web site says they offer services to businesses and individuals. So this could be a home surfer, a secretary, or the president of a company – all three of whom could have very different demographic qualities and reasons for coming to the site. The only possible assumption one could make is that the person came from somewhere in Ireland. But even that may not be true. The IP address recorded in the log file is the IP address of the host machine that the user was connected to when they accessed the site. The user could actually be very far away from that host machine, or could even be using a program which “anonymizes” him, hiding his location and making him seem like he’s in an entirely different country. For usability purposes, there is simply no way of knowing anything accurate about “who” is visiting the site – not even where he is located.

Usability Data: What Information is Requested?

Log files cannot tell how many actual times a page from the site was viewed because caching causes some "traffic" to not get counted. Even if we pretend that the page counts are accurate, the bigger issue is that "what information they're requesting" provides no data of value for usability.

  • Log files don't indicate what information users want.
  • Log files don't indicate what information users expected to find.
  • Log files don't indicate whether that request fulfilled the user's needs.
  • Log files don't indicate whether that information was easy to find.
  • Log files don't indicate whether that information was easy to use once users found it.

At most, a measure of page requests can indicate that users thought they could get what they wanted there, but we still don't know exactly what 'it' was they wanted. Without information about what they wanted, there's no way of knowing whether their page view satisfactorily gave the users what they came for.

Furthermore, some page view counts could be artificially inflated by users going from page to page looking for something they cannot find. Page views could also be inflated as users drill down into a site’s architecture seeking what they really want. Whether the request ends in success or failure, pages in the site can easily show large numbers of requests simply because they are on the way or in the way.

Let’s imagine that the user of a newspaper site wants a medicine-related story published June 20, 2004, and that the page is located at: Archives -> 2004 -> June -> 20th -> Medicine -> Story [goal]. Further, let’s imagine the user follows the path above until the June 20th edition has been found. In doing so, the user racks up five page views on the way to the story in question.

If a large number of users who peruse the archives of the site use a similar scheme, in the overall traffic measure they have the effect of “artificially” increasing hits to all of the pages at the top levels as user after user branches off of them. Ultimately that data does not provide usability information. The measure of a site’s usability is whether the user succeeded in doing that which he set out to do and whether he felt it was an easy thing to do. Page requests have no direct bearing on any true aspect of usability and indicate neither success nor satisfaction.

Usability Data: Can Log Files Indicate How People Navigate on a Site?

Another misunderstanding is found in claims that log files can describe “how people navigate” on a site. As before, the problem with this claim lies in the issue of caching. Even in a best-case scenario the only requests that would be counted would be requests for pages the user had not seen already. Caching means that anytime the user goes back to a page they’ve already seen, that page view is not counted. So, with that understanding, how can log files tell us “how people navigate”?

Tracking a participant's usage of the Sears.com website's tool section, we saw this example played out in real time. The pages visited were as follows:

  1. Home
  2. Tools
  3. Air Compressors
  4. Automotive Air Tools
  5. Drills
  6. Craftsman 1/2 in. Professional One-Touch Drill
  7. "Back" to Drills – comes from cache
  8. Chicago Pneumatic 3/8 in. Angle Drill
  9. "Back" to Drills – comes from cache
  10. Craftsman 1/2 in Heavy Duty Reversible Drill
  11. "Back" to Drills – comes from cache
  12. "Back" to Automotive Air Tools – comes from cache
  13. Grinders
  14. Craftsman 7 in. Angle Grinder
  15. "Back" to Grinders – comes from cache
  16. Craftsman 1/4 in Die Grinder kit
  17. "Back" to Grinders – comes from cache
  18. Craftsman Die Grinder
  19. "Back" to Grinders – comes from cache
  20. "Back" to Automotive Air Tools – comes from cache
  21. Sanders
  22. Craftsman Dual Action Sander
  23. "Back" to Sanders – comes from cache
  24. Chicago Pneumatic Dual Action Sander
  25. "Back" to Sanders – comes from cache
  26. Craftsman High Speed Rotary Sander

As the page list above demonstrates, as a user browses through the site, the number of actual requests may very well diminish, because more and more pages are cached as the user navigates. Thus, as users browse and eventually return to pages they’ve already seen, their requests for those pages are not counted in the server’s log file. Using log files to track a user’s visit as a way to tell how people navigate would leave numerous gaps between the pages the log files indicate were visited and the pages the user actually did visit. So log files are not a way to identify trends in how visitors navigate.
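
A small sketch makes the gap measurable. Treating every "Back" step above as served from the browser cache, the following Python snippet (with product names abbreviated) reduces the visit to just the requests that would actually reach the server:

# Each step of the visit above as (page, served_from_cache).
visit = [
    ("Home", False), ("Tools", False), ("Air Compressors", False),
    ("Automotive Air Tools", False), ("Drills", False), ("One-Touch Drill", False),
    ("Drills", True), ("Angle Drill", False), ("Drills", True),
    ("Reversible Drill", False), ("Drills", True), ("Automotive Air Tools", True),
    ("Grinders", False), ("Angle Grinder", False), ("Grinders", True),
    ("Die Grinder kit", False), ("Grinders", True), ("Die Grinder", False),
    ("Grinders", True), ("Automotive Air Tools", True),
    ("Sanders", False), ("Dual Action Sander", False), ("Sanders", True),
    ("Pneumatic Sander", False), ("Sanders", True), ("Rotary Sander", False),
]

logged = [page for page, cached in visit if not cached]
print(len(visit), "pages viewed,", len(logged), "requests logged")
# 26 pages viewed, 16 requests logged: ten steps of the real path never reach the server.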

Moreover, what usability data could a list of pages visited provide? Even if there were no caching anywhere, all we’d have is data about a series of requests that a user made during a visit to the site. To repeat, we are left with the following stark realities:

  • The user's path does not tell us what the user wanted.
  • The user's path does not tell us what information they expected to find.
  • The user's path does not tell us whether that request fulfilled their needs.
  • The user's path does not tell us whether that information was easy to find.
  • The user's path does not delineate distractions that interfered with an initial purpose.
  • The user's path does not tell us whether that information was easy to use once it was found.

Usability Data: What Element(s) Did the User Click?

Some sources state that you can determine which element (link, icon, etc.) a visitor clicked on a page in order to get to the next page. There is simply no component of web sites, web servers, or server log analysis programs which can tell exactly which link, icon, or button the user clicked to navigate.

Instead, analytics programs "guess" at this information by interpreting the referring pages for a request. For example, if a user follows a link to the site's "News" page, and the referring document was the "Home" page, then the analytics program will locate the "News" link on the "Home" page and say that's where the user clicked. This data is rendered completely unreliable in any instance where there are two links on the page that go to the same destination.

There are those who've proposed methods using DOM Scripting or Ajax to gather this information. Unfortunately, such proposed methods typically involve means which would cause duplicate requests to the server. Such methods – while interesting – are inappropriate for a production environment as these duplicate requests are likely to cause performance problems for the site. These methods would be excellent for a very brief A/B test, but long-term data gathering in such a manner should be avoided.

Usability Data: How Long Did the User View the Page(s)?

Caching again makes it impossible to reliably generate such a statistic. Using the trip to Sears.com as an example again, we can see numerous times when our user visits a new page, then returns to a page he has already seen, then moves on to another new page. As far as the logs are concerned, the time the server thinks the user spends viewing a page is artificially inflated, because the analysis tool assumes that the gap between new requests (the cached pages generate none) is time spent on the previously requested page. This can make it seem like users are spending longer on some pages than they really are, while the cached pages where they actually spent their time are never counted at all.
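
The arithmetic behind a "time on page" figure is nothing more than subtracting the timestamps of consecutive logged requests from the same visitor, roughly as in the sketch below (timestamps are invented for illustration). Whatever happened between those two requests, re-reading cached pages or leaving the desk entirely, is silently credited to the earlier page.

from datetime import datetime

# Consecutive logged requests from one visitor; cached "Back" views never appear here.
requests = [
    ("/", datetime(2005, 10, 31, 18, 15, 16)),
    ("/tools/", datetime(2005, 10, 31, 18, 16, 2)),
    ("/tools/drills/", datetime(2005, 10, 31, 18, 22, 40)),
]

for (page, start), (_, next_request) in zip(requests, requests[1:]):
    seconds = (next_request - start).total_seconds()
    # The tool reports this entire gap as time spent "viewing" the page,
    # whatever the user was actually doing during it.
    print(page, "viewed for", int(seconds), "seconds (according to the log)")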

If you could tell how long the user "viewed" the page, what conclusion could you draw?

When our user was on the Sears.com web site looking for a part for his air compressor, he realized that he also needed a new pressure gauge because the old one wasn't working. So, he went to Sears.com and clicked on "Parts" on the left. The user then arrived at a new site for the Sears Parts Store. As soon as he got there, he saw that he could enter the actual part number for the gauge itself rather than browsing for it. He then left his computer, went to the basement, and spent approximately the next 5 minutes going through a stack of owner's manuals to get the part number for the gauge. Returning, he entered the part number, went about finding the part on the site and continued with his visit.

With the web server logs as the only data about what our user did, what conclusion could be drawn? As far as the server logs go, he spent 5 minutes at the home page of the site before going to the page listing the part.

  • Does that 5 minute page view time mean he had difficulty understanding the page? Or,
  • Does that mean he found the page particularly interesting?

Of course, neither is true. He wasn’t even at the computer for those five minutes. But the server logs don’t register, “User walked away to look for something.” Server logs don’t know whether the user went to get a cup of coffee, walk the dog, write something down, print something off, or left the house to go shopping. How long a user spends on a page means nothing without additional information about what the user was doing, why the user was doing it, and whether the experience was successful and satisfying. If someone isn’t with the user, there is no way to know what is contributing to the length of time users spend between new requests.

Usability Data: When Do Visitors Leave Your Site?

Server log analysis tools report the last page the user requested during their visit as their "exit point." These programs establish a certain amount of time that they assume is long enough to constitute a single user’s session. Most analysis tools allow the server administrator to set what is called a "Visit Timeout," after which point, if there is no more activity from that IP address, the tool regards the user’s visit as over. Any additional activity from that IP address will be regarded as a new visit and is likely to be counted as a "repeat visit" by the analysis tool. Most analysis tools’ timeouts are set, by default, to 20 or 30 minutes. Even if there weren’t any issues with caching, the analysis tools make an assumption that a user’s interaction with the site is over when, in the user’s mind, it might not actually be over. If our user’s trip to find the owner’s manual had taken 31 minutes, Sears.com would have registered him as a repeat visit, but in his mind it was the same event. If the logs can’t reliably tell us when the user left the site, we can’t possibly come to any conclusion about why they left the site, and that is really what we want from the data.
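
The "Visit Timeout" logic amounts to something like the sketch below: requests from one IP address are grouped into a visit until a gap longer than the timeout appears, and then a new "visit" is counted. Both assumptions baked into it, that one IP address equals one user and that a long gap means the visit ended, are exactly the assumptions this article questions.

from datetime import datetime, timedelta

VISIT_TIMEOUT = timedelta(minutes=30)

def count_visits(timestamps):
    """Count 'visits' from one IP address: a new visit begins after each long gap."""
    visits = 1
    for previous, current in zip(timestamps, timestamps[1:]):
        if current - previous > VISIT_TIMEOUT:
            visits += 1
    return visits

# One user who left for 31 minutes to dig an owner's manual out of the basement.
times = [
    datetime(2005, 10, 31, 18, 15),
    datetime(2005, 10, 31, 18, 16),
    datetime(2005, 10, 31, 18, 47),  # back 31 minutes later, same errand
]
print(count_visits(times))  # 2 -- the tool reports a "repeat visit"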

Stats Do Not Represent How Many Visits (or Visitors) the Site Had

Essentially, everything said above means that any statistics from log files about the number of "visits" are going to be unreliable. Between caching issues and visit timeouts, we can’t expect to get a reliable count of how many visitors have come by or how many visits have happened. In fact, Stephen Turner, developer of Analog Stats, refuses to generate visit stats for this exact reason.

Stats Cannot Tell You Where the Visitors Came From or Where They Entered

Because the site's pages may or may not be cached, a user may have viewed several pages on the site before actually needing to request a page from the server. The server logs will show that first request as the user's entry point, which may not be the user's actual entry point. This is all the more true for visitors who frequently come from bookmarks or who have the site entered as their "home" page, for these first pages are very likely to be cached in the user's browser. The site's most loyal visitors may very well be skewing the statistics by not generating an accurate count of these popular entry pages. Instead, data might make it look like other pages are more popular than they really are.

Further complicating this matter, referrer data may not even be available. If the user arrives from a page served over the secure HTTPS protocol (as should be the case for web stores and any page which contains a form to collect information), browsers will not pass referrer data on to a non-secure page; withholding it is a deliberate feature of how the secure protocol is handled. Even without a secure connection, many browsers on the market do not pass referrer data. Even in cases where they do, this capability can often be turned off in the browser as a feature intended to protect users’ privacy. A user could also be using a "de-referrer" service such as UltiMod to hide their referrer.

Stats Cannot Measure Users' Online Success

There are those who would claim that stats can tell you how "successful" the site's users are. Their claim is that items purchased, files downloaded, and information viewed are concrete indicators of user success. By stating this, they're making an assumption that every user comes to a site with a singular, specific goal in mind – buy something or look for some very specific information – and then leave.

Realistically, nobody uses the Web only to complete a singular task on each site they visit. Most users do not come to a site for only one purpose and leave when they're done. The user's goal may be as well-defined as "order a new gauge for my air compressor" or as open-ended as "look at all the neat tools I could buy if I won the lottery". Frequently, a user will come for one reason and stay for a completely different one. In fact, most organizations (should) hope that users do exactly that. Amazon.com is the undisputed champion of facilitating this type of interaction.

amazon sample

In the screen cap above, you can see that while the user is viewing a specific watch, Amazon is recommending alternate items to view in case the user has determined that the item he’s viewing does not fit his needs. If Amazon assumed that the user only had one goal (look for a specific item and buy it), then Amazon would miss out on a ton of sales, cross-sales, and follow-up sales. What if the user found the specific watch he was after, but decided not to buy it? Amazon loses money. However, by offering alternatives, they support users who don’t have only one specific goal.

Website Success Cannot Be Measured Online Even If Such Stats Were Possible

Different types of users have different goals, many of which can not be measured online. Based on the author’s personal experience as well as surveys conducted by the Pew Internet & American Life Project, there are three different types of users:

  • Users who prefer to deal with you in person. They would rather come in to a brick and mortar location and transact their business and/or gather their information face-to-face.
  • Users who prefer to deal with the organization over the phone. They would rather call than spend all that time traveling to and fro. The telephone is close to them, dialing is easy, the interaction feels comfortable, and they still enjoy the benefit of speaking with someone who can give them personal attention.
  • Users who prefer to deal with the organization online. The Internet is always there regardless of how early or late it is, and they do not need or want the assistance of another human.

We shouldn't assume, however, that people strictly stay in one of the categories above at all times. Surveys show that people will often switch between these modes based upon a variety of factors such as the type of product, their level of expertise, proximity of the brick and mortar location (if one exists), their level of desire, and their overall goal. One study by Pew indicates that even people who frequently make purchases online are not more likely to do things like manage their investments online than those who don't buy online as frequently. [3] The reason the survey respondents gave for this different approach is easy to understand. They say that handling their entire financial well-being online isn't exactly the same as buying a book, CD, or gift for someone.

Generally, people will choose what type of interaction they want based on what is needed and available for their particular situation and how comfortable they are doing it online. If a user only goes to the Financial Consultant Locator Tool on the Salomon Smith Barney web site to find out where the local office is, then subsequently has SSB handle his investments, the site was successful – but the log files do not track that. Likewise, saying "X number of people visited the Locator Tool" is certainly no measure of success. Some people, after visiting that page, may decide not to go. Others might feel the nearest consultant is too far away. Still others might not be able to use the Locator Tool at all. Measuring requests to the Financial Consultant Locator Tool – which is simply an imaginary "end-point" – is no measure of success without knowing what the user’s criteria for "success" actually are, and without also knowing what contributed to any failures. In the case I’ve just outlined, it is certainly not the site’s fault if the nearest consultant is too far away; the Locator Tool would have performed perfectly. The lack of generated business may still be considered a “failure,” but not one attributable to the site.

Tracking Usage Trends Provides No Useful Data

Some people who recognize the weaknesses of web stats still argue that the log files can be used to look for trends in site usage. Unfortunately, this is also untrue. Again, caching is just as much of a spoiler here as it is elsewhere. The problem is compounded by the fact that network administrators and ISPs are constantly working to improve their system's performance. Their end-goal is to ensure their networks run quickly and efficiently for the benefit of their customers. This could mean that they could suddenly choose to place a larger (or smaller) amount of data in their cache and may choose different items to cache as well. Such activity could result in vast swings in the number of requests your site receives and in the number of the hosts making requests.

monthly stats

The figure above is a "Monthly history" generated by AWStats for a semi-popular personal site. Notice that traffic in August, September, and October is almost double that of April, May, June, and July? Despite the apparent evidence, absolutely nothing changed about the site at that time. Nothing was added and nothing was taken away. This could happen if some network administrator or ISP tweaked how their system was caching pages, resulting in more requests to the site.

Actions by a site's own management or the management of the organization can also change trend data. Anytime a site's structure changes, grows, or shrinks, the "trends" will as well. If the organization adds a new section and puts a big announcement about it on the home page, there will be a surge in traffic to that area of the site. If a new product is announced in the company's newsletter or on commercials then the site's overall traffic will grow. If the settings on the search engine are tweaked, pageviews may increase in some sections and decrease in others. Ignore the site for several months and the traffic will lull. Completely redesign the site or re-organize the content, and "trends" are no longer trends. Merely by managing the site, an organization invalidates the usefulness of the trend data by creating variation in the site. The more frequently new content is added to the site, the more opportunities exist for variation in the “trends.”

The Special Case of AOL

AOL further complicates statistics by assigning new IP addresses to their users in mid-session. Since AOL is the largest online service in the world, AOL is often a site's largest source of users. Since server log analysis tools often rely on the visitor’s IP address and/or hostname as a unique identifier, this means that a site’s logs can show data attributed to multiple "users" from AOL that actually belong to one person. Depending on how much time the person spends on the site, one AOL user may look like dozens of users.

One person, in a post to the Usenet newsgroup alt.www.webmaster, wrote: "Here's a section of my access log that shows an AOL user requesting one page, followed by requests for the images on that page:" (edited for privacy)

195.93.21.98 - - [15/Mar/2006:12:44:37] "GET /xxxx/…
195.93.21.42 - - [15/Mar/2006:12:44:37] "GET /images/…
195.93.21.3 - - [15/Mar/2006:12:44:37] "GET /images/…
195.93.21.36 - - [15/Mar/2006:12:44:37] "GET /images/…
195.93.21.36 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.99 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.68 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.135 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.73 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.38 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.132 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.137 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.137 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.69 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.34 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.106 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.72 - - [15/Mar/2006:12:44:38] "GET /images/…
195.93.21.130 - - [15/Mar/2006:12:44:38] "GET /images/…

As you can see above, one visitor registers 16 different IP addresses in the log while requesting only one page. Each embedded image generates its own request, and because AOL routes those requests through different proxy servers, they appear to come from many different users. Caching muddies traffic data on how long people view pages, but AOL muddies it even more: its practice of changing dynamically assigned IP addresses mid-session confuses everything in a site’s traffic statistics. AOL even admits to the challenges this creates for site statistics. In its Webmaster FAQ, AOL states:

Q. Can I use the IP address of the request to track a member's access to my site?
A. No. Because AOL uses proxy servers to service the requests made by members, webmasters see the IP address of the server, not the Dynamically Assigned Host Address (DAHA) of the member in their web site log files. The problem with trying to use the IP address to track access is that there may easily be multiple members assigned to a proxy server. All of the member requests would appear to be coming from one member if you assumed a relationship between member and IP address. In addition, members may be reassigned to a different proxy server during a session.

Problems created by AOL's dynamic IP addressing practice can include:

  • The analytics tool may show many more users and visits than the site actually had.
  • In the case of AOL, it may also show fewer users than the site actually had. As AOL states above, multiple users could potentially share the same proxy IP address.
  • The data on entry and exit points will be unreliable. The more AOL users the site has, the less reliable this data. Users will look like they've left when they've simply gotten a new IP address. Their next page view will make them look like a new user.
  • The data on length of visit and time on each page will also be unreliable.

Conclusion – Server Log Analysis Is an Unreliable Tool for Usability

An organization should not spend extensive amounts of time and money trying to gain usability data from server logs. It would be better served by hiring an experienced human factors engineer to perform an expert review or conduct a formal study with users. The results would be much quicker, more accurate, and more informative.


Blasting the Myth of the Fold

Written by: Milissa Tarquini

The Above-the-Fold Myth

We are all well aware that web design is not an easy task. There are many variables to consider, some of them technical, some of them human. The technical considerations of designing for the web can (and do) change quite regularly, but the human variables change at a slower rate. Sometimes the human variables change at such a slow rate that we have a hard time believing that it happens.

This is happening right now in web design. There is an astonishing amount of disbelief that the users of web pages have learned to scroll and that they do so regularly. Holding on to this disbelief – this myth that users won’t scroll to see anything below the fold – is doing everyone a great disservice, most of all our users.

First, a definition: The word “fold” means a great many things, even within the discipline of design. The most common use of the term is perhaps in reference to newspaper layout. Because of the physical dimensions of a broadsheet newspaper's printed page, it is folded in half, and the top half of the front page is the best possible placement for the “big” stories of the issue. Readers have to flip the paper over (or unfold it) to see what else is in the issue, so there is a chance they will miss it. In web design, the term “fold” means the line beyond which a user must scroll to see more of a page's content (if any exists) after the page displays within their browser. It is also referred to as a “scroll-line.”

Performance data and new research indicate that users will scroll to find information and items below the fold. There are established design best practices to ensure that users recognize when a fold exists and that content extends below it[1]. Yet during requirements gathering for design projects, designers are inundated with requests to cram as much information above the fold as possible, which complicates the information design. Why does the myth persist, when we have documented evidence that the fold really doesn't matter in certain contexts?

Once upon a time, page-level vertical scrolling was not permitted on AOL. Articles, lists, and other content that would have to scroll were presented in scrolling text fields or list boxes, which our users handled easily. Our pages, which used proprietary technology, were designed to fit inside a client application, and the strictest of guidelines ensured that the application desktop itself did not scroll. The content pages floated in the center of the application interface and were too far removed from the scrollbar location for users to notice if a scrollbar appeared. Even when a page appeared to be cut off, as current best practices dictate, a scrolling application desktop proved to be such an unusual experience that our users assumed the application was “broken.” We had to instill incredible discipline in all areas of the organization that produced these pages – content creation, design, and development – to make sure our content fit on these little pages.

AOL client application with desktop scrollbar activated

As AOL moved away from our proprietary screen technology to an open web experience, we enjoyed the luxury of designing longer (and wider) pages. Remaining sensitive to the issues of scrolling from our history, we developed and employed practices for designing around folds:
* We chose as target screen resolutions those used by the majority of our users.
* We identified where the fold would fall in different browsers, and noted the range of pixels that would be in the fold “zone.” (A simple sketch of this calculation follows this list.)
* We made sure that images and text appeared “broken” or cut off at the fold for the majority of our users (based on common screen resolutions and browsers).
* We kept the overall page height to no more than 3 screens.
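
As a rough illustration of the second practice, the sketch below computes a fold “zone” for a given screen height by subtracting assumed browser chrome heights. The browser names and chrome values are placeholders chosen for illustration, not measurements of any real browser.

// Rough fold-"zone" calculation: visible page height = screen height minus the
// vertical space the browser chrome (toolbars, address bar, status bar) takes.
// The chrome heights below are assumed values for illustration only.
const assumedChromeHeights: Record<string, number> = {
  browserA: 168,
  browserB: 178,
  browserC: 198,
};

function foldZone(screenHeight: number): { min: number; max: number } {
  const folds = Object.values(assumedChromeHeights).map(
    (chrome) => screenHeight - chrome,
  );
  return { min: Math.min(...folds), max: Math.max(...folds) };
}

// For a 1024x768 target resolution, the fold lands somewhere in this band:
console.log(foldZone(768)); // -> { min: 570, max: 600 }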

But even given our new larger page sizes, we were still presented with long lists of items to be placed above the fold – lists impossible to accommodate. There were just too many things for the limited amount of vertical space.
For example, for advertising to be considered valuable and saleable, a certain percentage of it must appear above the 1024×768 fold. Branding must be above the fold. Navigation must be above the fold – or at least the beginning of the list of navigational choices. (If the list is well organized and displayed appropriately, scanning the list should help bring users down the page.) Big content (the primary content of the site) should begin above the fold. Some marketing folks believe that the actual number of data points and links above the fold is a strategic differentiator critical to business success. Considering the limited vertical real estate available and the desire for multiple ad units and functionality described above, an open design becomes impossible.

And why? Because people think users don't scroll. Jakob Nielsen wrote about the growing acceptance and understanding of scrolling in 1997[2], yet 10 years later we are still hearing that users don't scroll.

Research debunking this myth is starting to appear, and a great example is the report available on ClickTale.com[3]. In it, the researchers used their proprietary tracking software to measure activity on 120,000 pages. Their research gives data on the vertical height of each page and the point to which a user scrolls. They found that 76% of users scrolled, and that a good portion of them scrolled all the way to the bottom regardless of the length of the page. Even the longest of web pages were scrolled to the bottom. One thing the study does not capture is how much time is spent at the bottom of the page, so the argument can be made that users might just scan it and not pay much attention to any content placed there.

This is where things get interesting.

I took a look at performance data for some AOL sites and found that items at the bottom of pages are being widely used. Perhaps the best example of this is the popular celebrity gossip website TMZ.com. The most frequently clicked item on the TMZ homepage is the link at the very bottom of the page that takes users to the next page. Note that the TMZ homepage is often over 15,000 pixels long, which supports the ClickTale finding that scrolling behavior is independent of page length. Users are so engaged in the content of this site that they follow it down the page until they get to the “next page” link.

Maybe it’s not fair to use a celebrity gossip site as an example. After all, we’re not all designing around such tantalizing guilty-pleasure content as the downfall of beautiful people. So, let’s look at some drier content.

For example, take AOL News Daily Pulse. You’ll notice the poll at the bottom of the page – the vote counts are well over 300,000 each. This means that not only did folks scroll over 2,000 pixels to the bottom of the page, but they also took the time to answer a poll while they were there. Hundreds of thousands of people taking a poll at the bottom of a page can easily be called a success.

AOL News Daily Pulse with 10×7 fold line and vote count

But, you may argue, these pages are both in blog format. Perhaps blogs encourage scrolling more than other types of pages. I’m not convinced, since blog format is of the “newest content on top” variety, but it may be true. However, looking at pages that are not in blog format, we see the same trend. On the AOL Money & Finance homepage, users find and use the modules for recent quotes and their personalized portfolios even when these modules are placed well beneath the 1024×768 fold.

Another example within AOL Money & Finance is a photo gallery entitled Top Tax Tips. Despite sitting almost 2,500 pixels down the page, the gallery generates between 200,000 and 400,000 page views, depending on promotion of the Taxes page.

It is clear that where a given item falls in relation to the fold is becoming less important. Users are scrolling to see what they want, and finding it. The key is the content – if it is compelling, users will follow where it leads.

When does the fold matter?

The most basic rule of thumb is that users should be able to understand what your site is about from the information presented above the fold. If they have to scroll even to discover what the site is, its success is unlikely.

Functionality that is essential to business strategy should remain (or at least begin) above the fold. For example, if your business success is dependent on users finding a particular thing (movie theaters, for example) then the widget to allow that action should certainly be above the fold.

Screen height and folds matter for applications, especially rapid-fire applications where users input variables and change the display of information. The input and output should be in very close proximity. Getting stock quotes is an example: a user may want to get four or five quotes in sequence, so it is imperative that the input field and the basic quote information display remain above the fold for each symbol entered. Imagine the frustration at having to scroll to find the input field for each quote you wanted.

Where IS the fold?

Here is perhaps the biggest problem of all. The design method of cutting off images or text only works if you know where the fold is, and there is a lot of information out there about how dispersed the location of the fold line actually is. Again, a very clear picture of this problem is shown on ClickTale. In the same study of page scrolling, the fold locations of viewed screens were captured, based on screen resolution and browser used. It’s a sad, sad thing, but the single highest concentration of fold locations (at around 600 pixels) accounted for less than 10% of the distribution. This pixel height corresponds to a screen resolution of 1024×768.

Browser applications take away varying amounts of vertical real estate for their interfaces (toolbars, address fields, etc.). Each browser takes a slightly different amount, so not all visitors running a resolution of 1024×768 will have a fold that appears in the same spot. In the ClickTale study, the three highest fold locations were 570, 590, and 600 pixels, apparently from different browsers running on 1024×768 screens. But the overall distribution of fold locations for the entire study was so varied that even these three sizes together account for less than 26% of visits. What does all this mean? If you pick one pixel location on which to base the fold when designing your screens, the best-case scenario is that you’ll get the fold line exactly right for only 10% of your visitors.
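
Rather than guessing a single pixel value, you can measure where the fold actually falls for your own visitors. The browser-side sketch below captures the viewport height on page load using current browser APIs; the "/fold-metrics" endpoint is a hypothetical stand-in for whatever your analytics collection expects.

// Browser-side sketch: record where the fold actually falls for each visitor.
// The "/fold-metrics" endpoint is hypothetical; substitute your own collector.
function reportFoldPosition(): void {
  // Visible viewport height in CSS pixels, i.e. the fold position after the
  // browser chrome is subtracted from the screen resolution.
  const foldPx = window.innerHeight;
  const screenPx = window.screen.height;

  // Fire-and-forget beacon; aggregate the results to see your own
  // distribution of fold locations instead of assuming a single value.
  navigator.sendBeacon("/fold-metrics", JSON.stringify({ foldPx, screenPx }));
}

window.addEventListener("load", reportFoldPosition);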

So what do we do now?

Stop worrying about the fold. Don’t throw your best practices out the window, but stop cramming stuff above a certain pixel point. You’re not helping anyone. Open up your designs and give your users some visual breathing room. If your content is compelling enough your users will read it to the end.

Advertisers currently want their ads above the fold, and it will be a while before that tide turns. But it’s very clear that the rest of the page can be just as valuable – perhaps more valuable – to contextual advertising. Personally, I’d want my ad to be right at the bottom of the TMZ page, forget the top.

The biggest lesson to be learned here is that if you use visual cues (such as cut-off images and text) and compelling content, users will scroll to see all of it. The next great frontier in web page design has to be the bottom of the page. You’ve done your job and the user scrolled all the way to the bottom of the page because they were so engaged with your content. Now what? Is a footer really all we can offer them? If we know we’ve got them there, why not give them something to do next? Something contextual, a natural next step in your site, or something with which to interact (such as a poll) would be welcome and, most importantly, used.

References

fn1. Jared Spool, UIE Brain Sparks, August 2, 2006: ”Utilizing the Cut-off Look to Encourage Users To Scroll”:http://www.uie.com/brainsparks/2006/08/02/utilizing-the-cut-off-look-to-encourage-users-to-scroll/

fn2. Jakob Nielsen’s Alertbox, December 1, 1997: “Changes in Web Usability Since 1994”:http://www.useit.com/alertbox/9712a.html

fn3. ClickTale’s Research Blog, December 23, 2006: “Unfolding the Fold”:http://blog.clicktale.com/2006/12/23/unfolding-the-fold/

It Seemed Like The Thing To Do At The Time

Written by: Joe Lamantia

This is Part One of our “Lessons From Failure”:http://www.boxesandarrows.com/view/lessons-from-failure Series.

“Failure is instructive. The person who really thinks learns quite as much from his failures as from his successes.” JOHN DEWEY

Several years ago, I changed careers, moving from designer to entrepreneur starting a dot com company. The experience taught me many lessons in the basics of how—and how not—to successfully build an Internet business. But the most valuable lesson I learned—one applicable to any business model, design challenge, technology, or industry—was in the powerful links connecting state of mind, self-definition, and failure. Startlingly, these same links appear no matter what size the group of people or the venture: from design projects and startup teams, to cultures seeding colonies abroad, state of mind and self definition are closely connected to how well a group responds to failure.

In the midst of the exuberant rush to (re)create communities on the Internet for a dizzying array of peoples and purposes, we should understand and respect this underlying pattern, whatever our role: founder, designer, or member. For though the growing wave of technosocial media may change how we conceive of and relate to the Internet by offering abundant opportunities to create and join new societies, these societies will remain driven by fundamental elements of state of mind and self definition.

To illustrate these ideas, I’ll briefly discuss three examples of new societies—the entrepreneurial ventures of their respective cultures—that faced failure: first, the small Internet company I founded, then two cultures facing environmental challenges. Two of these societies failed, and one succeeded.

It Seemed Like the Thing To Do at the Time

In the winter of 1999, I decided to start a business with two partners. I was working as an Internet strategy and design consultant at the time, so moving from designing online businesses for clients to designing one for myself felt like a natural step. We had a talented group of founders with the right mix of experience, and we had a good idea. We needed money in order to build substantial business and technology infrastructure, but capital for a good idea was easy to obtain in early 2000. Becoming an entrepreneur genuinely seemed like the thing to do at the time, since it offered a good opportunity to apply my skills and experience at a new level, and to my own vision.

We worked diligently to build the company for the next twelve months. Our team grew from 3 people to 10 people in the U.S. and China. We recruited a (bad) CEO. We recruited a (good) CTO. We assembled an impressive roster of critical business partners and advisors on both continents. We were fortunate—given the terrible business climate for online companies after the dot com crash—to receive several funding offers from the very beginning. But none of them were sufficient, and some were downright shady (I met a number of “unusual” people during this time 1).

In March of 2001, after a year of unpaid overtime, I left my regular full-time position to dedicate all of my time to the new company. In this, I was joined by several other team members. Based on our previous successes, we believed proper funding was literally around the corner. Our business plan was exquisite, our financial projections were meticulous, we had customers and staff in place, and our execution strategy was finely honed. Like a Broadway production awaiting the audience on opening night, we were ready to go. All we needed was capital.

By the summer of 2001, despite considerable success during difficult times, we were at a financial breaking point. Lacking strong revenue, we could not continue without help from outside in the form of legitimate funding. The attacks of September 11th, 2001 shut down the New York capital markets, closing the door on any hope of venture funding shortly afterward. We closed up shop, my partners went their various ways, and I took another full-time position.

A Moment for Reflection

After the team disbanded, I reflected on the experience to understand why we had failed.

Vizzini’s Advice

In retrospect, as Vizzini from the Princess Bride would say, we made a series of classic blunders:

  • We had a complex concept
  • We sought too much money during a difficult funding climate
  • We hired the wrong CEO (beware of businessmen who dress like Cuban drug smugglers)
  • We were not willing to compromise or modify our plans
  • We grew the team too quickly
  • We relied on unrealistic financial projections
  • We underestimated the operational challenges

As a once and future entrepreneur, I interpreted these as straightforward lessons for my next venture: begin with an idea that is easy to understand, be flexible, don’t fear change, involve only trustworthy and talented people, make realistic financial assumptions about revenue and income, and so on.

In summary, I understood that our failure was driven by the fact that we focused too much effort on securing external funding, and not enough on growing essential day to day operations. Vizzini would say our true blunder was that we did not get involved on the ground in Asia!

The Power of State of Mind

“I have not failed. I’ve just found 10,000 ways that won’t work.” THOMAS ALVA EDISON

Staying the Course…

People often ask why we made the decisions that took us from our first to our final steps. Why didn’t we change our plans? Why didn’t we put more effort into other ways to build infrastructure? I always answer, “It seemed like the thing to do at the time,” meaning that, given our state of mind and the progress we’d made, this course of action seemed the best way to reach our goal. We certainly didn’t intend to fail!

State of mind is an umbrella term for the common outlooks and framing assumptions that define the ways people perceive and think about situations and themselves. State of mind also sets boundaries for what people can and cannot consider. In practice, individuals and groups interpret the world through a state of mind that defines their understanding of:

  • Cultural concepts and ideas
  • Their needs and goals
  • The situations and environments around them
  • Their roles and the roles of others (both groups and individuals)
  • Available choices and actions
  • The results of those choices and actions

In retrospect, it is clear our team shared a common state of mind that we were unwilling or unable to change. In this state of mind, underlying all the decisions we made from beginning to end was a single goal: seeking external funding was the best thing to do for the business. Based on our shared understanding, we pursued this goal far past the point when a heavily venture-funded model became invalid, because the environmental conditions that sustained it had collapsed.

A glance at the headlines provides abundant examples of similar responses to failure driven by state of mind, such as the heated debate between the U.S. Congress and the Bush administration over different approaches to the ongoing U.S. involvement in Iraq. President Bush’s state of mind is epitomized by his dictum to “stay the course,” a view that substantially determines the choices considered possible by his administration.

Waiting for Rescue: Self vs. Other

Some time ago, I came upon a quotation from an 8th century Buddhist philosopher named Shantideva that changed my perspective on my experience as an entrepreneur. In “Entering the Path of Enlightenment,” 2 Shantideva writes, “Whoever longs to rescue quickly both himself and others should practice the supreme mystery: exchange of self and other.” When Shantideva says, “exchange of self and other,” he is advising us to change our self-definition, one of the most basic components underlying state of mind.

Shantideva, or Manjushri

So I came to see that my team of entrepreneurs had set out on the wrong path from the beginning, and never wavered, because our state of mind rested on defining ourselves as venture funded entrepreneurs. We never considered changing our self-definition. Obtaining funding became part of our identity, rather than a pragmatic business activity. There is a second parallel with Shantideva’s words: we were unable to consider other courses of action even after we recognized that we were in danger of failing, because we were waiting for rescue from outside. We believed outside funding would save us.

We never considered how our self-definition was leading us to failure. Nor did we consider that we might find another way to succeed if we changed our self-definition. President Bush would be proud: we managed to stay the course!

Easter Island: A Machine for Making Statues

My experience as an entrepreneur shows the power of state of mind in societies on the small scale of a closely focused startup team. The Easter Island society that collapsed in the 18th century clearly demonstrates the strong connections between self-definition and failure on the much larger scale of a complex society of approximately 15,000 people. (The discussion that follows draws upon the work of Jared Diamond in Collapse. 3)

Easter Island

Easter Island was settled around 1200 A.D. by Polynesians from islands further to the west. 4 The small (64 square miles) island remained essentially self-contained due to its remote location in the Pacific Ocean. 5 The population increased quickly as settlers rapidly cleared forests for farming. Based on common Polynesian religious practices, the Easter Islanders began carving the immense volcanic stone statues (Moai) that make the island famous, mysterious, and photogenic.

Easter’s Statues

Over the next 500 years, in a remarkable demonstration of the power of a common state of mind and self-definition, Easter Island’s religious and ceremonial practices effectively turned the entire society into a machine for the construction of statues. 6 The Easter Islanders built their social and political system around the creation of statues. Reward mechanisms offered prestige and power to chiefs who competed to carve and erect ever larger statues, on ever larger platforms. Driven by this institutionalized self-definition, the population collectively invested massive effort into carving and transporting thousands of tons of stone for each burial platform and for the hundreds of giant Moai placed upon them. 7

Wood from the island’s forests was literally the fuel that kept this statue-making machine running. Farming to produce the food needed to feed large groups of workers required ever increasing amounts of cleared land. Moving statues required large wooden carriers and hundreds of miles of rope. Funerary rites mandated cremation and burial in the gigantic stone platforms. As Easter Island’s human and statue populations grew rapidly, estimates of the island’s forest coverage declined precipitously, as this comparison chart shows.

Figure 1: Forest Cover vs. Population 8

This self-reinforcing cycle of statue creation, deforestation, and population growth created a recipe for environmental collapse that led to comprehensive social failure. 9 Conservationist Rhett A. Butler summarizes the findings of Terry Hunt, an anthropologist who studied Easter Island’s history of habitation:

“With the loss of their forest, the quality of life for Islanders plummeted. Streams and drinking water supplies dried up. Crop yields declined as wind, rain, and sunlight eroded topsoil. Fires became a luxury since no wood could be found on the island, and grasses had to be used for fuel. No longer could rope by [sic] manufactured to move the stone statues and they were abandoned. The Easter Islanders began to starve, lacking their access to porpoise meat and having depleted the island of birds. As life worsened, the orderly society disappeared and chaos and disarray prevailed. Survivors formed bands and bitter fighting erupted. By the arrival of Europeans in 1722, there was almost no sign of the great civilization that once ruled the island other than the legacy of the strange statues. However, soon these too fell victim to the bands who desecrated the statues of rivals.” 10

Lessons from Easter Island

Easter Island Today, Deforested

The tragic pattern is clear to see: though institutionalized practices and goals based on a narrow self-definition were leading to comprehensive failure, the Easter Islanders refused (or were unable) to change their state of mind and goals, and their entire society collapsed. To this day, Easter Island is almost totally deforested, with the exception of small patches of trees from recent plantings, and the ~400 stone statues that remain. In a potent instance of irony, the Easter Islanders succeeded in constructing dramatic and enduring stone testaments to those things their society valued, even as the act of constructing those monuments consumed their society. President Bush would be proud of the Easter Islanders, too—they stayed the course.

A Tikopial Paradise

“It is on our failures that we base a new and different and better success.” HAVELOCK ELLIS

Tikopia Today

The Pacific island society of Tikopia is a good example of a culture that successfully responded to failure by changing how its members define themselves. Tikopia differs from Easter Island in ways that make the challenges its inhabitants faced even more pressing. Tikopia has been inhabited far longer (since ~900 B.C.), is much smaller (only 1.8 square miles), has fewer natural resources, and supports a much higher population density than Easter Island. 11 Yet photographs of Tikopia today show a lush, green, well-forested landscape, populated by closely spaced villages supported by well-tended gardens and farm fields.

Over the history of human habitation on Tikopia, three different economic and social models governed the production of food and management of the island’s environment. For the first 100 years of habitation, the Tikopians relied on a slash and burn style agricultural model that severely damaged their environment through deforestation. They also mined the nearby shellfish and bird colonies for needed protein.

Recognizing that this model was unsustainable on a tiny island, the Tikopians changed agriculture and food production practices to a mix of forest orchards and pig farming, wherein livestock made up ~50% of their protein sources. This new model retained a two-tiered social structure, allocating scarce protein to a ruling class of chiefs. Under the forest garden model, Tikopia’s environment continued to degrade, albeit more slowly than before.

Such a quick and comprehensive shift in economic and agricultural approaches across a whole culture—even a small one—is rare. By around 1600 A.D., the Tikopians again faced environmental and social breakdown driven by resource use. They again deliberately changed all aspects of their sustenance model and social structure in a single, closely coordinated effort:

  • Switched from unsustainable agriculture to a sustainable permaculture model 12
  • Completely eliminated expensive and inefficient livestock (pigs)
  • Substituted fish for large land animals
  • Removed social and economic distinctions—no more chiefs
  • Adopted stringent population management practices

Lessons from Tikopia

The dramatic changes in Tikopia’s social and economic model dating from ~1600 equate to a concerted shift of identity (self-definition) and state of mind for all of Tikopian society, a moment they commemorate to this day through oral storytelling. Unlike Easter Island’s, Tikopian society makes no distinction between the resources allocated to leaders and to the populace, and it does not reward environmentally destructive activity. The result is a stable population, kept carefully in balance for approximately 400 years by a range of practices that limit growth. All of these decisions were driven by a state of mind based on matching human impact to the island’s limited resources for the entire society.

Shantideva would surely say the Tikopians are remarkably flexible and resilient: instead of waiting for rescue, they averted failure (through environmental and social collapse) by redefining themselves not once, but twice.

Heed Shantideva

As an entrepreneur, I was one member of a small group making decisions about a single business venture which affected only our own lives. But as designers, architects, technologists, business owners, or anyone involved in building the new virtual societies emerging under the banner of social media, we have the power to affect many lives, by shaping self-definition and state of mind in a community from the very beginning.

We can’t predict every situation a starting society will face. But we can assume that potential failure is one challenge that may arise. And so—based on these three examples of societies facing failure—it seems wise to heed Shantideva’s advice about the exchange of self and other, thereby making our efforts now a part of the solution to future unknown problems. We can do this by allowing for changes to self-definition, and by encouraging awareness of, and reflection on, state of mind, whether in our own venture or when we design a society for others.

Footnotes and References

1 They ran the gamut from debased expatriate executives, to corrupt former politicians (with gout), to alcoholic ex-CIA operatives, to the founder of a major mainframe computer maker, to veterans of anti-communist coups in Africa during the 70’s. Or so they said…

2 Bodhicaryavatara, ch. 8, v. 120

3 Diamond, Jared. Collapse: How Societies Choose to Fail or Succeed. Penguin Books: 2005.

4 Terry L. Hunt; Rethinking the Fall of Easter Island.

5 Easter Island is 1,400 miles from its nearest neighbor (tiny Pitcairn Island), and 2,500 miles from the nearest large land mass, Chile.

6 Competing clans and chiefs received social status and rewards, such as farmland and food resources, from the successful construction of more and larger statues, giving them clear incentives to continue carving and erecting Moai. In effect, Easter Island’s cultural / political / economic system was built around an unusual positive feedback loop, in which more statues for a clan meant more people and more power, which meant more statues, which meant more people and more power… Similar carving traditions exist among other societies elsewhere in Polynesia, but on much smaller scales.

7 A recent count shows 300 platform and burial sites (ahu) around the island, with approximately 400 statues. There are 300 tons of stone in a small ahu, and 10,000 tons of stone in the largest. The average moai is 13 feet tall and weighs 10 tons; the largest reach up to 32 feet tall and weigh 75 tons. Another 400 moai sit partly completed in quarries, reaching heights of up to 75 feet and weights of up to 270 tons.

8 Simon G. Haberle, “Can climate shape cultural development?: A view through time,” Resource Management in Asia-Pacific Working Paper No. 18. Resource Management in Asia-Pacific Project, The Australian National University: Canberra, 1998 Working version obtained at http://coombs.anu.edu.au/Depts/RSPAS/RMAP/haberle.htm

9 Diamond writes, “The overall picture for Easter is the most extreme example of forest destruction in the Pacific, and among the most extreme in the world: the whole forest gone, and all of its tree species extinct.”

10 Rhett A. Butler, Easter Island settled around 1200, later than originally believed

11 Tikopia; Tikopia.

12 Permaculture Permaculture.

Lessons From Failure (Series Introduction)

Written by: Christian Crumlish

At the IA Summit this year, a few of us presented a panel where we hung out our dirty laundry in front of a room full of voyeurs, many of whom accepted our invitation to come to the mic and tell their own tales of woe.

We talked about our failures—individual, structural, institutional, societal—and not just “failure” in the abstract, but specific situations, specific projects, where we personally failed. We also strove to hold back from blaming stakeholders and clients for these disasters. We owned our catastrophes and spoke about what we learned and why we are doing better information architecture today because of these painful, harsh lessons.

Each panelist addressed a different level of failure: the project level, the organizational level, the institutional level, the global level, but we all talked about why and how we fail, to what extent failure can and cannot be prevented, and how failure is an inevitable byproduct of creativity and experimentation.

With four panelists and a room full of fellows, we felt we only scratched the surface. In the welcoming pages of Boxes and Arrows, we can really let it all hang out, so we are starting a series of articles on failure. We begin with the four case studies we presented in Las Vegas, but we also hope to include your failures and the lessons you learned. Contact me or one of the B+A editors if you’d like to contribute to this series.

On the panel we worked from the micro to the macro, but here we are going to turn that around and start with Joe Lamantia’s observations about enterprise-level failure and some intriguing parallels from the catastrophic failure of an entire society.

“Take it away”:http://www.boxesandarrows.com/view/it-seemed-like-the, Joe.