Printing the Web

Despite predictions to the contrary, it doesn’t seem that the advent of networked information sharing has reduced human consumption of paper. In fact, given the amount of printouts modern offices and homes produce, one is inclined to say that even MORE paper is generated today than ever before. A “paperless society” feels a long way off.

Works such as “The Myth of the Paperless Office” by Abigail J. Sellen and Richard H. R. Harper confirm this. The authors show how paper use often increases after the introduction of computers in an office. Similar observations have been made elsewhere (see Information Week). The paper industry is doing fine as well: the largest paper producers in the United States have continued to grow economically over the last five years. In short, there is no real evidence of a world without paper.

Consider how extraordinary paper is: lightweight and flexible, it supports thousands of typefaces, as well as black-and-white and color illustrations, and its high resolution and high contrast facilitate reading. David Gelernter, Yale professor and computing visionary, aptly summarizes: “The ‘paperless office’ is a bad idea because paper is one of the most useful and valuable media ever invented. …‘On paper’ is a good place for information you want to use; a bad place for information you want to store.”

David Gelernter, “The Second Coming: A Manifesto”

The reverse is also true: computers are good for storing information, but generally bad for using it. Research shows that difficulty in reading from a computer screen stems from poor resolution: compared to paper, monitors—even of the highest quality—offer only low-resolution reading.

On the web there are additional complications. Jakob Nielsen offers insight here:

  • Web users feel that they have to move on and click on things.
  • Each web page competes with millions of other pages for attention, potentially reducing users’ ability to focus on one content source at a time while online.

Jakob Nielsen (1997), “Why Web Users Scan Instead of Read”

It is no surprise that many people print information from the web. Rather than overlooking this common behavior, it may be advantageous to plan for and support printing when designing a website.

Designing web pages with printing in mind
For some websites the user experience already extends onto paper, like it or not. Ignoring this may result in lower overall user satisfaction. Consider the following factors when designing web pages that will be printed:

  1. No Alternate Version
    Sometimes web developers do not have the time, money, or know-how to offer alternate print-friendly versions of web content. Creating online pages that also work on paper is still possible.

    After a user selects “print” from the browser, the page is formatted before it is sent to the printer. The width of the layout is reduced to about 650 pixels for 8.5″ x 11″ paper, or 630 pixels for A4, assuming normal margins.

    If all the elements of a page can’t wrap around to fit within this 630–650 pixel area, content on the right will simply be cropped off. This is often caused by absolute positioning of page elements, fixed table widths, or large images. A web page with a fixed size of 800×600 pixels may look great online, but will lose its right edge completely when printed.

    Flexible layouts relying on relative positioning are better for printing, allowing the page to compress down to fit onto paper. I believe using flexible positioning and relative table widths (i.e., percentages) constitutes good web design practice in general and should always be considered (see “The Myth of 800×600” in Web Review).
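    As a minimal sketch of what this looks like in practice (the selector names here are hypothetical), a layout defined with percentages rather than fixed pixel widths can compress into the 630–650 pixel printable area:

```css
/* Relative widths let the browser shrink the layout to the paper width. */
table.layout { width: 100%; }     /* rather than a fixed width: 800px */
td.sidebar   { width: 20%; }      /* columns sized as percentages     */
td.main      { width: 80%; }
img          { max-width: 100%; } /* scale down large images, where
                                     the browser supports max-width   */
```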

    Frames may complicate the printing process or sabotage it completely. Flash and other rich media formats may also be problematic and may not print at all. Additionally, including content in a DHTML layer essentially hides it from the printer. If you want your users to be able to print, reconsider these technologies.

  2. Alternate Print-Friendly Version
    Print-friendly pages can eliminate the above-mentioned problems and yield higher quality printouts. Programming a print-friendly function is indeed more work, but CSS makes it relatively easy for an experienced programmer (more below).

    The “print this page” button, however, shouldn’t just duplicate the print function on the browser. Instead it should do something with the content to make it more appropriate for paper. Here are few ideas for creating a print-friendly version of a web page:

    1. Remove navigation. Unless the site navigation is somehow important for the text itself, it is rather useless on paper.
      Example: International Herald Tribune
    2. Remove or change graphical ads. Banners may not make sense on paper, particularly animated images, which are generally meant to be clicked.
      Example: NY Times online – changes full-sized, animated ads to small static images.
    3. Remove absolute page widths and change to relative positioning. This will ensure that the browser can scale down a page to fit on paper without losing any text. Removing fixed widths for printing means you can still have fixed width web pages on screen.
    4. Change fonts from sans serif to serif. Sans serif fonts are better (i.e., more comfortable and quicker) for online reading, while serif fonts are easier to read from paper.
      Example: Boxes and Arrows
    5. Add citation information to the print version. Most browsers print the page title and URL on the top of a printed page by default. However, you may want to offer a clearer, thorough citation at the top of the page.
    6. Remove dark backgrounds. Though most browsers don’t print backgrounds by default, it’s best to change any online color combinations to black on white for the print version to ensure a readable printout.
      Example: Evolt
    7. Write out links to show the URL. On paper an underlined word (i.e., a link) in the middle of a text is not very helpful. Instead, show the URL after the link in parentheses. This can be very problematic with long URLs (e.g., dynamic pages with unconverted parameter strings) and may require manual examination of links within a text body.
    8. Display the print-friendly version before printing. This could be in a new window, offering users feedback as to what the document will look like and giving them more control. It is not recommended to start printing immediately after clicking the “print this page” button.
      Example: International Herald Tribune (displays the print-friendly version and invokes the browser’s print command at the same time)
    9. Collate information into the final print version. Related documents are sometimes spread out over multiple web pages. The print version should consolidate necessary content.
      Example: IBM articles – you can print the current article or the entire section.
    10. Ensure that color coding is not required to understand content. It is safe to assume that most users will be printing with a black and white printer. Charts with colored bars, for example, are useless in black and white. Add text labels for clarification for pages that are likely to be printed.

CSS to the rescue
Unless there is a very good reason, you really don’t want to maintain two different versions of a web page. CSS allows you to reformat content for a print version without having to maintain multiple separate documents.

Features of CSS take printing into account and offer a great deal of power and flexibility. Most of the suggestions above can be implemented with style sheets. For example, with CSS you can:

  • Add or remove any defined page element, such as a navigation menu or images
  • Change font type, size, and measurement (from pixel to point, for instance)
  • Define page breaks
  • Define page margins (separately for the first page if you want)
  • Write out URLs for links
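A sketch of how several of the suggestions above might be implemented follows; the class names .nav and .banner are hypothetical. Such a style sheet could be attached with a link element whose media attribute is set to “print,” so it applies only when the page is sent to the printer:

```css
/* print.css — applied only when the page is printed */

body {
  font-family: Georgia, "Times New Roman", serif; /* serif reads better on paper */
  font-size: 12pt;                                /* points, not pixels          */
  color: black;                                   /* black on white for a        */
  background: white;                              /* readable printout           */
}

.nav, .banner { display: none; }  /* drop navigation and graphical ads */

/* Write out each link's URL after the link text.
   Support for generated content varies by browser. */
a:after { content: " (" attr(href) ")"; }
```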

W3C has recommended a set of extensions to CSS to better support printing from the web, referred to in the specification as “paged media.” For those looking for more details, CSS guru Eric Meyer’s excellent article in A List Apart explains the nuts and bolts of the code needed for print-friendly pages.
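For instance, the paged media rules let margins and page breaks be controlled directly. Browser support for these rules varies, so treat this as a sketch:

```css
@page { margin: 2cm; }              /* margins for every printed page    */
@page :first { margin-top: 4cm; }   /* a deeper margin on the first page */

h2    { page-break-before: always; } /* start each section on a new page */
table { page-break-inside: avoid; }  /* keep tables from splitting       */
```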

Additionally, XSL Formatting Objects (XSL-FO), also a W3C-recommended technology, is extremely powerful. With it you can print from XML with highly controlled layouts. See G. Ken Holman’s “What is XSL-FO?” for an introduction.

Capabilities of both CSS and XSL-FO point to the need for better control of printing formats and underscore the fact that people do indeed print from the web. Web design, in turn, should take advantage of these technologies to ensure a consistent user experience on and offline.

Remember, as well, that the tendency to print from the web is often a desire to capture information for later use. That is, often users want to walk away with content. In addition to or instead of creating a print version, offering a downloadable version of a document also addresses this need. PDF format, for instance, is great for downloading and printing, and can even be generated “on the fly.”

Those of us concerned about the environmental impact of increased paper use may argue that adding a print button encourages people to print, thus wasting paper. I would argue they will print anyway – with or without an added button. But consider how a print-friendly version may actually be better:

  1. Print-friendly versions avoid problems with printer reformatting and normally use fewer pages than directly printing a web page.
  2. Removing navigation, banners, dark images and backgrounds saves ink.
  3. Print versions could also be offered as downloads, thus avoiding printing altogether while still allowing users to capture online information.
  4. Cutting and pasting text is easier from print-friendly pages, an advantage for the small group of web users (myself included) who capture text from the web this way.

Finally, designing with print in mind necessarily forces website creators to conceptually separate content from presentation. Lessons learned from developing alternative print versions of web pages can be applied to other situations, such as creating PDA versions of web pages. As designing for multiple formats becomes increasingly important, such skills are all the more valuable.

This is not at all to say you should ignore creating attractive and useful displays of information to be read online. My point is this: perhaps we are heading toward the paperless society and are just in a state of transition. Maybe as the quality of computer screens gets better and people start reading online in the first grade, we will lose our need for paper. This surely won’t happen, however, within the lifespan of your next web project. Therefore, consider how users interact with other formats and media, particularly paper, and address the reality that people print web pages. With a little planning and foresight, creating printable pages is relatively easy and extends a positive user experience to paper.

James Kalbach is currently head of Information Architecture at Razorfish, Germany and has a masters degree in library and information science. Previously he established a usability lab at I-D Media, a large German digital agency.

Putting a Face on B2B Websites

B2B isn’t boring… at least not to everyone. As web designers (writers, information architects, graphic designers, creative directors) we are often asked to work with various unfamiliar topics like hydraulics and pneumatics components. Or commercial flooring. Or even wastewater treatment products. Whatever the going widget of the day, you have to deal with it and make sense of products you barely understand yourself so that you can build a compelling, useful site and—most important in today’s economic climate—create a site that contributes to your client’s bottom line. At the same time, you want to be creative, free, and cutting-edge with your work.

How do you lead your business-to-business clients down the right path without sacrificing every design, usability, and information design principle you’ve ever learned? And how do you make these websites engaging for their users? There are several keys to being successful: understand your audience and the strategy, apply technology and design principles appropriately, and present your information logically.

Are you talkin’ to me?
It doesn’t matter what part you play on the web design team, understanding and embracing the strategy set forth by either the client or your team (or ideally, both your client and your team, together) is paramount to a successful website. If you’re working on a business-to-business site, remember that you’re not presenting sports scores or the weather to Joe Everyman. You’re speaking to a specific audience of professionals. Most likely, it is made up of engineers, plant managers, distributors or dealers, and maybe even end users. Whatever the case, these are members of a highly targeted community who share certain job titles, responsibilities, backgrounds, educations, or interests (think industry-related interests). They expect certain things from a website: that the information is relevant to their needs, that it’s easy to find, and that it’s useful to some part of their job function. It’s also vital that they can easily communicate with the company via the website.

And don’t forget about the competition. Jim Everhart, vice president of strategic business at Godfrey Advertising, recently shared this insight: “It goes without saying that any new feature competitors add to their site ups the ante for you. Usually, that means you have to respond in kind, or at least have a good reason why you can’t or shouldn’t do what the other guys are doing. A good example: a key competitor begins to sell products online, and you can’t, because your company continues to sell through distribution. But you’d better have an acceptable alternative… and fast.”

Hopefully, your extended team will have thought about situations like the one mentioned above in addition to other site goals. Is the goal to create awareness about the company and its products? Or is it more complex than that—say, to eliminate paper by making all of the product datasheets available online? Or is it to capture prospect data to create offline sales? Or is it to drive online sales? Whatever the goals and objectives, your job is to design the site and “architect” the information to create a user experience that supports and enhances those goals. Marketing is about making it easy for current and potential customers to do business with your client. This principle carries over to your website and all its content and functionality, despite the complexity of information you are presenting. Bottom line: the site’s design and functionality must be dictated by these business objectives.

“A great example is your corporate positioning,” says Everhart. “In the early days of the Web, we could put your positioning line on the home page, just like you do in an ad or brochure. But now, you can do so much more. You can live your positioning on the Web, by taking your core values and embodying them in a way that illustrates and even delivers your company’s sustainable competitive advantage.”

Too cool for the Web?
As designers and information architects work together, the constant challenge is the fine line between using cutting-edge technologies well and using them because a) it’s cool, b) it will be great for your portfolio, or c) it’s easier than figuring out the right thing to do.

Take Macromedia Flash, for example. Usability guru Jakob Nielsen was quick to say that 99% of Flash is bad… and in most cases, he’s absolutely right. Many companies have tried and failed, mostly in the form of fluffy, ethereal mission statement splash screens that send users scrambling for the “skip intro” link faster than you can say, “loading…” For business-to-business applications, the real beauty of Flash lies in its ability to show, rather than tell, what a company’s products can do for their customers.

With its incredible versatility, slim file sizes, and near saturation of web users, Flash is an ideal way to present highly technical industrial information. Whether it’s illustrating how products can streamline manufacturing processes through a virtual plant tour, or animating a cutaway of hydraulic motors in action, showing how things work gives customers the inside scoop on why your products outrank the competition.

The key is to take the product differentials (again, part of the strategy) and showcase them to put competitors at a disadvantage. For example, a large hydraulics manufacturer I worked for released a new technology that combined electronic drives with hydraulics to create the fastest, most durable systems for mobile applications. When a company is the technology leader in one area, it’s the perfect time to illustrate that with Flash—showing how the hydraulics components integrate with the electronics, and illustrating the resulting operating efficiencies so that the engineers can understand why they simply must specify your client’s products. We leveraged their innovation by creating a Flash kiosk for display at their biggest tradeshow of the year. It was both “flashy” (no pun intended), and very functional, in that show attendees could interact with it as it showed which products were used in each different piece of construction equipment.

Armstrong World Industries, a commercial and residential flooring, ceiling, and cabinet manufacturer, was selected to be in the top ten of BtoB Magazine’s “2002 NetMarketing 100 Best B-to-B websites.” Jesse Engle, general manager of eMarketing at Armstrong, has strong convictions on the topic: “Flash is a means, not an end,” says Engle. “It’s a very effective tool web designers can use to illustrate competitive advantages, product benefits, or even corporate positioning. But it needs to be used wisely.”

Who’s doing it right?
Siemens uses a Flash movie to showcase its lifecycle factory automation solutions. An interactive demo shows the plant manager or engineer how they can optimize operations when designing, building, and running their plant. From a usability standpoint, the concept is great, but the execution leaves a little to be desired as the fonts are far too small for readers.

For their automotive segment, Motorola uses Flash to embody their “intelligence everywhere” tagline (there’s that strategy again) with a wireframe illustration of a car. Users can click on products and are shown what part of the car makes use of Motorola’s advanced technologies.

Using rich-media formats like Flash to convey information on the Web is a convenient, economical, and dynamic way to keep visitors coming back to the site. Macromedia’s latest MX suite of technologies allows more interactivity and can incorporate real-time information. In fact, Macromedia’s “Executive Presentation” is a prime example of business-to-business communication at its finest. It incorporates video into a slick interface that illustrates exactly how their product can provide its customers with their own rich-media applications.

Please help me make up my mind
If there’s one thing I’ve learned from the customer feedback I’ve gotten on behalf of clients over the years, it’s that website users want the site to help them make a purchasing decision. According to BtoB Magazine, 91 percent of industrial buyers go to their supplier’s website for information before they pick up the phone. Further, 80 percent of buyers are likely to select another supplier if they don’t find the information they need on the site. How do you help them out? By presenting information logically, usefully, and appropriately for the web medium. There are a few elementary principles that apply to parsing and presenting technical information:

  1. Don’t be stingy.

    The phrase “too much information” only applies to things your friends might tell you under the influence. When it comes to the business-to-business customer, you simply can’t give them too much data. However, how you structure that information is key, and here are a few ways to do that:

    Give users multiple navigation schemes to get to your deeper information. For example, include the technical data sheets for every product, but make them searchable by product families, model codes, part numbers, or any other iteration that users might use to search.

    While you’re at it, don’t try to force users to register before they can get any information from your site. That is a huge turnoff, and there’s no proof that it’s effective. Remember, the company’s job is to service customers, not make their lives more difficult. If your client is concerned about competitors getting their information, remind them that proprietary information is stored on their extranets, so public information should be left unencumbered by forced registration. And remind them that you will help them interpret their site statistics better or build an online survey to collect the type of information that they are seeking through registration.

    If you do decide to have site users register, make it as simple as possible, in direct proportion to the kind of information they will be getting from you. If you have an advanced specifier with specialized product selection capabilities, or special training video available online, a short registration is acceptable.

    The best way to think of the Web is as a huge information exchange. As the prospective seller, you have to give and give and give to your buyer before you can expect anything in return. And only after giving them something really significant can you ask prospects to invite you into their world and exchange information with you freely. That’s basic permission marketing.

  2. Group information wisely.

    Don’t put your model codes on the homepage. Instead, give users product families and groups that are based on what they will understand, and not how your business is structured. Try to limit the number of choices to a set that is easily scannable. While the good old “7 plus or minus 2” rule is an acceptable starting place, I think between 10 and 15 items grouped logically will work just as well. For example, I once helped a client whittle a list of markets down from nearly 45 to 13 by grouping specific segments like education, government, healthcare, and office properties under the main header “Commercial and Institutional.”

    This client, GEBetz, a manufacturer of water treatment chemicals, is also a good example of giving users many ways to find information. Users can find information right from the main navigation by selecting application, industry, product family, and even case histories that are sorted by all three of those categories.

    A less than desirable example of categorization appears on the Ametek website. Check out the list of industries. Not only is it contained in a scroll-box, but the choices number nearly a hundred. And I have to wonder why Metal Casting, Metal Forming, Metal Pickling, and Metal Plating weren’t grouped under one header—Metal. The product brand dropdown menu has the same problems. One might expect brand names—names that would be recognizable in the industry, I’m sure—but instead there are choices like “500 Series” and “Series 90” alongside such brands as “Windjammer” and “Jofra.” Huh? With names like these, new customers aren’t able to choose the right product for their needs unless they already know those brands. To make matters worse, a few product family names, like “fluoropolymer tubing” and “heat exchangers,” expand the confusion. What this site really needs is a good information architect to lock themselves in a room with the obviously varied business units until some kind of consensus on site structure and operations is achieved.

  3. Show me the difference.

    When presenting technical information, the key is to show benefits and differences. Comparison charts are an excellent way to give engineers a tool to choose between multiple, similar products. Remember, a comparison chart shows the differences between products. If every product that appears in the chart has the same size, speed, shape, color, or specs, then it’s not showing customers the differences between products—and not helping them make a choice.

    A good example of a comparison chart is located at the website of JLG, a manufacturer of industrial material handlers. In each product category, a chart shows the platform height, capacity, horizontal reach, vehicle weight, and power source for each model in the line.

  4. Give me the tools.

    Finally, convince your client that they will not be eliminating the sales force by putting specifiers, configurators, and calculators on their site. These tools, if designed properly, help customers choose the right product for them, so that when they do contact the company they are much closer to making a purchase. This will save the sales force time—and they may even use the tool themselves!

    For example, Dell’s business site (as well as the consumer side) allows users to customize their PCs with a clean, easy-to-use computer “Finder,” or configurator. Further, they give their business customers an ROI calculator to help them figure out how much money their company is going to save by using Dell’s products.

    For GE Lighting’s business customers, they offer a Virtual Lighting Designer that takes a darkened picture and lights it with various configurations of lights. Users may also select categories such as energy efficiency, product life, light output, safety, and environmental friendliness when selecting lighting. If customers still aren’t sure, GE Lighting offers a Lighting Auditor that collects user information and then calculates annual energy costs and projected energy costs if users switch to GE products. A much simpler suite of estimators called the GE Lighting Tool Kit offers help with lighting layout, fixture replacement, lifecycle costs, and dimming system savings.

Selling sand in the desert
Talking a business-to-business company into any of the solutions above can be very challenging. Here are a few pointers:

  • Put together a solid creative brief that shows how this project embodies their corporate strategy.
  • As an addendum to the creative brief, do a “potential ROI” summary that shows how the added functionality is going to contribute to things like attracting and retaining customers, making it easier for customers to do business with them, or helping customers make purchasing decisions.
  • Develop solid storyboards and functional specs to help show your client exactly what they are going to get. Have the client approve them and sign off on them.
  • Have examples of how you’ve done this for others. Or, if you work for the company, provide examples of how your competitors have done it (or haven’t, so there’s opportunity to be the leader).
  • Develop and present a measurement plan for how you are going to define the program’s success.

Whether you work for an agency, yourself, or for one of the business-to-business companies, your creativity and versatility can always be practical, useful, and fruitful for your employer. Don’t be afraid to “live on the fault line” of emerging technologies, but keep pragmatism and business objectives at the forefront. That way, you’ll be sure to gain friends in your user communities, and hopefully turn those people into loyal customers.

Best of luck!

Nancy Carl is the Residential eMarketing Specialist for Armstrong World Industries, a leading manufacturer of flooring, ceilings, and cabinets. Her job involves working with internal business units to determine their interactive needs and implement them on Armstrong’s award-winning website. Working in the interactive field since 1996, Nancy began as an Information Designer and held various positions since then in the nebulous, ever-changing IA realm. Her specialties include eBusiness strategy, usability, content management, and information organization.

The Politics of User Experience

As user experience professionals, we’ve all been hit by the heavy hand of organizational politics. I received my first whack as an intranet producer at MCI in 1996. When I asked why we had a link to a weather site posted so prominently on one of our intranet gateway pages, my boss responded that the CEO liked to check the weather first thing in the morning. Uh-huh.

But organizational politics sometimes pales in comparison to the inside-the-beltway-Republicans-vs.-Democrats brand of politics that has such a significant impact on government websites.

Governments hire thousands of employees and spend millions of dollars on contractors to design, build, and operate websites. Chances are good that you will have some exposure to government work, and therefore, some exposure to the politics of user experience.

In this paper, I’ll detail a few of the more ubiquitous political influences, most of which are not seen outside of government. I’ll then explain why they have a negative impact on user experience and finish by explaining how to mitigate them.

Before going further, let’s define a few terms:

Politics. I like this definition from Webster’s: the art or science concerned with guiding or influencing governmental policy.
User Experience. My own definition for purposes of this article: the sum total of a visitor’s website interactions and perceptions, influenced by visitor characteristics (knowledge, personality, demographics, etc.) and site characteristics (content, information architecture, visual design, performance, etc.).
Politics of User Experience. Attempts to influence site visitors’ view of government policy by controlling their site interactions and perceptions.

While user experience is impacted by politics at all levels of government, my focus for this paper is the U.S. federal government. But many of the issues discussed below are also relevant at the state and local level. For example, many U.S. state and municipal governments promote the agendas of governors and mayors respectively — sometimes at the expense of user experience.

Al Gore may have invented the internet, but the Bush administration is much savvier in its use of the web to promote its agenda. (Please don’t read this as a criticism of the current administration; all future administrations will see federal sites as major policy promotion vehicles.) Features that combine editorial content and promotional material are common, as are advertisements that seem to meld into content or navigation areas.

Promotional activities manifest themselves in two main forms:

The cult of the president
Federal websites are peppered with stories that highlight the president’s activities (“Today President Bush unveiled…”). These “news” items often impact a relatively small audience but command disproportionate attention in the page layout.

For example, the U.S. Department of Labor site recently featured a story about Bush signing the Nurse Reinvestment Act. This was important enough to once command a prominent spot on the homepage, but now it’s difficult to find information on the topic anywhere on the site.

It’s also common to see link labels that highlight the administration first and the content of the link target second as in:

“Bush Administration announces homebuyer protections to curb predatory lending”

The first three words of this label add little value for users: they do nothing to describe the content of the link target. The label is therefore less scannable than it could be, and users must work harder to determine where the link leads.

Banner ads gone wrong
Over the past year, there has been an increase in the use of small banner ads or icons to promote other government sites.

Most sites link to the government-wide directory site, and this makes some sense. But increasingly, agency sites are posting icons for the White House and USA Freedom Corps sites — administration-specific sites that promote the current administration’s agenda and do not relate to the agencies’ missions.

Because there are no standard formats or placement guidelines for these ads, they often end up compromising the site. The icons are frequently commingled with other site navigation elements, as demonstrated in the placement of the White House icon on the Housing and Urban Development site. This presents another piece of information for users to process as they try to navigate already complex government sites.

In each of these instances, there was a conscious attempt to promote the Bush administration and its policy agenda on agency sites — with minimal consideration of user goals. (I’ll go head-to-head with anyone who says that visitors to any of these sites have indicated the need for a direct link to the White House site or to see a photo of W. signing a minor law.)

Why does this happen? It’s not because federal web managers don’t understand experience design — most of those I’ve talked to have a good understanding of usability, information architecture, interaction design, and branding. While I can’t pinpoint exactly who is driving the “advertorializing” trend (it’s difficult to reliably extract that type of information), I can make an educated guess about what drives it.

In all agencies, there are political appointees and other senior executives whose job responsibilities include supporting and promoting the administration’s policy agenda. Since the web has become a vitally important channel to government agencies, these senior managers naturally view agency websites as ideal mechanisms for advancing that agenda. And because they’re usually not well-versed in user experience issues, they usually don’t consider the potential impact of their decisions on site visitors. Even when made aware of user experience issues, the more senior managers will often prevail over the web managers — but that’s not unique to government.

To serve and protect
Laws like the Child Online Protection Act and the Digital Millennium Copyright Act have received the bulk of the attention in the internet community. But a number of laws (such as the Government Paperwork Elimination Act and the Rehabilitation Act Amendments) and “official directives” (such as Office of Management & Budget announcements) that fly below the public radar have a significant impact on the user experience of federal websites. Here are three examples:

Empty cookie jar
Based on its intimate understanding of internet technology (ahem!), the Office of Management and Budget (OMB) has issued a directive prohibiting the use of persistent cookies. This happened despite guidance from another agency that cookies posed minimal risk. Virtually no federal sites use persistent cookies, although a few use session cookies to good effect.

The knee-jerk OMB prohibition of persistent cookies was a political decision made to position the government as a concerned party that wouldn’t do anything to compromise the privacy of its citizens. But the government doesn’t have the best track record of dealing with online privacy issues: the exposure of sensitive information on the Social Security Administration site a few years ago and the more recent debate surrounding “Carnivore” (the FBI’s controversial tool for “online wire-tapping”) don’t exactly engender public confidence.

You are now leaving the town of…
There’s an unwritten rule on government sites that links to other sites must first redirect users through a disclaimer page. Legend has it that this policy originated with some Department of Justice attorneys and was picked up by attorneys at other departments and agencies who “encouraged” federal web managers to follow suit. Since the public relies on government agencies to provide “official” and “authoritative” content, there is justifiable concern about liability arising from referring users to erroneous information.

The net effect is that you have attorneys, most of whom are probably not knowledgeable about user experience, determining navigation flows.

Accessibility is not the same as usability
Section 508 of the Rehabilitation Act Amendments is a watershed piece of legislation; among other things it requires federal agencies, and some state and municipal governments, to make their websites accessible to people with visual, auditory, and motor skill impairments. Section 508 essentially puts into law many of the guidelines from the W3C’s Web Accessibility Initiative (WAI). This legislation was needed, but it’s had a bit of a numbing effect on federal website user experience.

Especially during the period leading up to the regulation’s effective date (June 2001), agencies focused so much on complying with Section 508 that they diverted resources from more fundamental user experience issues such as developing needs-based information architectures. While Section 508 presents technical guidelines, it provides no guidelines for effective interface development (i.e., how to implement the guidelines). This has resulted in some clumsy workarounds that degrade user experience.

That seems to be true of many laws and regulations — they outline what you must do, but don’t provide guidance on how to do it, or don’t reflect technical or practical reality. For example, language in the Government Paperwork Elimination Act requires agencies or their representatives to submit to a months-long OMB approval process if they want to collect information from more than ten members of the public. So much for obtaining information on user needs from… actual users.

These regulations and others like them — well-intentioned but created without sufficient input from the people they impact — can make it difficult to achieve user experience goals.

And my point is?
Okay, all this seems to make sense — but so what? What’s the big deal with a few extra icons here and there or a feature story about the president? Why make a big stink about cookies? And why care when government web managers’ hands are virtually tied when it comes to these issues — after all they can’t say no to their bosses, right? These issues won’t disappear, right?

Well, right. But we’re no strangers to working around these types of problems. And we need to care because these issues have an impact on the user experience of citizens (hey, that includes all of us!) as I’ll outline below.

Impact on the user
Political influences can impact user experience in multiple ways. Some, such as information hierarchy problems, are significant. Others, such as link redirections, may be mere nuisances. But all impact users’ website experience.

  • Information hierarchy problems: Because these politically driven “features” are often emphasized, through design and positioning in a webpage layout, they often appear more important than other content on the site. This makes it more difficult for users visiting the site to accomplish a specific task unrelated to a political feature — if for no other reason than they must process additional information choices.

    Government sites frequently have confusing information hierarchies; in a heuristic usability study I led earlier this year, we found that two-thirds of sites suffered from this malady (e.g., it was difficult for an expert reviewer to discern importance or prioritization of content elements). Because of their inherently broad scope and diverse audience base, government sites already present information hierarchy challenges; political influences only exacerbate the problem.

  • Download delays: Server calls to download the referral link icons take time, as does the download of the images themselves as they render on the webpage (the images can total 15k+, depending on file format, image quality, and image dimensions). Add in gratuitous photos of the president, and dialup users may see a perceptible delay in page loading, which negatively impacts user experience.
  • Disclaimus interruptus: Offsite link redirections interrupt users’ navigation flow — they essentially have to perform an extra step to confirm their intent. This extra step does not enhance user task performance, and therefore, can negatively impact user experience.
  • Preference selection repetition: As a result of cookie prohibition, users lose their ability to set and retain preferences (ZIP code, text-only browsing, etc.) easily. Site managers lose the opportunity to more easily provide some types of navigation support, such as recent (prior session) page visits, and instead, resort to more complex user registration mechanisms to support simple tasks.
  • Extraneous accessibility navigation: Section 508 compliance requirements give rise to some quick fix solutions such as “skip navigation” links at the tops of pages and an increase in “text-only” versions. Text-only versions are often only partially implemented, requiring a user to toggle back and forth between views. These responses to accessibility requirements add extra links to already crowded layouts and often throw “text-only” users into a cycle of extra clicks to access content.
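The “skip navigation” quick fix mentioned above usually amounts to nothing more than an in-page anchor placed before the navigation markup. A minimal sketch (the anchor name and link text are illustrative; implementations vary from site to site):

```html
<!-- First link in the page body, so screen reader and
     keyboard users can jump past the navigation -->
<a href="#content">Skip navigation</a>

<!-- ...site navigation markup here... -->

<!-- Target anchor where the main content begins -->
<a name="content"></a>
<h1>Page title</h1>
```

The fix works, but as noted, it adds one more link to an already crowded layout, which is why it is better handled as part of an overall redesign than tacked on.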

While many of these glitches seem minor, we all know that small problems collectively result in significant — if not always understood or even consciously perceived — downgrades in user experience.

Action items
It’s obvious that these political influences can negatively affect the user experience of a site. Given the political and organizational realities at federal government agencies, what can be done to address these issues? Whether you’re a government webmaster or a consultant to governments, you’ll find some remedies below.

While you can’t avoid including referral icons and administration highlights, you can minimize their impact by:

  • Reducing the number of similar page components and simply eliminating “voluntary” referrals.
  • Reducing the size of icons and highlights sections and eliminating gratuitous signing ceremony graphics.
  • Limiting these items to the homepage.

Develop a “referral” layout that places all referral content in specific sections of the site. (A simple idea, but you’d be amazed at how many sites don’t do this.)

For example:

  • Place icons linking to other sites at the bottom of the layout and visually separate them from other layout elements.
  • Treat administration highlights as you would any other news item, both in terms of content (link, descriptive text) and location on the page — they will blend into the information hierarchy and reduce confusion.
  • Set up a disclaimer section at the bottom of the homepage containing the same text as offsite redirection pages, and then begin to eliminate the redirections (note: clear with legal first!).

Improve the overall user experience of the site to minimize the impact of politically-driven features and eliminate the need for accessibility workarounds:

  • Redesign the homepage to improve the information hierarchy.
  • Develop and enforce (through content management systems, server-side includes, and/or policies) consistent visual identity and navigational elements that make it obvious to users that they have exited your site.
  • Use session cookies instead of persistent cookies, where applicable, to track ZIP codes, user-supplied preferences, and navigation paths within a visit.
  • Address accessibility compliance as part of a complete page redesign to avoid “tacking on” compliance fixes that degrade user experience.
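On the cookie point, it’s worth knowing that whether a cookie persists is controlled entirely by a single attribute of the Set-Cookie response header: omit the expiration and the browser discards the cookie when it closes. A sketch of the two forms (the cookie name, value, and date are illustrative):

```
Set-Cookie: zip=20500; path=/
    (no "expires" or "max-age" attribute: a session cookie,
     held in memory and discarded when the browser closes)

Set-Cookie: zip=20500; path=/; expires=Thu, 31-Dec-2037 23:59:59 GMT
    (an expiration date makes it a persistent cookie,
     written to disk and sent back on future visits)
```

This makes the session-cookie remedy cheap to implement: the application logic is identical, and only the header attribute changes.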

Most federal sites are developed using a hodgepodge of authoring tools. They often contain a mix of proprietary tags and even FrontPage themes that can increase page sizes (and load times) and make it difficult to support a wide array of browser types, including assistive technologies like screen readers.

It will require some time and effort, but moving toward a standards-based site (HTML 4.01, XHTML, CSS) that degrades gracefully will help deliver a much better user experience, in addition to making the site more efficient and easier to manage. Use of standards-compliant code could be made mandatory for all new content, and older content could be retrofitted as it is updated.
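As a rough illustration of what “standards-based” means in practice, the sketch below uses a declared doctype, structural markup, and an external stylesheet; browsers or assistive technologies that ignore the CSS still get readable, navigable content. The page title, file paths, and links are invented for the example:

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Agency Name - Program Information</title>
  <!-- All presentation lives in the stylesheet, not in font tags
       or editor-generated theme markup -->
  <link rel="stylesheet" type="text/css" href="/styles/site.css" />
</head>
<body>
  <h1>Program Information</h1>
  <ul>
    <li><a href="/programs/grants.html">Grant programs</a></li>
    <li><a href="/programs/contacts.html">Regional contacts</a></li>
  </ul>
</body>
</html>
```

Compared to tool-generated markup full of proprietary tags, a skeleton like this is smaller, validates, and degrades gracefully on older browsers and screen readers.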

The politics of user experience represent a small subset of the many challenges faced by web managers in designing, producing, and managing a federal government site. Inadequate funding, small staffs, far-reaching legislation, and directives from on high all conspire to test the mettle (and sanity) of even the most experienced federal web managers.

But I firmly believe that we need to push ahead on these incremental improvements since it can be very difficult to effect large-scale change. Incremental improvements, over time, can add up to major improvements in user experience of government sites. And that benefits all of us.
Steve “Fleck” Fleckenstein is an independent consultant focused on strategy and user experience consulting for government and non-profit clients. A webhead since 1994, he’s managed site strategy and development projects for tech startups, non-profits, a large telecom firm, and government agencies. He lives just outside Washington D.C. with his wife and two young boys. Reach him at .

Beauty is Only Screen Deep

I admit it. Up until fairly recently, I didn’t really “get” the web. I thought my job as a web designer was all about looking good, delighting the eye, and imposing established design conventions on the user. I knew what color combinations worked best, what line length was comfortable for reading, and what type sizes produced the best balance and proportion, so I designed good-looking pages according to convention, and I did my utmost to make sure that my designs could not be altered by the users.

But I am starting to realize that, ultimately, looks don’t matter—that beauty is only screen deep. I have seen enough people delighted by horrendously designed pages—just thrilled as they squint to read pink type on a red background—because the site has something they want. And I have seen users utterly frustrated by attractive sites that use elaborate drop-down menus and rollover buttons to “enhance” the user experience.

The fact is that most people do not use the web for visual stimulation. People use the web to buy things, find information, make contacts, and what they notice is whether they can successfully buy things, find information, and make contacts. They do not notice the well-thought-out tag line or the expensive logo—they’re just window dressing, just frosting on the cake. In fact, all the fussing we designers do to draw attention to our work often winds up just getting in the way.

Take graphic text. Many of us use graphic text instead of plain text, particularly when designing navigation. We do this because we want to use a non-standard typeface, or because we want to create rollovers, or because we don’t like the way link underlining looks, or because we want to apply special effects to our text. Often we use graphic text to make sure users cannot mess up our layouts by resizing the text on the page. All of these reasons have to do with our concern about how text looks.

But text is not for looking at. Text is for reading, and there are many instances when people cannot read graphic text. People who need large text for reading cannot enlarge graphic text. People who use text-to-speech software to read web pages cannot read graphic text, unless the developer supplies alternate text. People who need to customize their view of the web, for example, by applying a custom text color, cannot change the color of graphic text. And graphic text generally does not work well in flexible layouts, which allow people to access the web on different devices. In the end, the care and attention we pay to having good-looking text interferes with its primary purpose: reading. This means our choice to use graphic text is one of form over function: a determination that the way text looks is more critical than whether it can be read.

On the other hand, text that is truly text is wonderfully functional. Content in text format can be resized, recolored, reformatted, read aloud, searched, indexed, categorized, copied, pasted, translated, analyzed. Sure, it is difficult to style text online the way it is styled in print, with control over details such as leading, kerning, and measure. And yes, it is maddening the way each flavor of system and browser renders text differently, and that, with a simple click of the mouse, users can change text size and send carefully crafted layouts into disarray. The solution, however, is not to try to exert control using roundabout methods such as graphic text, or by fixing text sizes and page layouts for “optimal” readability. The solution is to let go. One person’s optimal text is another’s Flyspeck 3, and the power of the web is that it can accommodate them both.
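In stylesheet terms, “letting go” mostly means sizing text in relative units so that the reader’s browser settings win. A minimal sketch (the class name is invented for the example):

```css
/* Sizes in ems scale with the user's preferred text size.
   A fixed pixel size, by contrast, overrides the reader's
   settings in most browsers of this era. */
body { font-size: 1em; }
h1   { font-size: 1.5em; }
.nav { font-size: 0.9em; }
```

A layout built on relative sizes will reflow when users enlarge text, rather than break—which is exactly the graceful transformation argued for below.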

This is not to say that we should all go natural and build text-only sites. Today’s web can accommodate conservative good looks. The trouble lies in the emphasis on looks above all else: the homepages where the only text content on the page is the copyright statement or the sites authored completely in Flash. These efforts to fix designs on the page oppose the very nature of the technology. The web was built for flexibility, and what we have been doing is trying to wrestle it into submission. We use various methods, legitimate and hacked, to secure our designs down to the pixel. This approach allows us to stay within the box—to apply to the web what we already know about design. The trouble is, the web was built to flex and flow, and our efforts to hold it in place wind up stifling its potential.

The web was conceived as a means to exchange documents that could be read by anyone, anywhere, anyhow. Trouble is, the early web pages were a little nerdy-looking, so we designers came in and took over and produced documents that could be read by anyone [sighted], anywhere [on a T1 line], anyhow [on a Windows machine running Internet Explorer and the Flash plug-in]. And though the web is now a much better-looking place, it is also less welcoming and accommodating than in those early, ungainly days.

The flexibility that has been sacrificed in this passage from nerdy to swank has undermined the capabilities of the medium. The web is supposed to be a space that people can mold to fit their preferences and accommodate their needs. With access to tools like browsers and screen readers, and with the wealth of information published on the web, people should have unprecedented access. However, when we build pages that rely on pixel-level precision, we lock out people who require a view other than the one we offer.

The measure of quality in web design should not be good looks, but graceful transformation: pages that can be accessed under different conditions and keep their integrity. A “real” web designer is one who can delight the sighted user with an elegant, attractive layout, and can make the same page legible to low-vision users who have their fonts set large for reading, and can make the same page clearly written and organized so it is understandable to all users, and can make the page navigable from the keyboard for people with mobility problems, and can write the page code so it makes a good read for blind people using screen reader software. A real web designer embraces the medium and designs for maximum inclusivity. I am not a real web designer, but I aspire to be.

It used to be that we thought we needed to pretty up the web so people would use it. Those days are long gone. Today’s web user is after a meaningful experience, not just a good time, and has little need for adornments. Maybe it’s because we’ve grown up some: the technology, and those who use it. I can now see that the beauty of the web lies in its function, not its form, and I would rather that my sites attract attention because they are widely useful and usable than because they are pretty.

Sarah Horton is a web developer with Academic Computing at Dartmouth College, where she helps faculty incorporate technology into their teaching. Together with Patrick Lynch she authored the best-selling Web Style Guide, recently released in its second edition. Sarah regularly writes and speaks on the topic of accessible web design.

Mobile: The State of the Art

A few months ago, I was on my way home from the San Francisco Airport in one of those door-to-door shuttle vans. At some point I casually noticed the woman beside me thumbing the keys of her mobile phone. Dialing a number, I assumed—a common enough sight. But then she kept dialing and dialing and dialing.

I had just spent the previous three months in Germany working on a mobile messaging application. While there, I had seen many a teenager and young professional similarly engaged with a “handy” and I had become savvy enough about these things to know that the Germans were sending text messages to each other, whereas my neighbor on the shuttle van was surely just having some kind of problem with her phone. Short Message Service (SMS) is so popular in Europe, they’ve turned the acronym into a verb, but on this side of the Atlantic, our “cell phones” are for talking.

To make a long story short, my professional curiosity got the better of me. I asked, and it turned out she was indeed composing a text message to send to her friend. SMS had apparently hit the States while I was away.

In Europe, non-voice (mostly messaging-related) services account for around 10% of mobile operator revenue. The figure is much higher in Japan, where many mobile phones are full-fledged multimedia devices, featuring color displays, integrated digital cameras, and stereophonic ringtones. The U.S. market is a different story, but things are changing.

The popularity of SMS is not likely to explode here like it did in Europe and Asia. But as more powerful devices and better services become available, it will certainly become more and more commonplace to see people transacting with their phones at arm’s length instead of speaking into them. Mobile devices already represent a significant channel—and they will no doubt become the primary channel—for a number of common human-computer interactions. These interactions often take place during brief pauses in transit, in distracting environments, on devices that are difficult to use, and where mistakes can be expensive. Obviously, a highly usable interface is key, and there is a growing demand for IAs who specialize in mobile.

But try to find a good book on the subject.

The world of mobile phones is a jungle of proprietary technologies with few established standards that, in some ways, resembles the early days of personal computing. I intend in this article to paint a kind of impressionistic landscape of this world; to present a survey of the markets, technologies, devices, and key applications, along with some examples of successes and failures, a glimpse of the near future, and some thoughts on what all of this might mean for IAs.

Key markets

Japan
Nowhere has the mobile phone industry burgeoned like it has in Japan. With mobile phones, as with other things electronic, the Japanese have lived up to their reputation for embracing new gadgets as quickly as manufacturers can conceive of them. It remains to be seen whether Japan is a year ahead of the rest of the world or simply a unique market. For the moment, it is safe to assume both are true. The proven success of Japan’s most popular mobile services seems to promise their broader appeal, but the multitude of niche offerings is largely ignored by the rest of the world.

It is worth noting that in their latest generation, Japanese phones are actually a little bit bigger than their predecessors, marking the first reversal of what has been a steady trend toward smallness. The demand for processing power seems to have trumped the demand for shirt-pocket-sized convenience. Japanese mobile phones double as portable game consoles, music players, and cameras, among other things. They employ an always-on network connection, with data speeds roughly equivalent to dialup modems, so users can download small pictures, sound files, applets, and games cheaply, quickly, and easily.


The Generations of Wireless

  • 1G (first generation): The analog radio cellular phones that first appeared in the 1970s
  • 2G: Digital voice encoding introduced
  • 2.5G: Increased bandwidth; packet routing
  • 3G: Broadband data speeds; global roaming; enhanced multimedia

Europe
Europe has the highest average mobile phone penetration rate in the world, and it’s not uncommon for Europeans to own more than one mobile. European mobiles use removable SIM cards, the small chips inside the phones that store the subscribers’ personal information (phone number, contacts, saved text messages, etc.) and identify the subscribers on the network. The European SIM card is universal, meaning it’s easy to remove a SIM from one phone and insert it into another. Therefore, it’s easy for Europeans to upgrade to a new phone or own a whole drawer full of them.

With non-voice services, Europe is following Japan’s example. Operators are scrambling to introduce new color devices and accompanying services to run on their relatively new 2.5G infrastructures. “i-Mode,” the packet-based service for mobile phones offered by Japan’s leader in wireless technology, NTT DoCoMo, has had recent launches in Germany, the Netherlands, and Spain, and European operators are touting Multimedia Messaging Service (MMS) as the next big thing.

The United States
The United States has lagged behind, but the latest service offerings from the biggest American providers suggest the gap is closing—the technology gap, that is. Adoption rates in the U.S. are a different story. The mobile phone penetration rate here is about 45% (compared to about 75% in Europe and 65% in Japan). American consumers have not embraced mobile like their Japanese or European counterparts for a variety of reasons.

U.S. providers, however, are boldly charging forward. AT&T will reportedly offer i-Mode to its customers this year (NTT DoCoMo holds a 15% stake in AT&T Wireless), and Sprint recently launched its “PCS Vision” service, which includes color browsing, downloadable stereophonic ringtones, and MMS.

Mobile technologies

This is where things get messy. There is little consistency at any level. Developers face a huge number of unique client devices, each running one of many proprietary operating systems and integrated browsers, supporting some subset of a handful of development technologies and markup languages, and communicating with the network via any of several digital data transmission standards.

From the bottom up then…

Transmission standards
The selection of transmission technologies has been somewhat regional. The actual mess of acronyms doesn’t warrant a detailed discussion here, but suffice it to say that when it was time to migrate from analog to digital, Europe took a consensus approach, choosing a standard called Global System for Mobile Communication (GSM) to cover the continent. Japan, too, settled on a single national standard, though politics shaped the choice.

In typical fashion, however, the United States decided to let the market drive the decision, resulting here in the deployment of a mishmash of semi-compatible standards. For voice calling, this is not an issue, but transmission of other data across the various standards has been hindered by a number of roadblocks.

Application development technologies and markup languages
With the exception of DoCoMo’s i-Mode, which uses a language called CHTML (Compact HTML—essentially just what it sounds like), WML is the markup language of choice for the wireless web. WML is simply an XML Document Type specific to mobile devices. HDML, the predecessor of WML, is still mentioned occasionally, but for all intents and purposes it is obsolete. The version history of WML can be a little confusing, especially since WML (the language) and WAP (the protocol) are often used interchangeably in the context of version support (e.g., “device A supports WAP/WML version x.x”).
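For the curious, a WML document (a “deck” containing one or more “cards”) looks like ordinary XML. The deck/card structure below is standard WML 1.1; the card names and content are of course invented:

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- Each card is one screenful; a deck bundles several
       cards into a single download -->
  <card id="home" title="Scores">
    <p>Final: 3-1</p>
    <p><a href="#detail">Details</a></p>
  </card>
  <card id="detail" title="Detail">
    <p>Goals in the 12th, 44th, and 78th minutes.</p>
  </card>
</wml>
```

The deck model exists because round trips over a slow wireless link are expensive: navigating between cards in the same deck requires no further network request.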

Significantly, XHTML, XSLT, and even Flash have been gaining support (although Flash more slowly), and many new devices will render a familiar range of image formats. This means that from a technical standpoint, developing for mobile phones will become more and more like any other web development, so the burden will be on IAs to design channel-appropriate interfaces.

One other mobile development technology bears mentioning: Java. Sun created a slimmed-down version of its language and called it J2ME. It is designed to accommodate the limited computing power of mobile devices and allow them to run small, self-contained applications. Computing power isn’t always the only limitation to be accommodated, however. Bandwidth, as well, is an issue for applications that are to be delivered for over-the-air downloading.

Operating systems and browsers
The mobile world is in the midst of its own browser wars, and most often a device’s built-in browser is tightly integrated with its operating system. The biggest players are Nokia/Symbian, Ericsson, and Openwave, although Microsoft has recently begun to move into the wireless space.

Nokia, the leading handset manufacturer, has recently begun to peddle a productized version of its software to other device manufacturers. Openwave, whose main business is software, has meanwhile seen its browser installed in a wide range of handsets. That, however, has far from guaranteed any kind of consistency. Openwave’s software has been deeply customized for certain manufacturers, and there are even different customizations of the software for different handsets by the same manufacturer.

This means the same markup is rendered differently on different devices. It also means the interaction between the hardware, the software, and the remote application—the physical mapping of the phone’s keys to the application’s functions—cannot easily be predicted or specified.

Devices
The range of devices on the market continues to expand, with new devices being introduced much more rapidly than old devices are being retired. The best way to make some sense of it all is to take a zoological approach, to impose a classification system on the multitude of species.

There are two useful facets for such a system: degree of mobility and amount of computing power. Focusing on mobile phones, I divide them into two classes. The more common, and therefore more familiar, of these is the set of monochrome data-capable phones. The other class is the set of more powerful phones with large, full-color displays. My chosen differentiators in this case are rendering capability and navigation method (as expressions of computing power).

This classification system obviously has its limitations. Some monochrome phones, for example, support Java, and some don’t. Some color-capable phones don’t support four-way scrolling. And there are always anomalous devices that defy easy classification altogether, like the Handspring Treo 180—a powerful “Smartphone” that happens to have a large monochrome screen.

Key applications

At the moment, a common perception is that mobile computing is little more than a poor imitation of desktop computing. Critics wonder why anyone who has access to a computer would bother to agonize their way through an m-commerce (ecommerce on a mobile device) transaction. The simple answer is: they wouldn’t.

The mobile applications most likely to succeed will be those that take advantage of their mobile-ness. I have mentioned i-Mode as an example of an application that has succeeded, but it’s useful to focus on the broader categories that this example and others represent.

Messaging
By far, the most successful non-voice application in the mobile world is SMS. Remarkably, it was originally conceived not as a consumer product but as a way for mobile service providers to send data—anything from promotional messages to technology upgrades and patches—to their subscribers. These subscribers quickly embraced it as an inexpensive way to send short messages (originally 160 characters maximum, at about 15 to 25 cents each) from mobile to mobile. According to the GSM Association, Europeans send as many as a billion text messages every day (compared to 12 million in the United States).

More recently, various enhanced messaging services are gaining popularity, including different mobile implementations of popular Instant Messaging services like Yahoo! Messenger and AOL Instant Messenger, and Multimedia Messaging services.

Browsing
Outside Japan, browsing content via the wireless Web has arguably flopped. “WAP is crap” goes the saying. However, the introduction of new color devices and the rollout of higher-speed networks have brought renewed hope for the future of mobile browsing in general. Most people believe that the browsing applications most likely to succeed are those that provide targeted, on-demand information (e.g., sports scores, stock quotes, and weather reports) quickly and easily, and obvious and immediate utility (e.g., travel and event booking, auction bidding, and gambling).

Games
Research has shown that people who use their phones for non-voice applications often do so as a way of killing time while commuting, for example, or waiting in line. Games provide an ideal distraction. Some amazingly simple games have been a hit with mobile phone users, demonstrating that people who expect their PCs to immerse them in minutely rendered 3D worlds are nonetheless willing to spend 15 minutes a day playing “Snake” while they ride the bus.

Games can be delivered in several ways. They can ship with the phone as built-in applications; they can reside on the network to be played during active sessions; or they can be delivered as complete applications via one-time over-the-air downloads.
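The three delivery routes just listed can be captured in a tiny model. The names and the airtime rule are illustrative assumptions drawn from the descriptions above, not any real provisioning API.

```python
from enum import Enum

class Delivery(Enum):
    BUILT_IN = "ships with the phone as a built-in application"
    NETWORK = "resides on the network, played during active sessions"
    OTA = "delivered once as a complete over-the-air download"

def airtime_needed(delivery: Delivery) -> bool:
    # Only a network-resident game requires an active (billable)
    # session each time it is played; built-in and OTA games run
    # locally once they are on the handset.
    return delivery is Delivery.NETWORK

print(airtime_needed(Delivery.OTA))      # False
print(airtime_needed(Delivery.NETWORK))  # True
```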

Personal information management (PIM)
Most phones on the market today include a suite of built-in PIM applications such as an address book and calendar. Some phones also include email and synchronization support for Outlook or other PC clients, and WAP (Wireless Application Protocol) portals like Yahoo! and MSN provide mobile support for their popular Webmail clients, as well as POP support. Mobile PIM applications show special promise for enterprises looking to support a mobile workforce, and PIM applications are primary candidates for full, frequent multichannel use.

Location awareness
Location awareness is an application enhancement, not an application category. The architecture of the mobile telecom environment makes subscribers locatable geographically, though not with GPS-like precision. Operators are adding location features to messaging services and games, as well as to more utilitarian applications like restaurant and club finders. Obviously, privacy protection is a key concern for services that incorporate locatability.

The Role of the IA

All the basic tenets of our profession certainly apply to the discipline of mobile user interface design, but layered on top of them are a number of unique considerations.

Mobile usage patterns are distinctly different from what we associate with the desktop PC. Mobile sessions often occur in public places, during brief pauses. Unless the user is idly browsing or playing a game to pass the time, she is probably seeking a piece of very specific information or trying to accomplish a single very specific task. Time is often—literally—money, so there is effectively no margin of error. If the user navigates down the wrong path or downloads the wrong file, she pays for the mistake.

IAs obviously need to understand the contexts within which a given application will be used, to understand the physical environments, the motives and circumstances, and the target devices. There are many questions that apply especially to mobiles: Will the user be moving or standing still? Will he be operating the device with one hand or two? What are the most likely distractions or obstacles? What if the connection is dropped?

It is important to remember that in many cases, users’ attention will be divided. They will interact with an application while walking down a flight of stairs or while half listening for their flight number to be called.

Because of the variety of device capabilities currently in use, IAs must frequently decide whether to exploit the advantages of a given device or to design something more generic to accommodate a broader range of devices. Screen sizes on mobile phones range from the tiny to the minuscule, so IAs must abandon notions of point-and-click in favor of click-and-flow.

Since users are presented with so little information at any given point, it becomes especially important for them to know where they are within the system (and where they were, and where they can go). Wireless data speeds are usually equivalent to a dialup connection or slower, and devices have very little storage capacity or processing power. Finally, users don’t have the luxury of familiar input devices like a mouse or alphabetical keyboard, and many users are only roughly familiar with the behavioral quirks of their chosen clients.

Most mobile phones currently in use support only one-color graphics. This severely limits visual branding opportunities, and while it may be possible to an extent to brand interaction design, it is more important to stick to familiar user interface conventions and metaphors as much as possible. Users are likely to encounter more than enough uncertainty without our help. We don’t need to create more uncertainty in the pursuit of distinctiveness or innovation.

We declare all the time that less is more. With mobile phones, one would think perhaps we don’t have a choice. Even so, the maxim applies. The simplest interfaces are the most successful. Wizards, for example, generally work better than forms because of their one-step-at-a-time simplicity. A login process requiring a username and password, then, should be a three-step, three-screen process (1. username 2. password 3. submit). On the other hand, perhaps such extreme simplification would be maddening to power users. There’s only one way to find out…

Test. Conducting usability tests on mobile applications is difficult. There are few software or hardware tools designed for testing mobile phones, and there are few documented guidelines or best practices. But that’s also part of what makes it exciting. As with all frontiers, we are required to imagine, to innovate. I worked quite a bit with a firm that used an awkward-looking setup involving a miniature spy camera and duct tape, but it gave us exactly what we needed.

Any of the points above could of course warrant at least an article all its own. I look forward to the opportunity to discuss in much greater detail some of the particulars of mobile user interface design.

Acronym Soup

ARPU Average Revenue Per User
CDMA Code Division Multiple Access (a digital voice encoding format)
CDMA-2000 The broadband CDMA standard developed by Qualcomm and Lucent for 3G
GPRS General Packet Radio Service (a packet-switching protocol designed to improve data speeds on GSM networks)
GSM Global System for Mobile Communications (the most common worldwide mobile communications standard)
HDML Handheld Device Markup Language
J2ME Java 2, Micro Edition
LBS Location Based Services
MMS Multimedia Messaging Service (mobile-to-mobile transmission of images, video, sound)
OTA Over-the-air
PCS Personal Communications Services
SIM Subscriber Identity Module (a sometimes removable microchip that stores a subscriber’s personal data and the information necessary to identify the subscriber on the mobile network)
SMS Short Message Service
T9 Text on 9 keys (a text-input helper application that employs a database of commonly used words)
UMTS Universal Mobile Telecommunications System (a 3G transmission standard)
WAP Wireless Application Protocol
W-CDMA Wideband CDMA (a 3G standard)
WML Wireless Markup Language


Shawn Smith has worked as an IA and user experience designer since 1996. Currently he develops applications and UI standards for Vodafone, the world’s largest mobile operator.

Ranganathan for IAs


An Introduction to the Thought of S.R. Ranganathan for Information Architects

“Ranganathan aimed big—he was looking for the fundamental laws that underlie experience and it quickly became an obsession.”

S.R. Ranganathan was the greatest librarian of the 20th Century. No one else even comes close. His ideas influenced every aspect of library science (a term he is credited with coining), and because he was such a complete and systematic thinker, he was gifted in the development of all areas of the field, including theory, practice, and management. Yet, as impressive as his accomplishments were, Ranganathan didn’t start out with the intention of becoming a librarian at all.

He was born in Madras, India, in 1892, trained as a mathematician, and eventually became a lecturer of mathematics at the University of Madras. In 1924 the university offered him the position of librarian. One of the conditions of the appointment was that he attend training in London to learn contemporary methods of librarianship. It was during this trip that he met W.C. Berwick Sayers, who taught him about classification theory, and it was on this trip that he began observing libraries throughout the city.

In 1925 he returned to India a different person. His desire to build libraries and improve librarianship became a passion. The basic methods Ranganathan used to develop his ideas emerged from his background in mathematics and his beliefs in Hindu mysticism. He would examine complex phenomena, break his observations into small pieces, and then attempt to connect the pieces together in a systematic way. This method has often been called the Analytico-Synthetic method. Ranganathan used this methodology for classification, management, reference, administration, and many other subjects. Francis Miksa stated it well: “Ranganathan treated library classification as a single unified structure of ideas which flowed from a cohesive set of basic principles” (Miksa, 1998).

Ranganathan aimed big—he was looking for the fundamental laws that underlie experience, and it quickly became an obsession. Girja Kumar reports, “There had not been a day of the life of Ranganathan since 1924 when he did not breathe, think, talk, and even dream of librarianship and library science” (Kumar, 1992). Kumar further reports, “[Ranganathan] spent two decades as librarian of Madras University. Never did he take any vacations during this period. He spent 13 hours every day for seven days a week on the premises of the library” (Kumar, 1992). He wrote his 62 books in the evenings, during his off hours.

In addition to the almost uncountable number of books and articles Ranganathan authored, he also created several professional and educational organizations, primarily in India, and he participated in library movements around the world.

For most librarians today, he is primarily remembered for two contributions: the Five Laws of Library Science and the Colon Classification.

The Five Laws of Library Science
The Five Laws are the kernel of all of Ranganathan’s practice. They are:

  1. Books are for use.
  2. Every reader his or her book.
  3. Every book its reader.
  4. Save the time of the reader.
  5. The Library is a growing organism.

While the laws seem simple on first reading, think about some of the conversations on SIGIA and how neatly these laws summarize much of what the IA community believes. Ranganathan saw these laws as the lens through which practitioners can inform their decision making and set their business priorities, while staying focused on the user. Although they are simply stated, the laws are nevertheless deep and flexible. They can also be updated to include the field of IA in a variety of ways.

1. Books are for use.
Websites are designed to be used, they are not temples or statues we admire from a distance. We want people to interact with our websites, click around, do things, and have fun.

2. Every reader his or her book.

3. Every book its reader.
Maybe we can modify these two to say “each user his or her content” and “each piece of content its user.” The point here is that we should add content with specific user needs in mind, and we should make sure that readers can find the content they need. Laws 2 and 3 remind me of the methodology taught by Adaptive Path. Make certain our content is something our users have identified as a need, and at the same time make sure we don’t clutter up our site with content no one seems to care about.

4. Save the time of the user.
This law, when we are talking of websites, has both a front-end component (make sure people quickly find what they are looking for) and a back-end component (make sure our data is structured in a way that retrieval can be done quickly). It is also imperative that we understand what goals our users are trying to achieve on our site.

5. The library is a growing organism.
We need to plan and build with the expectation that our sites and our users will grow and change over time. Similarly we need to always keep our own skill levels moving forward.

Colon Classification
Besides these laws, Ranganathan is also famous for the Colon Classification system, a widely influential but rarely used classification scheme. It is his greatest achievement, and it is where he developed many of his most famous ideas, including facets and facet analysis. The system again grew out of Ranganathan’s search for “universal principles” inherent in all knowledge. His belief was that if he could identify these, organizing around them would be more intuitive for the user.

For Ranganathan, the problem with the Dewey Decimal and Library of Congress classification systems was that they used indexing terms that had to be thought out before the object being described could fit into the system. With the explosion of new information early in the 20th century, these enumerative, or pre-planned, systems could not keep up. Ranganathan’s solution was the development of facets. The idea came to him while watching someone use an erector set (Garfield, 1984).

Rather than creating a slot to insert the object into, one starts with the object and then collects and arranges all the relevant pieces on the fly. This allows for greater flexibility and a high degree of specificity.

The fundamental facets that Ranganathan developed were: Personality, Matter, Energy, Space, and Time. (Amaze your librarian friends by referring to these by the acronym PMEST!)

  • Personality—what the object is primarily “about.” This is considered the “main facet.”
  • Matter—the material of the object
  • Energy—the processes or activities that take place in relation to the object
  • Space—where the object happens or exists
  • Time—when the object occurs

Ranganathan believed that any object (for him this meant any concept that a book could be written about) could be represented by pulling relevant pieces from these five facets and fitting them together. All of the facets do not need to be represented, and each can be pulled any number of times. The notation for each facet was separated by using a colon, hence the name of the system. Arlene Taylor provides a good example that uses all five facets. Imagine a book about “the design of wooden furniture in 18th century America.” (Taylor, 1999)

The facets would be as follows:

  • Personality—furniture
  • Matter—wood
  • Energy—design
  • Space—America
  • Time—18th century

The book is described by combining the relevant pieces from each facet. “Wood” is a piece of that description which covers an area that none of the other pieces cover. The power comes through combining the pieces together to form the whole. In this case each facet contributes exactly one piece, which would be rare in real life. Also, keep in mind that the specifics of how the Colon Classification works are complex (be skeptical of anyone who claims to understand them), and are generally beyond the realm of the practicing IA.
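Taylor’s furniture example can be sketched as a toy model. Be warned that the colon-joined string below is a drastic simplification—real Colon Classification uses coded values and more elaborate connecting symbols—so treat this purely as an illustration of the faceted idea.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PMEST:
    """Ranganathan's five fundamental facets; any facet may be absent."""
    personality: Optional[str] = None  # what the object is primarily "about"
    matter: Optional[str] = None       # its material
    energy: Optional[str] = None       # processes or activities involved
    space: Optional[str] = None        # where it happens or exists
    time: Optional[str] = None         # when it occurs

    def notation(self) -> str:
        # Join whichever facets are present with colons, in PMEST
        # order—hence the system's name. (Real CC notation is far
        # more complex than this.)
        facets = [self.personality, self.matter, self.energy,
                  self.space, self.time]
        return ":".join(f for f in facets if f is not None)

# Taylor's example: the design of wooden furniture in 18th-century America.
book = PMEST(personality="furniture", matter="wood", energy="design",
             space="America", time="18th century")
print(book.notation())  # furniture:wood:design:America:18th century
```

Because the description is assembled from the object rather than chosen from a pre-planned slot, a book that lacks, say, a Matter facet simply omits that piece instead of failing to fit the scheme.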

(Stay Tuned: Boxes and Arrows has plans to write in more detail about facets in the future.)

There is, however, much that the practicing IA can take from Ranganathan. Besides exploring concepts such as the Five Laws or practices such as facet analysis, Ranganathan was also a diligent evangelist of getting information to people who needed it, and he thought deeply about the problems he faced from all sides. There is still a lot that needs to be done to build up the field of information architecture; Ranganathan may help us the most by serving as inspiration.

  1. Miksa, Francis L., The DDC, the Universe of Knowledge, and the Post-Modern Library. Albany: Forest Press, 1998; 67
  2. Kumar, Girja, S.R. Ranganathan: An Intellectual Biography. New Delhi: Har-Anand Publications, 1992; 45
  3. Kumar, 93
  4. Garfield, Eugene, A Tribute to S.R. Ranganathan: Part 1. Life and Works, 1984; 40
  5. Taylor, Arlene G., The Organization of Information. Englewood: Libraries Unlimited, Inc., 1999; 180

Mike Steckel is an Information Architect/Technical Librarian for International SEMATECH in Austin, TX.

From Satisfaction to Delight

“As a field, I think we’ve already learned how to satisfy. But we’ve only scratched the surface of providing delight.”

How many times have you heard this recently: “We want to go beyond satisfying customers, we want to delight them.” What exactly does that mean? How often do customers truly experience delight when interacting with a company, its products and its services? The answer, I suspect: not often. After reading or hearing “delight” referenced in company and product charters multiple times in a single week, I thought the idea deserved deeper consideration. As a field, I think we’ve already learned how to satisfy. But we’ve only scratched the surface of providing delight.

For the purpose of this discussion, I am both an experience design professional (one striving to delight) and a customer (one desiring delightful experiences). Personally, I am satisfied when an individual or business knows, understands and meets my wants, expectations and needs. But I am delighted when an individual or company goes beyond my needs and exceeds my expectations. In the world of digital applications and devices (where, after all, many of us live), my expectations are high, making delight a truly rare emotion.

Toward customer understanding
Humans interact with a product or service with an outcome in mind. We, as design professionals, have the means of bringing to life the concepts and systems that enable people to complete tasks and satisfy those outcomes. The process of creating potentially satisfying experiences is already defined. Using a breadth and depth of research and data collection methods, we are able to form a thorough understanding of customers’ wants, needs, tasks, perceptions and behaviors. Our photographs, recordings, drawings, collages, server log records, transaction records, registration data, call center logs and survey responses are the raw materials of their stories.

We synthesize this data into models, frameworks and matrices that tell the stories. We invent representative customers, give them names and histories and put them in modern-day contexts of interaction. These stories come to life via a project plan and the digital and physical products and services that result. During this iterative user-centered process, we categorize, prioritize, hypothesize and validate our solution, ensuring that it succeeds. At every step, we account for efficiency, feasibility and fitness. We predict a future interactive dialogue and then put a measurement plan in place to track, refine and continuously improve it.

Many of us have been on teams that masterfully balanced the art and science of acquiring customers, converting customers into buyers and retaining customers over long periods of time, succeeding in the face of fierce competitive pressures. Our industry has matured and, for the most part, we’ve gotten good at designing and building the right thing in the right way.

The next evolution of our interactive pursuits ought to be toward emotion, specifically delight. Beyond satisfying what humans want, need, desire or expect is the potential to inspire, to trigger creativity, intuition, discovery and spine-tingling emotion. Our technology-driven marketplace continues to approach a point at which highlighting technology will be moot: the playing field will be level, with all technology available to everybody. In today’s world of quantitative validation, desirability, perception and whimsy get the short end of the stick. In time, these may become our primary goals—the only points of competitive difference.

We’ve set the bar too low
At this point in experience design’s evolution, satisfaction ought to be the norm, and delight ought to be the goal. So how do we do this as experience design professionals? If the word “experience” is in your title or department, it implies you’re considering these issues. You’re planning and designing potential customer experiences—the interactions an individual has with your company, its product and services—at all times and in all places of awareness. You’re creating perceptions, setting the tone, building a relationship, and enabling dreams.

But the reality for many web users is this: simply allowing me to get something accomplished without encountering mental and physical barriers gives me pleasure. Guiding me to complete a task as I expected brings me extreme pleasure. Whether on the web or with the devices I use, my expectations are so low that merely encountering products that allow me to interact with them as I anticipated (or that match my mental model) exceeds my expectations.

Toward pure delight
Each moment of delight persists and contributes to a positive customer perception. Pure delight is the ultimate brand builder. The power of delighting on a regular basis is not to be underestimated. From $100 million box office weekends to high-priced vacations to gas guzzling luxury SUVs, our everyday experience is shrouded in escapism and physical pleasure. We strive for pure comfort in the Western world.

But in our day-to-day lives, we interact with companies, especially service companies, in a mundane, mechanical fashion. Consider a retail experience. I go into a store and try to find an item I need or desire. I’m approached by a sales associate, get help if I want it, decide to make a purchase, proceed to the checkout line, take my product and leave. Often, though, the styles aren’t to my liking, I can’t find my size, there’s no help, or the help that’s available isn’t actually helpful. The product is layered in decorative branded packaging, and I become a walking billboard. In two weeks, I get unsolicited catalogues in the mail because my address has been added to a list.

Delight in the consideration and purchase process is rarely in the picture. More likely, we find a mix of satisfaction and frustration. Delight can still exist in my enjoyment of the day-to-day wear of the garment I purchased. In many cases, we suffer through pre- and post-purchase disappointment to enjoy the daily use of a product or service. Satellite TV service, automobiles, daycare, home internet service and air travel come to mind as experiences that are fundamentally satisfying in use but contain a periphery of annoyance and inconvenience.

Now imagine a company whose core values and brand platform are based on respect for each individual customer, with an undertone of fanatical courtesy and general admiration of its customers. My interaction with that business would be designed with me and for me at the same time. Its products and services would be empathetic to my every state, yet would challenge me at just the right level, exploiting my capacity for insight, curiosity and perception. This company does not push unwanted products and deals in front of me, nor does it force a change in my behavior. I’m recognized when I enter the store and am led to what I desire. My needs are met, and through the course of my interaction I’m presented with something unexpected but captivating. The company is a trusted friend, one that inspires, enlightens and challenges me when appropriate. The emotions I feel when interacting with this company would compel me to engage further. Does this sound like any company you know? It’s a stretch.

Things to consider when planning a delightful experience
Much to my satisfaction, the consideration of good design applied to our everyday experiences has become widespread across diverse industries, disciplines, corporations, governments and consultancies. Along its evolutionary path, experience design has adopted various tips, techniques and best practices from fields as disparate as anthropology, theater, psychology, linguistics, library sciences and art. Much of experience design’s success is the result of remaining grounded in fundamental business principles—brand, channel integration, usability and customer service, to name a few. The field has reached a point where success stories are recognized and many companies value user-centered solutions. I often say that providing processes and solutions that result in the measurable satisfaction of customers ought to be the “cost of entry” into the field. These should be the minimal expectations of companies and clients today.

Today’s interactive solutions should, at the very least, deliver:

  • Brand consistency, translation and extension into people’s lives.
  • An integrated, seamless experience of all interactions with a company, whether online, on the phone or in a store.
  • Ease of use in all interactions.
  • Establishment of success metrics with rigorous measurement and validation.
  • Opportunity for a personal relationship that continuously evolves.

We should also strive to delight customers regularly, to achieve a higher plane of customer connection. This is potentially accomplished when a company:

  • Demonstrates that it knows and understands me.
  • Anticipates my questions and provides satisfactory answers without my needing to ask them.
  • Communicates with me using a heightened degree of respect, tolerance and empathy.
  • Maximizes my capacity for insight, curiosity and perception, creating the desire to engage.
  • Recognizes connections or relationships of value to me.
  • Provides pleasant surprises.
  • Intelligently personalizes my experience based on my past needs, behaviors and purchases.

Are these the outcomes we aim for when we say that we strive to delight our customers? Admittedly, recognizing opportunities to delight and then designing those potential experiences is difficult. It requires a deeper immersion in and understanding of the lives of those we design for and with. The dimensions of consideration are vast and the opportunities exist in the details, swimming between tasks and personal desires. Performing task analysis, defining behavioral models and understanding wants and needs are the foundation. Mining, correlating and modeling a multidimensional context, which may include physical environment, activities, pressures, mindset and goals, is where the clarity and connections of surprise and delight reside.

And just like striving for satisfaction, designing for delight requires rigorous measurement and validation of the intended outcome. Success is recognized in facial expressions, body gestures and, if you’re lucky, words. “What just happened there?” “Oh, I see what you’re doing. It’s not obvious, but I get it.” “I didn’t think that could be done, wow.” “Wow” is a dead giveaway.

I dare you to set delight as a success metric on your next project. Recognize and craft only one opportunity and then, on your customer satisfaction survey or user interview, inquire as to its presence and frequency. Imagine if all of the companies with which you interact during your lifetime each provided you one genuine moment of delight. Let the revolution begin.

Parrish Hanna is Director of Experience Planning at Semaphore Partners. Previously, Parrish served as President of HannaHodge, a groundbreaking user experience firm that he co-founded in 1998. For over a decade, he has spent the better part of each week planning better experiences for humans and refining the process to do so. He jumps at the chance to write and speak on issues related to experience design.

Computer Human Values

“This is not a crisis of technology or computing power, but one of imagination, understanding and courage.”

Computers and related devices need to be more human

As computers and digital devices increasingly insert themselves into our lives, they do so on an ever increasing social level. No longer are computers merely devices for calculating figures, graphing charts, or even typing correspondence. When producers of the first personal computers initially launched them into the market over 20 years ago, they could think of no better use for them than storing recipes and balancing one’s checkbook. They couldn’t predict how deep computers (and related devices) would seep into our lives.

Computers have enabled cultures and individuals to express themselves in new and unexpected ways, and have enabled businesses to transform how, where, when and even what business they do. However, this rosy outlook has come at a price: computers have become more frustrating to use. In fact, the more sophisticated the use, the application, the interface and the experience, the more important it is for computers and other digital devices to integrate fluidly into our already-established lives without requiring us to respond to technological needs. And the more widespread these devices become, the more socially agile they need to be in order to be accepted.

Interfaces must:

  • Be more aware of themselves.
  • Be more aware of their surroundings and participants/audiences.
  • Offer more help and guidance when needed, in more natural and understandable ways.
  • Be more autonomous when necessary.
  • Be better able to help build knowledge as opposed to merely processing data.
  • Be more capable of displaying information in richer forms.
  • Be more integrated into a participant’s workflow or information and entertainment processes.
  • Be more integrated with other media.
  • Adapt more automatically to behavior and conditions.

People default to behaviors and expectations of computers in ways consistent with human-to-human contact and relationships.

Ten years ago, when the computer industry was trying to increase sales of personal computers into the consumer space, the barrier wasn’t technological, but social. For the most part, computers just didn’t fit into most people’s lives. This wasn’t because they were lacking features or kilohertz, it was because they didn’t really do much that was important to people. It wasn’t until email became widespread and computers became important to parents in the education of their children that computers started showing up in homes in appreciable numbers. Now, to continue “market penetration,” we’ll need to not just add new capabilities, but build new experiences for computers to provide to people that enhance their lives in natural and comfortable ways.

If you aren’t familiar with Cliff Nass’ and Byron Reeves’ research at Stanford, you should be. They showed (and published in their book, The Media Equation) that people respond to computers as if they were other people. That is, people default to behaviors and expectations of computers in ways consistent with human-to-human contact and relationships. No one is expecting computers to be truly intelligent (well, except the very young and the very nerdy), but our behaviors betray a human expectation that things should treat us humanely and act with human values as soon as they show the slightest sophistication. And this isn’t true merely of computers, but of all media and almost all technology. We swear at our cars, we’re annoyed at the behavior of our microwave ovens, we’re enraged enough to protest at “corporate” behavior, etc. While on a highly intellectual level we know these things aren’t people, we still treat them as such and expect their behaviors to be consistent with the kind of behavior that, if it doesn’t quite meet with Miss Manners’ standards, at least meets with the standards we set for ourselves and our friends.

We should be creating experiences and not merely “tasks” or isolated moments in front of screens.

Experiences happen through time and space and reflect a context that’s always greater than we realize. Building understanding for our audience and participants necessarily starts with context, yet most of our experiences with computers and devices, including application software, hardware, operating systems, websites, etc. operate as if they’re somehow independent of what’s happening around them. Most people don’t make these distinctions. How many of you know people who thought they were searching the Web or buying something at Netscape five years ago? Most consumers don’t distinguish between MSN, Windows, Internet Explorer, AOL and email, for example. It’s all the same to them because it’s all part of the same experience they’re having. When something fails, the whole collection is at fault. It’s not clear what the specific problem might be because developers have made it notoriously difficult to understand what has truly failed or where to start looking for a solution.

We need to rethink how we approach helping people solve problems when we develop solutions for them. We need to realize that even though our solutions are mostly autonomous, remote, and specific, our audiences are none of these. They exist in a space defined in three spatial dimensions, a time, a context, and have further dimensions in play corresponding to expectations, emotions, at least five senses, and real problems to solve—often trivial ones, but real nonetheless.

Most of you probably create and use user profiles and scenarios during development to help understand your user base. These are wonderful tools, but I have yet to see a scenario that includes someone needing help. I’ve never seen a scenario with a truly clueless user that just doesn’t get it. Yet, we’ve all heard the stories from the customer service people, so we know these people exist. When you pull out those assembly instructions or operating instructions or even the help manual, they really don’t help because they weren’t part of the scenario or within the scope of the project (because the help system never gets the same consideration and becomes an afterthought). They may not be part of the “interface,” but they are part of the experience.

This is what it means to create delightful experiences, and is a good way of approaching the design of any products or services. What delights me is when I’m surprised at how thoughtful someone is, how nice someone is in an adverse situation, and when things unexpectedly go the way I think they should (which is most likely how I expect a person to act).

“What we need are human values integrated into our development processes that treat people as they expect to be treated and build solutions that reflect human nature.”

Interfaces must exhibit human values

Think about how your audience would relate to your solution (operating system, application, website, etc.) if it were a person.

Now, I’m not talking about bringing back Bob. In fact, Bob was the worst approach to these ideas. He embodied a person visually and then acted like the least courteous, most annoying person possible. But this doesn’t just apply to anthropomorphized interfaces with animations or video agents. All applications and interfaces exhibit the characteristics that Nass and Reeves have studied. Even before Microsoft Word had Clippy—or whatever that little pest is called—it was a problem. Word acts like one of those haughty salesclerks in a pricey boutique. It knows better than you. You specify 10-point Helvetica but it gives you 12-point Times at every opportunity. It constantly and consistently guesses wrong on almost everything. Want to delete that line? It takes hitting the delete key three times if the line above it starts with a number, because of course it must, must be a numbered list you wanted. You were just too stupid to know how to do it. Interfaces like that of Word might be capable in some circumstances, but they are a terrible experience because they go against human values of courtesy, understanding and helpfulness, not to mention grace and subtlety.

So when you’re developing a tool, an interface, an application or modifying the operating system itself, my advice throughout development and user testing is to ask yourself what type of person is your interface most like? Is it helpful or boorish? Is it nice or impatient? Is it dumb or does it make reasonable assumptions? Is it something you would want to spend a lot of time with? Because, guess what, you are spending a lot of time with it, and so will your users.

I don’t expect devices to out-think me, think for me, or protect me any more than I expect people to in my day-to-day life. But I do expect them to learn simple things about my preferences from my behavior, just like I expect people to in the real world.

Human experiences as a model

When developers approach complex problems, they usually try to simplify them; in other words, “dumb them down.” This usually fails because they can’t really take the complexity out of life. In fact, complexity is one of the good things about life. Instead, we should be looking for ways to model the problem in human terms, and the easiest way to do this is to look at how humans behave with each other—the good behaviors, please. Conversations, for example, can be an effective model for browsing a database. This doesn’t work in every case, but it is a very natural (and comfortable) way of constructing a complex search query without overloading a user. And just because the values are expressed in words doesn’t mean they can’t correspond to query terms or numerical values. An advanced search page is perfectly rational and might accurately reflect how the data is modeled in a database, but it isn’t natural for people to use, making it uncomfortable for the majority, however many technologically aware people might be able to use it. There is nothing wrong with these check-box-laden screens, but there is nothing right about them either. We’ve just come to accept them.
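The conversational model described above can be sketched in code: instead of exposing raw database fields, the interface lets people complete a sentence by choosing natural-language phrases, and each phrase quietly maps to structured query terms behind the scenes. This is a minimal illustrative sketch, not any particular product’s implementation; all of the field names and phrase options below are hypothetical.

```python
# A "conversation as query" sketch: the user completes a sentence such as
# "Show me [inexpensive], [brand-new] items" by picking phrases, and each
# phrase maps to structured filter terms. Field names and phrases here are
# hypothetical, chosen purely for illustration.

PHRASE_FILTERS = {
    "price": {
        "inexpensive": {"price_max": 50},
        "mid-priced": {"price_min": 50, "price_max": 200},
        "premium": {"price_min": 200},
    },
    "recency": {
        "brand-new": {"max_age_days": 30},
        "recent": {"max_age_days": 365},
        "any age": {},  # a phrase can also mean "no constraint"
    },
}

def build_query(choices):
    """Merge the filter terms behind each chosen phrase into one query."""
    query = {}
    for slot, phrase in choices.items():
        query.update(PHRASE_FILTERS[slot][phrase])
    return query

# "Show me inexpensive, brand-new items" becomes a structured query:
print(build_query({"price": "inexpensive", "recency": "brand-new"}))
# {'price_max': 50, 'max_age_days': 30}
```

The point of the sketch is that the user never sees `price_max` or `max_age_days`; they see words they would use in conversation, while the system still gets the precise terms an advanced search form would have demanded.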

God is in the details

As Mies van der Rohe said, “God is in the details.” Well, these are the details and the fact that they’re too often handled poorly means that technological devices are ruled by a God that is either sloppy, careless, absent-minded, inhuman, or all of the above.

This isn’t terribly difficult but it does take time and attention. And we don’t need artificial intelligence, heads-up displays, neural nets, or virtual reality to accomplish it. There is a reason why my mother isn’t a fighter pilot—several, in fact. But the automobile industry in the U.S. spends tens of millions of dollars each year trying to develop a heads-up display for cars. That’s all my mother needs—one more thing to distract her from the road, break down someday, and scare her even more about technology and making a mistake. What we need are human values integrated into our development processes that treat people as they expect to be treated and build solutions that reflect human nature.

Everything is riding on this: expansion into new markets, upselling newer and more sophisticated equipment, solving complex organizational problems, reducing costs for customer service, reducing maintenance costs, reducing frustration, and (most of all) satisfying people and helping them lead more meaningful lives. Companies fail to differentiate themselves anymore on quality or tangibles. Instead, they try to differentiate themselves on “brand.” What marketers and engineers often don’t “get” is that the only way to differentiate themselves on brand is by creating compelling experiences with their products and services (and not the marketing around them). Niketown and the Apple Stores would never have succeeded—at least not for long—had they not been selling good product experiences. This isn’t the only reason the Microsoft store failed (a tourist destination for buying Microsoft-branded shirts and stationery really wasn’t meeting anyone’s needs), but it was part of it. Gateway, in comparison, has been much more successful, though they still aren’t getting it quite right.

The Apple Store is a good example. You can actually buy things and walk out with them (unlike the Gateway stores which really disappoint customers by breaking this social assumption). What’s more, anyone can walk in, buy a DVD-R (they come only in 5-packs, though) and burn a DVD on the store equipment. Really, I’ve done it. I may be the only person who has ever taken Steve Jobs up on this offer, but it is a very important interaction because most people aren’t going to have DVD-R drives for a while—and neither are their friends. Most people don’t even have CD-R drives, but if they want to burn a DVD of their children’s birthday party to send to the grandparents, what else are they going to do? This recognition of their users’ reality is what made Apple’s approach legendary (not that it hasn’t been tarnished often). It’s not a technological fix, it’s not even an economic one. In this case, access is the important issue and allowing people to walk in off the street, connect their hard drive or portable, and create something with their precious memories became the solution. It works because it supports our human values (in this case, sharing). It works because this is what you would expect of a friend or someone you considered helpful. This is not only a terrific brand-enhancing experience, it jibes with our expectation about how things should be and that is what social and human values are all about.

This is not a crisis of technology or computing power, but one of imagination, understanding and courage. I would love to see designers create solutions that felt more human in the values they exhibited. This is what really changes people’s behaviors and opinions. Just wanting things to be “easy to use” isn’t enough anymore—if it ever was. If you want to differentiate your solution, if you want to create and manage a superior customer relationship, then find ways to codify all those little insights experts have, in any field, about what their customers need, desire, and know into behaviors that make your interfaces feel like they’re respecting and valuing those customers. This is the future of user experiences, user interfaces, customer relations and it’s actually a damn fine future.

For more information

  • Microsoft Bob was a “personal information manager” Microsoft built around Nass and Reeves’ research. Bob was a personified (read “anthropomorphized”) character that represented the application. He came with a cadre of associates whom users could choose instead of Bob, based on whichever personality characteristics felt more comfortable. Fair enough. Bob’s downfall, however, was that no matter which character the user chose, they were all too prominent and annoying: their programming assumed they were needed and desired far more often than they actually were, and their personalities raised our expectations far beyond what they were capable of delivering.
  • The Media Equation, by Byron Reeves and Clifford Nass. CSLI Publications, 1999.
Nathan Shedroff has been an experience designer for over twelve years. He has written extensively on the subject and maintains a website with resources on Experience Design at He can be reached at .

“Why We Buy: The Science of Shopping”


“Experience design,” as it’s often used in the online world, refers to everything a customer comes in contact with when experiencing a brand—what the colors are, what emotions the design conveys, how the text is written, ease of interaction with the web site, how the content is structured, and much more. Information architects and designers sometimes forget that there is an offline experience as well; Paco Underhill’s “Why We Buy: The Science of Shopping” explores customer experience and consumer behavior as they affect retail and offline environments.

Much has changed about ecommerce since this book was first published, but many of its predictions about online retailing have come to fruition.Overall, the book is a lively read, chock full of interesting stories, research data, and case studies. There are sections dealing with product usability, environmental graphics and navigation, demographic issues, location, marketing and promotion. Obviously a seasoned professional, Underhill presents business issues in a straightforward manner, backing up his claims and suggestions with anecdotal and statistical evidence.

Though the majority of the book focuses on “traditional” retailing, Chapter 17 specifically talks about online retailing. Much has changed about ecommerce since this book was first published (May 1999), but many of Underhill’s predictions about online retailing have come to fruition, and his bottom-line insistence on “you need a reason to start a web site” rings true in today’s economic environment.

One neglected aspect of ecommerce he mentions on the first page of the chapter has already been addressed by many retailers: “Few web sites will permit you to see if a particular item is in stock in a store near you, order it, pay for it and then go in person to retrieve it.” It would be interesting to hear the author’s feelings on the current state of online retailing three years after this was written and see what advances he feels have been made and what problems still need to be addressed.

Another brilliant aspect of this book is its universal appeal. While those interested in usability and ecommerce have snapped it up, its appeal is not limited to those audiences. (In fact, those who lament “I can’t explain to my Mom what I do all day” might benefit from suggesting a read of “Why We Buy” and then adding, “It’s like that but with web sites.”)

If there’s any downfall of the book, it would be the sometimes-meandering text. The reader may expect more of a textbook-like approach to physical experience design, but Underhill’s writing style mixes case studies with anecdotes, business, psychology, and opinions.

Though divided up into four sections and 19 chapters that purport to focus on specific topics, the end results often diverge from their intended subject. This is not necessarily a bad thing; it feels as though Underhill is leading the reader on a walking tour of a business, pointing out issues during the journey and recalling anecdotes whenever appropriate. However, those looking for a tome on the design of physical commerce spaces need look elsewhere.

There are dozens of lessons in “Why We Buy” that can be learned by those involved in web development, whether in ecommerce or brochureware. One is that, even after decades of running tests, Underhill and his staff are still learning new things and uncovering problems they’ve never noticed before, showing that continual learning is essential.

The author also talks about the importance of evaluating elements in the environment in which the customer will interact with them. (“Showing me a sign in a conference room, while ideal from the graphic designer’s point of view, is the absolute worst way to see if it’s any good. To say whether a sign or any in-store media works or not, there’s only one way to assess it—in place.”)

He devotes a good deal of printed space to the differences in the shopping habits of men and women, as well as the growing aging population and children, which suggests that these demographically-influenced habits (and others) could carry over to the online world.

However, two main messages permeate throughout, and they should be familiar, since those involved in designing the user experience online have been focused on them all along.

First, understand your customer and make things easy for them. Don’t make them feel uncomfortable, don’t confuse them, don’t make them do more work than they should. Structure things so that they make sense to your customer, for their actions will determine whether or not what you have done is successful.

Secondly, understand the business goals and design your changes to work towards those goals. Aesthetics, navigation, and structure are of no use if they don’t support the business objectives. And, of course, designing with your shopper/user in mind will help you reach these goals.

About the book:

  • “Why We Buy: The Science of Shopping”
  • Paco Underhill
  • Simon & Schuster, 1999
  • ISBN 0-684-84913-5
  • 225 pages
  • Hardcover retail price, $25.00; Paperback retail price, $15.00
  • Target audience: Anyone interested in retail or ecommerce
  • Sections:
     I–Instead of Samoa, Stores: The Science of Shopping
     II–Walk Like an Egyptian: The Mechanics of Shopping
     III–Men are from Sears Hardware, Women are from Bloomingdale’s: The Demographics of Shopping
     IV–See Me, Feel Me, Touch Me, Buy Me: The Dynamics of Shopping
Jeff Lash is working on improving the intranet user experience at Premcor. He was previously an Information Architect at Xplane and is the co-founder of the St. Louis Group for Information Architecture.

The Age of Findability


The Third Annual Information Architecture Summit in Baltimore compelled my first visit to the new, state-of-the-art terminal at Detroit Metropolitan Airport.

As I approached the airport on a cold March morning, perhaps I should have been excited. After all, the $1.2 billion Northwest World Gateway was billed as the terminal of the future. According to Northwest Airlines, I was about to have “one of the world’s greatest travel experiences.”

But in reality, I felt dread. I was late for my flight and desperately needed a restroom and a cup of coffee in exactly that order. What I didn’t need was the challenge of finding my way in a new airport.

After circumnavigating “the largest single parking structure in the world ever built at one time” three times, in search of long-term parking, I finally broke down, asked a security guard, and was told the signs for international parking actually lead to long-term parking. Of course!

Several circles of hell later, freshly sprung from the airport security checkpoint and a full-body pat down, I emerged into the spectacular center of Concourse A. High-arched ceilings soared above. Luxury retail stores lined the hall. Straight ahead, a black granite elliptical water fountain fired choreographed, illuminated streams of water, “representing the connections made via global travel.”

Unfortunately, what I couldn’t find was a sign pointing to one of the 475 public restroom stalls inside this 2-million square-foot complex. To cut a long and painful story short, I was 30,000 feet in the air before I finally got my cup of coffee.

Name that pain
Jakob Nielsen might say this airport has usability problems. Conduct a heuristic evaluation, run a few user tests, fix the worst blunders, and you’re on your way. That’s the great thing about usability. It applies to everything. Websites, software, cameras, fishing rods and airports. It’s one hell of a powerful word.

Lou Rosenfeld might say this airport has information architecture problems. But he probably wouldn’t. While maps and signs fit comfortably into the domain of information architecture, it’s a stretch to include the structural design of an airport terminal or the solicitation of feedback from frustrated travelers. Like it or not, information architecture has boundaries. Unfortunately, our clumsy two-word label isn’t quite as flexible as Jakob’s.

That’s why I say this airport has findability problems. The difficulty I had finding my way dominated all other aspects of the experience. Like usability, findability applies broadly across all sorts of physical and virtual environments. And, perhaps most important, it’s only one word!

Post-Hum(or)ous self-definition
At Argus Associates, we built a consulting firm that specialized in “information architecture” and we wrote a book to explain and explore the topic.

In the past year, our company has been post-hum(or)ously accused of practicing “Content IA,” a pejorative label that bothers me.

It’s absolutely true that we Argonauts brought the strengths and biases of library science to the IA table. And, we certainly focused more on organizing sites with massive amounts of content than on designing task and process flows for online applications.

However, this focus was indicative, not of a love for content, but of a passion for designing systems that help people find what they need.

Unfortunately, we couldn’t declare this passion too openly, because in the 1990s most customers weren’t buying “findability.”

At first, they focused on image and technology. Remember the early days of glossy brochure web sites and hyperactive Java applications? Later, they learned to ask for usability, scalability and manageability. They had felt some pain, but not enough.

In order to create a big tent, we sold “information architecture,” striking a delicate balance between our clients’ needs and wants. But all along, we maintained a deep conviction that, in the long run, the most important and challenging aspect of our work would involve enabling people to find stuff.

So, if you want to label the Argus brand of information architecture, rather than calling it Argus IA or Content IA or Polar Bear IA, I humbly suggest that you call it Findability IA. Or else!

Arrows over boxes
True to form, I’ve always resisted attempts to canonically define information architecture. In an emerging field, the last thing you want to do is prematurely place its identity inside a box, or should I say coffin?

However, information architecture is entering a new stage of maturity. IA roles and responsibilities are firming up. The IA community is taking shape. While we insiders argue over the minutiae, a de facto definition of information architecture has emerged and reached critical mass. There’s no going back.

On one level, this is wonderfully exciting. For many of us who labored in obscurity in the early 1990s, this is validation that our vision of the future wasn’t completely crazy.

But this is also frightening. With maturity comes rigidity. We’re finding ourselves trapped inside boxes of our own making. And those arrows that connect us to related disciplines and new challenges are looking mighty appealing.

After all, it’s a tough sell to argue that content management and knowledge management and social computing and participation economics are all components of the big umbrella of information architecture. The IA tent is simply not that big.

And yet, we information architects are fascinated by these topics. We yearn to escape our boxes and follow the arrows.

For me, findability delivers this freedom. It doesn’t replace information architecture. And it’s really not a school or brand of information architecture. Findability is about recognizing that we live in a multi-dimensional world, and deciding to explore new facets that cut across traditional boundaries.

Findability isn’t limited to content. Nor is it limited to the Web. Findability is about designing systems that help people find what they need.

The age of findability
Even inside the small world of user experience design, findability doesn’t get enough attention. Interaction design is sexier. Usability is more obvious.

And yet, findability will eventually be recognized as a central and defining challenge in the development of web sites, intranets, knowledge management systems and online communities.

Why? Because the growing size and importance of our systems place a huge burden on findability. As Lou posits, “despite this growth, the set of usability and interaction design problems doesn’t really change…(but) information architecture does get more and more challenging.”

Ample evidence exists to support this bold claim. Companies are failing to deliver findability. For example, a recent study by Vividence Research found poorly organized search results and poor information architecture design to be the two most common and serious usability problems.

This resonates with my experience interviewing users of Fortune 500 web sites and intranets. Some of these poor souls are ready to burst into tears as they recount their frustrations trying to find what they need inside these massive information spaces.

At the IA Summit, usability expert Steve Krug also agreed with this bold claim, noting that his company’s motto doesn’t apply to the challenges faced by information architects. Designing for findability is rocket surgery!

In the coming years, our work will only become more difficult. But that’s a good thing. Consider the following passage from a fascinating article written by business strategy guru Michael Porter:

“Companies need to stop their rush to adopt generic ‘out of the box’ packaged applications and instead tailor their deployment of Internet technology to their particular strengths…The very difficulty of the task contributes to the sustainability of the resulting competitive advantage.”1

That last sentence applies directly to the work we do. We all have a great deal of difficult and important work ahead. There’s an awful lot of findability in our future.

Where do we go from here?
I wrote this article to explore findability as both a word and a concept. I’d be very interested in your reactions. Does findability strike a chord? Are you intrigued by the design of findable objects? Are you ready to become a findability engineer? Or does this pseudo-word annoy you? Is findability overrated? Do you prefer a future filled with expensive, beautiful airports that just happen to be unnavigable? Comments please!

For more information:

  1. “Strategy and the Internet,” by Michael E. Porter in Harvard Business Review, March 2001.
Peter Morville is President of Semantic Studios, an information architecture and knowledge management consulting firm and co-author of the best-selling book, “Information Architecture for the World Wide Web.”