Personalization is not Technology: Using Web Personalization to Promote your Business Goal


“Web personalization is a strategy, a marketing tool, and an art.”

Personalization, properly implemented, brings focus to your message and delivers an experience that is visitor-oriented, quick to inform, and relevant. Personalization, poorly implemented, complicates the user experience and orphans content.

If you are a web strategist, designer, or content manager, you are undoubtedly familiar with the value of web personalization. For years, we’ve been working to ease the complexity associated with authoring, delivering, and consuming rich, dynamic content via browser-based applications. Introducing more content, and varying it routinely, may make your site fresh, but it can also have a negative impact on your overall message. When does freshness become noise? And how can personalization cut through the clutter?

What is web personalization?

Web personalization is a strategy, a marketing tool, and an art. Personalization requires implicitly or explicitly collecting visitor information and leveraging that knowledge in your content delivery framework to manipulate what information you present to your users and how you present it.

Correctly executed, personalization makes the visitor’s time on your site, or in your application, more productive and engaging. Personalization can also be valuable to you and your organization, because it drives desired business results such as increased visitor response or improved customer retention.

Unfortunately, personalization for its own sake has the potential to increase the complexity of your site interface and drive inefficiency into your architecture. It might even compromise the effectiveness of your marketing message or, worse, impair the user’s experience. Few businesses are willing to sacrifice their core message for the sake of a few trick web pages.

Contrary to popular belief, personalization doesn’t have to take the form of the customized content portals popularized in the mid-to-late 90s by the likes of My Yahoo!. Nor does personalization require expensive applications or live-in consultants. Personalization can be as blatant or as understated as you want it to be.

It’s a tired old yarn, but if you hope to implement a web personalization strategy, the first and most important step is to develop and mature your business goals and requirements. It is important to detail what it is you hope to do and, from that knowledge, develop an understanding of how you get from an idea to implementation. You might be surprised to discover that it won’t require most of next year’s budget to achieve worthwhile results.

What makes personalization successful?

Too frequently, personalization initiatives die on the whiteboard. It can seem a daunting task when development teams gather to consider technical and business requirements (such as changes to architecture, user profile storage and analysis, and content management). Analysis paralysis kills personalization projects early and often because teams overreach.

So what’s the key to successfully implementing personalization initiatives? Start small and pick achievable goals that integrate well into your existing presentation framework. Think of personalization as a way to enable your business plan. Over time, with successful implementations, it can become an enabling technology: a component of your overall marketing strategy, your communication message, even your branding.

However, in order to accomplish any level of personalization, whether it’s for your internet, intranet, or extranet site, you need:

  • A high-level driver, owner, and/or sponsor
    This should be someone in management, executive management, or at the C-level who has ownership of the “bottom-line” results.
  • Measurable business goals
    Your personalization initiatives must be measured against practical and relevant business metrics.
  • Long-term commitment
    This is an iterative process; some phases will be very successful, others will be less so.

Most importantly, keep the process simple. Stay focused on the business goals, tackle manageable projects, measure the success or failure of your changes, and learn from your mistakes.

What are your business requirements?

Think through this carefully. What are your business goals? How can you turn these business goals into personalization business requirements?

By giving prudent forethought to maturing your intention and measuring your results, you can keep the process well focused. For example, if your goal is to increase sales revenue, you might use personalization to better transition anonymous internet visitors to sales leads. Or, if your goal is to decrease software support costs, you might use personalization to promote online support tools for an application or service that you know a specific user is interested in.

How are you going to do it?

Once the business requirements are well defined and understood, refine and elaborate upon them until you can develop use cases to support the end goal. I am using the software engineer’s definition of “use case” here, focused on describing the precise behavior of the application, not necessarily the user interface.

For example, if your goal is to collect more email addresses from job-seeking internet site users, your use case might explain how you intend to identify visitors as job-seekers, how you will prompt them for their email addresses, and how they will be rewarded for providing the information. (Remember, these are your customers. Don’t force them to provide data. And when they do provide personal details, offer them tangible rewards for doing so.)

User interface design, when implementing personalization initiatives, remains an important part of the design process. In fact, careful user interface design may be more important than ever. Don’t allow your modified presentation framework to become a barrier to end users, compromising your message or intentions. Keep in mind:

  • This is a partnership
    You are engaging in a partnership with your visitor, using what they share with you, explicitly and implicitly, to facilitate a more productive relationship. They need to trust you and you need to honor their wishes. These objectives may manifest themselves in the user interface.
  • The message is still key
    When choosing to display or hide content from your site visitor based on a personalization initiative, you need to fully understand the ramifications of such an effort. Will this adaptation of the user interface render some content inaccessible, or orphaned? Will this adaptation of the user interface alter the presentation such that the overall integrity of your site is compromised?

If business goals describe what you want, then business requirements describe what you need to do, and use cases describe how you plan to do it.

Who is your visitor?

From an understanding of your business requirements, develop a visitor profile definition and visitor segments.

A visitor profile is a collection of attributes that you’ll need to either maintain or derive in order to support personalization. Implicit profile attributes can be derived from browsing patterns, cookies, and other sources. Explicit profile attributes come from online questionnaires, registration forms, integrated CRM or sales force automation tools, and legacy or existing databases. In short, explicit profile attributes come from customer responses, while implicit profile attributes come from watching or interpreting customer behavior.
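As a minimal sketch (the class and attribute names are illustrative, not drawn from any particular personalization framework), a visitor profile can be modeled as a record that mixes explicit attributes the visitor supplies with implicit attributes derived from behavior:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VisitorProfile:
    # Explicit attributes: supplied directly by the visitor
    # (e.g., from a registration form or questionnaire)
    name: Optional[str] = None
    email: Optional[str] = None
    # Implicit attributes: derived from observed behavior
    visit_count: int = 0
    pages_viewed: List[str] = field(default_factory=list)

    def record_visit(self, page: str) -> None:
        """Update the implicit attributes from a single page view."""
        self.visit_count += 1
        self.pages_viewed.append(page)
```

In practice the explicit fields would be populated from forms or an integrated CRM, while `record_visit` (or its equivalent) would be driven by your web delivery application.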

A visitor segment is a collection of users with matching profiles. Certainly, a loose definition of target segments may develop as business requirements mature. After all, these are the people you strive to reach with your personalization initiatives. Visitor segments may be very broad or very confined in scope. However, once a visitor’s attributes and the mechanics of maintaining and collecting visitor profile data are known, rules can be developed that formally define segments.

Sample visitor segments might include registered site users who have not purchased any services, customers who have not purchased a service in more than 12 months or, simply, investors.
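The first two sample segments above can be expressed as formal rules, i.e., named predicates over a visitor profile. In this sketch the profile is a plain dictionary and the attribute names are assumptions:

```python
from typing import Callable, Dict, List

# A profile is a dictionary of attributes; a segment is a named
# predicate (rule) that a profile either matches or does not.
Profile = Dict[str, object]
Rule = Callable[[Profile], bool]

SEGMENTS: Dict[str, Rule] = {
    # Registered site users who have not purchased any services
    "registered_non_buyers":
        lambda p: bool(p.get("registered")) and p.get("purchases", 0) == 0,
    # Customers who have not purchased a service in more than 12 months
    "lapsed_customers":
        lambda p: p.get("months_since_purchase", 0) > 12,
}

def segments_for(profile: Profile) -> List[str]:
    """Return the names of all segments whose rule the profile matches."""
    return [name for name, rule in SEGMENTS.items() if rule(profile)]
```

Once segments are rules rather than prose, the delivery framework can evaluate them per visitor and vary content accordingly.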

How you collect and store this information is a sensitive and timely topic. In many parts of the world, and among some segments of the internet community, cookies are despised. Take this into account when determining what data you have access to and how you leverage it.

How do you measure success?

How will you measure the success or failure of your personalization business requirements once they become technical deliverables? It is important to measure success or failure in any personalization exercise. Failures need to be eliminated before they cause further trouble. Successes can be used to drive further financial, time, and personnel investment.

As you determine your business goals, requirements, and use cases, keep in mind what sort of metrics you can collect before and after implementing any changes to your user interfaces. Also, try to determine how this data should change as a result of personalization.

Case Study: Improving the Effectiveness of an Internet Site for Human Resources

What is the business requirement?

  • To enable Human Resources to increase their pool of candidates and improve their ability to leverage information about existing candidates using an existing internet site.

How are we going to meet the requirement (what are our use cases)?

  • The web delivery application will detect first-time website visitors browsing the job openings page. These visitors will be prompted for their email address and given an opportunity to register for job opening announcements by email.
  • The web delivery application will detect returning website visitors interested in job openings, offering them a chance to register for email announcements (see above) and a chance to win a new laptop computer. Visitors who register to win the laptop will provide their name, address, email address (if unknown to us), and phone number.
  • When a known visitor submits a resume for a job opening, additional profile information will be collected. Known attributes (name, address, email address) will be populated from the profile.
  • All visitor profile information collected will be stored in an internally accessible database and used by the HR department to promote job openings and career fairs that might be of interest to the candidate.
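The first two use cases hinge on telling first-time visitors apart from returning ones, commonly done with a cookie. A sketch of that decision logic, with hypothetical cookie and prompt names:

```python
from typing import Dict, Optional

def job_page_prompt(cookies: Dict[str, str], page: str) -> Optional[str]:
    """Decide which registration prompt, if any, to show for a page view.

    cookies: the cookie names/values sent by the visitor's browser.
    Returns the name of the prompt to display, or None.
    """
    if page != "/jobs":
        return None  # only personalize the job openings page
    if "visitor_id" not in cookies:
        # First-time visitor: offer email registration for announcements
        return "email_signup"
    if cookies.get("registered") != "yes":
        # Returning, unregistered visitor: add the laptop drawing incentive
        return "email_signup_with_laptop_drawing"
    return None  # already registered: no prompt
```

The real application would also set the `visitor_id` cookie on first contact and record any submitted details in the internally accessible profile database.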

Who is the visitor/What is the visitor profile?

The following information can be collected and associated with the user in question:

  • Number of site visits
  • Name
  • Address
  • Email address
  • Phone number
  • Resume
  • Interest in job openings (implicitly derived—based on browsing patterns)

How will we measure success?

  • By reduced recruitment costs, achieved by shortening the time it takes to fill job openings and cutting recruiting expenses.
  • By an increased number of candidates hired via website.


Personalization may be tough to define and hard to measure, but it doesn’t require a rocket scientist or piles of cash to accomplish. As with most business initiatives, developing that first business requirement and making the first commitment, right or wrong, is the hardest step.

The software market is flooded with companies ready to sell you an off-the-shelf, shrink-wrapped personalization solution. Unfortunately, what buyers don’t often realize until it’s too late is that personalization isn’t a plug-and-play solution.

Know your goals and stay focused on long-term improvements by following these steps:

  1. Define your business goals.
  2. Convert your business goals into personalization business requirements.
  3. Convert your business requirements into use cases.
  4. Define the user profile and formally define the user segment(s).
  5. Determine which metrics you will use to evaluate the initiative.
  6. Implement.
  7. Repeat.

Personalization requires analysis of your goals and the development of business requirements, use cases, and metrics. Once these are fully understood, you may find that your personalization strategy doesn’t require substantial augmentation of your application environment. If you do find that the integration of a personalization tool is necessary, with this knowledge, you’ll be able to better analyze and judge the offerings.

Christian Ricci is a consultant, application developer, web designer, and project manager with over 11 years of experience in software design and development, network and server administration, and software project management and engineering. As a Senior Solutions Architect for Saillant Consulting Group, Chris has led portal, content, and document management projects for Qualcomm, Intermountain Health Care, J.D. Edwards, EAS, and the Denver Post.


Designing for Limited Resources


“Good online experience design must accommodate real-world limitations.”

“It is not difficult to manufacture expensive fine furniture. Just spend the money and let the customers pay. To manufacture beautiful, durable furniture at low prices is not so easy.”

—From the IKEA website

In information architecture circles, we often talk about “good design” and “effective user experience” as if they can only exist in a world with no limitations—with unlimited time and money, free rein over technology, and an army of people to administrate the finished system. This, however, is rarely the world in which we work.

And this focus on the ideal is not true of most other design disciplines. In product design, for instance, limitations such as cost or materials are almost always a key factor. IKEA, the international retailer beloved for its low-cost, durable furniture, frequently states that the company “designs the price tag first.” This is not just hype. IKEA explores untraditional manufacturing options (candleholders are made in fuse factories or by the manufacturers of laboratory test tubes) to find unusual ways to build products that meet very specific limitations: products that are low cost, highly durable, and that can be shipped flat.

In the same way, good online experience design must accommodate real-world limitations. From a purely practical perspective, this approach ensures that the system can be built and supported effectively. I have also found, though, that it is quite liberating to stop striving for some imaginary ideal design, and to strive instead for the best user experience that can be achieved with the available resources. In this mindset, limitations become creative challenges rather than frustrating barriers to the “right” design.

Even in an ideal world, designs must optimize both the user experience and the business return. When resources are limited, the design must be optimized to make the best use of all resources as well. To account for this complexity, it is important to have a clear understanding of both sides of the design equation—what you have to work with and what you are trying to build.

Understanding your limitations

As some great philosopher should have said, understanding your limitations is the key to designing for them. Outlining the possibilities allows you to design and prioritize accordingly. This sounds obvious, but I have found that it is frequently difficult to get a handle on just what is possible and what is not.

It is worth the effort to list out and define the project limitations early in the process. There isn’t much money? Okay, exactly how much is there? We can’t make any changes to the navigation? Okay, what counts as a change? What if the majority of users don’t notice the change in a user test? Documenting the limitations makes it much easier to see where there is room to explore, and where there is no wiggle room at all.

Some developmental limitations to look out for on any project:

  • Time and money
    The old favorites. Although difficult, it is worth trying to quantify what is meant by “no budget” or “spend as little as possible.” I have seen these phrases mean anything from tens of thousands of dollars to literally no money at all.
  • Existing technology
    It is rare for a project to have free rein in technical choices. Organizations often have existing hardware and software, or standards for both, that constrain what can be used. Try to understand what the limitations mean from a functional perspective.
  • User expectations
    If you are working on a system that is already in use, there may be significant limitations on what can be changed without baffling your users.
  • Development team
    Significant budget restrictions can mean building a system with semi-qualified resources—people learning on the job, client staff members, or volunteers. If you understand the skills that are available to the project, you can tailor the design and the level of complexity to the strengths of your team.
  • Hardware and software
    Some designs dictate specific hardware or software decisions. If there is no budget for new purchases, you may need to tailor the design to the products that are already on hand.

These developmental limitations will affect the core decisions that will need to be made for the project: What should the scope of the project be? And can we include functionalities that rely on a specific technology? As discussed below, these limitations should be balanced against project needs to determine a useful solution.

Long-term impacts

In addition to the limitations on system development, it is important to consider limitations that affect the long-term use and support of the system. It is easy to overlook these types of constraints, as they tend to be farther downstream from the system design process, but they are no less important.

Some long-term limitations to consider:

  • Deployment costs
    Distributing the system to user desktops may require an additional investment. It is important to weigh this investment against the time and cost required to make the system easier to deploy.
  • Training and user support costs
    The system will not be effective unless everyone involved—both administrators and front-end users—knows how to use it. Training and user support can be a significant portion of the development effort and should be weighed against the resources needed to create easier-to-use functionalities.
  • Time and expertise to administrate
    All systems will need to be monitored and many will need constant updates. It is critical to consider the ease of administration when designing. Unless the administrators are tech-savvy, the development effort needed for administration tools (such as content management systems or reports) can rival the effort needed to develop the front end.
  • Infrastructure
    The methods used to host and support your system can impose significant costs and limitations. For instance, hosting a website on a shared public server can be cheap, but often limits your software options. Maintaining servers and databases in-house allows more flexibility, but can be expensive.
  • Other ongoing costs
    Many services that decrease development or administration costs (such as application service providers, licensed content, or the like) incur monthly or other ongoing costs that should be factored into the budget.

These long-term limitations can have as significant an impact on your design as the developmental limitations. If it is infeasible, for example, to train hundreds of end users, it may be necessary to design an extremely self-explanatory system. Or, the necessity of hosting a site cheaply may rule out the use of a sophisticated packaged content management system. Your design needs to accommodate these limitations.

Guerilla requirement definition

In order to design a system that makes optimal use of limited resources, it is important to brainstorm “out-of-the-box” options. Judging whether unusual options meet business and user needs requires a sophisticated understanding of these needs—one that is both specific and flexible.

There are many classic techniques that can be used to define these needs robustly, including contextual inquiry, personas, scenarios, prototypes, and the like. Any of these will work for our purposes: anything that lets the team judge whether a solution fulfills the needs can also be used to judge the more unusual options of a limited-resource project.

Unfortunately, however, designing for limited resources often means designing with limited resources. Over the years, I have come up with some shortcuts that allow useful requirement definition when there is limited time or money. None of these shortcuts are ideal, but they often allow a bit of consideration on projects which would otherwise just skip over requirement definition altogether.

  • Focus on goals
    Even if I can’t find time or budget for anything else, I will work with the organization to define and prioritize what they are specifically trying to achieve with the system. And if I can’t do any user research, I will work with the organization to define the users’ goals. Without these steps it is impossible to judge whether system options will meet anyone’s needs.
  • Phone interviews
    User research techniques can easily get caught up in logistics. I like phone interviews because they provide rich information about user expectations and processes, but are easy to plan and schedule. I try to do at least three half-hour interviews per target audience. While I find individual interview notes quite useful down the road, if I am really strapped for time I will do the interviews back to back and write up a bullet-pointed overview for all of them at once. Given a list of potential interviewees, it is possible to plan, conduct, and write up phone interviews for one target audience in less than four hours.
  • Group interviews/focus groups
    If a number of users are in one location, it can save a substantial amount of time to interview two to four users at once. This certainly will not provide the same depth of information as talking to each person individually, but may provide a bit more creative input when one user sparks a thought in another. The technique is a bit riskier than individual interviews, as there is the possibility of a dominant user hijacking the conversation.
  • Stakeholder workshop(s)
    With the goals defined and user research completed, stakeholder workshops can be a powerful and quick way to understand the features that your stakeholders expect. I often do one workshop to brainstorm features, and another where we discuss some sort of documentation or options to crystallize our group vision. These sessions provide a solid base from which to define both the core needs and the nice-to-have features.

With a firm grasp of both your limitations and your requirements, you can creatively weigh options to design a system that optimizes all the factors.

Weighing your options

By this point, we have defined both limitations and requirements. In a typical system process, the next steps would be to document the vision for the project, assign priorities and complexities to features, and figure out what features should be included in the next phase.

However, in a situation where resources are very limited, I find that it is often useful to add a step at this point: Brainstorm overall structure and technology options at a very high level. These are the type of options that define what the project is and are frequently assumed rather than discussed. Does it have to be a website? Could it be an Excel spreadsheet? A mailing list? What is the core structure? Can we buy something rather than build from scratch? The answers to these high-level questions affect the approach to the features under discussion, so it is important to settle these overall technical approaches before documenting the vision or prioritizing features.

I find it very valuable to brainstorm these high-level options with a group (with at least a few people with significant technology experience). The options worth considering will obviously depend on the project, but a couple of high-level questions can be useful to consider:

  • Can the goals be met through a simpler technology? Or no technology?
    For an organization with little money or technical expertise, goals might be met as well or better with unsophisticated (but practical) tools like email, Excel, or paper. For instance, monthly updates could be provided via an email mailing list rather than requiring a website to be frequently maintained.
  • Can we use a packaged solution or an application service provider?
    For common functionalities (like searching, registration, or contact management), using packaged tools for at least pieces of the system can make better use of resources than building from scratch. For instance, ASPs like Atomz or SpiderLine provide in-site search features at a small fraction of what it would cost to build this functionality yourself.
  • Can we template it?
    Designing a system where many pages are structured identically can support a lot of content while minimizing administration. Content can be either pulled from a database or hand coded into the structure provided by the templates.
  • Can we make use of standard designs?
    It is possible to pull detailed designs and even open-source code for common functionalities, if you are willing to work within generally accepted standards. With shopping carts, for instance, if you refrain from re-inventing the wheel, you can leverage many readily available resources.

Once you have generated a list of possible options, you can weigh the options against one another. Put them into a spreadsheet and rate them on all the limitation considerations that matter for your project (e.g., cost, ease of administration, ease of training).
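The spreadsheet comparison can be sketched as a simple weighted score. The criteria, weights, and option ratings below are illustrative only; yours would come from your own limitations and priorities:

```python
from typing import Dict

def score(ratings: Dict[str, int], weights: Dict[str, int]) -> int:
    """Weighted sum of one option's ratings across the criteria."""
    return sum(ratings[criterion] * weight
               for criterion, weight in weights.items())

# Weight each limitation by how much it matters to this project,
# then rate each option 1-5 against each limitation.
weights = {"cost": 3, "ease_of_admin": 2, "ease_of_training": 1}
options = {
    "build_from_scratch": {"cost": 1, "ease_of_admin": 4, "ease_of_training": 3},
    "packaged_cms":       {"cost": 3, "ease_of_admin": 3, "ease_of_training": 4},
    "static_templates":   {"cost": 5, "ease_of_admin": 2, "ease_of_training": 5},
}

best = max(options, key=lambda name: score(options[name], weights))
```

The numbers matter less than the discipline: the scoring forces the team to state its tradeoffs explicitly before committing to an approach.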

One of the main advantages of this approach is that you can rationally weigh the tradeoffs of each option. For instance, you might consider saving development time and cost by asking your (quite savvy) administrative team to do site updates in Dreamweaver instead of using a content management system. This will require more time from your administrators, and more skilled administration resources, but might make sense on the right project.

Creative tradeoff management can be the key to designing a system that can actually be built, but careless tradeoffs can lead to disaster (e.g., “We’ll ask my nephew to build the customer relationship management system from scratch, and host it in his basement—and then it will be free!”). As you consider your options side by side, one will likely emerge as the right answer for your particular situation.

And on with the process…

At this point, the process for a limited resources project matches up again with a typical information architecture process. The weighing process ensures that the overall system vision optimally uses the available resources, and you can turn to more familiar issues of defining scope, considering structure, and creating the detailed design. While it is important to keep your limitations in mind throughout the entire design process (the project needs to be very straightforward to build or to administrate, for instance), I’ve found that this process is a more familiar one of ensuring the system meets its defined goals.

As the weighing process has defined an overall sense of system structure and functions, you can now create vision documentation to ensure that everyone is on the same page. You can consider complexities and priorities to determine what can be included for the next release. As for the rest—sitemaps, wireframes and interaction diagrams—I have, unfortunately, found no shortcuts.

It is our job as functional designers to design websites and systems that can be built and supported by our clients—within their resources. It is not as easy as it would be if we had access to unlimited resources, but it can be a much more interesting challenge.

Laura S. Quinn is a technology strategy and information architecture consultant for both corporate and nonprofit clients. Her company, Alder Consulting, specializes in helping nonprofits build powerful internet and database systems with the resources they have. In her spare time, Laura prepares for a new career as a homesteader with faithful practice in cooking, gardening, sewing, and weaving.

We Are All Connected: The Path from Architecture to Information Architecture


“Noticing the similarities between physical and virtual environments can help information architects visualize web design elements.”

When I’m asked, “How did you become an information architect?” my immediate answer is, “I was already halfway there by being an architect.” Although I say this partly in jest, it certainly has some truth to it. Information architecture has a great deal to do with traditional architecture—especially in the ability of each discipline to plan and connect various important elements together.

Architecture is commonly defined as “the art or science of building, specifically: the art or practice of designing and building structures and especially habitable ones.” (Merriam-Webster Dictionary) Viewed another way, architecture is “(a) a formation or construction as or as if the result of a conscious act (the architecture of the garden) and (b) a unifying or coherent form or structure (the novel lacks architecture).”

Similarly, there have been numerous discussions about the definition of information architecture in the IA community. In his book Information Architects, Richard Saul Wurman defined information architect as “the individual who organizes the patterns…creates the structure…the science of the organization of information.” In a broader sense, IA is about creating a set of blueprints for information-related projects and products that builders—designers and programmers—can construct.

The purposes of architecture—from shelter to prestige

Any primitive structure reflects some kind of architecture, even if no one actually drew up plans before the structure was built. Virtually from the beginning, builders knew to fit their structures into the natural surroundings, rather than trying to conquer nature. They used the natural materials available to them and built on the topography they found.

The initial purpose of architecture was to shelter people from undesirable elements such as weather conditions, and to fend off danger such as wild animals. Once those needs were met, people then requested comfort; rather than keeping beasts at bay, the concern became such nuisances as ants and roaches. Aside from protection from rain and snow, people sought heat in winter and relief from it in the summer. Step by step, beyond basic comfort, people wanted convenience, then MTV and broadband Internet connections. Eventually, architecture became prestigious—a symbol of status, business or personal.

Balancing function and form

One way to categorize types of building architecture is by the degree of flexibility in the design. At one end of the spectrum are heavy industrial facilities, such as petrochemical plants and refineries, which are little more than extensions of the manufacturing process. The building is best seen as an enlarged machine, which just happens to also include workers. Architects have very little design flexibility beyond exterior color selection, if that.

The Vietnam Veterans Memorial

At the other end of the spectrum are museums, which provide architects with the ability to make much more flexible design statements, to fit the nature of the museum. And to an even greater degree, the goal of most monuments is to convey certain ideas to visitors by creating an environment that they can experience themselves. In these cases, architects have maximum flexibility in the form design.

Similarly, web sites can be categorized by construction type:

  • Static: sites that use static HTML to present content. Typically, these sites offer visitors no way to interact.
  • Interactive: sites where users can not only read but also participate or interact, such as discussion boards.
  • Dynamic: sites like Yahoo!, which continuously provide dynamic content and let users customize their page content, layout, and colors.

Web sites can also be categorized by purpose: personal, non-profit, governmental, educational, commercial, and so on. Driven by the needs of commerce—either to generate revenue (through direct sales or brand building) or to cut costs (by reducing customer service calls)—commercial sites typically offer less design flexibility than personal sites. Within the world of commercial sites, transactional sites tend to have less design flexibility than entertainment sites.

Architectural and web design elements

In traditional architecture, roofs and walls form the building skin, creating a barrier between the usable space and the elements. Within the skin, architects further divide rooms and spaces. Doors and windows connect buildings and rooms, and columns and beams support all of these elements.

With the architectural elements identified, how do architects put them together? They adhere to principles and rules, which some call design languages, to lay out the building. These include:

  • Geometry
  • Scale
  • Proportion
  • Rhythm
  • Axis
  • Symmetry

Noticing the similarities between physical and virtual environments can help information architects visualize web design elements. A web page is like a physical room or space, but viewed on a screen. A link—be it a text link or graphical button—connecting one page to another is like a door connecting one room or space to another. Doors can be one-way, like revolving doors, or two-way; similarly, links can be one-way or two-way. In the virtual environment, the equivalent of a dead-end street is being unable to do anything but use the browser “back” button. The well-known “breadcrumb” is also a physical metaphor transferred to the virtual environment. A site map or an index is just like a directory or map in a complex building or campus. A company logo on the homepage is just like the sign on a building. The label of a link is like a sign on a door that tells you what’s behind it; it must be clear so users can decide whether to enter. Roll-over text (the “alt” or “title” attribute) is like a window the user can peek through to learn more about the room or space before entering it.
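The breadcrumb metaphor can be made concrete with a small sketch. The function and the sample path below are hypothetical, not taken from any site discussed here, but they show how a trail is assembled from a page’s position in the site hierarchy:

```python
def breadcrumb(path, separator=" > "):
    """Build a breadcrumb trail from a URL path such as '/products/widgets'.

    Each segment of the path corresponds to a 'room' the visitor has
    passed through on the way to the current page.
    """
    segments = [s for s in path.strip("/").split("/") if s]
    trail = ["Home"] + [s.capitalize() for s in segments]
    return separator.join(trail)

print(breadcrumb("/products/widgets"))   # Home > Products > Widgets
print(breadcrumb("/"))                   # Home
```

Each crumb would normally be rendered as a link back to that level of the hierarchy, which is what makes the trail a wayfinding device rather than mere decoration.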

Typically, people can recognize a well-designed building from far away by the distinct characteristics in its silhouette. The Taj Mahal is a prime example. Upon entering the compound, the large cut-outs on the façade mark the locations of entrances and direct people entering the building. As you get closer, you find that the large-scale cut-out only signifies an entrance. The actual entrance is a pair of much smaller doors, and they are of human scale. Once inside the building, it’s finally possible to experience the intricate architectural details. It is this kind of progressive disclosure—from overall view, to full view, to human scale, to detail—that creates the opportunity for a logical and smooth user experience, beginning with a visit that actually starts miles away. A number of creative information design products are based on that principle, among them Relevare.

Overall view—silhouette

Full view

Human-scale entrance

Detail view of light and shadow

Architecture has always been defined as the art and science of building. Numerous design theories and principles have evolved, based on art, philosophy, and scientific research. Anthropometrics research helps architects understand the physical dimensions required for certain users and certain tasks in various spaces and rooms. Pattern language is a well-known set of architectural guidelines developed by Christopher Alexander; it tends to be either totally embraced or totally discounted by architectural practitioners. The concept of patterns has been adopted by the software development community and, lately, by the web development community as well. Jakob Nielsen is Alexander’s counterpart in that field. We read about his 10 heuristics, the magic number 5 for web testing, the top 10 web design mistakes, and so on. While the IA community is also split on Nielsen’s theories, the important thing for web development practitioners is to understand the rationale behind his principles and to apply them to achieve design goals.


For example, the purpose of a maze is to disorient visitors, but only to the extent that they are not totally frustrated. A maze designer needs to understand wayfinding in a physical environment: how people navigate and orient themselves in that space. The designer leverages this knowledge, removing or disguising sensory cues to make finding a way through the maze more challenging and, hence, more entertaining.

Design methodology: it’s all about teamwork

We’ve all seen blueprints—formally known as contract documents—which architects produce and builders use to construct. General contractors estimate costs, sign contracts (hence the name), and construct the building based on what’s spelled out in the contract documents. A typical blueprint contains working drawings and written specifications made by landscape artists, plumbers, interior designers and structural, mechanical, civil and electrical engineers. No one person knows all the details of the design; the end result is entirely a product of teamwork. But there is one axiom: architects do not build.

In contrast to web practice today, building architects design and contractors construct. In order to communicate the abstract design to clients and contractors, a review and sign-off process has been developed. Later in the process, design documents are subject to the most scrutiny at the agency review phase (during which building officials check for code compliance) and at the bidding phase (when contractors read both the drawings and the written specifications to understand how the building will be constructed and to determine how much it all will cost). When cost is a concern, contractors tend to review the design in minute detail. This process ensures that all parties understand what to build before construction takes place.

What can IA learn from traditional architecture?

  • Know your users, client, and context before you design.
  • Include multiple checkpoints and sign-offs during design.
  • Design before you build.
  • Document everything.
  • Test before putting it together for real.

Site planning—not site design

In today’s complex business world, a successful site that will satisfy business goals must balance multiple stakeholder objectives and user goals. Sites need:

Scalability: The site must be expandable to support the growth and evolution of the business.

Personalization: One site doesn’t fit all. In order to meet the specific needs of each individual user, the content and functionality should be personalized to individual users or user groups.

Customization: No matter how much we know about users, personalization cannot be done without customization. Using the real world as an example, a person buys a 3-bedroom house that meets his requirements—this is personalization. He would typically “customize” the home by filling it with his furniture, hanging a few pictures on the wall, and even painting the house to his taste. Customization gives users the power to make things more attuned to their changing needs.

Dynamic content: In order to provide users with the most valuable content, the information has to be timely, which means dynamic—that is, ever-changing. The “Open” or “Closed” sign hung in front of a business storefront, or “Daily Specials” posted by the store entrance are just a few examples in the physical world of design elements that provide timely information for ever-changing business and customer needs.

Similarly, today’s websites are continuously changing. We can no longer design a site once and for all, since it can differ greatly depending on who the users are and when the site is accessed. Instead, we plan the site based on business objectives and user goals. While business objectives and user goals typically do not change often, needs will. A site must address these changing needs, and as designers we must plan for that. The site will, we hope, grow without deviating too much from the master plan. This is much like a city planner shaping a city’s growth by laying out usage (or zoning) density, site coverage ratio, building height limits, and even exterior treatment. Architects and builders then design individual buildings following these guidelines. Even after they are built, individual buildings continue to grow, as Stewart Brand laid out in his book, How Buildings Learn: What Happens After They’re Built.


Emergence is what happens when an interconnected system of relatively simple elements self-organizes to engage in more intelligent and more adaptive higher-level behaviors. It’s a bottom-up model; rather than being engineered by a general or a master planner, emergence begins at the ground level. Similarly, the web today is like a primeval village: there is no master plan, no general plan, no zoning ordinance, no architectural guidelines, and no building codes governing what you can build and how you should build it. It is communal architecture. A primeval village that grows organically and adapts to its surroundings, though it may lack visible order, can serve its inhabitants for centuries. Some may even find it beautiful and poetic. On the other hand, a modern community may have planned infrastructure, order, and even style, but some may find it monotonous or lacking in character (i.e., the kind of character that can only grow with human touch and through time). So the question is: Do we actually need a big plan? A small plan? Or no plan at all? It’s a question that may not have a simple answer.

IA in the real world: a case study

Connections often extend beyond the design of just a website, to physical spaces and related sites. As an example, consider the series of projects my company did for an automobile manufacturer. Genex, the web consulting firm for which I work, delivered a range of solutions spanning the auto sales and ownership lifecycle.

Customer loyalty is at the core of automotive sales, so car manufacturers’ web sites should be designed to foster awareness of, consideration of, preference for, and purchase of the brand (and, ideally, repeat purchases). The “look and feel” of a site is core to communicating the brand’s values and differentiators—the key factors in a consumer’s purchase decision. Of equal importance are the processes embedded in the site that facilitate consumer purchase, including car comparisons, “build your car” functionality, 360-degree views of models, and personalized financing planning tools.

Being able to access information through several channels is also important to car buyers. Recognizing this, Genex designed self-service kiosks at its client’s dealerships to facilitate the on-site sales process by providing consumers with on-demand model and financing information, as well as the ability to submit a pre-approval form for financing. The kiosks were designed to work with different hardware configurations and within various locations at the dealership.

The purchase is not the end of the process, though. Just as important is the customer’s experience as an owner, which can bring that person back into the buying cycle when the time is right. An owners’ portal—providing a range of services that includes service records, maintenance reminders, and financial account management features—is a highly effective means of continuing to engage the customer with the brand.

The dealers also benefit from web technology. Genex designed a dealer portal that tightly connects dealers with customers and manufacturers. Much like a library, it contains the information a dealer needs to run its daily business, from marketing and sales to service. Sales leads generated by various sites are funneled into a single application where salespeople can manage them effectively. Dealers can also use the portal to set parts and accessories pricing and manage orders, while customers can purchase parts and accessories for their vehicles through the owners’ portal.

Convergence of architecture: “we are all connected”

Stepping back from this real world example, it’s important to keep focused on connections. The physical world is a network where everything touches everything else and everyone touches everyone else. The connection can be physical, financial, emotional or spiritual, but it’s there.

This is even more the case in the virtual world. As its name suggests, the web is a system of connected networks. In our quest for information, we are linked from one site to another and another; there is no beginning or end.

We tend to think that information flows from this superhighway. Indeed, the Internet connects us conveniently to types and quantities of information we have never before experienced. Still, we don’t get all our information from this or any other single channel. Aside from surfing the Internet and reading emails or instant messages, we watch movies and TV; read books, newspapers, and magazines; listen to the radio; view billboards while driving; and talk in person or by phone to family, neighbors, friends, and even strangers. Some of us even write letters to communicate. Most of the time we draw information from firsthand experience—when we do things, visit places, and meet people, we gain information and experience. Knowing that there are connections among the ways people get information, we should at least acknowledge and design for them. Better yet, we should try to create and design the connections themselves, in order to improve this flow of information and bridge any missing links. After all, we are all connected.

Fu-Tien Chiou is the senior information and usability architect for Genex, in Los Angeles. His responsibilities include setting up corporate-wide information design and usability assessment processes, information architecture and user interface design, and mentoring in user-centered-design methodology. He is a licensed architect with a post-graduate degree in environment and human behavior.

Natural Selections: Colors Found in Nature and Interface Design

“Perhaps no other design element has as much influence on how we feel in a space (a website, a home, etc.) as color.”

The World Wide Web is awash with sterile design solutions. Hewlett-Packard, IBM, Dell, Microsoft, and countless others are virtually indistinguishable from each other (similar layout, similar color scheme). Though one might say that this uniformity makes web browsing easier by virtue of a standardized interface, the reality is such sites create mundane experiences for their users and fail to make a positive connection with their audience.

One easily remedied cause of such drab design is color. Perhaps no other design element has as much influence on how we feel in a space (a website, a home, etc.) as color. Colors can instantaneously change our moods and alter our opinions. They can make us comfortable, put us in a state of awe, or get us excited. In the case of interface design, color combinations found in nature are especially useful. From complex web applications to informative “brochure-ware” sites, naturally occurring color combinations have the potential to distinguish (by helping create a more memorable website), guide (by allowing users to focus on interactions), engage (by making page layouts comfortable and more inviting), and inspire (by offering new ideas for color selection).

Distinguish

As we go through our lives, we quickly forget events that are routine and mundane. We tend to save our memories for unique experiences or events to which we had an emotional attachment. It’s no different as we move through the web. When all websites look the same, it’s quite easy to drop them into the “been there, seen that” bin. And once you consider how fast we move through websites, it’s probably even easier.

Therefore, any opportunities websites have to be distinct shouldn’t be squandered. If your site stands out, chances are web users might give it more time or thought when they arrive. They might even remember it and come back. There’s probably no better opportunity to make a favorable impression than with color. People have an immediate response to color: they get excited, they get happy, or they get bored.

A unique palette based on colors found in nature can get you out of the World Wide Web color rut (Fig 1) and help create a more memorable website. For example, the naturally occurring color combination (Fig 2) used in this website mock-up (Fig 3) is a stark contrast to the more “standard” version (Fig 4) of the same site. The soft colors are subtle enough to work as background, yet distinct enough to separate the four main information areas of the site.

But before you go applying “prairie tones” to your design, remember your color selections need to be appropriate for your audience. Because color communicates so effectively, it’s important to make sure that it says the right thing.

“Form follows function—that has been misunderstood. Form and function should be one, joined in a spiritual union.”
Frank Lloyd Wright

Guide

Colors found in nature are often less saturated and more pleasing to the eye than their artificial counterparts. As a result, they allow users to focus on interactions rather than being distracted by overly bright hues. When you attempt to focus on the information in a layout with very saturated hues (Fig 4), your eye consistently returns to the bright colors (in this example, to the blue bar at the top). In contrast, the blues and yellows in the alternate layout (Fig 3) create a balance that allows the images, navigation, and content to come forward. (This is especially useful for pages with lots of content.) The strongest visual elements are the most useful ones: navigation menus and featured content, not background colors. Perhaps this occurs because of our familiarity with nature’s color combinations. We are used to backdrops composed of blues, yellows, and grays because we see them every day.
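The claim about saturation can be checked numerically. Below is a minimal sketch using Python’s standard colorsys module; the two hex values are hypothetical stand-ins for a “standard” web blue and a muted natural tone, not colors taken from the figures:

```python
import colorsys

def saturation(hex_color):
    """HSL saturation (0.0-1.0) of a color given as '#rrggbb'."""
    h = hex_color.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    _hue, _lightness, s = colorsys.rgb_to_hls(r, g, b)  # colorsys returns HLS order
    return s

# Hypothetical swatches: a fully saturated "web" blue vs. a muted sage tone.
web_blue = saturation('#0000ff')
sage = saturation('#9aa588')

print(web_blue)          # 1.0 -- pulls the eye toward itself
print(web_blue > sage)   # True -- the natural tone competes less for attention
```

The same comparison can be run over an entire palette to spot which swatches will fight the content for attention.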

This phenomenon becomes especially important in web-based applications, where users may interact with an online service for hours or days at a time. A palette that does not fight for users’ attention allows them to focus on their work and on important information. Of course, color isn’t all you need to create a great web experience; structure, interaction, layout, and more must work together to create usable and useful websites and applications. But color is an important part of the equation and shouldn’t be ignored.

“Nature’s colors are familiar and have a widely accepted harmony.”
Edward Tufte, 1989
Engage

Color combinations found in nature are especially useful for addressing another design consideration: emotional response. Usability is vital for getting users from Point A to Point B easily, but it takes personality to create enjoyable experiences that people want to repeat and share.

Consider the following two versions of a transaction form. One (Fig 5) uses a palette that is bleak and intimidating. The other makes use of a naturally occurring color palette and is more approachable because of its warmer, more inviting colors. For the clerical workers who must use this online program repeatedly, a less intimidating interface can engage them and provide a more comfortable setting in which to work.

Inspire

Color combinations found in nature also provide a wealth of inspiration. The diversity of the natural world continually offers new ideas and approaches to color selection. For instance, the colors used to encourage tourism in the city of Dublin are not the orange, green, and white of the Irish flag you might expect. Instead, they come from a naturally occurring color combination on the Irish coast. This combination is both lively and attractive, making Dublin seem fun and expansive. It’s a shame this color scheme was not carried through to the Dublin website, which is much less vibrant and engaging.

All that said, naturally occurring color combinations are not a silver bullet. Sometimes you might want to inspire “shock and awe” in your audience; in that case, colors that never occur together in nature (and therefore seem uncomfortable) could be your best bet. Other times, your favorite shade of corporate blue could be just what your audience is looking for. But when it comes to extended or complex interactions and unique ideas, color combinations found in nature are a valuable weapon to have in your arsenal.

Setting the Mood with Color by Sean Glithero

Color My World by Molly E. Holzschlag

Color Design for the Web by Vaishali Singh

Semiotics: A Primer for Designers


“Semiotics is important for designers as it allows us to understand the relationships between signs, what they stand for, and the people who must interpret them — the people we design for.”


In its simplest form, semiotics can be described as the study of signs: not signs as we normally think of them, but signs in a much broader context that includes anything capable of standing for or representing a separate meaning.

Paddy Whannel[1] offered a slightly different definition. “Semiotics tells us things we already know in a language we will never understand.” Paddy’s definition is partly right. The language used by semioticians can often be overkill, and indeed semiotics involves things we already know, at least on an intuitive level. Still, semiotics is important for designers as it allows us to understand the relationships between signs, what they stand for, and the people who must interpret them — the people we design for.

The science of Semiology (from the Greek semeîon, ‘sign’) seeks to investigate and understand the nature of signs and the laws governing them. Semiotics represents a range of studies in art, literature, anthropology, and the mass media rather than an independent academic discipline. The disciplines involved in semiotics include linguistics, philosophy, psychology, sociology, anthropology, literature, aesthetic and media theory, psychoanalysis and education.

Origins of Semiotics

Swiss linguist Ferdinand de Saussure[2] is considered the founder of modern linguistics and of semiotics. Saussure postulated the existence of a general science of signs, or Semiology, of which linguistics forms only one part. Semiology therefore aims to take in any system of signs, whatever its substance and limits: images, gestures, musical sounds, objects, and the complex associations of all these, which form the content of ritual, convention, or public entertainment. These constitute, if not languages, at least systems of signification.

Language of Language

Structuralism is an analytical method used by many semioticians. Structuralists seek to describe the overall organization of sign systems as languages. They search for the deep and complex structures underlying the surface features of phenomena.

Social Semiotics has taken the structuralist concern with the internal relations of parts within a self-contained system to the next level, seeking to explore the use of signs in specific social situations.

Semiotics and the branch of linguistics known as Semantics have a common concern with the meaning of signs. Semantics focuses on what words mean while semiotics is concerned with how signs mean. Semiotics embraces semantics, along with the other traditional branches of linguistics as follows:

  • Semantics: the relationship of signs to what they stand for.
  • Syntactics (or syntax): the formal or structural relations between signs.
  • Pragmatics: the relation of signs to interpreters.

A Text is an assemblage of signs (such as words, images, sounds and/or gestures) constructed (and interpreted) with reference to the conventions associated with a genre and in a particular medium of communication. Text usually refers to a message, which has been recorded in some way (e.g., writing, audio- and video-recording) so that it is physically independent of its sender or receiver.

Saussure made what is now a famous distinction between language and speech. Language refers to the system of rules and conventions which is independent of, and pre-exists, individual users; Speech refers to its use in particular instances. Applying the notion to semiotic systems in general rather than simply to language, the distinction is one between code and message, structure and event or system and usage (in specific texts or contexts). According to the Saussurean distinction, in a semiotic system such as cinema, any specific film is the speech of that underlying system of cinema language.

The structuralist dichotomy between usage and system has been criticized for its rigidity, separating process from product and subject from structure. The prioritization of structure over usage also fails to account for changes in structure. Valentin Voloshinov[3] proposed a reversal of the Saussurean priority of language over speech: “The sign is part of organized social intercourse and cannot exist, as such, outside it, reverting to a mere physical artifact.” The meaning of a sign lies not in its relationship to other signs within the language system but rather in the social context of its use. Voloshinov observed that “there is no real moment in time when a synchronic system of language could be constructed… A synchronic system may be said to exist only from the point of view of the subjective consciousness of an individual speaker belonging to some particular language group at some particular moment of historical time.” As it turns out, both are correct.

In other words, take a very simple example—the word “live.” The fact that the “l” is next to the “i,” which is next to the “v,” which is next to the “e,” is important: without those characters in that order we wouldn’t have the word “live.” But it is also important that the word is being viewed on July 3, 2003, and that the context is a concert ticket, so that we may infer that the music is indeed being played live. The study of semiotics needs to account for both the relationship among the symbols and the social context, or context of use.
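The point can be made concrete with a toy model. The contexts and meanings below are invented purely for illustration, but they show that interpretation requires both the sign’s form and its context of use:

```python
# A toy model of signification: the same signifier yields different
# signifieds depending on the context in which it is read.
MEANINGS = {
    ("live", "concert ticket"): "performed in person, as it happens",
    ("live", "electrical panel"): "carrying current; dangerous to touch",
    ("live", "biology textbook"): "to be alive",
}

def interpret(signifier, context):
    # The letters alone ("l-i-v-e") select the signifier; the context
    # selects which signified the reader attaches to it.
    return MEANINGS.get((signifier, context), "ambiguous without context")

print(interpret("live", "concert ticket"))
print(interpret("live", "unknown setting"))
```

Stripping away either key of the lookup (the character sequence, or the context) leaves the reader unable to settle on a single meaning, which is the Saussure/Voloshinov point in miniature.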

Understanding Design as a Dialogue

In Semiotics: The Basics[4], Daniel Chandler sums up precisely why we as designers must be well versed in semiotics.

“The study of signs is the study of the construction and maintenance of reality. To decline such a study is to leave to others the control of the world of meanings.”

Semiotics teaches us as designers that our work has no meaning outside the complex set of factors that define it. These factors are not static, but rather constantly changing because we are changing and creating them. The deeper our understanding and awareness of these factors, the better our control over the success of the work products we create.

Semiotics also helps us not to take reality for granted as something that simply exists. It helps us to understand that reality depends not only on the intentions we put into our work but also the interpretation of the people who experience our work. Meaning is not contained in the world or in books, computers or audio-visual media. It is not simply transmitted—it is actively created, according to a complex interplay of systems and rules of which we are normally unaware.

Becoming aware of these systems and rules and learning to master them is the true power of visual communication and design.


Terms

Semiotics, Semiosis, Semiology: The study of signs and signifying systems: the process of attaching signifieds to signifiers, by which signs and symbols come to have meaning. Signs are seen as the basic building blocks of meaning. Semiotics is concerned with how signs are produced, maintained, and changed, which is why it is sometimes called the study of the process of signification.


Three types of signs:

  • Symbol: Stands in place of an object – flags, the crucifix, bathroom door signs.
  • Index: Points to something – an indicator, such as words like “big” and arrows.
  • Icon: A representation of an object that produces a mental image of the object represented. For example, a picture of a tree evokes the same mental image regardless of language. The picture of a tree conjures up “tree” in the brain.

Signifier: Is in some ways a substitute or stand-in. Words, both oral and written, are signifiers. The brain then exchanges the signifier for a working definition. The word “tree,” for example, is a signifier. You can’t make a log cabin out of the word “tree.” You could, however, make a log cabin out of what the brain substitutes for the input “tree” which would be some type of signified.

Signified: What the signifier refers to (see signifier). There are two types of signifieds:

  • Connotative: Points to the signified but has a deeper meaning. An example provided by Barthes is “Tree” = luxuriant green, shady, etc.
  • Denotative: What the signified actually is, quite like a definition, but in brain language.

Slippage: When meaning moves due to a signifier calling on multiple signifieds. Also known as “skidding.”

Discourse: Messages that serve a communicative function and are usually more complex than simple signs.

Mythic Signs: Messages that “go without saying” that reinforce the dominant values of their culture. These messages don’t raise questions or inspire critical thinking.

Denotative System: A signifier, signified, and sign that together form a meaning.

Second-Order Semiological System: A connotative system that incorporates the sign of an initial system, which becomes the signifier of the second system.

Taxonomy: A kind of structural analysis where features of a semiotic system are classified.

Structuralism: Structuralism is a mode of thinking and a method of analysis practiced in 20th-century social sciences and humanities. Methodologically, it analyzes large-scale systems by examining the relations and functions of the smallest constituent elements of such systems, which range from human languages and cultural practices to folktales and literary texts.

Social Semiotics: Social semiotics is the study of human social meaning-making practices of all types. These include linguistic, actional, pictorial, somatic, and other semiotic modalities, and their co-deployment. The basic premise is that meanings are made, and the task of social semiotics is to develop the analytical constructs and theoretical framework for showing how this occurs.

Other Terms

Exegesis: Critical interpretation of a text; an interpretation of content alone that searches for connotative meaning.

Hermeneutics: Differs from exegesis in that it is less “practical.” It is the text that postpones and even breaks with itself to shift meaning through slippage or skidding.

Readerly Text: (from The Pleasure of the Text) Discourse that stabilizes and meets the expectations of the reader.

Writerly Text: A text that discomforts the reader and creates a subject position for him or her outside of his or her mores or cultural base.


[1] Semiotics, Structuralism, and Television, Ellen Seiter, 1992.

[2] Saussure, Ferdinand de (1993). Third Course of Lectures on General Linguistics. Pergamon Press.

[3] In Perspective: Valentin Voloshinov, Issue 75 of International Socialism, Quarterly Journal of the Socialist Workers Party (Britain), Published July 1997.

[4] Chandler, Daniel (2001). Semiotics: The Basics. Routledge. ISBN 0415265940.


Barthes, Roland (1964). Elements of Semiology. Hill and Wang.

Chandler, Daniel (2001). Semiotics: The Basics. Routledge. ISBN 0415265940.

Saussure, Ferdinand de (1993). Third Course of Lectures on General Linguistics. Pergamon Press.

Stuart Hall, Recent Developments in Theories of Language and Ideology: A Critical Note, from Culture, Media, Language: Working Papers in Cultural Studies, 1972-1979 (1980).

Vestergaard, T & K Schroder (1985): The Language of Advertising. Oxford: Blackwell.

Umberto Eco, A Theory of Semiotics (Bloomington: Indiana University Press, 1976), p. 16.

Challis Hodge is an Experience Strategist with over 16 years’ experience leading cross-functional teams in the planning, research, conceptualization, design, and development of user-centered products, services, software, and Internet solutions. Challis was CEO and co-founder of HannaHodge, a pioneering user experience firm that broke new ground in human-centered design process. Challis has undergraduate and graduate degrees in industrial design and human-computer interaction. He consults, teaches, and writes about design and experiences at the intersection of people, business, and technology.

Cognitive Psychology & IA: From Theory to Practice

“Theories exist to be tested and disproved, so that more accurate theories can then be devised.”

What do cognitive psychology and information architecture have in common? Actually, there is a good deal of common ground between the two disciplines. First and foremost, both are concerned with mental processes and how to support those processes. Indeed, many information architects (including the author) have backgrounds in cognitive psychology or a closely related field. Certainly, having a background in cognitive psychology supports the practice of information architecture, and it is precisely those interconnections that this article explores.

Before delving into the details of how cognitive psychology informs information architecture, it is important to keep a few considerations in mind. The first is that psychological science is not “perfect” in the sense that all the significant variables influencing human behavior are known. Theories exist to be tested and disproved, so that more accurate theories can then be devised. A second consideration is that cognitive psychology focuses on theory and research, while the field of information architecture has tended more toward the practitioner side. A third consideration, perhaps the most important one, is that translating research results and theory to industry practice is, at best, an imprecise process. In fact, misapplication of research findings is a very real possibility—an example, given later, concerns misapplication of research on short-term memory.

Mental categories
From the user’s perspective, a mental category is a grouping mechanism, a way to bring together items or concepts through some unifying characteristic(s) or attribute(s). So what are those characteristics or attributes? This is where it gets interesting (and challenging) for the information architect because there are various ways to create categories and certainly no “right” set of categories to use.

In fact, from one user to another, the categories may differ significantly. Some categories are formed on the basis of visual similarity (e.g., this looks like a laptop computer, so it goes in my “laptop” category), while other categories could be based on items serving a shared purpose (e.g., software involved in web design). The items in a category could even be based on a set of rules for inclusion and exclusion (e.g., if there are no geometrical shapes, this cannot be a site diagram). Cultural differences, socialization, and cohort effects (differences based on when someone was born) also factor into the categorization process, creating even more diversity in how categories are formed.

Given that there are so many possible approaches to categorization, what is an information architect to do? The best advice is to try to accommodate as many different categorization approaches as possible, ideally supporting the most common approaches while realizing that accommodating everyone is impossible. In most cases websites just support one categorization approach for content, which puts the burden on the user to try to work within that approach. If the adjustment cannot be made, it is likely to be quite a frustrating experience. An example of this situation would be the use of categories specific to a given corporate culture; those outside the company would likely have a difficult time following the categorization scheme.

Open-ended card sorting is a very useful tool for studying mental categories and exploring why some categories are formed and others are not. Allow the user the freedom to freely sort the content cards, forming groupings and sub-groupings as necessary. If given the choice between sitting down with the user doing the card sort and taking notes, or letting the user complete the sorting using card sorting software, go with the face-to-face interaction. The software will likely perform a cluster analysis and reveal statistical groupings, but it is unlikely to record what the user said at a given moment, why groupings were made, or what groupings were perhaps created initially and then changed later. Those insights are invaluable when trying to understand mental categories.
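The statistical side of a card-sort analysis is simple to sketch. The snippet below is a hypothetical illustration (not the output of any particular card-sorting product): it builds a co-occurrence count showing, for each pair of cards, how many participants placed both cards in the same group. The card labels and participant data are invented.

```python
from itertools import combinations

def cooccurrence(sorts):
    """Count, for each pair of cards, how many participants placed
    both cards in the same group.

    `sorts` is a list of participants; each participant is a list of
    groups; each group is a list of card labels.
    """
    counts = {}
    for participant in sorts:
        for group in participant:
            # Sort so each pair has one canonical key.
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

# Three hypothetical participants sorting four content cards.
sorts = [
    [["pricing", "plans"], ["about us", "careers"]],
    [["pricing", "plans", "careers"], ["about us"]],
    [["pricing", "about us"], ["plans", "careers"]],
]
counts = cooccurrence(sorts)
# "plans" and "pricing" were grouped together by 2 of 3 participants.
```

Numbers like these are what cluster-analysis software works from; the face-to-face notes about why the groupings were made are what the numbers cannot capture.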

Based on the card sorting, numerous approaches to categorizing the content are now apparent. How can these approaches be supported in the same interface? One answer is through the use of facets in browsing and searching. Both the Epicurious website and the Flamenco image search provide excellent examples of how a facet-based approach can be implemented. Search engines can also support this diversity of mental categories by clustering results based on facets. Of course, the trade-off here is that categorical metadata needs to be developed and applied to the documents, so significant time and resources must be available. The advantage, however, is that the users can now browse or search based on the facet closest to their way of mentally categorizing the content. Choice is returned to the user.
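At its core, a facet-based browse is an intersection of filters over item metadata. The following minimal sketch illustrates the idea; the facet names and recipe items are invented for illustration, not taken from Epicurious or Flamenco.

```python
def facet_filter(items, **facets):
    """Return the items whose metadata matches every requested facet
    value; each additional facet narrows the result set further."""
    return [item for item in items
            if all(item.get(name) == value for name, value in facets.items())]

# Hypothetical content items with categorical metadata already applied.
recipes = [
    {"title": "Gazpacho",   "course": "soup",  "season": "summer"},
    {"title": "Minestrone", "course": "soup",  "season": "winter"},
    {"title": "Panzanella", "course": "salad", "season": "summer"},
]

summer_soups = facet_filter(recipes, course="soup", season="summer")
# → [{"title": "Gazpacho", "course": "soup", "season": "summer"}]
```

Each facet corresponds to one way of mentally categorizing the content; the user picks whichever facet matches their own categories.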

Visual perception
Visual perception also factors into the user experience because visual cues are often the basis for mental associations users make among items on the interface. The Gestalt psychologists explored how visual perception works, isolating a number of rules that explain how the human perceptual system functions. The two rules most pertinent to the Web are proximity and similarity.

The Gestalt rule of proximity indicates that items close together are perceived as being related/associated:

Proximity suggests two statements, not one.

Proximity suggests three columns, not five rows.

The Gestalt rule of similarity indicates that items with a similar appearance are perceived as being related/associated:

Similarity suggests two statements, not one.

Second and fifth rows perceived as distinct rows.

The implications for navigation bar design are apparent, since navigation items that are across the page and appear dissimilar are unlikely to be perceptually associated. There are also important implications for the display of local navigation bars, which need to have their items be proximal and similar so that users perceive them as being together. They must also be associated with the appropriate section in the global navigation bar (and not be perceived as global navigation).

Most of your users should perceive the desired relationships if proximity and similarity are used appropriately. Wireframes and prototypes work quite well for testing your implementation of the Gestalt rules, providing low-cost options for improving visual design.

Memory
Memory is one of the primary domains examined by cognitive psychologists, since encoding, storage, and retrieval of information constitute a significant portion of our cognitive activity. Unfortunately, research done by cognitive psychologists on memory has not always translated successfully (or correctly) to information architecture practice. The best example of this is research on short-term memory, which established that humans can hold from five to nine chunks of information in short-term (temporary) memory.

Based on this research, some practitioners claim that navigation bars should not be longer than nine items. There are a number of flaws in this thinking. The first flaw, and the most important, is that global navigation is meant to be present on every page and that local navigation is meant to be present on every page within a given section of the website. Where is the need to commit the navigation bar to memory, if it is always there? Are users closing their eyes and trying to recall everything on the navigation bar? Of course not.

Another flaw is that applying the research to navigation bar length is akin to comparing apples and oranges; they are different settings. Navigating a website involves far more visual stimuli and interaction than the research tasks, and website content is familiar enough to fit into established semantic networks, unlike research content that is often random or nonsensical.

This is not to say that short-term memory (or other proposed players in memory processes, such as working memory) never comes into play. It just doesn’t come up based on the length of the navigation bar. Short-term memory becomes important when navigation bars disappear in the lower levels of a website, or when a link takes the user to an entirely different part of the website. At that point the user wonders: How did I get here?

And where is here, exactly? If navigation bars cannot be maintained at lower levels (perhaps because of interface constraints), short-term memory can be supported through the inclusion of breadcrumb trails, providing descriptive page title information and giving the page a descriptive and visually prominent heading/name. Efforts can even be made to visually denote a section of the website, such as using a certain color or graphic for pages within a section, which serves to remind users of their current location.
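The breadcrumb support described above can be derived mechanically from the page’s URL path. A minimal sketch, assuming each section exposes a display title for its path prefix (the `titles` lookup table here is invented, standing in for real page metadata):

```python
def breadcrumb(url_path, titles):
    """Build (title, href) pairs for each ancestor of the current page,
    so the user can see, and retrace, how they got here.

    `titles` maps a path prefix to its display title; unknown prefixes
    fall back to the raw path segment."""
    segments = [s for s in url_path.strip("/").split("/") if s]
    crumbs, href = [("Home", "/")], ""
    for segment in segments:
        href += "/" + segment
        crumbs.append((titles.get(href, segment), href))
    return crumbs

titles = {"/products": "Products", "/products/laptops": "Laptops"}
crumbs = breadcrumb("/products/laptops", titles)
# [("Home", "/"), ("Products", "/products"), ("Laptops", "/products/laptops")]
```

Rendered near the page heading, a trail like this offloads the “how did I get here?” question from short-term memory onto the interface.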

An excessive number of items in a navigation bar can tax mental resources, but the issue here is one of information processing (not memory/storage) and the ability to visually scan and differentiate the “signal” (the desired item) from the “noise” (all the surrounding items). To reduce the noise, use headings in the navigation bar to group links. The Microsoft website uses such headings to good effect. Users can scan a smaller number of headings before scanning within a given grouping of links. Using this approach, navigation bars can grow comfortably to much larger sizes. Boldfacing the headings further supports scanning, as they “pop out” from the surrounding links.
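The grouping idea can be sketched in a few lines. This is a hypothetical text rendering, the headings and links are invented, but it shows the structure users end up scanning: a few headings first, then the links within one group.

```python
def grouped_nav(links):
    """Render a navigation bar whose links are grouped under headings,
    so users scan a handful of headings instead of one long,
    undifferentiated list. `links` maps heading -> list of link labels."""
    lines = []
    for heading, items in links.items():
        lines.append(heading.upper())              # headings "pop out"
        lines.extend(f"  {item}" for item in items)  # links indented under them
    return "\n".join(lines)

nav = {
    "Products": ["Laptops", "Desktops", "Accessories"],
    "Support": ["Downloads", "Knowledge base"],
}
print(grouped_nav(nav))
```

On a real page the headings would be boldfaced rather than uppercased, but the scanning benefit is the same.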

When a user is scanning through a list of links, trying to decide which one is best, that decision-making process brings in another type of memory: long-term memory. As the name suggests, long-term memory is storage for the long haul. Whether or not information is stored there permanently is still up for debate. Our problems remembering something could be because that information decayed and was lost, or it could be that the information is intact, but our ability to retrieve it is hampered by interference from other information. Whatever the case, our interest is in how memory networks are conceptualized. Those conceptual models give us insight into how users structure and associate concepts.

One approach to conceptualizing these is through semantic networks, or networks of meaning. Based on our experiences and learning, the nodes in these networks (which represent concepts or items) are linked together, forming an intricate network of interrelations.

As labels in the navigation bar are visually scanned, various nodes are activated in the network. Activation of one node triggers activation of connected nodes, which then trigger their connected nodes, spreading outward in a weakening wave until the strength of that activation is spent. This spreading activation helps explain how thinking of one topic can bring to mind related topics. Spreading activation also helps to explain why users struggle to decide between two or more navigation links that “both sound good,” or why they give up and say “none of these look right.” In the former case, the spreading activation patterns for the labels could have both seemed equally promising (they both activate the desired node at about the same strength), while in the latter case the desired node was never activated.
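Spreading activation itself can be simulated in a few lines. In this hypothetical sketch (the network, decay rate, and threshold are all invented for illustration), each node passes a weakened share of its activation to its neighbors until the signal falls below a cutoff:

```python
def spread(network, start, decay=0.5, threshold=0.1):
    """Propagate activation outward from `start` through a semantic
    network (dict of node -> neighbor list), weakening by `decay` at
    each hop until it drops below `threshold`."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        next_frontier = []
        for node in frontier:
            passed = activation[node] * decay
            if passed < threshold:
                continue  # the wave has spent its strength here
            for neighbor in network.get(node, []):
                if neighbor not in activation:  # first (strongest) arrival wins
                    activation[neighbor] = passed
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

# A toy semantic network around the concept "checkout".
network = {
    "checkout": ["cart", "payment"],
    "cart": ["products"],
    "payment": ["credit card"],
    "products": ["reviews"],
}
result = spread(network, "checkout")
# activation: checkout 1.0; cart and payment 0.5;
# products and credit card 0.25; reviews 0.125
```

A label whose node sits close to the user’s goal in their network receives strong activation; a label several hops away barely registers, which is one way to picture weak information scent.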

Closely related is the concept of information scent, as users could be evaluating labels based on their semantic networks, attempting to locate the label with the best “scent” (i.e., the label most closely related to the desired node in their semantic network). Card sorting is again useful in exploring (albeit in a limited and indirect way) the various semantic network structures established by users. Open-ended card sorts, where users freely sort cards into groupings and sub-groupings, help to establish which nodes are most closely interconnected. Closed card sorts, where users have to place content cards into predefined categories, help determine whether the information scent of the category labels is adequate, while also giving insight into how their semantic networks are structured. User testing is also extremely useful in determining whether information scent is correct and whether labels need adjustment.

Learning
A primary concern here is a phenomenon called transference. No, this is not related to how you got along with your mother and how that influences current relationships. This is cognitive psychology, not counseling or clinical psychology.

Transference, in this context of learning, refers to our expectations about an interface’s behavior based on our previous experiences with other interfaces. How do we know the way a scrollbar works? That knowledge is based on previous experience. What do the icons in a toolbar represent to us? We make inferences based on our experience with other software that uses the same or similar icons. Trying to distinguish which navigation bar is global or which is local? We use what we have learned on previous websites to identify where those navigation bars tend to be located. If someone has to switch back and forth between Windows and Macintosh operating systems, sooner or later they will try something that only works in Windows on the Mac or vice versa, simply because of transference.

When our expectations turn out to be correct, the effect is one of positive transference. What we learned from the previous interface transferred successfully to the current one. Negative transference occurs when our expectations are incorrect; this process tends to result in an error. On websites, the transference typically involves layout choices, how links and buttons function, what certain labels and icons mean, and how technology used by the website functions. Websites with frames, for example, can pose negative transference issues, as users unaccustomed to frames may be confused about how printing functions and why bookmarking will only work for the homepage. In addition, the multitude of interface options offered by Dynamic HTML and Flash may be beneficial or harmful, depending on whether positive or negative transference occurs. Certainly, the effects of transference should always be considered in labeling and interface development.

A broadened perspective
Consideration of mental categories influences how content is organized, while factoring in visual perception, memory, semantic networks, and learning helps to guide labeling and interface decisions. Considering research and theory in cognitive psychology helps information architects create better user experiences, if only because it helps us ask more, and better, questions about what creates a good user experience.

Jason Withrow is a faculty member in the Internet Professional department at Washtenaw Community College. He teaches a wide variety of web design classes, including classes on user experience, web coding, project management, and professional practices. He maintains an instructional website, although it is mainly for his students.

Prior to entering the teaching field, he worked in industry as an information architect at a web design firm in Ann Arbor, Michigan. In his spare time he works as a freelance information architect and web designer.

Usability Heuristics for Rich Internet Applications

“The key difference between a typical Flash site and an RIA is that RIAs possess the functionality to interact with and manipulate data, rather than simply visualize or present it.”

Heuristics, or “rules of thumb,” can be useful both in usability evaluations and as guidelines during design. Jakob Nielsen’s 1994 set of usability heuristics was developed with a focus on desktop applications. In 1997, Keith Instone shared his thoughts on how these heuristics apply to what was then a relatively new area: websites. Today, in 2003, with Flash-enabled Rich Internet Applications (RIAs) becoming more popular, Nielsen’s heuristics still offer valuable guidelines for RIA designers and developers.

In this article, we focus on Flash because it currently dominates the RIA landscape. However, many of the lessons for Flash apply to other technologies as well.

Rich Internet Applications offer the benefits of distributed, server-based Internet applications with the rich interface and interaction capabilities of desktop applications. The key difference between a typical Flash site and an RIA is that RIAs possess the functionality to interact with and manipulate data, rather than simply visualize or present it. While RIAs hold significant promise, many in the Flash community don’t have the opportunity to work with interaction designers, information architects, or other user experience professionals. As well, user experience professionals often decry Flash or other rich technologies as “bells and whistles” that detract from user goals. We hope this article provides some common ground for discussion between the two communities.

The list below includes Nielsen’s heuristics in bold; our comments about how they apply to RIAs follow each heuristic. Since RIAs cover a broad range of applications, we know we haven’t covered everything. We’d love to hear your own thoughts and experiences in the comments.

1. Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
RIAs should leverage their rich display capabilities to provide real-time status indicators whenever background processing requires the user to wait. While progress indicators are frequently used during an extensive preload when launching an application, they should also be used throughout a user’s interaction with data, whether the wait comes from backend data processing or from preloading.

When dealing with sequential task steps, RIAs should indicate progress through the task (e.g., “Step 4 of 6”). This helps users understand the investment required to complete the activity and helps them stay oriented during the activity. Labeling task steps will provide a clearer understanding of system status than simply using numbers to indicate progress. RIAs’ ability to store client-side data can be used to allow the user to skip optional steps or to return to a previous step.
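A labeled step indicator takes very little code. This hypothetical sketch (the step names are invented) shows why a label communicates more than a bare number:

```python
def step_indicator(steps, current):
    """Render a labeled 'Step N of M' indicator, which orients the
    user better than a number alone. `current` is 1-based."""
    label = steps[current - 1]
    return f"Step {current} of {len(steps)}: {label}"

steps = ["Shipping address", "Delivery options", "Payment", "Review order"]
print(step_indicator(steps, 3))
# Step 3 of 4: Payment
```

Listing the remaining step names alongside the indicator goes further still, letting users judge the investment required before they commit.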

System status should relate to the user’s goals, and not to the technical status of the application, which brings us to our next heuristic.

2. Match between system and the real world

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Understanding the user’s vocabulary, context and expectations is key to presenting a system that matches their world. While RIAs are made possible by the functionality of Flash and other technologies, users are usually not familiar with terms like rollover, timeline, actionscript, remoting, or CFCs – such technology-based terms should be avoided in the application. (See our sidebar for definitions if you’re not sure of them yourself).

While RIAs can offer novel metaphors, novelty often slows usefulness and usability. When using metaphors, ensure that they act consistently with their real-world counterparts. If application functions cause the metaphor to behave in ways that don’t match the real world, the metaphor has lost its usefulness and should be discarded in favor of a different concept.

Both information and functionality should be organized to reflect the user’s primary goals and tasks supported by the application. This supports a user’s feeling of competence and confidence in the task – a key need that is also supported by letting the user stay in control.

3. User control and freedom

Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Users are familiar with browser-based controls, including the Back button and Location field. However, using browser commands within an RIA may result in data loss.

The RIA should include code that is aware of and responsive to browser history. For applications that contain complex functionality that is the focus of user attention, creating a full-screen version that hides the browser controls can be appropriate, as long as there is a clearly marked exit to return to the browser.

While “undo” and “redo” are not yet well-developed in the Flash toolkit, changes to data can be stored as separate copies, allowing the application to revert to a previous version of the data. However, this becomes quite complex in multi-user environments and requires strong data modeling to support.

Many Flash projects include splash screens or other scripted presentations. These non-interactive exhibitions of technical prowess reduce the user’s feeling of control. The ubiquitous “Skip Intro” link offers little help – instead, consider how any scripted presentation benefits the user. If a scripted sequence doesn’t support user goals, skip the development time in favor of something that does. One area that may be a better investment is working to ensure consistency in the application.

4. Consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
All applications require consistency within their features, including terminology, layout, color, and behavior. Complying with interface standards can help maintain consistency. However, since RIAs are a new category of applications, standards are still being developed. The Microsoft Windows User Experience guidelines, Apple Human Interface guidelines, and Macromedia’s developing guidelines provide some alternative starting points for RIA standards.

Branding guidelines also often require consistency that RIA teams need to consider. RIAs are often deployed as branded solutions for a variety of customers. The application needs to be flexible in implementing custom copy, color, and logos. However, branding should not compromise good design. RIA teams may need to show that the brand will gain equity through applying useful and usable standards as well as beautiful visual design. A gorgeous, cutting edge, award-winning presentation won’t help the brand if it’s easy for users to make disastrous mistakes that prevent them from reaching their goals.

5. Error prevention

Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
In forms, indicate required fields and formats with examples. Design the system so that it recognizes various input options (780.555.1212 vs. 780-555-1212) rather than requiring the user to comply with an arbitrary format. Also consider limiting the amount of data entry required and reducing input errors by saving repetitious data and auto-filling fields throughout the application.
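Accepting several input formats rather than enforcing one is usually a small amount of code. A sketch for North American phone numbers (the validation rule here, exactly ten digits, is simplified for illustration):

```python
import re

def normalize_phone(raw):
    """Accept common phone formats, e.g. 780.555.1212, 780-555-1212,
    or (780) 555 1212, and normalize them to one canonical form.
    Returns None when the input has the wrong number of digits."""
    digits = re.sub(r"\D", "", raw)  # strip everything but digits
    if len(digits) != 10:
        return None  # ask the user to correct it, showing an example
    return f"{digits[0:3]}-{digits[3:6]}-{digits[6:]}"

assert normalize_phone("780.555.1212") == "780-555-1212"
assert normalize_phone("(780) 555 1212") == "780-555-1212"
assert normalize_phone("555-1212") is None
```

The system bends to the user’s habits; the user never sees an error for a format choice that was never ambiguous in the first place.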

Avoid system functions with disastrous potential, such as “Delete All Records.” When functions with significant impact are necessary, isolate them from regular controls. Consider an “Advanced Options” area only accessible to administrators or superusers, rather than exposing dangerous functionality to all users.

With RIAs, when problems do occur, network connectivity allows for the capture and transmission of error details. Similarly, the distributed model of RIAs empowers developers to provide minor updates that are nearly immediately available to the user. These immediate updates can provide correction to issues that repeatedly cause user errors. Beyond the technology, another way to prevent errors is to make currently needed information available to the user instead of making them remember things from previous screens.

6. Recognition rather than recall

Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Too often, the rich presentation possibilities of Flash are used to play hide-and-seek with important interface elements. Don’t hide controls that are key to user tasks. Revealing application controls on rollover or with a click can create exciting visual transitions, but will slow user tasks and create significant frustration.

Since people who are engaged in a task decide where to click based on what they see, rollovers or other revealed elements can only provide secondary cues about what actions are appropriate. The interface should provide visible primary cues to guide user expectations and help users predict which controls will help them achieve their goals. While some of these cues will be basic functionality, cues should also be available for frequent users to show functions that save them time or let them work more flexibly.

7. Flexibility and efficiency of use

Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
RIAs can leverage the advanced functionality of the platform to provide accelerators such as keyboard shortcuts, type-ahead auto-completion, and automatic population of fields based on previously entered data or popularity of response.

Less technically sophisticated accelerators should also be available—particularly bookmarks—either by allowing bookmarking in the browser, or creating a bookmark utility within the application itself. Another option for giving quick access to a specific screen is assigning each screen a code which a user can enter in a text field to immediately access the screen without navigating to it.
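The screen-code accelerator described above amounts to a lookup table. A hypothetical sketch (the codes and screen names are invented):

```python
class ScreenRegistry:
    """Map short codes to screens so expert users can jump directly
    to a screen by typing its code, bypassing navigation."""

    def __init__(self):
        self._screens = {}

    def register(self, code, screen):
        self._screens[code.lower()] = screen

    def jump(self, code):
        # Tolerate stray whitespace and case; unknown codes return
        # None so the UI can show a gentle hint instead of an error.
        return self._screens.get(code.strip().lower())

registry = ScreenRegistry()
registry.register("INV", "Invoice list")
registry.register("CUST", "Customer detail")
print(registry.jump(" inv "))
# Invoice list
```

Because the codes are unseen by novices and optional for everyone, they accelerate experts without complicating the interface for anyone else.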

RIAs also offer the opportunity to personalize the application through dynamic response to popularity or frequency of use, or through user customization of functionality.

Established usability metrics, such as time spent carrying out various tasks and sub-tasks, as well as the popularity of certain actions, can be logged automatically, analyzed, and acted on in a nearly real-time fashion. For example, if a user repeatedly carries out a task without using accelerators, the application could provide the option of creating a shortcut or highlight existing accelerated options for completing the same task. However, providing these options should be an exercise in elegance, instead of a display of technical prowess.

8. Aesthetic and minimalist design

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
For any given feature, style, or branding element, ask two key questions: “What is the return on investment for the business?” and “What is the return on experience for the user?” What value does the element contribute? If a feature can be removed without seriously impacting ROI or ROE, the application will be better without it.

RIA design is often a balancing act between application functionality and brand awareness for the business. Limit user frustration by reducing branding emphasis in favor of functionality. While branding can and often should play an important role, the brand will best be supported by a positive user experience. Rather than creating a complicated visual style with an excess of interface “chrome,” work for simplicity and elegance.

Animation and transitions should also be used sparingly. While animation and transition can make a great demo in the boardroom, gratuitous animation will provoke user frustration. The time to animate an element interrupts a user’s concentration and engagement in their task. Disrupting task flow significantly impacts usability and user satisfaction.

Sound can also disrupt task flow – use subtle audio cues for system actions, rather than gratuitous soundtracks that are irrelevant to the task at hand.

A further advantage of maintaining clean, minimalist design is that it generally results in decreased file size, and lessened load times, which is essential given the limited patience of many internet users. Another advantage is that a clean interface makes it easier for the user to recognize when things are going right, and when things are going wrong.

9. Help users recognize, diagnose, and recover from errors

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

Sidebar: Flash terminology

Rollover: Changing the visual appearance of an interface element when the mouse “rolls over” it. A rollover may also trigger changes to other interface elements.

Timeline: The Flash development environment shows a timeline to organize screen elements and their interaction over time; along with ActionScript, it is the primary way of creating interactivity in Flash applications.

ActionScript: A JavaScript-based scripting language built into Flash. Used to program interface actions and behaviors.

Flash Remoting: Server technology that allows the Flash client to interact with software components on a server that can contain business logic or other code. Flash Remoting can connect Flash to server programming environments like ColdFusion, .NET, Java, and PHP. This provides for a cleaner division of labor and better security, with the server-side components doing the heavy computational lifting and the Flash client focusing on user interaction.

CFCs: Acronym for ColdFusion Components – server-based software components written in Macromedia’s ColdFusion language. CFCs natively support remoting.

Error messages should hide technical information in favor of explaining in everyday language that an error occurred. References to “missing objects” or other development jargon will only frustrate users.

RIA error messages can explain complicated interactions using animation. However, animation should be used sparingly in favor of clear textual explanations. Explanations should focus on solutions as much as causes of error. Display error messages along with the appropriate application controls or entry fields so that the user can take corrective action while reading the message. The ability to overlay help messages or illustrations directly over the interface can be useful in explaining task flow between related screen elements.

When errors are not easily diagnosed, make solution suggestions based on probability – ask what the user is most likely to want to accomplish right now, and present those options.
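Ranking recovery suggestions by probability can be as simple as counting what users have done after similar errors before. A hypothetical sketch, assuming the application logs the action users take next (the log entries here are invented):

```python
from collections import Counter

def suggest_actions(action_log, k=3):
    """When an error can't be diagnosed precisely, offer the actions
    users most frequently take next, ranked by how often each appears
    in the usage log."""
    return [action for action, _ in Counter(action_log).most_common(k)]

log = ["retry upload", "go to help", "retry upload", "contact support",
       "retry upload", "go to help"]
print(suggest_actions(log, 2))
# ['retry upload', 'go to help']
```

The error dialog then leads with the most probable fix rather than a generic apology.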

RIAs also provide the opportunity to immediately connect a user experiencing major difficulties with support personnel who can guide them to a solution through text chat, video chat, or remote manipulation. These live support channels are just some of the help options available to RIA teams.

10. Help and documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
RIAs should contain simple and concise instructions, prompts, and cues embedded in the application itself. More extensive help should be available from within the RIA.
Using animation or video tutorials with concise narration can often guide a user through complex tasks, while engaging individuals who learn better from visual or audio instruction rather than text. Showing the required steps is often easier for the user to understand than mentally translating a text description of the steps to the appropriate interface elements. Providing immediate contextual help through the use of tool tips and contextual help buttons allows the user to complete their tasks without having to shift focus to a help system.

This take on how Jakob Nielsen’s heuristics apply to RIAs is far from definitive. Rather than accepting these examples as unquestioned rules, we hope they spark your own thinking about how to apply the heuristics in your work, whether you’re a Flash developer or an interaction designer (or both). RIAs hold considerable promise for both Flash developers and user experience practitioners. Usability best practices like Nielsen’s heuristics are essential for realizing that promise.

The key takeaway for the Flash community: RIAs aren’t about grabbing attention, they’re about getting things done. This is a different mindset than many marketing-driven Flash sites, where bells and whistles are often encouraged in an effort to hold short attention spans. With RIAs, there’s no need to shout – the user is already engaged in accomplishing a goal. The best way to rise above the crowd is to cultivate a deep understanding of who your users are, what their goals are, and then design to meet those goals as quickly and elegantly as possible.

The key takeaway for the user experience community: Flash has matured beyond bells and whistles to provide a platform that enables a far better user experience for complex interactions than regular browser technology. While it isn’t perfect, it can open new possibilities for you as a user advocate. You’ll hear less “we can’t do that” from engineering teams, and be able to create interfaces and interactions closer to your vision. Getting to know the potential of Flash and other RIA platforms will help user experience professionals take advantage of the rich interaction available.

Over the coming months and years, RIAs will move from cutting edge to mainstream. That transformation will accelerate with the Flash and user experience communities working together to understand and develop best practices and shared knowledge. We’re looking forward to great new things— if you’re already doing them, drop us a line in the comments.

Jess McMullin is a user experience consultant who helps companies innovate products and services. Through value-centered design, Jess works to maximize return on investment for his clients and return on experience for their users. A founder of the Asilomar Institute for Information Architecture, he runs the popular IA news site iaslash. He is based in Edmonton, Alberta, Canada.

Grant Skinner is on the cutting edge of Rich Internet Application conceptualization and development, fusing coding prowess with interface design, marketing, and business logic. Grant is internationally recognized for his work on FlashOS2 and gModeler. As a freelance consultant, he works with corporate clients and leading web agencies to deliver online solutions that generate value. A frequent conference speaker, he will be speaking on usability issues specific to RIAs at SIGGRAPH at the end of July. Grant is based in Edmonton, Alberta, Canada.

The Sociobiology of Information Architecture

by:   |  Posted on
“To approach information architecture from a purely anthropocentric perspective is to overlook the lessons of billions of years’ worth of evolutionary history.”Pity the poor prokaryote.

Born blind, deaf, and mute, shuffling around in the darkness at 30 miles per hour, grasping for food, searching for mates, the life of your average bacterium (or any of the several trillion single-cell organisms on the planet) is invariably nasty, brutish, and short.

Be glad you’re a eukaryote. Like amoeba, insects, chimpanzees, and every other form of complex animal life, we enjoy not only the polymorphous pleasures of multi-cellularity, but also a singular gift, one that distinguishes us from all other known life forms: the ability to share knowledge with each other.

John Locke famously argued that “beasts abstract not.” But in recent years, breakthrough research by sociobiologists and evolutionary psychologists suggests otherwise: that not only do many of our fellow “beasts” abstract, but they have also developed surprisingly sophisticated and highly variegated mechanisms for managing information.

When most of us talk about “information architecture,” we seem to situate ourselves within strictly humanist reference points like graphic design or library science (with perhaps a perfunctory nod to journalism, cognitive psychology, or semiotics).

To approach information architecture from a purely anthropocentric perspective, however, is to overlook the lessons of billions of years’ worth of evolutionary history. We are by no means the first species to grapple with the basic problems of what we now call information architecture: how to acquire knowledge in social groups, how to get the right information to the right party at the right time, how to distill meaning from raw data.

Much as we may like to think of ourselves as belonging to a uniquely privileged species, the fact is that every complex organism on this planet is engaged in a shared struggle with information overload.

As information architects (and human beings) we should be careful of presuming that all our present quandaries surfaced only in the past few years—or, for that matter, in the last few thousand years of recorded human history (a comparative millisecond on the evolutionary clock).

Long before anyone was looking for “godfathers” of information architecture, our fellow species were wrestling with some of the same problems we face today. The real godfathers of information architecture, as it turns out, emerged a very long time ago with the earliest origins of life on this planet.

The memory “switch”

Let’s dial back in time to a hot wet Tuesday in, say, 2,200,000,000 B.C. Swimming in the briny planetary sea, we find the earliest recognizable living organisms: our aforementioned friends, the prokaryotes.

Now by this time, prokaryotes had been sloshing around in the ooze for something like a billion years. Then, about 2 billion years ago—whether by dint of divine impetus or happy cosmic accident—something remarkable happened: These formerly independent organisms started to collaborate.

To make a long story short: Networks of formerly independent bacteria began forming loose collectives—sharing labor, food, and increasingly deploying specialized cells to complete certain discrete tasks. Eventually, these loosely affiliated organic teams began replicating in tandem, taking on a more persistent form as they became the earliest multi-cellular organisms.

Eukaryotic cells took shape as “host” bacteria started allowing other bacteria to take up residence within them, gradually conscripting the simpler, formerly independent prokaryotes into service. Eventually, these new bacteria began to reproduce in tandem with the host bacteria, forming a replicable organism that evolved into successively more complex life forms—with increasingly specialized tasks.

These early eukaryotes—close cousins of present-day amoeba or slime molds—learned to sense and respond to environmental conditions, adapting cells and forming new cells in response to incoming data from the surrounding environment. One cell would capture a sensory impression and relay it through adjacent cells across its immediate network, tripping amino acid “switches” to signify changes in the environment.

Maverick scientist Howard Bloom has theorized that the advent of multi-cellularity marked the birth of a nascent “global brain,” a worldwide neural network that would continue to grow and evolve for the next two billion years. “Informationally linked microorganisms possessed a skill exceeding the capacities of any supercomputer from Cray Research or Fujitsu,” writes Bloom. “Ancient bacteria” had mastered the art of worldwide information exchange.

Meet the original information architects.

Social networks 1.0

Let’s fast-forward another 1.5 billion years to a rainy Thursday in the Cambrian. In one of those rare bursts of evolutionary activity (what Stephen Jay Gould famously called “punctuated equilibrium”), a sudden wave of life forms is taking shape during the so-called Cambrian Explosion.

It took a billion years for species to evolve to the point where complex multi-cellular organisms—like shellfish—could emerge. With increasingly elaborate networks of interdependent organs—mouths, hearts, legs, and so forth—individual organisms began to comprise a trillion cells or more.

The archetypal success story of the Cambrian Era is the trilobite, a wildly prolific organism whose numbers at one point circled the entire planet (its survival as a species was aided in no small part by its predilection for wild, shells-off mating orgies). These organisms were self-contained, self-directed, and less dependent on a constant stream of data inputs for survival. Rather, they had evolved to the point where the individual organism had the resources to ensure its own immediate survival: sensing, responding, eating, and mating. But they were not exactly what you would call independent thinkers.

These new self-directed organisms still relied heavily on their peers for survival and adaptation. As a substitute for the direct transmission of data over the biological network, they began developing a new mechanism for transmitting knowledge: imitation. By observing, responding, and mimicking their peer organisms, these creatures could effectively leverage each other’s senses, experiences, and decision-making capabilities.

These early social learning networks relied not on biological or chemical signals, but rather on imitative learning and the gradual development of behavioral “memes” that could persist beyond the lifespan of any one organism.

Pulitzer Prize-winning entomologist E.O. Wilson coined the term “sociobiology” to describe the study of social behavior from an evolutionary perspective. Wilson’s landmark 1975 book, Sociobiology: The New Synthesis, brilliantly punctured the prevalent scientific view that animal behavior could be adequately explained through the traditional disciplinary filters of biology, chemistry, and genetic inheritance.

Wilson argued that the social learning mechanisms evident in other species required a new perspective, a “synthesis” of biology and the social sciences. Wilson argued that “learned modifications of behavior are not inherited; only the innate predispositions are inherited, and only these can evolve by natural selection.” In other words, social groups can transmit and preserve knowledge through non-biological means, forming “learning machines” with remarkable powers of collective memory, calculation, and distributed decision-making capabilities.

Regrettably, Wilson’s work has been misinterpreted by certain doctrinaire humanists, who have chosen to infer parallels between sociobiology and pseudo-sciences such as genetic determinism or—worse yet—eugenics (the dark science of Nazi genetic engineering). Alas, like Darwin’s, Wilson’s theories have lent themselves to misuse and misappropriation by groups with political or social agendas. Some feminists, for example, have objected strenuously to the entire discipline of sociobiology on the basis that it seems to offer an apologia for male dominance. Wilson himself would vigorously protest such abuse; he has frequently cautioned against perversions of science in the service of political “advocacy.”

Thanks to the conceptual foundation of sociobiology and, more recently, evolutionary psychology, we are beginning to understand the complexity and sophistication of nature’s super-organisms, some of them seeming to exhibit properties once thought the exclusive province of humanity: language, reason, even the outlines of culture.

For a glimpse of how these early “learning machines” may have operated, we need look no further than some of our contemporary planetary species.

Hive minds

Perhaps the most widely studied examples of nature’s collective learning machines are insect colonies. Ants and bees in particular exhibit remarkable powers of collective reasoning and an ability to accumulate and share sensory data in social groups.

Wilson devoted much of his career to studying the behaviors of ant colonies, which perform seemingly complex feats of calculation and geometry, and elaborately orchestrated group warfare.

Throughout the insect world, colonies of individual organisms appear to exhibit powers of reasoning seemingly not predicted by the capabilities of a single organism. Douglas Hofstadter first applied the term “emergence” to the behavior of ant colonies in his landmark essay “Ant Fugue,” in which he described the dual nature of individual ants as both functioning organisms and as, in effect, signals.

In his 2001 book Emergence, Steven Johnson explored emergence theory as a context for explaining the self-organizing properties of internet communications, and as a construct for self-directed software agents in a future, more intelligent incarnation of the World Wide Web.

While a few software developers have attempted intriguing experiments at modeling software after the distributed behaviors of ant colonies, we should bear in mind that the essential mechanisms of colony behavior cannot be solely explained in terms of mechanistic or mathematical models. Wilson argued that insect colony social behavior must be properly understood as “an idiosyncratic adaptation” to the surrounding environment, rather than a purely mechanical operation.

In other words, these behaviors constitute distinctly social responses, transmitted across generations through an elaborate dance of imitative learning and adaptation. There is another force at work here: information.

Monkeys in the mirror

Ever since Carl Linnaeus boldly decided to group humans with monkeys and apes into an order he designated “Primates,” we have looked to these close evolutionary cousins for clues to our own behavior patterns.

Although we may tend to think of “culture” as a uniquely human trait, numerous primate studies have revealed the presence of localized social traditions, rudimentary language, and the facility for transmitting learned knowledge across generations.

Dutch primatologist Frans de Waal recounts an experiment in which he introduced a group of rhesus monkeys—a particularly argumentative, pugnacious group—to a troop of more even-tempered stump-tailed monkeys. Within a few months, the rhesus monkeys “developed peacemaking skills on a par with those of their more tolerant counterparts” through imitative learning and ritual displays. Most importantly, the rhesus monkeys carried on these behaviors long after they had been permanently separated from the stumptails. In other words, social transmission of knowledge effected a permanent change in group behavior.

All primate cultures seem to rely on learned—rather than genetically determined—social arrangements, which often vary between different social groups within the same species (as demonstrated in numerous chimpanzee studies).

While these social knowledge transmissions have no external symbolic manifestation—other primates don’t write books or create external symbolic language—they do nonetheless create, store, and transmit social knowledge that persists across generations: surely a manifestation of the same impulse that drives us towards information architecture.

The disintegration of hierarchy

Throughout most of human history, information has flowed through small groups in ways not so different from the imitative social learning mechanisms evident in other primates.

Only in the past four thousand-odd years of recorded human history have we developed the capacity for symbolic representation—and with it, a new externalized construct of “information.”

The rise of written language paralleled (and facilitated) the rise of the modern institution: churches, governments, universities, and corporations, to name a few. As these larger collective entities began to supplant the close-knit family and kinship bonds of earlier social groups, the institution also took on a new function as a container for shared knowledge—what Francis Fukuyama has called the “knowledge bureau.”

Fukuyama has argued that the rise of the “knowledge worker” in Western society, coupled with the liberating effects of communications technologies, is gradually undermining these institutional hierarchies that have characterized our collective social experience for the past four thousand years. And with the fragmentation of institutions comes the upending of traditional knowledge bureaus.

In The Social Life of Information, John Seely Brown and Paul Duguid draw the distinction between “fixed” sources of information that are typically the province of institutions (such as government records, books, and other documents) and the “fluid” information that tends to emanate from individuals and small groups—(such as email, instant messages, and threaded discussions).

Howard Rheingold has recently chronicled the rising tide of fluidity in newly evident social phenomena like “smart mobs” and “flocking”—social behaviors in which large groups of individuals act in seeming concert, without any apparent organizational hierarchy at work. From political riots in the Philippines to mass events like the Nigerian Miss World Pageant riots and the unprecedented wave of anti-war protests, we seem to be undergoing the early stages of a dramatic transformation in the behavior of social groups.

If we look closely at the dynamics of these new behavior patterns—widely dispersed, non-hierarchical social relay systems—we can easily recognize the contours of earlier patterns of communication and knowledge-sharing, patterns evident in every species back to the earliest forms of life on this earth. While these phenomena may seem to result from modern technologies, they manifest some very old patterns of social learning and knowledge sharing.

From social networks to social capital

Today, the practice of information architecture remains primarily an institutional endeavor, driven by the needs of corporations, governments, and educational institutions.

Today’s information architects are the heirs of yesterday’s scribes, clerks, and clerics: laboring to acquire, store, and disseminate knowledge for the sake of humanity, but ultimately in the service of institutions.

Now, some IAs may protest that assertion, arguing that the practice of IA is not about the organization, but about “the user.” But I would argue that if we look closely at that elusive user, we may discover not real human need but a flimsy straw man, a construct designed to serve an intrinsically institutional agenda.

What evolution teaches us is this: in order to understand the deeper roots of our need to generate and manage information, we need to look beyond the individual organism, towards the social groups that drive the mechanisms of evolution and adaptation for all species.

In recent years, the term “social software” has gained currency as a rubric for describing a new breed of software: groupware, social network visualization, discussion lists, and a host of other collaborative tools that support the needs of small, self-selected groups of individuals rather than organizational imperatives.

The real promise of social software has less to do with commercial productivity, and more to do with generating social capital: trust, social engagement, and the development of sustainable knowledge-sharing mechanisms that enable our advancement and evolution within social groups.

What does this mean for information architects? Over time, I believe we may find ourselves progressively less focused on solving the problems of institutions, and increasingly attuned to the needs of groups: a new kind of information architecture—and a very old one too.

Brown, John Seely and Duguid, Paul. The Social Life of Information. Harvard Business School Press, February 2000.

Johnson, Steven. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. Scribner, September 2001.

Wilson, Edward Osborne. Sociobiology: The New Synthesis. Harvard University Press, September 1975.

Alex Wright is a writer, information architect, and former librarian who lives and works in San Francisco. He maintains a personal web site.

What is a Web Application?

by:   |  Posted on
“The fundamental purpose of all web applications is to facilitate the completion of one or more tasks.”To reiterate the themes of dance and conversation introduced in my last column, truly superior interaction design strikes a delicate balance between the needs and expectations of users and the capabilities and limitations of technology. The ability to consistently find solutions that achieve this balance in a manner appropriate to the medium is a hallmark of an experienced interaction designer.

The purpose of this article is to improve your ability to find that balance by adding to your understanding of web applications as an interactive medium. What distinguishes a web application from a traditional, content-based website and what are some of the unique design challenges associated with web applications? A reasonable launching point is the more fundamental question, “What is an application?”

What is an application?
The first step toward differentiating web applications from traditional content-centric websites is to focus on the “application” part of the equation. According to the American Heritage Dictionary, an application is (among other things), “…a computer program designed for a specific task or use.” That last phrase, “specific task,” is perhaps the most important.

The fundamental purpose of all web applications is to facilitate the completion of one or more tasks. Unlike visitors to traditional, content-centric websites, users of web applications invariably arrive with specific goals, tasks, and expectations in mind. Of course, that’s not to say that visitors to content-based websites don’t also arrive with certain goals and expectations, but rather that the motivations for using a web application are almost always explicit and precise.

One of the most important implications of this task-based orientation is the degree to which the application should call attention to itself. Compared to content-centric websites, video games, and various forms of entertainment media, application design succeeds not when it draws attention to itself but when it recedes into the background. This requires the designer to find solutions fundamentally natural to both the user and the medium, allowing the application itself to become transparent. The paradox of application design is that the perfect solution is invariably the one that goes unnoticed.

While this does not mean that an application’s design shouldn’t be enjoyable and aesthetically pleasing, it does mean that the design should play a subservient role to the user’s work.

A second implication of their task-based orientation is that web applications have to provide users with various milestones informing them when tasks are complete. In other words, web applications have to support an end-state in a way that content-based sites typically don’t.

In addition to the challenges resulting from their focus on task completion, the manner in which web applications function and connect with users highlights other issues affecting web application design.

When is a website a web application?
Without being overly concerned about semantics or classification (if that’s actually possible on a site like Boxes and Arrows ), it is important to establish an objective means of differentiating between a web application and a traditional website. To wit, in contrast to content-based websites, a web application possesses both of the following observable properties:

  • One-to-one relationship – Web applications establish a unique session and relationship with each and every visitor. Although this behavior is fundamental to web applications, it is not present in either content-based websites or desktop applications. A web application such as Hotmail knows who you are in a way that CNET or even Photoshop doesn’t.
  • Ability to permanently change data – Web applications allow users to create, manipulate, and permanently store data. Such data can take the form of completed sales transactions, human resources records, or email messages to name but a few. This contrasts with web services like Google that allow users to submit information but do not allow them to permanently store or alter information.
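The two properties above can be sketched in a framework-free way. This is a minimal illustration, not any particular library’s API; the function names (createSession, saveRecord, getRecords) and the in-memory stores are assumptions made for the example.

```javascript
// Sketch of the two defining properties of a web application:
// a per-visitor session (one-to-one relationship) and the ability
// to permanently change user-owned data. All names are illustrative.

let nextSessionId = 1;
const sessions = new Map(); // sessionId -> userName
const userData = new Map(); // userName  -> array of stored records

// Property 1: establish a unique session for each visitor.
function createSession(userName) {
  const sessionId = String(nextSessionId++);
  sessions.set(sessionId, userName);
  return sessionId;
}

// Property 2: let the identified user store data that persists
// beyond the current interaction.
function saveRecord(sessionId, record) {
  const userName = sessions.get(sessionId);
  if (!userName) throw new Error("unknown session");
  if (!userData.has(userName)) userData.set(userName, []);
  userData.get(userName).push(record);
}

function getRecords(sessionId) {
  const userName = sessions.get(sessionId);
  return userData.get(userName) || [];
}

// Two visitors get distinct sessions and distinct stored data.
const alice = createSession("alice");
const bob = createSession("bob");
saveRecord(alice, { type: "order", item: "book" });
```

A real application would back both maps with a database and authenticate the user before creating the session, but the shape of the relationship is the same.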

Although these two characteristics alone result in a fairly broad definition of web applications, websites that possess both of them necessarily contain a degree of application behavior, logic, and state lacking in traditional content-based sites. In addition, they require a significantly more sophisticated level of user interactivity and interaction design than what is associated with content sites.

This distinction between websites and web applications is most obvious in situations where a given site is almost exclusively composed of either content OR functionality. (a website) and Ofoto (a web application) are two such cases. However, even popular web destinations such as Amazon and myYahoo!, sites that combine both content AND functionality, should be considered web applications because they meet these two criteria and therefore exhibit the interactive complexities and behaviors associated with applications.

In the case of Amazon, this takes the obvious form of personalized content and complex transactions, as well as a variety of other functions including the creation and storage of , the uploading and ordering of digital photographs, the editing and tracking of orders, and many others. That’s not to say that all online stores qualify as web applications; in fact most don’t. But Amazon and other stores of similar sophistication have the same characteristics and design considerations as more traditional applications such as email and contact management.

Granted, consumer sites like Amazon and myYahoo! typically lack the level of complexity found in licensed enterprise applications such as Siebel, PeopleSoft, or Documentum, but as a tool for classification, complexity is both inadequate and subjective.

Whether any particular application has sufficient complexity to require a highly skilled interaction designer is a question that can only be answered on a case-by-case basis. The point remains, however, that if a web property establishes a one-to-one relationship with its users and allows those users to edit, manipulate, and permanently store data, then it possesses certain capabilities and complexities that distinguish it from traditional content-centric websites.

So what? “The point remains, however, that if a web property establishes a one-to-one relationship with its users and allows those users to edit, manipulate, and permanently store data, then it possesses certain capabilities and complexities that distinguish it from traditional content-centric websites.”If the point of all this definition stuff was simply to provide a consistent method for classifying web properties, the whole exercise could be dismissed as little more than academic rhetoric. What’s useful about this definition, however, is not so much its utility as a classification scheme, but rather its ability to highlight some of the unique design challenges and functional benefits associated with web applications.

One of the most significant challenges and benefits results from the one-to-one relationship web applications form with their users.

Because a web application requires each user to uniquely identify themselves to the system, typically through a username and password pair, the application can be dynamically altered from one user to the next. This can take both the obvious form of personalized content and the more subtle and complex form of personalized functionality based on roles and privileges. This type of dynamic behavior allows a complex corporate accounting application, for example, to provide different functionality to account managers, regional directors, corporate executives, etc.
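Role-based personalization of functionality can be sketched as a simple mapping from roles to permitted features. The role and feature names here are hypothetical, loosely following the accounting example above.

```javascript
// Sketch: personalized functionality based on roles and privileges.
// Role and feature names are hypothetical examples.
const featuresByRole = {
  "account-manager":   ["view-accounts", "edit-accounts"],
  "regional-director": ["view-accounts", "edit-accounts", "approve-budgets"],
  "executive":         ["view-accounts", "view-summary-reports"],
};

// Which interface modules should be rendered for this user?
function featuresFor(role) {
  return featuresByRole[role] || [];
}

// Guard an individual function or control.
function canUse(role, feature) {
  return featuresFor(role).includes(feature);
}
```

Because the feature list drives what appears in the interface, each module must be designed to work whether or not its neighbors are present — the modular, "harmonious yet autonomous" components discussed below.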

Although this type of capability has been a mainstay of enterprise applications for some time, many less sophisticated or expensive applications now employ this behavior. For example, consumer-based online services can add and remove features or advertising based on whether or not a particular user has paid a subscription fee.

More so than any other interactive medium, a web application has the ability to adapt itself to each user, providing them with a personalized and unique experience. Accommodating the full range of permutations afforded by this capability is a unique and significant design challenge. Because various functions, interface controls, and data can dynamically come and go from the interface, designers are forced to think in terms of modular components that are simultaneously harmonious and autonomous.

In the same way that it is practically impossible for a visual designer to fully anticipate how a given web page will look in every situation, the designers of large-scale applications also struggle to fully document and consider every possible permutation of functionality and data.

Another unique design challenge associated with web applications results from their ability to allow users to make permanent changes to stored data. Because web applications are fundamentally database applications–that is, they store and present information contained in a defined database structure—the user’s information almost always has to fit within a predetermined format. Certain fields are present; some fields are required, others are not; some require a specific type of value; and still others require a value within a precise range. The result of all this is a level of data validation and error recovery unseen in either content-based websites or most desktop applications.
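The kinds of field rules described above — required fields, typed values, and value ranges — can be sketched as a small validation routine. The schema and field names are illustrative assumptions, and the per-field error list is what lets an interface display each message beside the offending control.

```javascript
// Sketch: validating user input against a predetermined record format.
// The schema (field names, types, ranges) is an illustrative example.
const schema = {
  name:     { required: true,  type: "string" },
  quantity: { required: true,  type: "number", min: 1, max: 100 },
  note:     { required: false, type: "string" },
};

// Return a list of { field, message } errors so each message can be
// shown next to the field it concerns.
function validate(record) {
  const errors = [];
  for (const [field, rule] of Object.entries(schema)) {
    const value = record[field];
    if (value === undefined || value === "") {
      if (rule.required) errors.push({ field, message: "This field is required." });
      continue; // nothing more to check on a missing field
    }
    if (typeof value !== rule.type) {
      errors.push({ field, message: `Expected a ${rule.type}.` });
      continue;
    }
    if ((rule.min !== undefined && value < rule.min) ||
        (rule.max !== undefined && value > rule.max)) {
      errors.push({ field, message: `Must be between ${rule.min} and ${rule.max}.` });
    }
  }
  return errors;
}
```

An empty result means the record fits the database’s format; a non-empty one gives the interface everything it needs for targeted error recovery.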

Accommodating this behavior requires the designer to carefully consider the task flow from one operation to the next, the full scope of potential errors, the most expedient way to recover from errors, and of course the holy grail of it all, the ideal solution for avoiding errors altogether.

All this points to one critical conclusion: web applications are a new form of interactive media, distinct from both content-based websites and traditional desktop applications. Therefore, the creation of truly useful and usable web applications requires the designer to understand, appreciate, and exploit the unique capabilities, limitations, and conventions of this new medium rather than simply approaching the design problem from the perspective of more established interactive media.

Next Up
Next time around we’ll continue to explore web applications as an interactive medium by comparing their advantages, disadvantages, and uses to traditional desktop applications.

Bob Baxley is a practicing designer who specializes in interaction design for Web applications and desktop products. He currently runs the Design and Usability teams at The Harris-myCFO and recently published his first book, “Making the Web Work: Designing Effective Web Applications.” He looks forward to hearing from you at .

Visible Narratives: Understanding Visual Organization

by:   |  Posted on
“Visual designers working on the web need an understanding of the medium in which they work, so many have taken to code. Many have entered the usability lab.”

Art vs. engineering. Aesthetics vs. usability. Usability experts are from Mars, graphic designers are from Venus. The debate between design (of the visual sort) and design (of the technical sort) remains ongoing. A website, however, can’t take sides: it needs both to be successful.

“Interactive design [is] a seamless blend of graphic arts, technology, and psychology.” —Brad Wieners, Wired, 2002

So, why the debate? Perhaps the dividing line sits in our minds: left brain (logical) vs. right brain (intuitive). Or, if we take a less deterministic view: few engineers have taken the time to study art and few artists have spent time programming or conducting usability tests. But times are changing. Visual designers working on the web need an understanding of the medium in which they work, so many have taken to code. Many have entered the usability lab.

But what about the other side? Are developers and human factors professionals immersed in the literature on Gestalt principles and color theory? They certainly have the tools for it—programming environments make it easy to throw around images, colors, and fonts of all shapes and sizes. But with that power comes responsibility: in this case, the need to understand how visual information communicates with your audience.

“We find that people quickly evaluate a site by visual design alone.” —Stanford Guidelines for Web Credibility, 2002

Visual communication can be thought of as two intertwined parts: personality, or look and feel, and visual organization. The personality of a presentation is what provides the emotional impact—your instinctual response to what you see. Creating an appropriate personality requires the use of colors, type treatments, images, shapes, patterns, and more, to “say” the right thing to your audience. This article, however, focuses on the other side of the visual communication coin: visual organization.

How we see: visual relationships
Whenever we attempt to make sense of information visually, we first observe similarities and differences in what we are seeing. These relationships allow us to not only distinguish objects but to give them meaning. For example, a difference in color implies two distinct objects (or different parts of the same object), a difference in scale suggests one object is further from us than the other, a difference in texture (one is more blurry) enforces this idea, and so on. Once we have an understanding of the relationships between elements, we can piece together the whole story and understand what we are seeing.

This process is accelerated by our ability to group information visually. When we observe one blade of grass, nearby objects that share a similar color, shape, size, and position are grouped together and given meaning: a lawn. We don’t have to compare each blade to the others.

The principles of perception give us valuable insight into how we visually group information. For example, objects near each other are grouped (proximity), as are objects that share many visual characteristics (similarity).

Fig 1: Principles of perception: proximity, similarity, continuance, and closure.

But understanding the psychological manner in which we group visual information is not enough if we want to be able to communicate a specific message. In order to do that, we need to know how to use visual relationships to our advantage—we need to know what makes things different.

Though lots of variations are possible, we can group distinct visual characteristics into five general categories: color, texture, shape, direction, and size. Introducing variations in one or all of these categories creates visual contrast. The more contrast between two objects, the more likely they will be perceived as distinct and unrelated.

Fig 2: Visual differences between objects.

Telling a story: visual hierarchy
“Designers can create normalcy out of chaos; they can clearly communicate ideas through the organizing and manipulating of words and pictures.”—Jeffery Veen, The Art and Science of Web Design

Now that we understand the basic ways to visually distinguish objects, we can look at the big picture: using visual relationships to tell a coherent story. Elements within a “visual narrative” are arranged in an easily understood order of importance. A center of interest draws the viewer’s attention first, and each subsequent focal point adds to the story. This logical ordering is known as a visual hierarchy.

To build effective visual hierarchies, we use visual relationships to add more or less visual weight to page elements and thereby establish a pattern of movement through the layout. Visual weight can be measured by the degree to which a page element demands our attention or keeps our interest. Large red type, for example, contrasts with a white background much more than a light gray dot. As a result, the visual weight of the red type grabs our attention first, though it might not keep our attention as long as a detailed image.

Fig 3: Three objects with differing visual weights created by variations in color, shape, and texture.

Visually dominant elements (those with the heaviest visual weight) get noticed the most. They are the center of interest in a layout and they determine where the story begins. The hierarchy of subsequent elements guides our eyes through the rest of the composition, giving us pieces of the story as we go. The relative position of each element in the hierarchy provides valuable information about its importance to the big picture.

Fig 4: The heaviest (most visually dominant) elements in this circus poster are the images of the performers and the title. They communicate the big picture: the circus is in town. The lightest (least visually dominant) elements in the hierarchy are the ticket prices and features. If the hierarchy were reversed, few people would know the circus was in town. Instead they would be confused as they passed posters seemingly promoting “$5.50.”
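The ordering a hierarchy produces can be sketched as data. If we assign each element of the circus poster an invented visual weight (the numbers below are purely illustrative), the viewer’s likely reading order falls out of a simple sort:

```python
# Illustrative sketch: a layout's visual hierarchy modeled as elements
# sorted by visual weight. The weights are made-up numbers, not measurements.

layout = [
    {"element": "ticket prices", "weight": 1},
    {"element": "title",         "weight": 9},
    {"element": "performers",    "weight": 10},
    {"element": "features",      "weight": 2},
]

def reading_order(elements):
    """Viewers notice the heaviest (most visually dominant) elements first."""
    return [e["element"] for e in sorted(elements, key=lambda e: -e["weight"])]
```

Reversing the weights reverses the story—which is exactly why the “$5.50” poster fails.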

A balanced hierarchy not only provides a clear path for recognizing and understanding information, it also helps unify the disparate elements of a page layout into a cohesive whole, creating a sense of order and balance. Without visual hierarchy, page elements compete for attention, and as a result, none of them win. In any hierarchy, only certain elements can be on top; the rest need to follow suit. The appropriate position of each element depends on the message you are trying to communicate.

Fig 5: In a layout with an effective visual hierarchy, the distinct visual weight of each element guides viewers through the page in an informative and appropriate manner.

Putting it to use
Any given web page is composed of many distinct elements. Navigation menus (possibly several layers deep), contact information, search boxes, site identifiers, and shopping carts are just a few. The visual organization of a web page can communicate valuable information about the similarities and differences between elements and their relative importance. Once your audience understands the significance of your page elements, they can apply that knowledge to the rest of the site.

Fig 6: If all the elements in a page layout are given equal visual weight, making sense of the page is difficult. Meaning is created through the differences and similarities among elements and their place in the page’s visual hierarchy.

Generally, the hierarchy of a web page is based on distinctions between the content, navigation, and supporting information on a page. Within each of these sections further distinctions can also be made. A general web page hierarchy (from highest to lowest importance) may look like the following:


Content elements

  • Page title
  • Subsection title
  • Embedded links
  • Supplementary information (captions, etc.)


Navigation elements

  • Location indicator
  • Top-level navigation options
  • Sub-navigation options
  • Trace route (breadcrumbs)

Supporting elements

  • Site identifier
  • Site-wide utilities (shopping cart, site map, etc.)
  • Footer information (privacy policy, contact info, etc.)

Fig 7: The visual hierarchy of a generic web page.

Of course, there are many situations where deviating from this formula is advised (on navigation pages, home pages, etc.). The content, audience, and goals of each page should determine its exact hierarchy. Nonetheless, the visual representation of each element on a web page should always be:

  • Appropriate for and indicative of the element’s function
  • Applied consistently throughout the site
  • Positioned properly in the page’s visual hierarchy (in a manner reflective of its relative importance)

Visual hierarchy, however, does more than simply explain page elements. It guides users through the site’s content and interactions. Armed with an understanding of each element’s place in a hierarchy, we can emphasize important elements (such as content or interaction points), and subdue other elements (supporting information).

Fig 8: In this online form, visual hierarchy guides the user through the necessary steps to place an order. It also emphasizes (with color, positioning, and scale) the first step (“Ordering from…”) and the last step (the “Sign-In” button). Simultaneously, the supporting information is subdued (it has little visual weight) and does not interfere with the main interaction sequence.

Similarly, visual hierarchy can give users a sense of where they are within a website, direct their attention (to special offers, for example), suggest distinct choices, explain new elements, and more. Effective visual communication, however, does not “speak” loudly. It quietly educates and guides the audience through the interface.

Given the massive number of web pages and applications, users often rely on visual cues (especially initially) to assess web interfaces. Therefore, a well thought-out visual organization can greatly enhance usability by grouping information into meaningful page elements and sequences. Such a system relies on an understanding of how people use visual relationships to distinguish objects and what those relationships reveal to viewers (through visual weight and hierarchy). But visual organization is only half of visual communication. The rest, personality (or look and feel), is another story…

Luke Wroblewski is a Senior Interface Designer at the National Center for Supercomputing Applications (NCSA), birthplace of the first widely distributed graphical Web browser, NCSA Mosaic. At NCSA, he has designed interface solutions for Hewlett-Packard, IBM, and Kellogg’s, and co-developed the Open Portal Interface Environment (OPIE).