Re-Architecting PeopleSoft from the Top Down


Previously, Chiara Fox, senior information architect at PeopleSoft, presented a case study about the remarkable redesign that she and her team accomplished last year. Adaptive Path was fortunate to work with Chiara to develop the architecture. The project was a compelling one: PeopleSoft hoped to consolidate three highly trafficked–and highly redundant–sites into one site that would be served dynamically to a range of user types from a single content management system. It’s the sort of project that many companies talk about and few ever accomplish.

Adaptive Path’s role in the project was to lead the information architecture development. Because of the complexity of the problem, success for this project would hinge upon a sound methodology.

PeopleSoft had massive amounts of content stored in hundreds of databases, much of which was duplicative, and they wanted it unified, culled, and put into a single content management system. This was certainly a project that demanded bottom-up information architecture.

Despite the clear necessity for bottom-up IA, alone it would be insufficient to produce a successful architecture. Bottom-up techniques are content-centered, which is to say that they aren’t necessarily user-centered. To be successful, the finished system had to support user needs and business requirements, and also had to accurately represent the content assets that would reside in—and be made accessible through—the architecture. The challenge we faced was this:

  • By what process can a team create a system that is both user-centered and content-centered?

And, to bring the question to its finest point:

  • By what process can a team create a system that simultaneously reflects content patterns, supports user needs, and delivers on important business objectives?

In most development processes, it seems that these design objectives are mutually exclusive–that satisfying business objectives must necessarily impede the user’s ability to accomplish tasks, or that designing up from the content precludes designing down from users’ goals.

For the PeopleSoft re-architecture, we resolved this classic conflict by using a top-down architecture that integrated ethnographic user research, content analysis, and business requirements.

User-centered top-down methodology
For the PeopleSoft redesign, the top-down process had three components. We began by understanding user needs through a series of interviews that resulted in a mental model diagram. We used key deliverables from the bottom-up architecture work, specifically the content map, to begin establishing an organizational structure for the site. Finally, we used business goals, derived through stakeholder interviews and a process called “goal alignment,” to set build priorities. The result was a top-down architecture and navigation structure that was strongly user-centered, supported the content schemes emerging from the bottom-up activities, and could measurably achieve business goals.

User interviews, task analysis, and the mental model
The three PeopleSoft sites spanned a range of user types: four types of prospective customers, current customers, and five types of business partners. The users of the sites included people at all levels of the organization, from C-level executives to technical staff who implement the software. Despite their diversity, we believed that a common thread of activity tied them to the PeopleSoft websites–the cycle of selecting, buying, installing, and upgrading enterprise software.

The strong workflow common among PeopleSoft’s audiences allowed us to use a qualitative research process developed by Indi Young (a partner at Adaptive Path) to create task-based mental models. The methodology draws heavily upon techniques used in ethnographic research, contextual inquiry, and traditional task analysis. The result is a research-based visualization of the user’s mental model.

The first step was to define user types and conduct the interviews. Indi, along with Peter Merholz, interviewed 19 people: six potential customers, seven current customers, and six partners. The hour-long interviews were designed to uncover the goals and tasks that each person had encountered when researching, buying, and maintaining enterprise software. During each interview we asked participants to tell us their story–to describe in detail what steps they took to accomplish their goals.

The discussion guide was used as a set of prompts in a conversation, rather than as a verbatim script (as you would use, for instance, during a usability test). Interviewers encouraged participants to follow relevant tangents and probed for details by asking questions like “how did you…” and “what steps did you take to….”

Once Peter and Indi completed the interviews, they thoroughly dissected the transcripts. Every comment that included a “task” was pulled out and placed in a “task table.” When multiple comments mentioned the same task, those comments were placed together in the table. They grouped similar tasks, and gave the groups names. This analysis process removes the comments from their context in order to reveal patterns of activity across multiple users. By analyzing the interviews for potential customers, current customers, and partners separately, Peter and Indi were able to create three sets of task tables that they would turn into separate mental models.

To get from task table to mental model is a simple matter of grouping related tasks, then grouping the groups. Each task is a little white box, and related tasks are stacked and grouped inside a gray box. The stacks are arranged along a horizon line, and related groups of tasks are separated from one another by vertical rules.


In this way, we were able to create a fairly accurate “map” of which activities customers and partners needed to engage in when purchasing or maintaining PeopleSoft products.
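The mechanics of the grouping can be sketched in code. This is a hypothetical illustration of the two steps (comments grouped into a task table, tasks grouped into mental spaces); the quotes, task names, and groupings below are invented, not PeopleSoft’s actual data:

```javascript
// Each comment pulled from an interview transcript is tagged with the
// task it describes. (Sample data is invented for illustration.)
const comments = [
  { quote: "I downloaded the eval copy first", task: "evaluate software" },
  { quote: "We installed a trial version",     task: "evaluate software" },
  { quote: "I called sales for a quote",       task: "request pricing" },
  { quote: "Our DBA applied the patch",        task: "apply updates" },
];

// Step 1: the "task table" places comments that mention the same task together.
function buildTaskTable(comments) {
  const table = new Map();
  for (const c of comments) {
    if (!table.has(c.task)) table.set(c.task, []);
    table.get(c.task).push(c.quote);
  }
  return table;
}

// Step 2: the mental model groups related tasks into named "mental spaces"
// (the gray boxes of stacked task boxes arranged along the horizon line).
const mentalModel = {
  "Selecting software":   ["evaluate software", "request pricing"],
  "Maintaining software": ["apply updates"],
};

const taskTable = buildTaskTable(comments);
```

The value of the exercise is in step 1: pulling comments out of their interview context is what exposes the activity patterns shared across users.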

Content matching
While Peter and Indi were interviewing users and assembling the mental models, Marcus Haid, our intern, and I were busy doing an inventory of the content on the three PeopleSoft sites. The exhaustive inventory resulted in a spreadsheet with thousands of lines that listed every HTML page and summarized every database on the three PeopleSoft sites. Although this level of detail was an asset to the bottom-up architecture activities that Chiara was leading, for the top-down architecture, it was like trying to understand a newspaper photo by looking at the pattern of dots.

We needed to zoom out, to see an accurate picture of what content was available on the sites, in order to understand how the content aligned with user tasks. This content/task alignment would form the basis for our high-level navigation decisions, so it was essential to get it right. To provide an appropriate view of the content, I created a “content map,” which summarized the site content in the same way that the mental model summarized the 170 pages of verbatim task tables. Using a combination of subject and document type, I drew general conclusions about the kinds of content assets available. In the end I was able to represent the complete content picture in about 50 objects.

The content map used the same “little-white-box” format that we had used to develop the mental model, which enabled us to merge the two diagrams. In collaborative working sessions that included PeopleSoft and Adaptive Path, we matched every content asset to tasks on each of the three mental models we had developed (one each for prospective customers, current customers, and partners).

Deriving high-level architecture and navigation
This user task/content comparison validated a few of our assumptions: First, users have a strong task orientation when working with a company like PeopleSoft. Second, PeopleSoft currently had (or was planning to add) ample content assets to support most of their users’ tasks and goals. Based on the strength of these findings, we agreed that a task-based navigation system would work well for PeopleSoft.

By comparing the three mental models (and the corresponding content) to one another, we could also see that there was substantial overlap in both the tasks to be accomplished and the content available. This validated our belief that the best solution was not three separate sites, but rather a single site that would dynamically change when current customers and partners logged in. We knew that the dynamic changes would involve expanding and collapsing navigation to provide (or restrict) access to proprietary content, but it took several collaborative working sessions to define exactly how those changes would work.

To derive the navigation, we started by skimming the top off the mental model: We lined up the names of the mental spaces for each of the three audiences, eliminated redundancies and identified which parts would be available to current customers and partners only. This was our first draft of a top-level navigation. Next we filled in the local navigation using the task group names from the mental models, again removing duplication and identifying areas that would be private. Finally, we added the content assets within the local navigation, according to where we had placed them on the mental model diagram.
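The derivation is essentially a merge-and-dedupe across the three models. A minimal sketch, with invented audience and mental-space names (not the actual PeopleSoft navigation labels):

```javascript
// Hypothetical sketch of "skimming the top off" three mental models:
// merge the mental-space names, remove redundancies, and flag items
// that only logged-in audiences should see.
const models = {
  prospects: ["Evaluate Products", "Compare Pricing"],
  customers: ["Evaluate Products", "Get Support", "Plan Upgrades"],
  partners:  ["Evaluate Products", "Co-Market"],
};

function deriveTopNav(models) {
  const nav = new Map(); // space name -> set of audiences that need it
  for (const [audience, spaces] of Object.entries(models)) {
    for (const space of spaces) {
      if (!nav.has(space)) nav.set(space, new Set());
      nav.get(space).add(audience);
    }
  }
  return [...nav.entries()].map(([label, audiences]) => ({
    label,
    // anything prospects don't need is restricted to logged-in users
    loginRequired: !audiences.has("prospects"),
  }));
}

const topNav = deriveTopNav(models);
// "Evaluate Products" appears once and is public; "Get Support" is login-only.
```

The same pass is then repeated one level down, with task-group names filling in the local navigation.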

The end result of the derivation was a first-draft architecture diagram and navigation scheme. It showed us a single website with dynamic navigation that would expand and collapse to serve the specific needs of each user type. The architecture and navigation were refined and adjusted, but stayed true to the task-based organization that was developed in the mental model.

Business objectives and the final architecture
At this point in the development process, we had successfully developed an architecture scheme that satisfied user needs and supported the content patterns that were discovered during bottom-up architecture. But we knew that the architecture we had planned was still too blue-sky to be achievable in a single launch. We had to establish build priorities, and we wanted to use business objectives to drive the prioritization. This meant that the senior management of PeopleSoft had to define an achievable set of objectives through a process that they call “goal alignment.”

We started the goal-alignment process early in the project by interviewing 24 stakeholders, including senior executives. These interviews provided us with a long list of objectives. The next challenge was to gain broad support for a small set of objectives that could realistically be satisfied by the new architecture. We analyzed the interview findings and identified points of convergence and divergence across the organization. We then facilitated a working session with senior PeopleSoft decision-makers to achieve goal alignment; together they developed a prioritized list of measurable, achievable business objectives.

These objectives, combined with input from the build team about difficulty and available resources, made the implementation phases for our architecture plan very clear. With that in place we were able to quickly finish a set of architecture diagrams that described the site as it would be finally implemented.

Collaborative development
Possibly the most innovative aspect of the PeopleSoft redesign was not the techniques we used, but how we chose to employ them. At every step, vendor and client worked collaboratively as a team, revising and editing documentation in real time. This enabled us to advance the thinking and mature the design decisions before locking into a final solution. The top-down architecture was finished in August, and the site launched in December, after 10 months of work. By all measures, the launch was a success. PeopleSoft finished on schedule and under budget. Their customers and partners had a consistent experience across all of the sites. And in January, PeopleSoft’s website provided the company with a record number of leads.

Janice Fraser is a partner in Adaptive Path, a user experience design firm. She is on the faculty of San Francisco State University’s Multimedia Studies Program.

Re-architecting from the bottom-up


In December 2001 PeopleSoft, a large enterprise software company, relaunched its public website and its customer and partner extranets, Customer Connection and Alliance Connection. It took 11 months and more than 60 people to redesign and build the information architecture and graphic identity, build the technical infrastructure, migrate and rewrite existing content for the new content management system, test it, and finally publish the new site live.


We undertook the re-architecture of the PeopleSoft web properties for a number of reasons. First, the three sites all had their own user experience, different architectures, and varying core goals. The sites also had overlapping content and users. Partners had the worst experience because they had to navigate and understand all three sites to get the information they needed.

Content was often duplicated across the three sites. This made updating the site time-consuming and difficult because files had to be updated in many places. It wasn’t uncommon to find different versions of a document on each of the sites, or even within the same site. Each site had its own style guide, which added to the varying experiences.

The sites also differed in their technical back-ends. Each site had its own search engine and content management system. Many types of databases were employed on the sites, and the structure of the data varied from database to database. Different information systems teams, as well as content development teams, supported the sites.

In February 2001, we started a project seeking to create a single site, with a unified technical infrastructure and three distinct user experiences. This new system would use Interwoven’s content management system, TeamSite, to store and generate the files for all three sites. The sites could share the same content assets where possible, reducing creation and maintenance overhead. Users would have the same type of experience on all of the sites, due to the shared graphic identity, branding, style guide, and information architecture. Once users learned one site, they would be able to transfer that learning to the others.

While we used many methods and tasks as part of this enormous project, this case study will focus on just one small piece of the bigger picture: the bottom-up information architecture methodologies. We did extensive user and stakeholder research, usability testing, and top-down IA, but a thorough discussion of them is beyond the scope of this article. The architecture portion was the first part of the project to be completed. PeopleSoft hired Lot21 and Adaptive Path to help with the architecture development.

Information architecture has a bottom?

All information architectures have a top-down and a bottom-up component. Top-down IA focuses on the big picture, the 10,000-foot view. It incorporates the business needs and user needs into the design, determining a strategy that supports both. Areas of content are tied together for improved searching and browsing. It determines the hierarchy of the site, as well as the primary paths to main content areas. Top-down IA can be as large as a portal or as small as a section home page.

In contrast, bottom-up IA focuses on the lower levels of granularity. It deals with the individual documents and files that make up the site or, in the case of a portal, the individual sub-sites. Bottom-up methods look for the relationships between the different pieces of content and use metadata to describe the attributes found. They allow multiple paths to the content to be built.

Both top-down and bottom-up methods are necessary to build a successful site, and they are not mutually exclusive. They work together to take the users from the home page to the individual piece of information they need.

Content inventory

Before we could do any designing, we had to first understand what we were dealing with. The first step we took was conducting a content inventory, which counted and documented every page on the site. It recorded specific information about each page that would later be used during the content analysis.

We created a separate Microsoft Excel spreadsheet for each site’s inventory. Each main section or global navigation point got its own worksheet, or “tab,” in the spreadsheet. This made it much easier to work with the large files. The name of the page, URL, subject type, document type, topic, target user, and any notes about the page were manually recorded. There was room allotted in the spreadsheet for PeopleSoft to record the content owner, frequency of updates, and whether the page was a candidate for ROT removal. (ROT stands for Redundant, Outdated, and Trivial content.)

The final inventory consisted of more than 6,000 lines in the spreadsheets. Only HTML pages were recorded. Pages in Lotus Notes databases were excluded, though the different views were documented. Of the information recorded, link name, URL, and topic were the most useful and we referred to them again and again throughout the project. The other fields were still useful though. By filling those fields out, we were able to think more critically about each page, and get a better feel for and internalize what the sites had to offer. If we had just captured the page name and URL, or used an automatic method for gathering the information, this depth of knowledge would have been lost.

In addition, each page was assigned a unique link ID. At the beginning of the inventory, we envisioned using the link IDs as a way to refer to the pages, since the page titles were often inconsistent and unreliable. In reality, the link IDs were too complex and numerous to use–no one could remember which ID meant the volunteer request form. The link IDs did prove to be helpful in other ways. A quick scan of a page in the spreadsheet showed how broad or deep a section of the site was. They were also helpful during content migration in mapping the content on the old site to the architecture of the new site.
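A single inventory row can be sketched as a record with the fields described above. The field names follow the article; every value here is invented for illustration (the page from the volunteer-form example, with a made-up hierarchical ID and path):

```javascript
// One row of the content inventory. All values are hypothetical.
const inventoryRow = {
  linkId: "2.4.1",            // invented hierarchical ID: section 2, subsection 4, page 1
  pageName: "Volunteer Request Form",
  url: "/events/volunteer-request-form",
  subjectType: "community",
  documentType: "form",
  topic: "events",
  targetUser: "customer",
  notes: "",
  // columns left for PeopleSoft to fill in later:
  contentOwner: null,
  updateFrequency: null,
  rotCandidate: false,        // Redundant, Outdated, or Trivial?
};

// If the IDs are hierarchical, the depth of a page falls out of the ID
// itself: count the segments.
const depth = inventoryRow.linkId.split(".").length;
```

Recording the descriptive fields by hand is what forced the critical reading of each page; the structure above just shows what was captured.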

Unified content map

The content inventory spreadsheets were highly useful for detailed information about individual pages. But more than 6,000 lines of information are a bit hard for people to get their arms and brains around. The spreadsheets were not very good at giving a high-level view of the content on the site. For that we created the unified content map. Once the inventory spreadsheets were completed, we were able to pull out the different document types and content types we had found. We identified the larger content areas (e.g., general product information, customer case studies) and then listed out the individual examples that existed on the site (e.g., component descriptions, functionality lists).

The content areas of all three sites were mapped together in the same document, forming the unified map. We then identified content types that were duplicated between the sites. These overlapping items indicated areas that we wanted to investigate further to understand why they were duplicated. Was the document modified slightly to better serve a particular audience? We found out that in most cases, the documents were identical. Usually the content owner simply didn’t know that the document already existed elsewhere, or the technology used made it difficult to share assets. These overlaps were a driving force for structuring the content management system so a single asset could be used in multiple ways, for multiple audiences.

Classification scheme analysis

Beyond understanding the types of content that were on the PeopleSoft web properties, we also had to understand the organizational schemes that were in place on the sites. By looking at how the content was currently structured, we would gain more insight into how it could be improved.

Classification scheme analysis was done on the products and industry classifications of all three sites. The names of the industries and products appeared in different places throughout the site, beyond the products section. For example, in the “Events” and “Customer Case Studies” sections documents are classified by product and industry. Each instance of the classification was recorded in a table, so the terms could be compared.

The first thing we looked for in the table was inconsistencies in wording from list to list. Inconsistencies illustrate the need for controlled vocabularies on the site because there are so many ways to describe the same thing. These inconsistencies were used as the basis for variant terms in the product and industry vocabularies. We also looked for “holes” in the classifications – places where terms were not used. Holes could indicate places where content needed to be developed, or needed to be removed because it was out of date. These sections were flagged so they could be examined during content migration.
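The comparison itself amounts to set operations over the recorded term lists. A minimal sketch, with invented term lists standing in for the actual product classifications:

```javascript
// Hypothetical product-term lists recorded from two sections of the site.
const productTermsInEvents      = ["HRMS", "CRM", "Financials"];
const productTermsInCaseStudies = ["Human Resources", "CRM", "Financials"];

// Terms used in one list but not the other surface wording
// inconsistencies ("HRMS" vs. "Human Resources") -- candidates for
// variant terms in a controlled vocabulary -- and "holes" where a
// classification goes unused in a section.
function compareLists(a, b) {
  const setA = new Set(a), setB = new Set(b);
  return {
    onlyInA: a.filter(t => !setB.has(t)),
    onlyInB: b.filter(t => !setA.has(t)),
    shared:  a.filter(t => setB.has(t)),
  };
}

const diff = compareLists(productTermsInEvents, productTermsInCaseStudies);
// diff.onlyInA and diff.onlyInB flag the terms worth investigating.
```

In practice the judgment call remains manual: deciding whether a mismatch is a variant term, missing content, or outdated content still requires a person looking at the pages.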

Content analysis

Once the content inventory was complete and we had created the unified content map and classification scheme analysis tables, we had the daunting task of analyzing what we had documented. We used these tables and maps to help us find the patterns and relationships among the different types of content.

We looked for ways the content could be better tied together. On the previous site, content lived in discrete silos and there was very little interlinking. We discovered that there was actually a lot of information that could help prospective customers better understand our products and services or processes, such as implementing a PeopleSoft solution. For example, there are consulting services offered by PeopleSoft, as well as our Alliance Partners, that are specifically focused on the task of implementation. Training classes are available for both the technical implementation team and the end users who will be using the new software. Once we saw these connections, it became clear that we needed a new section of the site devoted to implementation. User testing confirmed this, and we also learned of other types of information users needed, like a listing of the supported platforms PeopleSoft software runs on.

Through content analysis we were also able to create the metadata schema to use on the new site. Some attributes such as products or services were obvious from the beginning. Others, like language and country, became obvious only when we saw how many documents we had that were non-English or appropriate for only North America. Twelve attributes in total were identified, and they are used to describe content on all three sites.
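The payoff of such a schema is that any site or page can select assets by filtering on attributes. A sketch using only the attributes the article names (product, language, country); the asset records and values are invented:

```javascript
// Hypothetical content assets tagged with a few of the twelve metadata
// attributes. Titles and values are invented for illustration.
const assets = [
  { title: "Global Payroll datasheet",          product: "Global Payroll",
    language: "en", country: "all" },
  { title: "Global Payroll datasheet (French)", product: "Global Payroll",
    language: "fr", country: "FR" },
];

// With shared metadata, a single asset pool can serve all three sites:
// each site simply filters on the attributes it cares about.
const englishAssets = assets.filter(a => a.language === "en");
```

This is what lets one asset be used in multiple ways for multiple audiences without being duplicated.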

Creating the product lens

Information about the different products was spread out across the sites. This was especially true on the Customer Connection and Alliance Connection sites, where there are support documents in addition to sales and marketing information. Users had to go to multiple sections of the site to find all the information they needed. High-level marketing material could be found in the “Products” section, but support information was in its own area. Documentation was separate from support, and upgrade information was separate from both support and documentation. This model supported users who came to the site knowing what they wanted–support information for Global Payroll, say. The model didn’t work for users coming to the site wanting to see all information related to Global Payroll. There was no central place that aggregated the links to the various resources together.

A goal for the new site was to support both types of users. We began by combing through the content inventory and the sites themselves to find all product-related information, no matter where it lived in the sites. Examples of content we found include support information, consulting services, training classes, and industry reports. We wrote each item down on a sticky note.

Working together with the Customer Connection team, we organized these sticky notes into different groupings. The sticky notes worked very well in this exercise. The “unfinished” nature of the notes encouraged people to be more critical and they felt freer to make changes. The whole team participated by moving the sticky notes around and discussing the reasons behind the movement and connections among notes. While coming up with the groupings, we didn’t think about final nomenclature. We instead focused on capturing a name that described the essence of the group. We ended up with titles like “What Others Are Saying About Product” and “Working Beyond the Product.” Things you would never want to see in a global navigation bar. We refined these labels later on once we built out the product pages.

These groupings formed the basic structure of the product module pages. Because there was so much information related to the products, we decided to divide the module pages into different tabs. The public would see three tabs— “Features,” “Technical Information,” and “Next Steps.” Customers and partners would see two additional tabs—“Support” and “Upgrade”—once they had logged into the site.

The information available on these tabs is supposed to be specific to the individual product. Ideally, a link to release notes on the “Global Payroll Support” tab would take the user to just the Global Payroll release notes. Unfortunately, due to technical limitations with our current database structure, we have to link to the release notes area in general. Users must then drill down to the information for Global Payroll. As we update the databases, we will be making these links more specific. Until then, we feel it is an improvement from before, when the user would have to backtrack out of the products area and drill into the documentation area to find these notes. We are at least getting them to the right neighborhood.

Site comparison tables

Not all of the bottom-up work occurred at the beginning of the redesign project. Once the new architecture was determined, we still had to populate that structure with the content. To aid in the migration and creation of content for the new site, we turned again to the content inventory.

The content inventory was performed in May 2001. Planning for the site migration didn’t take place until September. Even though specific pages on the sites had changed since the inventory, the bulk of the inventory and the structure it represented were still correct. We modified the inventory spreadsheets to include the new site structure, complete with new link IDs.

These tables began as a means to double-check that all the content had been accounted for in the new architecture. It also allowed us to see holes where we would have to create new content. As plans for migration continued, the use of the tables expanded. They provided a means for estimating the number of pages that had to be migrated. A column was added to indicate if the page was part of a database not scheduled for migration. Columns for the content approver and the migration team member names were also added to the spreadsheet. This made it clear to everyone who was responsible for which sections. This also helped in balancing out the workload among the whole team.

Once migration started, the usefulness of the comparison tables quickly faded. On-the-fly changes to the architecture occurred at the lower levels of the site as we worked with the migration team to slot the individual pieces of content. The tables quickly became out of date, and it took too much time to keep them updated.

State of things today

The new public site, Customer Connection, and Alliance Connection launched on December 21, 2001, on time and on budget. Since the launch, site inquiries, one of our major success indicators, are up significantly over last year.

But just because the site is live and successful doesn’t mean our work is done. We are continuing to refine and tweak the site. We are conducting user tests and usability sessions to see how customers and prospects like the new site, and where they are having difficulty. We are retiring older databases and migrating the content into Interwoven TeamSite. There are areas of the site that we simply didn’t have the time to examine in detail during the redesign. We are now tweaking the architecture of these sub-sections, such as “Training” and “Assess Your Needs,” to better support the content we have and make it easier for users to find what they need.

Later this year we will be implementing PeopleSoft’s portal software so customers will be able to better log and manage their support cases and have more control over their site experience. The work is really just beginning.

Chiara Fox is the Senior Information Architect in PeopleSoft’s web department. Before joining PeopleSoft, Chiara was an Information Architect at the pioneering consultancy Argus Associates.

Challenging the Status Quo: Audi Redesigned


In September 2000, Razorfish, Germany was charged with the task of relaunching the main websites for Audi, the German car manufacturer. The project encompassed their global brand portal and the regional site for Germany. Both sites were relaunched in December 2001.

Rather than describe the project from beginning to end, this case study focuses on three aspects of particular interest:

  1. Razorfish’s approach to schematics (i.e., wireframes).
  2. An automated page layout technique referred to as “jumping boxes.”
  3. A user test that compared the performance of a left-hand navigation to a right-hand navigation.

Many web projects suffer from a lack of “traceability.” By this I mean the ability to trace a concept, idea, element, or artefact across a set of documents.

Unless a project employs all-encompassing document management tools, documents tend to end up separate and independent from one another. They are often owned by different people, reside in different locations, and are created in different formats. It is not uncommon that, by the end of a project, updating something as simple as a navigation label requires updating half a dozen documents or more. This is inefficient and leads to version control problems.

To address this problem, Razorfish, Germany turned to Adobe GoLive 5.0 in hopes of achieving a true convergence of documents. The plan was to integrate a range of deliverables, including sitemaps, schematics, text content, and screen designs. We even wanted to create functional specifications directly in GoLive in HTML format.

We chose GoLive for several reasons:

  1. Linkage
    Information was shared between the sitemap and schematics. Updating the page name in the sitemap, for example, updated the page name for the schematic.
  2. Modularity
    Page schematics were created using components. This allowed for the definition of global elements, such as the main navigation. Changes were made across the entire set of schematics very easily.
  3. File Sharing
    Working with a WebDAV server, IAs could check schematics in and out, thus offering version control. Audi was also able to see the schematics “live” online in HTML format through the project extranet.
  4. Cross-Platform
    GoLive is available for the PC and the Macintosh, and the output is simple HTML. Conversions to Adobe PDF, for example, were not necessary.

There were, of course, disadvantages to GoLive:

  1. File Size
    Even without text content and screen designs, the site file for the Audi schematics grew to 30 MB and became unwieldy.
  2. Instability
    We experienced some crashes and loss of work with GoLive 5.0, which had just been released before the Audi project began.
  3. Sitemapping
    The sitemap tool is primitive and doesn’t allow a great deal of control over appearance.
  4. Team Buy-in
    The use of GoLive never won buy-in from the whole Razorfish-Audi team and ended up being used primarily by IAs. In the end, true document convergence across skill groups never happened.

Overall, GoLive worked well and met most of our expectations, particularly from an IA standpoint. But it still isn’t the ideal tool for the job, and our experience underscores the need for a program built specifically around information architecture needs. Though no single technology will solve the problems of site conception and planning, a more appropriate tool would help.

Jumping Boxes
Razorfish, Germany wanted to address the fact that users surf with different browser window sizes. We believed developing pages for one fixed size is fundamentally inappropriate for web design and ignores the basic flexibility of the medium. Additionally, the Audi sites have a right-hand navigation that had to be visible without horizontal scrolling. Therefore, the layout had to expand and contract to fit variable browser sizes.

There are many ways to achieve flexible page layouts, but we developed what can be called an automated layout solution. Essentially, the Audi sites have “smart” pages that detect browser size and serve up the right layout automatically. Entire content areas of a page appear in different locations depending on the user’s resolution. These content boxes appear to “jump” around in the layout, hence the phrase “jumping boxes.” Three sizes are offered on the Audi sites: small (640×480), medium (800×600), and large (1024×768+).

There were at least two reasons for this approach. First, it fulfilled corporate design constraints: all page elements are aligned horizontally and vertically on a grid, and automated layout allowed us to better control alignment. Second, the solution is highly technical and speaks to the Audi slogan “Vorsprung durch Technik” (“Advancement Through Technology”). The site is based on JSP modules which are arranged to form a template. A style sheet (XSLT) controls the three possible arrangements of modules for a given template, depending on the user’s browser size. This all happens in the front end and does not require extra server requests. In a sense, the layout itself supports the brand through this technical solution.
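The core of the detection step is a simple mapping from available browser width to one of the three layout variants. The sketch below is illustrative only (the names are hypothetical; the actual Audi implementation used JSP modules with XSLT-controlled arrangements rather than this code):

```typescript
type LayoutSize = "small" | "medium" | "large";

// Map the available browser width to one of the three layout variants
// described above: small (640x480), medium (800x600), large (1024x768+).
function pickLayout(viewportWidth: number): LayoutSize {
  if (viewportWidth >= 1024) return "large";
  if (viewportWidth >= 800) return "medium";
  return "small";
}
```

In the browser, `pickLayout(window.innerWidth)` would run on load and on resize, and the result would select which of the three predefined module arrangements to render.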

An automated layout solution can be complicated to implement depending on the technology involved. For us, it proved to be more challenging than initially thought. Further, it is still unknown if there are any usability implications. We don’t believe so, but to date have no proof. Finally, the automated layout solution is not necessary for all page types.

With an increase of alternative browsing devices on the horizon, the continuum of viewable browsing sizes will continue to expand. Never before has the demand for flexible layouts been greater. Since the web stands at the center of our collective digital attention, solutions developed there can drive solutions in other formats and media. The Razorfish, Germany “jumping box” technique is an innovative approach, and we learned a great deal about page behavior from it.

Try resizing this screensaver download page with an Internet Explorer browser to see the jumping boxes in action.
Right vs. Left Navigation
BMW, Mercedes and other car manufacturers generally have conservative page layouts with the navigation on the left or top. To set Audi apart from its competitors, we placed the navigation on the right side of the page. This solution addresses a core Audi brand value: innovation.

We tested the right-hand navigation extensively with our external partner, SirValuse. Two clickable prototypes of about 10 pages each were constructed: one with a left navigation and the other with a right navigation. Sixty-four users were split into two groups of 32 each. This was a very large sample and not a sample of convenience: participants were recruited based on our user profiles and to fit Audi’s target group.

Prototypes used to test the Audi website.

The test consisted of three parts:
Part 1: Completion times for six tasks were timed with a stopwatch.
Part 2: Eye movements were analyzed to see where participants tend to look on the page.
Part 3: Users were directly asked what they thought about the right-hand navigation.

Our hypothesis for Part 1 was that there would be a significant difference in task completion time for the first task and that by the last task there would be no significant difference in task completion time. We expected that users would need to use the site a couple of times to learn the uncommon pattern of interaction (i.e., a right-hand navigation), but that the learning curve would be very steep.
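The article does not say which statistical test SirValuse used to judge significance; a two-sample (Welch’s) t-test over the per-task completion times of the two 32-person groups is one standard choice. A minimal sketch, for illustration only:

```typescript
// Arithmetic mean of a sample of completion times.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Sample variance (n - 1 denominator).
function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}

// Welch's t statistic for two independent samples, e.g. the left-navigation
// group's times vs. the right-navigation group's times for one task.
// A |t| near zero suggests no difference in mean completion time.
function welchT(a: number[], b: number[]): number {
  return (mean(a) - mean(b)) / Math.sqrt(variance(a) / a.length + variance(b) / b.length);
}
```

The resulting t statistic would be compared against a critical value (given the degrees of freedom) to decide whether the difference between the two groups is significant for a given task.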

What we observed was surprising: There was no significant difference in completion times between the two navigation types for *any* task. In fact, the right-hand navigation started to perform faster than the left in later tasks.

Part 2 looked at eye movement patterns. Instead of relying on traditional eye-tracking methods that make use of expensive equipment and headgear, we used a new method developed by an agency in Hamburg called Media Analyzer. This technique asks users to rapidly coordinate mouse clicks with where they look on the screen. Each click then represents a focal point of visual attention. A software program captures user interactions for later analysis.
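The underlying idea, treating each coordinated click as a proxy fixation point and aggregating clicks into an attention map, can be sketched as a pure function. This is a hypothetical illustration of the concept, not Media Analyzer’s actual software:

```typescript
// A recorded click, standing in for one focal point of visual attention.
interface Click {
  x: number;
  y: number;
}

// Bin clicks into a coarse grid over the page; higher counts in a cell
// indicate more visual attention in that region of the layout.
function attentionGrid(
  clicks: Click[],
  pageWidth: number,
  pageHeight: number,
  cellSize: number
): number[][] {
  const cols = Math.ceil(pageWidth / cellSize);
  const rows = Math.ceil(pageHeight / cellSize);
  const grid = Array.from({ length: rows }, () => new Array<number>(cols).fill(0));
  for (const c of clicks) {
    const row = Math.min(rows - 1, Math.floor(c.y / cellSize));
    const col = Math.min(cols - 1, Math.floor(c.x / cellSize));
    grid[row][col] += 1;
  }
  return grid;
}
```

Comparing such grids between the left-navigation and right-navigation prototypes would show where attention concentrates, which is how a content-side bias like the one reported below could be detected.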

We found that people tended to focus more on the content side of the page with a right navigation than with a left navigation.

In the final part of the test (Part 3), we asked several questions that addressed the central issue, “Do you like the right-hand navigation?” Overall, users were apathetic towards the navigation position. Most didn’t notice that the navigation was on the right and, when directly asked, they didn’t seem to care. However, seven people actually preferred the right navigation to a left navigation, while only two disliked it.

Subsequent usability tests and post-launch user feedback corroborate these findings: there is no apparent difficulty using a right-hand menu to navigate the Audi sites.

Though there is research about expectations of the location of page elements in a layout, such research does not correlate breaking these expectations with actual usability (see work by Michael Bernard and Jakob Nielsen). That is, while users normally anticipate a left-hand navigation, positioning the navigation elsewhere does not necessarily result in usability problems.

Don Norman’s concept of affordance, “the perceived properties of a thing that determine how it is to be used,” seems to be a better predictor of usability than conforming to standards or matching patterns to user expectations. With the Audi site, it is clear what is navigation and what is not. Users can build a pattern of interaction with the site immediately. Our findings show users have no problem distinguishing a right-justified navigation and tend to make generalizations about its function.

This does not mean that all sites should have a right-hand navigation. Indeed, a left-hand navigation may work best in most situations. However, for sites with particularly long texts that require scrolling, for example, a right-justified navigation might be beneficial.

The bottom line is that placing a navigation scheme elsewhere than on the left is not a taboo, contrary to “standards” professed by usability gurus. Without sacrificing usability, Razorfish, Germany was able to leverage a deviation in so-called standards to set Audi apart from its competitors and project an innovative brand image.

James Kalbach is currently head of Information Architecture at Razorfish, Germany, and has a master’s degree in library and information science. Previously he established a usability lab at I-D Media, a large German digital agency.

A Case Study


One nonprofit + two web agencies + nine months. Yes, that was the formula to launch our web site, and I am one of the few survivors left to tell you about it. Before I begin telling the story of the project, it is best to explain who and what Schwab Learning is.

Schwab Learning, a service of the Charles and Helen Schwab Foundation, is dedicated to helping kids with learning differences be successful in learning and life. The Foundation began in 1988 from the Schwabs’ personal struggle with learning differences (LD). After Mr. and Mrs. Schwab’s son struggled in school, they had him assessed for LD. During a meeting with a school psychologist, the Schwabs were asked: “Didn’t either of you have problems like this?” That is when Charles Schwab recognized his own dyslexia, and his lifelong struggle with reading and writing suddenly made sense.

In 1999, after eleven years of serving San Francisco Bay Area parents and educators through direct services and outreach, we realized that we could effect greater change if we expanded our web presence. We needed to find a web agency that would conduct a study of our target group to understand their needs, develop a web strategy, and implement the web site. This project took place during the height of the dot-com boom, and many agencies were not interested in us because they had accounts that would bring in far more money than our budget allowed. After a few months of pitch meetings with agencies, we signed a contract with Sapient to conduct an ethnographic study and lead us from concept to implementation for a new web site.

Laying the foundation for our new site
When we began working with Sapient we had already established goals, objectives and a direction.

Goal: Help kids with learning differences be successful in learning and life. Support kids and moms through “the journey.”

Objectives:
  1. Create two web sites, one for parents/moms and one for kids, but begin with the parent site.
  2. Conduct a study with moms who have a child or children with LD to learn about their experiences. Also, test Schwab Learning’s hypothesis that moms are the “case managers” for their children when working with schools, doctors, etc., and that parents are on a journey to understand and cope with LD.
  3. Create a scalable business and Web strategy to reach moms.

We began working with Sapient in March 2000 focusing on the business strategy and study of moms’ experiences. There were approximately 10 to 12 Sapient team members and 10 to 12 Schwab Learning team members. As a small non-profit, it was awkward working with such a large team of consultants; they totaled one-third of our entire staff at the time. After two months of working together, a draft business strategy was ready for the Board, and the results of the study had been delivered by way of experience models.

Before explaining the experience models and their impact on the Web site it is important to understand the methodology of the study. These models are extremely rich, as it would be very difficult to describe a mom’s experience without them. There were three parts of the study: focus groups, in-home interviews and visual diaries.

Focus Groups: Conducted in San Francisco and Chicago to determine if there were regional differences between moms. There were four focus groups in each city: two with moms of children identified with an LD and two with moms of children who struggled in school. In each of these pairs, one group of moms had children in kindergarten to third grade, and one group had children in fourth to eighth grade.

In-Home Interviews: Seven moms in San Francisco and seven moms in the Chicago area, each interviewed for two hours. These interviews asked moms how they found information about LD, which management strategies they used with their children and for details about their children’s daily routines. There was also a tour of the house to demonstrate how the mom and child interacted in the home. Moms wrote on index cards words, phrases and questions about how they managed their child’s LD and how they felt parenting a child with LD. They arranged these cards in groups to help us understand how the topics are related.

Visual Diaries: Sixteen visual diaries were given to moms in San Francisco and Chicago to chronicle their experiences in a four-day period. Moms were asked to answer some questions and to write free-form journals. Moms were also asked to take pictures of their home environment, their kids, etc.

The LD Landscape
Five domains make up the LD Landscape and demonstrate the areas of a mom’s life that are affected by her child having an LD. These domains exist before a child is identified with LD; however, moms have to reorient their relationships within the domains once they begin managing their child’s LD.

The lifecycle: gaining awareness
There are usually three stages that parents go through before their child is identified with LD. First they begin to sense that something is different. Next they rule out the environment, sleep patterns or other factors that might cause their child to struggle in school. Finally, they have their child assessed for LD.

The lifecycle: management strategies
After a child is assessed, it is time for the mom to begin learning management strategies that will help her interact with her child at home and at school. Management strategies do not always work and may have to be refined.

Mom’s evolution of knowledge
When a mom first finds out about her child’s learning difference she usually seeks all the information she can find. This information is critical in the beginning, but over time moms begin to gain confidence in their abilities to help their children and rely more on experience and knowledge.

The next phase
After the experience models were delivered and accepted by Schwab Learning, the next phase of the project began.

The study identified six user types that illustrate the different roles a mom finds herself in along the journey.

Pre-Identified: Doesn’t know that an LD exists. Considers herself part of the “normal” community, yet might feel isolated.

Novice: Acknowledges her child has an LD, but might not know which one. Learns that an LD landscape exists and there are tools and strategies to learn.

Student: Begins to negotiate the landscape and recognizes the affected domains. Recognizes her need for information and assistance.

Case Manager: Reorients herself in the LD landscape. Improves her ability to handle crisis and management of her child.

Advocate: Proactively participates in larger community. Begins to extend her knowledge to others; beginning of leadership.

Sage: Becomes a community resource and begins to be sought out by others.

The articulation of these roles demonstrated to us that we needed to focus on a particular user type or role because we could not launch a site filling all of these needs. After several meetings working with Sapient we narrowed our target for launch to the Novice mom. Choosing this target group made the most sense as we had been serving this population in our local center for years, and we had ready-made content for the web site.

The day our direction changed
At the end of May 2000, the Foundation’s Board met to discuss various matters, primarily the new business strategy and direction of Schwab Learning. After understanding the costs of the strategy (call centers, large-scale partnerships, and a deep and complex web site at launch), the Board was concerned. Mr. Schwab grew his business from the ground up, building on top of successes while taking calculated risks and learning from them. The decision was made to scale back the scope of the web site, find another web agency to build the web site from the study we had conducted, and launch by the end of 2000.

After finishing our commitment to Sapient in July, we wrote an RFP, interviewed agencies and hired Small Pond Studios (SPS) within a month. We did not want to lose the internal momentum and enthusiasm for building the web site, and we had only four and one-half months to launch. SPS was an ideal agency to work with because not only did they have a stellar team, but the four principals had worked for Sapient prior to starting their own company. They understood all of the deliverables from Sapient and were able to translate them into a plan for the web site.

Creating a realistic web site
Once the documentation was internalized by SPS we began working on the design, branding and information architecture. There were four conceptual models to choose from: Information, Tools, Journey and Community. The “Journey” concept was the most compelling model because it gave site visitors an orientation about LD while balancing information, community and tools, which are important to managing the journey. Also, the Journey concept complemented our user study because parents need to understand the LD landscape before managing their child’s LD.

The Information concept did not provide Schwab Learning the space to be a guide to parents, and it de-emphasized community. The Tools concept would not provide parents enough desperately sought information. The Community concept would not put Schwab Learning in the expert role, and a community’s growth takes time, which we did not have.

Once the decision was made to move forward with the Journey concept, SPS created two different wire frames to test with moms. One wire frame was based on organizing the information architecture by the LD Landscape (domains): Work, Family, Institutions, Community and Self. The other wire frame was based on the Lifecycle: Is it LD?, Identifying and Managing a Learning Difference, and Sharing Information.

LD Landscape

LD Lifecycle

SPS conducted two rounds of user testing with six moms using wire frames. The first round was to determine which structure made more sense to moms, and the second was to refine the chosen model. During the first round of testing we discovered that moms did not know where to begin with the LD Landscape concept. All of the domains affected their lives, and all were very interesting, so knowing where to click first was not intuitive. Moms had a better sense of where to start with the Lifecycle concept, and that confidence would be critical for first-time visitors to the web site.

For the second round of testing using the Lifecycle concept, the main “buckets” were reduced from four to three: Identifying a Learning Difference, Managing a Learning Difference and Sharing Knowledge. Also, because the concept made sense to moms, the domains became the secondary navigation architecture. We probed on the wording of the “buckets” and placement of clicks, as well as interest in registering and reactions to a first version of the design.

Final information architecture wireframe

Initial design of homepage
We learned valuable information from this second round of testing. Moms liked the happy children and the warm, inviting colors of the web site. They also liked the “.org” front and center; it assured them that the site was not trying to sell them anything and that our information could be trusted. Moms did raise concern about the phrase “Sharing Your Knowledge” because some of them felt they did not have knowledge to share.

The next step was to continue to refine the design, then marry the technical and design for testing. We had decided early on to build the site in ASP with a MS SQL database. The live site at the time was built on the same platform so we were able to leverage our existing content management system and other functions for the new site.

In the span of two years, the site went from this design and information architecture in January 1999 …

To this site redesign in September 1999 …

And finally to this complete new site in December 2000.

So you launched, now what?
In 2001 we hired four staff members who grew the team to seven, and in 2002 we had a budget for two more. We added several pieces of functionality to the site: polls, quizzes, a web calendar and an html newsletter option; increased our content from eighty articles to two hundred articles and conducted a usability study with ten moms. In 2001 our web traffic steadily increased from month to month. The average visitors from the first quarter to the fourth quarter increased by 46 percent and page views increased by 49 percent.

When we conducted the usability test with moms, we discovered that they were having a difficult time browsing once they clicked into “1, 2 or 3.” Moms were struggling to find information they needed in the domains because the lists of articles were becoming too long. Internally, we were struggling with placing articles in our information structure, so we knew it needed to change. We kept the 1, 2, 3 structure and added a 4 to house a visitor’s personal page and some of our functionality that previously did not have a home. We also consolidated the secondary information structure from Your Child, Your Family, Schools and Professionals, etc. to Kids & Learning, Home & Family, Schools & Other Resources, and have now added a tertiary information structure. This provides us a more flexible structure that moms will hopefully relate to better. This new information and design structure launched in February 2002.

Lessons learned
It has been an amazing two years, and yet we still have a long way to go. Looking back, we have achieved our original objectives and applied them to the building of the site. We have learned many lessons along the way; here are a few:

First, don’t let your vision blind you. We were incredibly excited about helping moms and kids, and that enthusiasm led us to believe that our thirty-person organization could transform itself overnight. We needed to take a deep breath and say, “Wait a minute, how are we going to do this?” Today our vision remains as strong as ever: to help kids with learning differences be successful in learning and life. Our process for achieving that vision changed from the big bang theory to starting small, building on the foundation we launched with, and protecting our assets.

Second, conducting user studies was invaluable. Learning about our visitors’ experience first-hand has enabled us to create a web site that meets their needs in a more meaningful way. Our experience models have enabled us to communicate with partners and other friends of the Foundation as well as create a new language for us: domains, LD landscape, novice, case manager, etc.

Third, user research and usability testing will always put you on the right track. The testing we conducted pre- and post-launch has been extremely useful in guiding our development. The initial user research study gave us the opportunity to go into the homes of the people we were trying to help. This proved to be rich data because we could see first-hand the interactions with their children and how their homes were set up to accommodate their children (i.e., where they kept medications, chore lists, etc.). The focus groups revealed different information, as these moms were in a group with different dynamics compared with one-on-one interviews in a home. The diaries gave us another data point that was intimate in a different way, as we only knew these moms’ stories and never met them in person. As for the first usability testing, we were able to discover potential pitfalls before going live. Who would have guessed that moms would have concerns about the concept “Sharing Your Knowledge,” while “Connecting With Others” did not pose a problem? Also, in our post-launch usability testing, we discovered that the secondary information structure based on the “Domains” made sense to us, but not to site visitors. This is a very important discovery, because if users cannot browse the web site easily they are apt to become frustrated and leave. Moms of kids with LD are most likely already frustrated when they arrive, and we want to provide them a place that takes away the stress and lets them know someone understands.

Although some of these lessons have been learned the hard way, it has been completely worth it. When we receive emails from moms that read, “I am so appreciative of you, just for being there. Wish I would have found you sooner,” we know we are doing our job.

Jeanene Landers Steinberg is the Web Director and had the role of project manager during the creation of the web site. Jeanene manages a team of eight people consisting of technical, editorial and online community staff who are responsible for maintaining and growing the site into a premier web site for LD information, guidance and support.