Re-architecting PeopleSoft.com from the bottom-up

Written by: Chiara Fox

In December 2001 PeopleSoft, a large enterprise software company, relaunched its public website, and customer and partner extranets, Customer Connection and Alliance Connection. It took 11 months and more than 60 people to redesign and build the information architecture and graphic identity, build the technical infrastructure, migrate and rewrite existing content for the new content management system, test it, and finally publish the new site live.

We undertook the re-architecture of the PeopleSoft web properties for a number of reasons. First, the three sites each had their own user experience, different architectures, and varying core goals. The sites also had overlapping content and users. Partners had the worst experience: they had to navigate and understand all three sites to get the information they needed.

Content was often duplicated across the three sites. This made updating the site time-consuming and difficult because files had to be updated in many places. It wasn’t uncommon to find different versions of a document on each of the sites, or even within the same site. Each site had its own style guide, which added to the varying experiences.

The sites also differed in their technical back-ends. Each site had its own search engine and content management system. Many types of databases were employed on the sites, and the structure of the data varied from database to database. Different information systems teams, as well as content development teams, supported the sites.

In February 2001, we started a project seeking to create a single PeopleSoft.com site, with a unified technical infrastructure and three distinct user experiences. This new system would use Interwoven’s content management system, TeamSite, to store and generate the files for all three sites. The sites could share the same content assets where possible, reducing creation and maintenance overhead. Users would have the same type of experience on all of the sites, due to the shared graphic identity, branding, style guide, and information architecture. Once users learned one site, they would be able to transfer that learning to the others.

While we used many methods and tasks as part of this enormous project, this case study will focus on just one small piece of the bigger picture: the bottom-up information architecture methodologies. We did extensive user and stakeholder research, usability testing, and top-down IA, but a thorough discussion of them is beyond the scope of this article. The architecture portion was the first part of the project to be completed. PeopleSoft hired Lot21 and Adaptive Path to help with the architecture development.

Information architecture has a bottom?

All information architectures have a top-down and a bottom-up component. Top-down IA focuses on the big picture, the 10,000-foot view. It incorporates the business needs and user needs into the design, determining a strategy that supports both. Areas of content are tied together for improved searching and browsing. It determines the hierarchy of the site, as well as the primary paths to main content areas. Top-down IA can be as large as a portal or as small as a section home page.

In contrast, bottom-up IA focuses on the lower levels of granularity. It deals with the individual documents and files that make up the site, or in the case of a portal, the individual sub-sites. Bottom-up methods look for the relationships between the different pieces of content and use metadata to describe the attributes found. They allow multiple paths to the content to be built.

Both top-down and bottom-up methods are necessary to build a successful site, and they are not mutually exclusive. They work together to take the users from the home page to the individual piece of information they need.

Content inventory

Before we could do any designing, we had to first understand what we were dealing with. The first step we took was conducting a content inventory, which counted and documented every page on the site. It recorded specific information about each page that would later be used during the content analysis.

We created a separate Microsoft Excel spreadsheet for each site’s inventory. Each main section or global navigation point got its own worksheet or “tab” in the spreadsheet. This made it much easier to work with the large files. The name of the page, URL, subject type, document type, topic, target user, and any notes about the page were manually recorded. There was room allotted in the spreadsheet for PeopleSoft to record the content owner, frequency of updates, and whether the page was a candidate for ROT removal. (ROT stands for Redundant, Outdated, and Trivial content.)

The final inventory consisted of more than 6,000 lines in the spreadsheets. Only HTML pages were recorded. Pages in Lotus Notes databases were excluded, though the different views were documented. Of the information recorded, link name, URL, and topic were the most useful and we referred to them again and again throughout the project. The other fields were still useful though. By filling those fields out, we were able to think more critically about each page, and get a better feel for and internalize what the sites had to offer. If we had just captured the page name and URL, or used an automatic method for gathering the information, this depth of knowledge would have been lost.

In addition, each page was assigned a unique link ID. At the beginning of the inventory, we envisioned using the link IDs as a way to refer to the pages since the page titles were often inconsistent and unreliable. In reality, the link IDs were too complex and numerous to use. No one could remember that 1.3.2.1 meant the volunteer request form. The link IDs did prove to be helpful in other ways. By simply scanning a page in the spreadsheet, it was quick and easy to determine how broad or deep a section of the site was. They were also helpful during content migration in mapping the content on the old site to the architecture of the new site.
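
To make the shape of a single inventory row concrete, here is a minimal sketch in TypeScript. The field names follow the columns described above; the example values and the placeholder URL are invented for illustration, not taken from the actual inventory.

```typescript
// One row of the content inventory, with the columns described above.
// Example values are invented; only the field names come from the case study.
interface InventoryRecord {
  linkId: string;          // hierarchical ID, e.g. "1.3.2.1"
  pageName: string;
  url: string;
  subjectType: string;
  documentType: string;
  topic: string;
  targetUser: string;
  notes?: string;
  // Columns left for PeopleSoft to fill in later:
  contentOwner?: string;
  updateFrequency?: string;
  rotCandidate?: boolean;  // Redundant, Outdated, or Trivial content
}

const example: InventoryRecord = {
  linkId: "1.3.2.1",
  pageName: "Volunteer request form",
  url: "http://www.peoplesoft.com/...", // placeholder, not a real URL
  subjectType: "Form",
  documentType: "HTML page",
  topic: "Events",
  targetUser: "Customer",
  rotCandidate: false,
};
```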

Unified content map

The content inventory spreadsheets were highly useful for detailed information about individual pages. But more than 6,000 lines of information are a bit hard for people to get their arms and brains around. The spreadsheets were not very good at giving a high-level view of the content on the site. For that we created the unified content map. Once the inventory spreadsheets were completed, we were able to pull out the different document types and content types we had found. We identified the larger content areas (e.g., general product information, customer case studies) and then listed out the individual examples that existed on the site (e.g., component descriptions, functionality lists).

The content areas of all three sites were mapped together in the same document, forming the unified map. We then identified content types that were duplicated between the sites. These overlapping items indicated areas that we wanted to investigate further to understand why they were duplicated. Was the document modified slightly to better serve a particular audience? We found out that in most cases, the documents were identical. Usually the content owner simply didn’t know that the document already existed elsewhere, or the technology used made it difficult to share assets. These overlaps were a driving force for structuring the content management system so a single asset could be used in multiple ways, for multiple audiences.
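
As a rough illustration of the overlap-spotting step, the sketch below groups content types by the sites they appear on and keeps only the duplicated ones. The site names come from the case study; the content-type strings are just examples.

```typescript
type Site = "PeopleSoft.com" | "Customer Connection" | "Alliance Connection";

// Map each content type found in the inventories to the sites it appears on,
// then keep only the types that show up on more than one site.
function findOverlaps(
  entries: { site: Site; contentType: string }[]
): Map<string, Set<Site>> {
  const sitesByType = new Map<string, Set<Site>>();
  for (const { site, contentType } of entries) {
    const sites = sitesByType.get(contentType) ?? new Set<Site>();
    sites.add(site);
    sitesByType.set(contentType, sites);
  }
  return new Map([...sitesByType].filter(([, sites]) => sites.size > 1));
}

const overlaps = findOverlaps([
  { site: "PeopleSoft.com", contentType: "customer case study" },
  { site: "Customer Connection", contentType: "customer case study" },
  { site: "Alliance Connection", contentType: "general product information" },
]);
// overlaps now contains only "customer case study", listed for two sites.
```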

Classification scheme analysis

Beyond understanding the types of content that were on the PeopleSoft web properties, we also had to understand the organizational schemes that were in place on the sites. By looking at how the content was currently structured, we would gain more insight into how it could be improved.

Classification scheme analysis was done on the products and industry classifications of all three sites. The names of the industries and products appeared in different places throughout the site, beyond the products section. For example, in the “Events” and “Customer Case Studies” sections, documents were classified by product and industry. Each instance of the classification was recorded in a table so the terms could be compared.

The first thing we looked for in the table was inconsistencies in wording from list to list. Inconsistencies illustrate the need for controlled vocabularies on the site because there are so many ways to describe the same thing. These inconsistencies were used as the basis for variant terms in the product and industry vocabularies. We also looked for “holes” in the classifications – places where terms were not used. Holes could indicate places where content needed to be developed, or needed to be removed because it was out of date. These sections were flagged so they could be examined during content migration.
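
The comparison can be imagined as something like the sketch below, which normalizes wording variants and flags “holes” where a section does not use a term the others do. The section names “Products,” “Events,” and “Customer Case Studies” come from the article; the terms and the variant mapping are invented examples, not PeopleSoft’s actual vocabulary.

```typescript
// Each section's product classification terms, as recorded in the comparison table.
const classifications: Record<string, string[]> = {
  "Products":              ["Global Payroll", "Human Resources"],
  "Events":                ["Payroll - Global", "Human Resources"],
  "Customer Case Studies": ["Global Payroll"],
};

// Wording variants mapped to a single preferred term; pairs like this become
// the basis for variant terms in a controlled vocabulary.
const preferredTerm: Record<string, string> = { "Payroll - Global": "Global Payroll" };
const normalize = (t: string) => preferredTerm[t] ?? t;

const allTerms = new Set(Object.values(classifications).flat().map(normalize));

for (const [section, terms] of Object.entries(classifications)) {
  const used = new Set(terms.map(normalize));
  const holes = [...allTerms].filter(t => !used.has(t));
  console.log(`${section}: missing ${holes.join(", ") || "nothing"}`);
}
// "Customer Case Studies: missing Human Resources" flags a possible hole
// to examine during content migration.
```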

Content analysis

Once the content inventory was complete and we had created the unified content map and classification scheme analysis tables, we had the daunting task of analyzing what we had documented. We used these tables and maps to help us find the patterns and relationships among the different types of content.

We looked for ways the content could be better tied together. On the previous site, content lived in discrete silos and there was very little interlinking. We discovered that there was actually a lot of information that could help prospective customers better understand our products, services, and processes, such as implementing a PeopleSoft solution. For example, consulting services focused specifically on implementation are offered by PeopleSoft as well as by our Alliance Partners. Training classes are available both for the technical implementation team and for the end users who will be using the new software. Once we saw these connections, it became clear that we needed a new section of the site devoted to implementation. User testing confirmed this, and we also learned of other types of information users needed, like a listing of the supported platforms PeopleSoft software runs on.

Through content analysis we were also able to create the metadata schema for the new site. Some attributes, such as product or service, were obvious from the beginning. Others, like language and country, became obvious only when we saw how many documents we had that were non-English or appropriate only for North America. Twelve attributes in total were identified, and they are used to describe content on all three sites.
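
A partial sketch of what such a metadata record might look like is below. Product, service, language, and country are the attributes the article names; the remaining attributes of the twelve are not listed there, so they are omitted, and the example values are invented.

```typescript
// A partial sketch of a document metadata record. Only the attributes named in
// the case study appear; the rest of the twelve-attribute schema is unknown here.
interface DocumentMetadata {
  product?: string[];   // e.g. ["Global Payroll"]
  service?: string[];   // e.g. ["Consulting"]
  language: string;     // e.g. "en", "fr"
  country: string[];    // e.g. ["US", "CA"]: regions the content applies to
  // ...the remaining attributes of the twelve would follow here
}

const releaseNotes: DocumentMetadata = {
  product: ["Global Payroll"],
  language: "en",
  country: ["US"],
};
```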

Creating the product lens

Information about the different products was spread out across the sites. This was especially true on the Customer Connection and Alliance Connection sites, where there are support documents in addition to sales and marketing information. Users had to go to multiple sections of the site to find all the information they needed. High-level marketing material could be found in the “Products” section, but support information was in its own area. Documentation was separate from support, and upgrade information was separate from both support and documentation. This model supported users who came to the site knowing exactly what they wanted, such as support information for Global Payroll. It didn’t work for users coming to the site wanting to see all information related to Global Payroll. There was no central place that aggregated the links to the various resources together.

A goal for the new site was to support both types of users. We began by combing through the content inventory and the sites themselves to find all information related to products, no matter where it lived in the sites. Examples of content we found included support information, consulting services, training classes, and industry reports. We wrote each item down on a sticky note.

Working together with the Customer Connection team, we organized these sticky notes into different groupings. The sticky notes worked very well in this exercise. The “unfinished” nature of the notes encouraged people to be more critical and they felt freer to make changes. The whole team participated by moving the sticky notes around and discussing the reasons behind the movement and connections among notes. While coming up with the groupings, we didn’t think about final nomenclature. We instead focused on capturing a name that described the essence of the group. We ended up with titles like “What Others Are Saying About Product” and “Working Beyond the Product.” Things you would never want to see in a global navigation bar. We refined these labels later on once we built out the product pages.

These groupings formed the basic structure of the product module pages. Because there was so much information related to the products, we decided to divide the module pages into different tabs. The public would see three tabs— “Features,” “Technical Information,” and “Next Steps.” Customers and partners would see two additional tabs—“Support” and “Upgrade”—once they had logged into the site.

The information available on these tabs is supposed to be specific to the individual product. Ideally, a link to release notes on the “Global Payroll Support” tab would take the user to just the Global Payroll release notes. Unfortunately, due to technical limitations with our current database structure, we have to link to the release notes area in general. Users must then drill down to the information for Global Payroll. As we update the databases, we will be making these links more specific. Until then, we feel it is an improvement from before, when the user would have to backtrack out of the products area and drill into the documentation area to find these notes. We are at least getting them to the right neighborhood.

Site comparison tables

Not all of the bottom-up work occurred at the beginning of the redesign project. Once the new architecture was determined, we still had to populate that structure with the content. To aid in the migration and creation of content for the new site, we turned again to the content inventory.

The content inventory was performed in May 2001. Planning for the site migration didn’t take place until September. Even though specific pages on the sites had changed since the inventory, the bulk of the inventory and the structure it represented were still correct. We modified the inventory spreadsheets to include the new site structure, complete with new link IDs.

These tables began as a means to double-check that all the content had been accounted for in the new architecture. They also allowed us to see holes where we would have to create new content. As plans for migration continued, the use of the tables expanded. They provided a means for estimating the number of pages that had to be migrated. A column was added to indicate whether the page was part of a database not scheduled for migration. Columns for the content approver and the migration team member names were also added to the spreadsheet. This made it clear to everyone who was responsible for which sections. It also helped in balancing out the workload across the whole team.
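
One way to picture a comparison-table row is the sketch below; the column names mirror the ones described above, while the IDs, placeholder URL, and names are invented.

```typescript
// A single row of the site comparison tables, assuming a mapping from the old
// inventory link IDs to positions in the new architecture. Values are invented.
interface MigrationRow {
  oldLinkId: string;
  oldUrl: string;
  newLinkId: string;             // position in the new site structure
  inUnmigratedDatabase: boolean; // part of a database not scheduled for migration
  contentApprover?: string;
  migrationOwner?: string;
}

const row: MigrationRow = {
  oldLinkId: "1.3.2.1",
  oldUrl: "http://www.peoplesoft.com/...", // placeholder
  newLinkId: "2.4.1",
  inUnmigratedDatabase: false,
  migrationOwner: "(team member name)",
};

// Pages still needing a migration owner, useful for balancing the workload:
const unassigned = (rows: MigrationRow[]) =>
  rows.filter(r => !r.inUnmigratedDatabase && !r.migrationOwner);
```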

Once migration started, the usefulness of the comparison tables quickly faded. On-the-fly changes to the architecture occurred at the lower levels of the site as we worked with the migration team to slot the individual pieces of content. The tables quickly became out of date, and it took too much time to keep them updated.

State of things today

The new PeopleSoft.com, Customer Connection, and Alliance Connection sites launched on December 21, 2001, on time and on budget. Since the launch, site inquiries, one of our major success indicators, are up significantly over last year.

But just because the site is live and successful doesn’t mean our work is done. We are continuing to refine and tweak the site. We are conducting user tests and usability sessions to see how customers and prospects like the new site, and where they are having difficulty. We are retiring older databases and migrating their content into Interwoven TeamSite. There are areas of the site that we simply didn’t have the time to examine in detail during the redesign. We are now tweaking the architecture of sub-sections such as “Training” and “Assess Your Needs” to better support the content we have and make it easier for users to find what they need.

Later this year we will be implementing PeopleSoft’s portal software so customers will be able to better log and manage their support cases and have more control over their site experience. The work is really just beginning.

Chiara Fox is the Senior Information Architect in PeopleSoft’s web department. Before joining PeopleSoft, Chiara was an Information Architect at the pioneering consultancy Argus Associates.

Challenging the Status Quo: Audi Redesigned

Written by: Jim Kalbach

In September 2000, Razorfish, Germany was charged with the task of relaunching the main websites for Audi, the German car manufacturer. The project encompassed Audi.com, their global brand portal, and Audi.de, the regional site for Germany. Both sites were relaunched in December 2001.

Rather than describe the project from beginning to end, this case study focuses on three aspects of particular interest:

  1. Razorfish’s approach to schematics (i.e., wireframes).
  2. An automated page layout technique referred to as “jumping boxes.”
  3. A user test that compared the performance of a left-hand navigation to a right-hand navigation.

Schematics
Many web projects suffer from a lack of “traceability.” By this I mean the ability to trace a concept, idea, element, or artefact across a set of documents.

Unless a project employs all-encompassing document management tools, documents tend to end up separate and independent from one another. They are often owned by different people, reside in different locations, and are created in different formats. It is not uncommon that, by the end of a project, updating something as simple as a navigation label requires updating half a dozen documents or more. This is inefficient and leads to version control problems.

To address this problem, Razorfish, Germany turned to Adobe GoLive 5.0 in hopes of achieving a true convergence of documents. The plan was to integrate a range of deliverables, including sitemaps, schematics, text content, and screen designs. We even wanted to create functional specifications directly in GoLive in HTML format.

We chose GoLive for several reasons:

  1. Linkage
    Information was shared between the sitemap and schematics. Updating the page name in the sitemap, for example, updated the page name for the schematic.
  2. Modularity
    Page schematics were created using components. This allowed for the definition of global elements, such as the main navigation. Changes were made across the entire set of schematics very easily.
  3. File Sharing
    Working with a WebDAV server, IAs could check schematics in and out, thus offering version control. Audi was also able to see the schematics “live” online in HTML format through the project extranet.
  4. Cross-Platform
    GoLive is available for the PC and the Macintosh, and the output is simple HTML. Conversions to Adobe PDF, for example, were not necessary.

There were, of course, disadvantages to GoLive:

  1. File Size
    Even without text content and screen designs, the site file for the Audi schematics grew to 30 MB and became unwieldy.
  2. Instability
    We experienced some crashes and loss of work with GoLive 5.0, which had just been released before the Audi project began.
  3. Sitemapping
    The sitemap tool is primitive and doesn’t allow a great deal of control over appearance.
  4. Team Buy-in
    The use of GoLive didn’t get the buy-in from the whole Razorfish-Audi team and ended up being used primarily by IAs. In the end, the idea of true document convergence across skill groups never happened.

Overall, GoLive worked well and met most of our expectations, particularly from an IA standpoint. But it still isn’t the ideal tool for the job, and our experience underscores the need for a program that meets all information architecture needs. Though no single technology will solve the problems of site conception and planning, a more appropriate tool would help.

Jumping Boxes
Razorfish, Germany wanted to address the fact that users surf with different browser window sizes. We believed that developing pages for one fixed size is fundamentally inappropriate for web design and ignores the basic flexibility of the medium. Additionally, the Audi sites have a right-hand navigation that had to be visible without horizontal scrolling. Therefore, the layout had to expand and contract to fit variable browser sizes.

There are many ways to achieve flexible page layouts, but we developed what can be called an automated layout solution. Essentially, the Audi sites have “smart” pages that detect browser size and serve up the right layout automatically. Entire content areas of a page appear in different locations depending on the user’s resolution. These content boxes appear to “jump” around in the layout, hence the phrase “jumping boxes.” Three sizes are offered on the Audi sites: small (640×480), medium (800×600), and large (1024×768 and up).

There were at least two reasons for this approach. First, it fulfilled corporate design constraints. All page elements are aligned horizontally and vertically on a grid. Automated layout allowed us to better control alignment. Second, the solution is highly technical and speaks to the Audi slogan “Vorsprung durch Technik” (“Advancement Through Technology”). The site is based on JSP modules which are arranged to form a template. A style sheet (XSLT) controls the three possible arrangements of modules for a given template depending on the user’s browser size. This all happens in the front end and does not require extra server requests. In a sense, the layouts were supporting the brand with this technical solution.
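
As an illustration of the general idea only (not the actual Audi JSP/XSLT implementation), a minimal client-side sketch might detect the viewport width and switch among three module arrangements like this; the breakpoints and class names are assumptions.

```typescript
// A simplified, illustrative approximation of "jumping boxes": pick one of three
// layouts based on the current viewport width, and re-apply it on resize.
type LayoutSize = "small" | "medium" | "large";

function detectLayout(width: number): LayoutSize {
  if (width < 800) return "small";   // roughly the 640×480 case
  if (width < 1024) return "medium"; // roughly the 800×600 case
  return "large";                    // 1024×768 and up
}

function applyLayout(): void {
  const layout = detectLayout(window.innerWidth);
  // Each layout corresponds to a different arrangement of the same content
  // modules; here a class on <body> selects the arrangement via CSS.
  document.body.className = `layout-${layout}`;
}

window.addEventListener("resize", applyLayout);
applyLayout();
```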

An automated layout solution can be complicated to implement depending on the technology involved. For us, it proved to be more challenging than initially thought. Further, it is still unknown if there are any usability implications. We don’t believe so, but to date have no proof. Finally, the automated layout solution is not necessary for all page types.

With an increase of alternative browsing devices on the horizon, the continuum of viewable browsing sizes will continue to expand. Never before has the demand for flexible layouts been greater. Since the web stands at the center of our collective digital attention, solutions developed there can drive solutions in other formats and media. The Razorfish, Germany “jumping box” technique is an innovative approach, and we learned a great deal about page behavior from it.

Try resizing this screensaver download page on Audi.com [http://www.audi.com/com/en/experience/entertainment/audi_screensaver/audi_screensaver.jsp] with an Internet Explorer browser to see the jumping boxes in action.

Right vs. Left Navigation
BMW, Mercedes and other car manufacturers generally have conservative page layouts with the navigation on the left or top. To set Audi apart from its competitors, we placed the navigation on the right side of the page. This solution addresses a core Audi brand value: innovation.

We tested the right-hand navigation extensively with our external partner, SirValuse. Two clickable prototypes of about 10 pages each were constructed, one with a left navigation and the other with a right navigation. Sixty-four users were split into two groups of 32 each. This was a very large sample, and not a sample of convenience: participants were recruited based on our user profiles and to fit Audi’s target group.

Prototypes used to test the Audi website.

The test consisted of three parts:
Part 1: Completion times for six tasks were timed with a stopwatch.
Part 2: Eye movements were analyzed to see where participants tend to look on the page.
Part 3: Users were directly asked what they thought about the right-hand navigation.

Our hypothesis for Part 1 was that there would be a significant difference in task completion time for the first task and that by the last task there would be no significant difference in task completion time. We expected that users would need to use the site a couple of times to learn the uncommon pattern of interaction (i.e., a right-hand navigation), but that the learning curve would be very steep.

What we observed was surprising: there was no significant difference in completion times between the two navigation types for any task. In fact, the right-hand navigation started to perform faster than the left in later tasks.
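
For a concrete sense of what “significant difference” means here, below is a minimal sketch of a two-sample (Welch) t statistic over completion times. This is not the SirValuse analysis, and the sample data are invented.

```typescript
// Welch's two-sample t statistic: t = (m1 - m2) / sqrt(s1²/n1 + s2²/n2)
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((sum, x) => sum + (x - m) ** 2, 0) / (xs.length - 1);
}

function welchT(groupA: number[], groupB: number[]): number {
  const se = Math.sqrt(variance(groupA) / groupA.length + variance(groupB) / groupB.length);
  return (mean(groupA) - mean(groupB)) / se;
}

// Invented example data in seconds, NOT the actual study measurements.
// With groups the size of the study's (32 users each), |t| above roughly 2.0
// would suggest a significant difference at the 5% level; values near zero do not.
const leftNavTask1 = [42, 55, 38, 61, 47];
const rightNavTask1 = [58, 49, 63, 52, 70];
console.log(welchT(leftNavTask1, rightNavTask1));
```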

Part 2 looked at eye movement patterns. Instead of relying on traditional eye-tracking methods that make use of expensive equipment and headgear, we used a new method developed by an agency in Hamburg called Media Analyzer. This technique asks users to rapidly coordinate mouse clicks with where they look on the screen. Each click then represents a focal point of visual attention. A software program captures user interactions for later analysis.

We found that people tended to focus more on the content side of the page with a right navigation than with a left navigation.

In the final part of the test (Part 3), we asked several questions that addressed the central issue, “Do you like the right-hand navigation?” Overall, users were apathetic towards the navigation position. Most didn’t notice that the navigation was on the right and, when directly asked, they didn’t seem to care. However, seven people actually preferred the right navigation to a left navigation, while only two disliked it.

Subsequent usability tests and post-launch user feedback corroborate these findings: there is no apparent difficulty using a right-hand menu to navigate the Audi.com and Audi.de sites.

Though there is research about expectations of the location of page elements in a layout, such research does not correlate breaking these expectations with actual usability (see: Michael Bernard, http://www.internettg.org/newsletter/dec00/article_bernard.html and Jakob Nielsen, http://www.useit.com/alertbox/991114.html). That is, while users normally anticipate a left-hand navigation, positioning the navigation elsewhere does not necessarily result in usability problems.

Don Norman’s concept of affordance, “the perceived properties of a thing that determine how it is to be used,” seems to be a better predictor of usability than conforming to standards or matching patterns to user expectations. With the Audi site, it is clear what is navigation and what is not. Users can build a pattern of interaction with the site immediately. Our findings show users have no problem distinguishing a right-justified navigation and tend to make generalizations about its function.

This does not mean that all sites should have a right-hand navigation. Indeed, a left-hand navigation may work best in most situations. However, for sites with particularly long texts that require scrolling, for example, a right-justified navigation might be beneficial.

The bottom line is that placing a navigation scheme elsewhere than on the left is not a taboo, contrary to “standards” professed by usability gurus. Without sacrificing usability, Razorfish, Germany was able to leverage a deviation in so-called standards to set Audi apart from its competitors and project an innovative brand image.

James Kalbach is currently head of Information Architecture at Razorfish, Germany and has a master’s degree in library and information science. Previously he established a usability lab at I-D Media, a large German digital agency.

SchwabLearning.org: A Case Study

Written by: Jeanene Landers Steinberg

One nonprofit + two web agencies + nine months = SchwabLearning.org. Yes, that was the formula to launch our web site, and I am one of the few survivors left to tell you about it. Before I begin telling the story of the project, it is best to explain who and what Schwab Learning is.

Schwab Learning, a service of the Charles and Helen Schwab Foundation, is dedicated to helping kids with learning differences be successful in learning and life. The Foundation began in 1988 from the Schwabs’ personal struggle with learning differences (LD). After Mr. and Mrs. Schwab’s son struggled in school, they had him assessed for LD. During a meeting with a school psychologist, the Schwabs were asked: “Didn’t either of you have problems like this?” That is when Charles Schwab recognized his own dyslexia, and his lifelong struggle with reading and writing suddenly made sense.

In 1999, after eleven years of serving San Francisco Bay Area parents and educators through direct services and outreach, we realized that we could effect greater change if we expanded our web presence. We needed to find a Web agency that would conduct a study on our target group to understand their needs, develop a web strategy, and implement the web site. This project took place during the height of the dot-com boom, and many agencies were not interested in us because they had accounts that would bring in far more money than our budget allowed. After a few months of pitch meetings with agencies, we signed a contract with Sapient to conduct an ethnographic study and lead us from concept to implementation for a new web site.

Laying the foundation for our new site
When we began working with Sapient we had already established goals, objectives and a direction.

Goal: Help kids with learning differences be successful in learning and life. Support kids and moms through “the journey.”

Objectives:

  1. Create two web sites, one for parents/moms and one for kids, but begin with the parent site.
  2. Conduct a study with moms who have a child or children with LD to learn about their experiences. Also, test Schwab Learning’s hypothesis that moms are the “case managers” for their children when working with schools, doctors, etc., and that parents are on a journey to understand and cope with LD.
  3. Create a scalable business and Web strategy to reach moms.

We began working with Sapient in March 2000 focusing on the business strategy and study of moms’ experiences. There were approximately 10 to 12 Sapient team members and 10 to 12 Schwab Learning team members. As a small non-profit, it was awkward working with such a large team of consultants; they totaled one-third of our entire staff at the time. After two months of working together, a draft business strategy was ready for the Board, and the results of the study had been delivered by way of experience models.

Before explaining the experience models and their impact on the Web site, it is important to understand the methodology of the study. These models are extremely rich; it would be very difficult to describe a mom’s experience without them. There were three parts to the study: focus groups, in-home interviews, and visual diaries.

Focus Groups: Conducted in San Francisco and Chicago to determine if there were regional differences between moms. There were four focus groups in each city: two with moms of children identified with an LD and two with moms of children who struggled in school. In each of these pairs, one group of moms had children in kindergarten through third grade, and one group had children in fourth through eighth grade.

In-Home Interviews: Seven moms in San Francisco and seven moms in the Chicago area, each interviewed for two hours. These interviews asked moms how they found information about LD, which management strategies they used with their children, and about their children’s daily routines. There was also a tour of the house to show how the mom and child interacted in the home. Moms wrote words, phrases, and questions on index cards about how they managed their child’s LD and how they felt about parenting a child with LD. They arranged these cards in groups to help us understand how the topics were related.

Visual Diaries: Sixteen visual diaries were given to moms in San Francisco and Chicago to chronicle their experiences over a four-day period. Moms were asked to answer some questions and to write free-form journal entries. Moms were also asked to take pictures of their home environment, their kids, etc.

The LD Landscape
Five domains make up the LD Landscape and represent the areas of a mom’s life that are affected by her child having an LD. These domains exist before her child is identified with LD; however, moms have to reorient their relationships within the domains once they begin managing their child’s LD.

The lifecycle: gaining awareness
There are usually three stages that parents go through before their child is identified with LD. First they begin to sense that something is different. Next they rule out the environment, sleep patterns or other factors that might cause their child to struggle in school. Finally, they have their child assessed for LD.

The lifecycle: management strategies
After a child is assessed, it is time for the mom to begin learning management strategies that will help her interact with her child at home and at school. Management strategies do not always work and may have to be refined.

Mom’s evolution of knowledge
When a mom first finds out about her child’s learning difference she usually seeks all the information she can find. This information is critical in the beginning, but over time moms begin to gain confidence in their abilities to help their children and rely more on experience and knowledge.

The next phase
After the experience models were delivered and accepted by Schwab Learning, the next phase of the project began.

The study with moms identified six user types, which illustrate the different roles a mom finds herself in along the journey.

Pre-Identified: Doesn’t know that an LD exists. Considers herself part of the “normal” community, yet might feel isolated.

Novice: Acknowledges her child has an LD, but might not know which one. Learns that an LD landscape exists and there are tools and strategies to learn.

Student: Begins to negotiate the landscape and recognizes the affected domains. Recognizes her need for information and assistance.

Case Manager: Reorients herself in the LD landscape. Improves her ability to handle crises and to manage her child’s LD.

Advocate: Proactively participates in larger community. Begins to extend her knowledge to others; beginning of leadership.

Sage: Becomes a community resource and begins to be sought out by others.

The articulation of these roles demonstrated to us that we needed to focus on a particular user type or role because we could not launch a site filling all of these needs. After several meetings working with Sapient we narrowed our target for launch to the Novice mom. Choosing this target group made the most sense as we had been serving this population in our local center for years, and we had ready-made content for the web site.

The day our direction changed
At the end of May 2000 the Foundation’s Board met to discuss various matters, primarily the new business strategy and direction of Schwab Learning. After understanding the costs of the strategy (call centers, large-scale partnerships, and a deep and complex web site at launch), the Board was concerned. Mr. Schwab grew his business from the ground up, building on top of successes while taking calculated risks and learning from them. The decision was made to scale back the scope of the web site, find another web agency to build the web site from the study we had conducted, and launch by the end of 2000.

After finishing our commitment to Sapient in July, we wrote an RFP, interviewed agencies, and hired Small Pond Studios (SPS) within a month. We did not want to lose the internal momentum and enthusiasm for building the web site, and we only had four and a half months to launch it. SPS was an ideal agency to work with: not only did they have a stellar team, but the four principals had worked for Sapient before starting their own company. They understood all of the deliverables from Sapient and were able to translate them into a plan for the web site.

Creating a realistic web site
Once the documentation was internalized by SPS we began working on the design, branding and information architecture. There were four conceptual models to choose from: Information, Tools, Journey and Community. The “Journey” concept was the most compelling model because it gave site visitors an orientation about LD while balancing information, community and tools, which are important to managing the journey. Also, the Journey concept complemented our user study because parents need to understand the LD landscape before managing their child’s LD.

The Information concept did not provide Schwab Learning the space to be a guide to parents, and it de-emphasized community. The Tools concept would not provide parents enough desperately sought information. The Community concept would not put Schwab Learning in the expert role, and a community’s growth takes time, which we did not have.

Once the decision was made to move forward with the Journey concept, SPS created two different wire frames to test with moms. One wire frame was based on organizing the information architecture by the LD Landscape (domains): Work, Family, Institutions, Community and Self. The other wire frame was based on the Lifecycle: Is it LD?, Identifying and Managing a Learning Difference, and Sharing Information.

LD Landscape

LD Lifecycle

SPS conducted two rounds of user testing with six moms using wire frames. The first round was to determine which structure made more sense to moms, and the second was to refine the chosen model. During the first round of testing we discovered that moms did not know where to begin with the LD Landscape concept. All of the domains affected their lives, and all were very interesting, so knowing where to click first was not intuitive. Moms had a better sense of where to start with the Lifecycle concept, and that confidence would be critical for first-time visitors to the web site.

For the second round of testing using the Lifecycle concept, the main “buckets” were reduced from four to three: Identifying a Learning Difference, Managing a Learning Difference, and Sharing Knowledge. Also, because the concept made sense to moms, the domains became the secondary navigation architecture. We probed on the wording of the “buckets” and placement of clicks, as well as interest in registering and reactions to a first version of the design.

Final information architecture wireframe

Initial design of homepage
We learned valuable information from this second round of testing. Moms liked the happy children and the warm, inviting colors of the Web site. They also liked the “.org” front and center; it assured them that the site was not trying to sell them anything and that our information could be trusted. Moms did raise concern about the phrase “Sharing Your Knowledge” because some of them felt they did not have knowledge to share.

The next step was to continue to refine the design, then marry the technical and design for testing. We had decided early on to build the site in ASP with a MS SQL database. The live site at the time was built on the same platform so we were able to leverage our existing content management system and other functions for the new site.

In the span of two years, the site went from this design and information architecture in January 1999 …

To this site redesign in September 1999 …

And finally to this complete new site in December 2000.


So you launched, now what?
In 2001 we hired four staff members, which grew the team to seven, and in 2002 we had a budget for two more. We added several pieces of functionality to the site: polls, quizzes, a web calendar, and an HTML newsletter option. We increased our content from eighty articles to two hundred and conducted a usability study with ten moms. In 2001 our web traffic steadily increased from month to month: average visitors increased by 46 percent from the first quarter to the fourth quarter, and page views increased by 49 percent.

When we conducted the usability test with moms, we discovered that they were having a difficult time browsing once they clicked into “1, 2 or 3.” Moms were struggling to find information they needed in the domains because the lists of articles were becoming too long. Internally we were struggling with placing articles in our information structure, so we knew it needed to change. We kept the 1, 2, 3 structure and added a 4 to house a visitor’s personal page and some of our functionality that previously did not have a home. We also consolidated the secondary information structure from Your Child, Your Family, Schools and Professionals, etc. to Kids & Learning, Home & Family, and Schools & Other Resources, and have now added a tertiary information structure. This gives us a more flexible structure that moms will hopefully relate to better. This new information and design structure launched in February 2002.

Lessons learned
It has been an amazing two years and yet we still have a long way to go. Looking back, we have achieved our original objectives and applied them to the building of SchwabLearning.org. We have learned many lessons along the way and here are a few:

First, don’t let your vision blind you. We were incredibly excited about helping moms and kids, and that enthusiasm led us to believe that our thirty-person organization could transform itself overnight. We needed to take a deep breath and say, “Wait a minute, how are we going to do this?” Today our vision remains as strong as ever to help kids with learning differences be successful in learning and life. Our process to achieve our vision changed from the big bang theory to starting small, building on the foundation we launched with and protecting our assets.

Second, conducting user studies was invaluable. Learning about our visitors’ experience first-hand has enabled us to create a web site that meets their needs in a more meaningful way. Our experience models have enabled us to communicate with partners and other friends of the Foundation as well as create a new language for us: domains, LD landscape, novice, case manager, etc.

Third, user research and usability testing will always put you on the right track. The testing we conducted pre- and post-launch has been extremely useful in guiding our development. The initial user research study gave us the opportunity to go into the homes of the people we were trying to help. This proved to be rich data because we could see first-hand the interactions with their children and how their homes were set up to accommodate their children (e.g., where they kept medications, chore lists, etc.). The focus groups revealed different information, as these moms were in a group with different dynamics compared with one-on-one interviews in a home. The diaries gave us another data point that was intimate in a different way, as we only knew these moms’ stories and never met them in person. As for the first usability testing, we were able to discover potential pitfalls before going live. Who would have known that moms would have concerns about the phrase “Sharing Your Knowledge,” while “Connecting With Others” posed no problem? Also, in our post-launch usability testing, we discovered that the secondary information structure based on the “Domains” made sense to us, but not to site visitors. This is a very important discovery because if users cannot browse the Web site easily, they are apt to become frustrated and leave. Moms of kids with LD are most likely already frustrated when they arrive, and we want to provide them a place that takes away the stress and lets them know someone understands.

Although some of these lessons have been learned the hard way, it has been completely worth it. When we receive emails from moms that read, “I am so appreciative of you [SchwabLearning.org], just for being there. Wish I would have found you sooner,” we know we are doing our job.

Jeanene Landers Steinberg is the Web Director for SchwabLearning.org and had the role of project manager during the creation of the Web site. Jeanene manages a team of eight people consisting of technical, editorial and online community staff who are responsible for maintaining and growing SchwabLearning.org into a premiere Web site for LD information, guidance and support.

The Story Behind Usability.gov

Written by: Sanjay Koyani

When Detroit’s automotive engineers design a new car, they often bring in real drivers who sit in the seats, mash the gas pedals, and pump the brake. This is the engineers’ approach to involving users in the process of designing new cars that people want to drive—and can drive. Their approach is similar to the thinking that led the National Cancer Institute’s (NCI) Communication Technologies Branch to formally encourage the designers of government information websites to involve users in the design process. We created Usability.gov as a place to share with our colleagues our knowledge about user-centered web design and why it works.

Today, Usability.gov has earned a following among technology professionals. For the uninitiated, Usability.gov is a one-stop source for government web designers to learn how to make websites more usable, useful, and accessible. Our site addresses a broad range of factors that go into web design and development: how to plan and design usable sites by collecting data on what users need; how to develop prototypes; how to conduct usability testing; and how to measure trends and demographics. We have packaged our core knowledge into a specific set of evidence-based guidelines for user-centered web design. In addition, the site offers case study information in a section called Lessons Learned.

Home Page of the Usability.gov website  

What many do not know is the story behind Usability.gov, and knowing that story puts our work in context. It’s a story that underscores the critical role that Usability.gov plays in the electronic communication of complex cancer information to very diverse audiences. One minute, a researcher seeking grant information is pulling up an NCI website for details on what grants are available and where to apply. The next minute, an ordinary citizen is frantically searching NCI websites for any information, any clues, about a type of cancer for which the doctor is testing them. Every day, NCI disseminates life and death information. Usability.gov ensures that users and their web behaviors are kept in mind when designing sites.

The seeds for Usability.gov were sown in early 1999 when the popular CancerNet web site came up for a redesign. As usual, we began by seeking input for the new design from technical professionals: web designers, content writers, engineers. Our “kitchen cabinet” also included users. But the opinions from this broad group of professionals and laymen were as diverse as their backgrounds. Whose ideas were right?

Our director, Janice Nall, decided that we needed a methodology to show that what we were doing would produce an end result that was better than what we started with. In fact, we had to be able to quantifiably measure that CancerNet’s new face was better than the old face, to offer proof beyond a lot of people saying it looked better.

To accomplish this objective, we decided to collect quantitative data about CancerNet’s users and their needs as part of the design process. An online questionnaire and in-person interviews turned up some revealing information. We learned that one-third to one-half of CancerNet users were first-time visitors who were often totally unfamiliar with the site. This fact raised obvious questions: With so many new users, was the site easy enough to use? Could users find the information they needed on the site quickly and easily? These were critical questions in light of the kind of information that CancerNet provided to the public.

Given these questions, we began testing the site, an experience that furthered the need to develop a formal way to collect and share our knowledge for future reference. We conducted user tests with doctors, medical librarians, cancer patients, researchers, and others who we expected would be regular visitors. What we learned from testing was as surprising as what we learned from our questionnaire and interviews: some icons were not clearly clickable, many links were confusing, our terminology did not match our users’, and core information appeared to be buried or lost within the site. These were not mere glitches, but conceptual and foundational challenges that needed to be addressed.

To be thorough, our testing was iterative; we built on prototypes and brought in new sets of users to test each new version. We continually collected information to see if new problems cropped up, seizing on every comment, even something as simple as, “What is that there for?” We were like those automotive engineers in Detroit, watching test participants’ every move and examining their every facial expression.

 
User-centered design tips on CancerNet from Usability.gov’s Lessons Learned section.

Today, when you visit Usability.gov, you get a sense of how these tools help government and other web designers to avoid our early mistakes. Whether you read our case study about the redesign of CancerNet in our Lessons Learned section, or read our guidelines about testing issues such as scenario writing, user recruiting, goal establishment, or data compilation, you will see our picture of user-centered web design in action.

We are pleased with CancerNet’s redesign. In the past year or so, the site has won four content and design awards, and CancerNet recently merged with several existing sites, including Cancer.gov, into one portal site. But just as importantly, we are gratified to see clear results from Usability.gov. Government web designers are using more user-centered design practices, and web designers in general appear to be more cognizant of the user’s mindset. What Usability.gov demonstrates is that web design is not about flash and splash. It’s about transmitting useful information that users want—and need—in a way that helps them find what they are looking for.

Sanjay Koyani works for the Communication Technologies Branch of the National Cancer Institute.

Taking the “You” Out of User: My Experience Using Personas

Written by: Meg Hourihan

The best laid plans…
In 1999, I co-founded a small San Francisco-based start-up called Pyra. Our plan was to build a web-based project management tool, and we chose to focus initially on web development teams as our target audience since, as web developers ourselves, we had intimate knowledge of the user group. At the time the team consisted of three people: my co-founder, our lone employee, and me. We considered ourselves to be good all-around developers, competent in both interface and back-end development. We also assumed we were developing our product (called “Pyra” for lack of a better name at the time) for people just like us, so we made assumptions based on our own wants and extrapolated those desires to all users.

At this time, Microsoft had just released Internet Explorer 5 (IE 5) for Windows, and we were eager to use its improved standards support and DHTML in our application to make the interface as whizbang as possible. By limiting our audience to IE 5, we believed we could deliver the most robust application, one that was sure to impress potential users and customers. Later, we told ourselves, we’d go back and build out versions with support for Netscape and Macintosh. So we set to work building the coolest web application we could, taking full advantage of the latest wizardry in IE 5 for Windows. Development was chugging along when Alan Cooper’s “The Inmates Are Running the Asylum” was released and I picked it up. When I got to the chapter discussing the use of personas, I was intrigued. Though I was confident in our approach, creating personas sounded like a useful exercise and a way to confirm we were on track.

Discovering Personas

Cooper’s personas are simply pretend users of the system you’re building. You describe them, in a surprising amount of detail, and then design your system for them. Each cast of personas has at least one primary persona, the person who must be satisfied with the system you deliver. Since you can’t build everything for every persona (and you wouldn’t want to), establishing the primary persona is critical in focusing the team’s efforts effectively. Through the use of personas, the design process moves away from discussions that are often personal in nature (“I’d want it to work this way.”) or vague (“The users like to see all the options on the home page.”). It becomes a series of questions and answers based on a concrete example from which the team works (“Mary, the primary persona, works from home via dialup four days a week, therefore downloading an Access database isn’t an option.”). In our case, the development of personas helped us recognize that the target audience we’d chosen, web development teams, wasn’t as homogenous as we first assumed. Not everyone who’s involved in web development is gaga for DHTML or CSS—some people on the team might not even know what those acronyms stand for, a simple fact we’d failed to consider up until this point.

Our team stopped working to discuss personas and Cooper’s approach, and we agreed it sounded important enough to devote some time to it. As we sketched out our various personas (a project manager for a large company whose corporate standard was Netscape 3, a web designer who worked on a Mac, an independent consultant who worked from home), it became apparent we had made some bad assumptions. Not only were the personas not all like us—our personas wouldn’t even be able to use the system we were building for them! We’d been so blinded by our own self-interest that we failed to realize we were building a useless team product. Sure, it would have been great as an example of what we hoped to build, impressive to any engineer or web developer, but a manager might not be able to access it. We were cutting ourselves off from the people who would most likely make the decision to use the tool—and no project team would sign up for Pyra because an entire project team couldn’t use it.
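
A minimal sketch of the check these personas forced on us might look like the following; the persona details beyond those mentioned above (the names and the exact browsers of the secondary personas) are invented for illustration.

```typescript
// Check each persona's environment against what the application requires.
// "Mary" follows the Cooper-style example quoted earlier; other details are invented.
interface Persona {
  name: string;
  role: string;
  browser: string;
  platform: "Windows" | "Macintosh";
  isPrimary: boolean;
}

// The original beta only worked here:
const supported = { browser: "IE 5", platform: "Windows" as const };

const personas: Persona[] = [
  { name: "Mary", role: "project manager", browser: "Netscape 3", platform: "Windows", isPrimary: true },
  { name: "(designer)", role: "web designer", browser: "IE 5", platform: "Macintosh", isPrimary: false },
  { name: "(consultant)", role: "independent consultant", browser: "IE 5", platform: "Windows", isPrimary: false },
];

const lockedOut = personas.filter(
  p => p.browser !== supported.browser || p.platform !== supported.platform
);
// The primary persona is locked out entirely: the signal that the beta had to be reworked.
```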

We were a month away from releasing the beta version of Pyra at this point, but we knew what needed to happen. We had to go back and redo our application to work for Netscape and IE, for Windows and Macintosh, and in doing so, we needed to reevaluate our tool using our personas (specifically our primary persona) rather than ourselves or the mythical “user” to guide our decisions. So that’s what we did, pulling out all our beloved DHTML and remote scripting so our 37-year-old project manager persona could access the application from her home office in Seattle on a Saturday afternoon. Though the rework delayed our beta release by two months, it resulted in a tool our potential customers could use immediately.

Learning hard lessons
Through the process of developing personas, the mistakes we’d made became clear to us:

Mistake #1: We chose flashy technology over accessibility.
We allowed the geeky part of our personalities, with its lust for the newest and greatest ways of doing things, to overwhelm the decision-making process. Though there was a sense at the beginning that we needed to support other platforms, we let our desire to use the newest “toys” change the priority of doing so. This is a common mistake programmers and engineers make, but one which can be avoided through the use of personas. Interestingly, when we redid Pyra based on our personas’ needs, we didn’t lose any of the previous functionality. We only changed how it was done, e.g., reverting to less elegant page reloads rather than DHTML client-side changes. The previous version had only been impressive to fellow geeks like ourselves, but we hadn’t realized that. More importantly, the essential quality of the tool was never lost, and by redoing it, it became available to many more people.

Mistake #2: We assumed users would be more impressed by a robust interface they couldn’t use than by a less elegant application that they could use.
Again, our technical hubris blinded us into thinking that potential customers would be impressed by how we built our functionality, not by what the underlying features were. We let our wants come between our product and our users.

Mistake #3: We thought we were the primary persona.
While we shared common goals with some of our personas, and though one of the personas we developed was very similar to the members of our team, none of us was the primary persona. This crucial distinction between primary and secondary personas forced us to realize the interface we designed shouldn’t be driven by our wants or needs, even as members of a web development team. Defining a primary persona prevented us from releasing our original tool with its accessibility failures.

Less than a month after the beta release of Pyra, we released a second tool, Blogger. Though we didn’t create formal personas for Blogger users, the experience we gained by using personas infused our company’s approach to building web applications. Any time the word “user” was mentioned, questions flew: “What user? Who is she and what’s she trying to do?” Our work with personas increased our awareness of our audience and their varying skill levels and goals when using the application. The use of personas helped move all our discussions about the application, not only those related to the interface, away from the realm of vagaries and into tangible, actionable items. (“It should be easy to create a new blog.” “Easy? Easy for whom?” “It should take less than a minute to get started.” “It should take less than a minute for my grandmother to get started,” etc.) We developed a system of familiar, conversational personas on the fly, focusing on the primary persona without going through the formal process.

In retrospect, some of this sounds like common sense, and yet time and time again I find myself looking at an interface and making assumptions based on how I’d like it to work. Like a recovering substance abuser, I find it a constant challenge to refrain; I can always imagine that I’m the user. Even if your budget or timeline doesn’t allow for the development of formal personas, you can still benefit through the use of informal “conversational” personas, like we did while building Blogger. It takes discipline to break the old assumption habit, but the more I use personas, the easier it becomes. I’ve carried the lessons I’ve learned through their development with me for the past three years to other projects and engagements; the use of personas resulted in a fundamental shift in the way I approach not only interface design but application architecture as a whole.

For more information:
Alan Cooper, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity. SAMS, 1999.