Blasting the Myth of the Fold

Written by: Jeff Parks


Jeff Parks had the opportunity to speak with Milissa Tarquini about her article, “Blasting the Myth of the Fold”:http://www.boxesandarrows.com/view/blasting-the-myth-of. They talk about how this long-held rule in web design is being debunked by web analytics and user testing, as well as how this will impact design and development processes based on screen resolution and browser compatibility.

We discuss…

*Defining the Fold*
Milissa outlines the different terms that people use for the fold: anything that falls below the point on the screen where the user has to scroll is said to be below the fold.

*Back in the day*
In the early ’90s at AOL, scrolling was prohibited. Milissa talks about the need to balance designing for the fold with being creative.

*A moving target*
She goes on to talk about the challenge of designing for the fold with different screen resolutions and browsers and how in her opinion no one should be designing for the fold.

*Content is still king*
According to Milissa it all comes down to the quality of the content. If content is engaging and the user is interested in the information, they will follow the path to what they are seeking, regardless of the medium.

*Interaction Design is everywhere*
As Derek Featherstone pointed out in his discussion with Christina about Accessibility, interaction design plays an important role in how users find content on a page.

*Not the last, but a new frontier*
Milissa addresses social media tools such as blogs, Facebook, and MySpace and how these new web services reinforce the notion that users do scroll. As Eric Reiss commented, “…perhaps the new frontier is the bottom of the page.”

In Appreciation of Measures That Tell Stories

Written by: Alison J. Head

Not long ago, usability measures and Web analytics were few and far between. The usual standards amounted to little more than task completion, error rates, and click streams. Yet, they served us well.

Some years ago, when we relayed one telling measure—how many clicks it took to find a book—to clients at a large metropolitan library group, the room fell silent. Finding a book on a library website should have been, as my father was fond of saying, “as easy as shooting fish in a barrel.” In our test sessions, however, it took eight of 12 participants an average of 6.25 clicks to find John Grisham’s book, A Painted House. The benchmark for the task was one click.

All but a couple of the participants meandered through pages looking for the best-selling book without feeling they were progressing toward their goal. Some participants clicked 18 or 20 times before giving up. Of all the performance data in our 147-page report, this one piece of information, the number of clicks it took to find the Grisham book, moved the client to take action.

 

“The Three Cs”

This was back in 2002. Now, of course, we have more measures in our toolbox. Or, at least, we should. While the old standards are still useful, the digital spaces we try to improve have become much more complex, and so too have clients’ expectations for functionality and a return on their investment.

 

Whether you call this a Web 2.0 era or not, there is no disputing that most clients these days care more than they ever did before about the “Three Cs”: Customers, Competitors, and Conversion. Click streams have made room for bounce rates, search analytics, and so much more. If we play our cards right, we can reduce and synthesize the raw data and give our clients more meaningful information that foments action.

 

Emblematic Measures Have Teeth

Of all the data we report, there are certain measures that are more meaningful than others. I call the more meaningful data emblematic measures. In dictionary terms, emblematic is a “visible symbol of something abstract,” which is “truly representative.”

 

In our presentation to the library group, the rate for the Grisham task was emblematic. That is, the measure was representative of the library website’s greater inadequacies: its failure to fulfill the basics of its fundamental purpose and meet its customers’ needs. In turn, the measure was understandable to the client on a visceral level because it was firmly planted in their business objectives.

“Emblematic measures ensure that the data are always in the service of the business,” writes Avinash Kaushik, author of Web Analytics: An Hour a Day. “I can’t tell you how many times I run into data simply existing for the sake of data with people collecting and analyzing data with the mindset that the sole reason for the business’s existence is so that it can produce data (for us to analyze!).”

However, not all of the measures we deliver to clients are emblematic, nor should they be. Emblematic measures need to epitomize the entire study’s findings eloquently and elegantly. In layman’s terms, emblematic measures are a lot like the best line from a classic movie: It’s not the only line, but it’s the one that is remarkable, memorable, and eminently quotable.

Nor are emblematic measures prescriptive, static, or context-free. With every bit of user experience research we conduct, and on each and every site, the measures will surely vary given the context of testing, the sample, the tasks assigned, the business objectives for the site, the functionality being studied, and so on.

Therein lies one challenge of our daily work.

 

The Site Abandonment Measure

Fast forward from 2002 to Summer 2006. During a usability test of a philanthropic extranet for a large foundation, we measured the occurrence of something we had seen happening a million times. We used to think it was just too obvious to formalize and report to clients.

 

But this time, we found our emblematic measure.

We call this measure a Site Abandonment Measure (SAM). We define a SAM as the percentage (or number) of participants who give up on a specific task (or set of tasks), leave a site altogether, and turn to another source—any source—to get a task done. Put simply, it’s the “I quit—I’ve had it with your site” rate.
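The arithmetic behind a SAM is simple enough to sketch. A minimal example in Python, assuming a made-up per-participant outcome format (the labels and data below are illustrative, not from the study):

```python
def sam(outcomes):
    """Percentage of participants who abandoned the task entirely.

    outcomes: list of strings, one per participant, e.g.
    "completed", "failed_in_site", or "abandoned". Only "abandoned"
    (gave up on the task AND left the site) counts toward the SAM.
    These labels are invented for illustration.
    """
    if not outcomes:
        return 0.0
    abandoned = sum(1 for o in outcomes if o == "abandoned")
    return 100.0 * abandoned / len(outcomes)

# Eight of 15 participants abandoning a task yields a SAM of about 53%.
grant_task = ["abandoned"] * 8 + ["completed"] * 7
print(round(sam(grant_task)))  # 53
```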

When we asked our 15 participants to make a recommendation for a grant in support of a local Special Olympics team, 53 percent of the sample abandoned the task altogether. Participants told us they would complete the task elsewhere (usually by using a phone to call the Special Olympics or the foundation directly).

We also found the SAM was significant for informational tasks. When we asked participants to get the latest tax return for the Special Olympics group, 40 percent of the sample left the site altogether and went directly to the Special Olympics site for the information.

Overall, the SAM for the foundation’s site was 38.6 percent for the ten key tasks on the extranet. This showing was pretty dismal, especially given the context of our research. We were, after all, testing an extranet whose sole purpose was to let users manage their philanthropic funds—not an e-commerce site where click-through rates on ads are the focus. (There are no formal usability standards for unacceptable SAM rates, as far as we know.)

This means that, on average, on any one task, about six out of every 15 participants agreed to take on the task we asked of them, went through the first motions, and then eventually gave up not only on the task, but on the entire site.

When we presented the findings to the client, the show-stopper was the Special Olympics task and the corresponding SAM. How could they have laid down cold, hard cash for a site that failed to let over half of the test participants make a grant recommendation online?

 

SAMs vs. SARs

SAMs may bring to mind Web analytics and their main use on e-commerce sites. Under the hood, of course, the data is as different as a Porsche from a Prius.

 

Web analytics, such as conversion rates and the more narrow site abandonment rates (SARs) for measuring user interaction with shopping carts, leverage quantitative data extracted from transactional logs to measure macro-level interactions across a large sample of users. SAMs, on the other hand, use behavioral data from one-on-one test sessions to measure micro-level interactions with a small set of representative users.

As a measure in our toolbox, SAMs can tell us things about users that SARs cannot. When users “think aloud” during usability sessions, SAMs can give us some of the story behind the quantitative measure. They can collect qualitative data about users’ frustrations, annoyances, barriers, and solutions. (Granted, there is always the issue of “self-report” in usability test sessions.)

According to Kaushik, there are, of course, emblematic Web analytics, too. And bounce rate, which measures the number of visitors who see only one page and leave, is a frequent one.
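Bounce rate is easy to express in code. A hedged sketch, assuming visits are recorded as per-visit page-view counts (an invented input format, not any real analytics API):

```python
def bounce_rate(page_views_per_visit):
    """Percentage of visits that saw only one page before leaving.

    page_views_per_visit: list of ints, one entry per visit.
    The input format is illustrative, not from an analytics package.
    """
    if not page_views_per_visit:
        return 0.0
    bounces = sum(1 for views in page_views_per_visit if views <= 1)
    return 100.0 * bounces / len(page_views_per_visit)

# Three of five visits viewed a single page: a 60% bounce rate.
print(bounce_rate([1, 1, 1, 3, 5]))  # 60.0
```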

“Everyone (from the CEO on down) gets this metric right away and can understand if it is good or bad,” Kaushik says. “It is so hard to acquire traffic and everyone cares about the percentage of traffic that leaves right away. When I report that ‘your bounce rate is 60 percent,’ it simply horrifies the client and drives them to ask questions to take action.”

 

What’s in a Name?

Relying on a sexy metric or one type of usability measure alone is not always a sure way to reach a client with a call for change, though. The underlying data also has to speak to clients. This means practitioners have to work at breathing life into the data they package and deliver.

 

Kaushik recounts a story about taking existing metrics and segments and simply renaming them to make them more emblematic: “We were measuring five customer segments: (1) those who see less than one page on a site, (2) those who see three pages or less, (3) those who see more than three pages and did not buy, (4) those who place an order, and (5) those who buy more than once. These were valuable segments and something worth analyzing, but the internal clients would simply not connect with the segments until we renamed them to ‘Abandoners,’ ‘Flirters,’ ‘Browsers,’ ‘One-off-wonders,’ and ‘Loyalists.’”
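Kaushik’s renamed segments amount to a simple classification rule. A sketch of one possible reading: the segment definitions he quotes overlap slightly, so the ordering of the checks and the treatment of “less than one page” as a zero-page bounce are my assumptions, not his specification.

```python
def segment(pages_viewed, orders):
    """Classify a visitor into one of Kaushik's renamed segments.

    The thresholds and check order below are an assumed
    interpretation of the overlapping segment definitions.
    """
    if orders > 1:
        return "Loyalists"        # buy more than once
    if orders == 1:
        return "One-off-wonders"  # place a single order
    if pages_viewed > 3:
        return "Browsers"         # see more than three pages, no buy
    if pages_viewed >= 1:
        return "Flirters"         # see three pages or less
    return "Abandoners"           # leave before seeing a full page

print(segment(pages_viewed=2, orders=0))  # Flirters
```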

The simple change in how the data was communicated had a huge impact by creating a story around it. Kaushik’s client had a greater understanding and instantly began asking how they could turn Flirters into Loyalists.
 

Hitting Clients Right between the Eyes

Sophocles wrote, “The truth is always the strongest argument.” Likewise, many practitioners rely on data to provide the best approximations of truth they can. With so much of our research focused on striving for accurate representations of something as amorphous, varied, and hotly debated as user behavior, we are a profession usually awash in data, practicing a less-than-perfect science.

 

When I was in graduate school, we discussed “construct validity”:http://books.google.com/books?id=eAdbEn-yZbcC&pg=PA190&lpg=PA190&dq=babbie+and+construct+validity&source=web&ots=k8tB76zIaW&sig=6-ww5WOHJhLKFk5siib3qUYheis#PPA190,M1. Construct validity refers to the extent to which a test offers approximate evidence that a certain measure (e.g., the task of finding a library book) accurately reflects the quality or construct (the proficiency of users in carrying out a frequently conducted task on a library site) that is to be measured.

It is essential, of course, to weigh the validity of the tasks we develop and the results delivered. But collecting all of the “right” data is not always enough.

“The problem is that we are so immersed in data in our professional or academic worlds that, to a great extent, we become disconnected with reality,” Kaushik says, “especially when we lose touch with the business side of things and we lose touch with customers and base our analysis on how four people in a lab carried out a task.”

Do your rigorous research justice by communicating the data in such a way that it reveals any significant shortcomings. No matter the size of your project, look for the emblematic measures. They will allow you to tell stories that hit clients right between the eyes and move them to action.

 

Acknowledgements

Many thanks to Avinash Kaushik for his email interview for this article on October 4 and 5, 2007.

 

Kaushik is the author of the book Web Analytics: An Hour a Day, writes the blog Occam’s Razor, and is the founder of Market Motive, a Silicon Valley startup that focuses on online marketing education. He is also the Analytics Evangelist for Google.

Ease of Use Outside the Box

Written by: Mike Padilla

As user experience designers in an enterprise, we find ourselves knee deep in pixels. Should we use a dropdown element or a set of radio buttons? A 10pt or a 12pt font? A broad-and-shallow or a narrow-and-deep information architecture? While such design considerations are necessary and important, we miss huge user experience opportunities outside the webpage, outside the website, outside the browser. By tackling inter-application usability opportunities, user experience (UX) professionals can make things easier in a big way.


Since enterprise usability issues affect the entire organization, even small gains in improved ease-of-use can reap large benefits in aggregate across the entire user base. Whereas we traditionally focus on intra-system usability, we can also advocate inter-system usability, basically greasing the skids between systems so that all systems are easier to use. We could champion the merits of large monitors, decry unrealistically complex password policies while offering password management solutions, and develop easy-to-remember URL shortcuts for all websites that our colleagues access.

Fundamentally, user experience design strives to optimize the efficiency with which users communicate with other users through a computer. Users retrieve, consume, and input information. Within a system, we design the interfaces that allow users to efficiently perform those tasks. But users access systems in context of the environment that they are in. A system’s user experience may be drastically impaired when users access a system in a suboptimal context.

Take an application with a very well-designed user experience, say Apple’s iTunes. Fire it up, search, sort, categorize, play, and buy songs with ease. Well, perhaps not so easily. What if you were running the application on a computer with a 233MHz processor, 32MB of RAM, a 640×480 monitor, a 28.8Kbps modem, and one tinny speaker? What if your iTunes password had to be changed every 15 days, had to be 12 characters long, and had to include at least one number and one non-alphanumeric character? What if simply finding the icon to launch iTunes was a chore?

You would have a drastically different (worse) overall user experience than what you’re probably used to, in spite of the application’s well-designed interface. Software makers understand the impact that the context in which an application is served can have on the experience users actually have. Hence the ubiquitous “System Requirements,” which help ensure that an application is used in its prescribed context.

Clear Path to Information

A primary system task for users is retrieving information. How can we make it easier for users to get the information they need? Unfortunately, we almost always assume an intra-system perspective—one where the information that the user needs is accessible via the system and the user is already in the system. But what if we were to take a step back and look at the larger context in which the system is accessed? What we’d find are multiple usability hurdles between users and information. In fact, there are many hurdles between the users and the applications. Let’s take a look at three key inter-system usability issues and how they can be addressed:

# Viewport size – How much information can you view at a single time?
# Authentication – Can you securely and easily log into your systems?
# URLs – How easily can you get to your systems?

Even High Def is Low Def

Walk into any big box electronics store and you’ll see the ubiquitous wall of so-called high-def TVs. Great, stunning, crisp pictures – right? But if you were to compare the resolution of the world’s best hi-def TV to that of printed paper, the paper would easily win. The LCD technology that many hi-def TVs use is the same as that of our computer displays. We are constrained with limited information density.

Computer displays are the viewport through which the majority of communication between the user and computer occurs. Because that channel is choked by relatively low resolution and small overall area, communication throughput is limited.

Alleviating this problem is easy – increase the display area. Either get a larger monitor or, better yet, get two larger monitors and use a virtual expanded desktop that spans both. Research has shown that users can complete tasks 10% – 44% quicker with larger screens and that multi-tasking was less, well, tasking.[1] With prices of large LCD monitors drastically dropping, you can have such a setup for under $500. The increased work efficiency that you’ll gain can easily justify the relatively small upfront expense. In fact, usability guru Jakob Nielsen states, “anyone who makes at least $50,000 per year ought to have at least 1600×1200 screen resolution.”[2]

When More Secure is Less Secure

Everyone logs into applications in the workplace. Whether you’re submitting an expense report, entering worked hours, or just logging into an intranet, you have to authenticate yourself as a valid user. The integrity of authentication lies primarily with password policies that govern password complexity and required frequency of change.

A good password is one that cannot be guessed. And therein lies the problem. What is difficult to guess is most likely difficult to remember. This problem is multiplied when you have many applications that require authentication, each with its own password policy dictating password complexity and mandatory resetting. So while a hacker may not be able to guess your passwords, you most likely will not be able to remember them either. So what do you do? Do what everyone else does (but knows they shouldn’t) – write your passwords down on a small piece of paper in your desk drawer. Not exactly the most secure practice.

The problem here is that the security folks design their password policies in a theoretical world where they only consider computers and hackers. Make the passwords very strong. But the primary end users, the people who actually log in appropriately, are not considered. The ultimate result is systems that are less secure. People are people. Defining password policies without considering the complete human context in which they are applied results in lower security.

As usability experts we should prescribe password management utilities. Password management utilities lock all your credentials to multiple applications under one master credential. The master credential is often a master password or a fingerprint scan. Once you have authenticated yourself with the master credential, the password management utility can then submit the individual credential to the respective applications as you access them. Since you no longer have to remember each password, you can realistically use tough-to-hack passwords for each application. Because you only have to remember a single master password, you can be realistically expected to use a strong master password.

Do You Speak URL?

It’s not uncommon to have a dozen websites that you need to access in the workplace. You need to go to one website to track your work hours, another for expenses, another for benefits enrollment, and yet another to log help desk tickets. Just arriving at these websites is often a challenge in and of itself because each has its own long, cryptic URL. This is especially the case with internally deployed applications where the URL may include the server name, port number, and even URL parameters.

A URL for an internally deployed PeopleSoft application such as http://psoft-production.hostinghub.companyname.net:8080/asp/ASPPROD/?cmd=login is not uncommon. Using your browser’s “favorites/bookmark” functionality can alleviate the problem, but that still places unnecessary burden on the users to bookmark each website and organize them. Even if the websites are bookmarked well, each time the user has to access a website, he must open his bookmarks, browse, find, and click.

Fortunately, there is an easy to implement solution that addresses the problem. URL “jumpwords” are words that you can type into your browser address bar that take you directly to a website. Think AOL “keywords,” but more persistent because they are integrated directly into existing browser functionality. So rather than having to bookmark http://psoft-production.hostinghub.companyname.net:8080/asp/ASPPROD/?cmd=login to access your Peoplesoft application, you would be able to just type “peoplesoft” in the address bar and you would be taken to the application.

Catching and rerouting the user can only work within an organization’s network (this does not work across the Web in general for obvious reasons). There are two main steps to set it up. First, make an internal DNS entry that catches all jumpwords and routes them to a single server. Second, run a simple application on that server that maps each jumpword to its specific full URL and bounces the user to that URL. You’ve then literally brought your organization’s websites to employees’ fingertips.
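The mapping application in the second step can be as small as a lookup table plus an HTTP redirect. A hedged sketch: the hostnames, the jumpword table, and the choice of Python’s built-in HTTP server are all illustrative, not a description of any real deployment.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical jumpword table; in practice this might live in a
# database or config file maintained by the intranet team.
JUMPWORDS = {
    "peoplesoft": "http://psoft-production.example.net:8080/asp/ASPPROD/?cmd=login",
    "helpdesk": "http://helpdesk.example.net/tickets",
}

class JumpwordHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The wildcard DNS entry delivers the jumpword as the
        # requested hostname (what the user typed in the address bar).
        word = (self.headers.get("Host") or "").split(":")[0].lower()
        target = JUMPWORDS.get(word)
        if target:
            self.send_response(302)  # temporary redirect to the full URL
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "Unknown jumpword")

# To deploy on the catch-all host:
# HTTPServer(("", 80), JumpwordHandler).serve_forever()
```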

Big Picture Ease-of-Use

Whether designing a user interface or conducting a usability test, we generally assume that the user has already accessed the system in a predefined context. Take a step back and apply ease-of-use fundamentals to the factors that lie immediately outside of individual applications. By keeping our eyes open for opportunities to improve the user experience in a larger context, we can increase the communication efficiency within organizations and use simple solutions to reduce frustration and confusion of the people using the systems.

1. “Meet the Life Hackers”:http://www.nytimes.com/2005/10/16/magazine/16guru.html?ei=5090&en=c8985a80d74cefc1&ex=1287115200&partner=rssuserland&emc=rss&pagewanted=print, New York Times Magazine, October 16, 2005
2. “Jakob Nielsen’s Alertbox”:http://www.useit.com/alertbox/screen_resolution.html, July 31, 2006

Blasting the Myth of the Fold

Written by: Milissa Tarquini

The Above-the-Fold Myth

We are all well aware that web design is not an easy task. There are many variables to consider, some of them technical, some of them human. The technical considerations of designing for the web can (and do) change quite regularly, but the human variables change at a slower rate. Sometimes the human variables change at such a slow rate that we have a hard time believing that it happens.

This is happening right now in web design. There is an astonishing amount of disbelief that the users of web pages have learned to scroll and that they do so regularly. Holding on to this disbelief – this myth that users won’t scroll to see anything below the fold – is doing everyone a great disservice, most of all our users.
First, a definition: The word “fold” means a great many things, even within the discipline of design. The most common use of the term is perhaps in reference to newspaper layout. Because of the physical dimensions of the printed page of a broadsheet newspaper, it is folded. The front page, above the fold, is where the “big” stories of the issue go because it is the best possible placement. Readers have to flip the paper over (or unfold it) to see the rest, so there is a chance they will miss anything placed below the fold. In web design, the term “fold” means the line beyond which a user must scroll to see more of a page’s contents (if any) after the page displays in the browser. It is also referred to as the “scroll-line.”
Screen performance data and new research indicate that users will scroll to find information and items below the fold. There are established design best practices to ensure that users recognize when a fold exists and that content extends below it[1]. Yet during requirements gathering for design projects, designers are inundated with requests to cram as much information above the fold as possible, which complicates the information design. Why does the myth continue, when we have documented evidence that the fold really doesn’t matter in certain contexts?

Once upon a time, page-level vertical scrolling was not permitted on AOL. Articles, lists and other content that would have to scroll were presented in scrolling text fields or list boxes, which our users easily used. Our pages, which used proprietary technology, were designed to fit inside a client application, and the strictest of guidelines ensured that the application desktop itself did not scroll. The content pages floated in the center of the application interface and were too far removed from the scrollbar location for users to notice if a scrollbar appeared. Even if the page appeared to be cut off, as current best practices dictate, it proved to be such an unusual experience to our users that they assumed that the application was “broken.” We had to instill incredible discipline in all areas of the organization that produced these pages – content creation, design and development – to make sure our content fit on these little pages.

AOL client application with desktop scrollbar activated

As AOL moved away from our proprietary screen technology to an open web experience, we enjoyed the luxury of designing longer (and wider) pages. Remaining sensitive to the issues of scrolling from our history, we developed and employed practices for designing around folds:
* We chose as target screen resolutions those used by the majority of our users.
* We identified where the fold would fall in different browsers, and noted the range of pixels that would be in the fold “zone.”
* We made sure that images and text appeared “broken” or cut off at the fold for the majority of our users (based on common screen resolutions and browsers).
* We kept the overall page height to no more than 3 screens.

But even given our new larger page sizes, we were still presented with long lists of items to be placed above the fold – lists impossible to accommodate. There were just too many things for the limited amount of vertical space.
For example, for advertising to be considered valuable and saleable, a certain percentage of it must appear above the 1024×768 fold. Branding must be above the fold. Navigation must be above the fold – or at least the beginning of the list of navigational choices. (If the list is well organized and displayed appropriately, scanning the list should help bring users down the page.) Big content (the primary content of the site) should begin above the fold. Some marketing folks believe that the actual number of data points and links above the fold is a strategic differentiator critical to business success. Considering the limited vertical real estate available and the desire for multiple ad units and functionality described above, an open design becomes impossible.

And why? Because people think users don’t scroll. Jakob Nielsen wrote about the growing acceptance and understanding of scrolling back in 1997[2], yet 10 years later we are still hearing that users don’t scroll.
Research debunking this myth is starting to pop up, and a great example is the report available on ClickTale.com[3]. In it, the researchers used their proprietary tracking software to measure activity on 120,000 pages. Their research gives data on the vertical height of each page and the point to which users scrolled. They found that 76% of users scrolled, and that a good portion of them scrolled all the way to the bottom, regardless of screen height. Even the longest of web pages were scrolled to the bottom. One thing the study does not capture is how much time is spent at the bottom of the page, so the argument can be made that users might just scan it and not pay much attention to any content placed there.

This is where things get interesting.

I took a look at performance data for some AOL sites and found that items at the bottom of pages are being widely used. Perhaps the best example of this is the popular celebrity gossip website TMZ.com. The most-clicked item on the TMZ homepage is the link at the very bottom of the page that takes users to the next page. Note that the TMZ homepage is often over 15,000 pixels long – which supports the ClickTale finding that scrolling behavior is independent of screen height. Users are so engaged in the content of this site that they follow it down the page until they get to the “next page” link.

Maybe it’s not fair to use a celebrity gossip site as an example. After all, we’re not all designing around such tantalizing guilty-pleasure content as the downfall of beautiful people. So, let’s look at some drier content.
For example, take AOL News Daily Pulse. You’ll notice the poll at the bottom of the page – the vote counts are well over 300,000 each. This means that not only did folks scroll over 2000 pixels to the bottom of the page, they actually took the time to answer a poll while they were there. Hundreds of thousands of people taking a poll at the bottom of a page can easily be called a success.

AOL News Daily Pulse with 10×7 fold line and vote count

But, you may argue, these pages are both in blog format. Perhaps blogs encourage scrolling more than other types of pages. I’m not convinced, since blog format is of the “newest content on top” variety, but it may be true. However, looking at pages that are not in blog format, we see the same trend. On the AOL Money & Finance homepage, users find and use the modules for recent quotes and their personalized portfolios even when these modules are placed well beneath the 1024×768 fold.

Another example within AOL Money & Finance is a photo gallery entitled Top Tax Tips. Despite the fact that the gallery is almost 2500 pixels down the page, this gallery generates between 200,000 and 400,000 page views depending on promotion of the Taxes page.

It is clear that where a given item falls in relation to the fold is becoming less important. Users are scrolling to see what they want, and finding it. The key is the content – if it is compelling, users will follow where it leads.

When does the fold matter?

The most basic rule of thumb is that, for any site, users should be able to understand what the site is about from the information presented above the fold. If they have to scroll even to discover what the site is, its success is unlikely.

Functionality that is essential to business strategy should remain (or at least begin) above the fold. For example, if your business success is dependent on users finding a particular thing (movie theaters, for example) then the widget to allow that action should certainly be above the fold.

Screen height and folds matter for applications, especially rapid-fire applications where users input variables and change the display of information. The input and output should be in very close proximity. Getting stock quotes is an example: a user may want to get four or five quotes in sequence, so it is imperative that the input field and the basic quote information display remain above the fold for each symbol entered. Imagine the frustration at having to scroll to find the input field for each quote you wanted.

Where IS the fold?

Here is perhaps the biggest problem of all. The design method of cutting off images or text only works if you know where the fold is. There is a lot of information out there about how dispersed the location of the fold actually is. Again, a very clear picture of this problem is shown on ClickTale. In the same study of page scrolling, fold locations of viewed screens were captured, based on the screen resolution and browser used. It’s a sad, sad thing, but the single highest concentration of fold location (at around 600 pixels) accounted for less than 10% of users. This pixel height corresponds to a screen resolution of 1024×768.

Browser applications take away varying amounts of vertical real estate for their interfaces (toolbars, address fields, etc.). Each browser has a slightly different size, so not all visitors running a resolution of 1024×768 will have a fold that appears in the same spot. In the ClickTale study, the three highest fold locations were 570, 590 and 600 pixels—apparently from different browsers running on 1024×768 screens. But the overall distribution of fold locations for the entire study was so varied that even these three sizes together account for less than 26% of visits. What does all this mean? If you pick one pixel location on which to base the fold when designing your screens, the best-case scenario is that you’ll get the fold line exactly right for only 10% of your visitors.
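The arithmetic behind that best-case figure is easy to reproduce. The histogram below is invented to mirror the shape the study describes (top location under 10% of visits, top three together under 26%); it is not the actual ClickTale data.

```python
# Invented fold-location histogram: pixels from the top of the
# viewport mapped to the share of visits whose fold fell there.
fold_share = {
    600: 0.095,
    570: 0.085,
    590: 0.078,
    520: 0.060,
    430: 0.050,
    # ...long tail of other browser/resolution combinations...
}

# Picking the single most common fold line covers under 10% of visits.
best_single = max(fold_share.values())

# Even the top three locations together cover barely a quarter.
top_three = sum(sorted(fold_share.values(), reverse=True)[:3])

print(f"{best_single:.1%}")  # 9.5%
print(f"{top_three:.1%}")    # 25.8%
```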

So what do we do now?

Stop worrying about the fold. Don’t throw your best practices out the window, but stop cramming stuff above a certain pixel point. You’re not helping anyone. Open up your designs and give your users some visual breathing room. If your content is compelling enough, your users will read it to the end.

Advertisers currently want their ads above the fold, and it will be a while before that tide turns. But it’s very clear that the rest of the page can be just as valuable – perhaps more valuable – to contextual advertising. Personally, I’d want my ad right at the bottom of the TMZ page, forget the top.

The biggest lesson to be learned here is that if you use visual cues (such as cut-off images and text) and compelling content, users will scroll to see all of it. The next great frontier in web page design has to be the bottom of the page. You’ve done your job and the user scrolled all the way to the bottom of the page because they were so engaged with your content. Now what? Is a footer really all we can offer them? If we know we’ve got them there, why not give them something to do next? Something contextual, a natural next step in your site, or something with which to interact (such as a poll) would be welcome and, most importantly, used.

References

fn1. Jared Spool, UIE Brain Sparks, August 2, 2006: “Utilizing the Cut-off Look to Encourage Users To Scroll”:http://www.uie.com/brainsparks/2006/08/02/utilizing-the-cut-off-look-to-encourage-users-to-scroll/

fn2. Jakob Nielsen’s Alertbox, December 1, 1997: “Changes in Web Usability Since 1994”:http://www.useit.com/alertbox/9712a.html

fn3. ClickTale’s Research Blog, December 23, 2006: “Unfolding the Fold”:http://blog.clicktale.com/2006/12/23/unfolding-the-fold/

Practical Plans for Accessible Architectures

Written by: Frances Forman

If the relationship between accessibility and architectures intrigues you, see the Editors’ Note at the end of this article for more information about what we’re doing and how to get involved.

The United Nations recently commissioned the world’s first global audit of web accessibility. The study evaluated 100 websites from 20 different countries across five sectors of industry (media, finance, travel, politics, and retail). Only three sites passed the basic accessibility checkpoints outlined in the Web Content Accessibility Guidelines (WCAG 1.0), and not a single site passed all checkpoints.

These guidelines are well established and were first advocated by the W3C in 1999. They distill the knowledge required to produce accessible code and content. Despite developments in assistive technologies and web content, the guidelines are still invaluable today: they provide developers and editors with a foundation for creating accessible design, which is essential to people who have different access requirements. A second version of the WCAG is now available as a public working draft (WCAG 2.0).

Nevertheless, a challenge remains in determining which members of the design team are responsible for accessibility. As more people become involved in the design, development, and editorial process, there needs to be agreement on how best to design for content management and customization while also allowing for greater accessibility.

Accessible design requires a deeper understanding of context. It’s about providing alternative routes to information, whether that route is a different sense (seeing or hearing), a different mode (using a tab key or a mouse), or a different journey (using an A to Z site index instead of main navigation). However, accessibility is much easier to achieve when the right foundations are put in place as prerequisites during site planning and strategy.

Approaches to Designing for Accessibility


Labeling and controlled vocabularies

Controlled vocabularies can have a positive impact on accessibility by supporting the development of contextually relevant navigation and consistent, understandable labeling. The WCAG 1.0, Guideline 13 in particular, is written to ensure that navigation is meaningful to people who process information in different ways.

A person using a screen reader can call up a link summary for a given page or tab through links to obtain a general gist of the site. These browsing methods require web developers to create navigation link descriptions that can be understood without reference to surrounding page context. The purpose of a link should be clear on its own: a user should not have to rely upon nearby visual elements or textual content to understand its meaning.
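
To make the checkpoint concrete, here is a small sketch (the link text and URL are invented for illustration): the first link is meaningless in a link summary, while the second describes its destination without any surrounding context.

```html
<!-- Ambiguous: "Click here" says nothing when read in a link summary -->
<p>The events calendar has been updated. <a href="/events">Click here</a>.</p>

<!-- Self-describing: the purpose is clear even in isolation -->
<p><a href="/events">Browse the updated events calendar</a>.</p>
```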

A dialog showing an automatically generated list of links over the Boxes and Arrows events page

Image 1: A link summary from a Boxes and Arrows page.

Different types of navigation taxonomies allow pages to be defined using both concise and longer contextual link names. This allows naming conventions to retain their consistency and ensures that the right amount of context is displayed across the entire site, not just in the main navigation. For A to Z indexes and site maps, more context is needed to distinguish choices presented to the user. Using contextual names in an index ensures that users are not confused by identical labels that might lead them to different destinations.

Navigation frameworks and wireframe design

When designing indexes and supplementary navigation, it is important to consider how different HTML elements can help shape information. Sometimes headings are a useful way to cluster menus, but it’s often more helpful to make headings active links to prevent important information from being lost.

An easy way to evaluate the hierarchy or order of page elements, such as headers and lists, is to use the “Firefox Accessibility Extension”:https://addons.mozilla.org/en-US/firefox/addon/1891; its navigation tool displays the underlying semantic structure and ordering of HTML elements.

A headers dialog showing the structure of the Boxes and Arrows events page

Image 2: Selecting the navigation menu from the Firefox toolbar to display a list of page headers.

When developing a framework for navigation, it’s important to consider how users move between different menu systems. A front end developer will need to determine whether a skip link should read “skip to main menu” or “skip to content”. However, if IAs choose to use several different menu systems—perhaps to maintain organization—they should document the priority of menus and suggest practical uses for skip links. This will help developers define document structure and effectively use skip links. The structure of information elements depends on whether the page is high level and navigation-focused, or low level and content-focused.
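
One way to sketch the skip links discussed above (the ids, labels, and ordering here are illustrative assumptions, not a prescribed pattern):

```html
<body>
  <!-- Skip links come first in the document order, so keyboard and
       screen-reader users reach them before anything else on the page -->
  <a href="#main-menu">Skip to main menu</a>
  <a href="#content">Skip to content</a>

  <ul id="main-menu">
    <li><a href="/">Home</a></li>
    <li><a href="/events">Events</a></li>
  </ul>

  <div id="content">
    <h1>Page title</h1>
    <!-- page content -->
  </div>
</body>
```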

The diagram below shows how the tab key allows users to move between links on the Boxes and Arrows events page. The heading levels serve as a guide so users can access menus (with the aid of shortcut keys). This type of semantic markup helps users of assistive technologies navigate more easily. Even if they are using a linear form of access such as audio, it’s possible to skim the page for information.

A page with styling removed and the order highlighted between different structural elements
Image 3: A wireframe illustrates page order and structure, as well as ideas for skip links

Ready deliverables

These diagrams can help web developers define and optimize page structures by organizing information elements early in the design process. Dan Brown’s page description diagrams demonstrate an effective way of communicating information elements, rather than using a set layout to display them.

When wireframing, IAs can use annotations to support designers and developers in meeting accessibility checkpoints. Nick Finck’s wireframe stencils can be used to identify Headings 1, 2, 3, etc., and illustrate how they should be ordered to display a logical hierarchy on the page. For example, the events list above is structured using the H3 tag. Describing each event as a list item conveys that the section is an index of upcoming events. This is beneficial to people who use assistive technologies, as there are commands that allow users to skim the page from header to header or list to list.
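
A minimal sketch of the markup pattern just described (the event names and URLs are invented): an H3 heading introduces the section, and each event is a list item, so assistive technologies can announce the section as a list and jump through it.

```html
<h3>Upcoming events</h3>
<ul>
  <li><a href="/events/ia-summit">IA Summit</a></li>
  <li><a href="/events/ux-week">UX Week</a></li>
</ul>
```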

While these approaches are helpful, they are not panaceas. There is ambiguity regarding correct semantic markup for complex pages. Consider homepages, where one may have top story headings, event listing headings, page section headings, and menu headings. How does the developer or designer determine the proper hierarchical structure for headings; how should they simplify the page so its purpose and structure are more understandable?

Widgets vs. Browser functionality

Another debate centers on how and where accessible design should be implemented. Consider text change widgets that enable users to enlarge or decrease font size. Resizing text is better left to user agents and browsers so that content displays more consistently. However, text widgets have become common page tools, as browser-based text resizing is hidden from users and most are unaware of this feature. Solutions to this problem include creating a user help page explaining how browser functionality works, or referring users to sites that explain browser customization and the benefits of assistive technologies. The BBC’s My Web My Way offers such examples. This site provides instructions on changing browser settings, text-background color, font size, and explains how to use assistive technologies.

However, one needs to consider whether it’s more beneficial to provide instructions, which may eventually become outdated, or to direct users to an external site to learn about a particular technology.

Alternative ways to view content

An important aspect of accessibility is providing alternative ways to use and view information. Presenting information in a user’s desired format (indicated by his or her profile) would increase accessibility. Strategies to deliver alternative views include the following:

  • Reuse of information through alternative formats. For example, the ability to change the format of a document on the fly from PDF to RTF or XML is supported by some CMS systems.
  • Google Map’s HTML view conveys information about geographic locations in a format that can be read aloud by a screen reader.
  • Multimedia resources can be made more accessible through the addition of captions or transcripts. Inexpensive, “automated captioning web-based services”:http://www.automaticsync.com/ and websites that enable users to “tag video content”:http://www.viddler.com/ are making translations of this sort easier to implement.
  • Leveraging application programming interfaces (APIs), as “demonstrated by T.V. Raman”:http://googleblog.blogspot.com/2007/02/web-apis-web-mashups-and-accessibility.html of Google, can produce mashups, which offer alternative ways to view a particular data source. They enhance accessibility by providing customized views when a one-size-fits-all solution does not work. For example, services that offer map data at two times the normal magnification could be a resource for users with poor or corrected vision.
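
As a small, hypothetical illustration of the captions-and-transcripts point above (the file names and player markup are invented), a page embedding video can simply pair the media with a text alternative:

```html
<!-- Hypothetical embedded video player -->
<object data="interview.mov" type="video/quicktime" width="320" height="240">
  <param name="controller" value="true" />
</object>
<!-- A text alternative published alongside the media -->
<p><a href="interview-transcript.html">Read a transcript of this interview</a></p>
```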

These examples demonstrate that accessibility is not just about compliant code. It requires an understanding of how information can be structured and transformed to make user interaction more flexible.

Customization strategies

Taking the idea of tailoring information access one step further, there is also scope for implementing user profiles or customization strategies to meet a specific audience’s needs. Imagine a website where navigation, search results, and content can all be accessed dynamically according to a user’s barriers to information, content needs, or disabilities.

Hildegard Rumetshofer and colleagues at Johannes Kepler University explore requirements (paid download) for providing comprehensive accessibility as part of a tourism information service. They illustrate how a system could deal with four distinct questions about a person’s information and service needs:

1. Does the tourism service meet individuals’ access requirements? (medium profile)
2. Is the tourism service able to meet individuals’ specific interests? (user profile)
3. Is information presented in an accessible, barrier-free format? (WAI guidelines)
4. Can search or access services be specifically tailored to individuals’ disabilities? (search and metadata strategy)

Accessibility is becoming more dependent on the design of an information system’s components, and no longer a simple question of how content is presented. This is an arena where IAs excel.

Getting started

Understanding the importance of accessibility in the design process is only the beginning. Information architects need to understand accessibility considerations so they can design practical, inclusive solutions. A good starting point is the Web Accessibility Initiative’s (WAI) website and its guidelines and techniques page. The following guidelines provide additional information and resources.

  • Web Content Accessibility Guidelines: http://www.w3.org/TR/WAI-WEBCONTENT/ (WCAG 1.0, 2.0 in draft)
    Established checkpoints written for designers and developers to help them create accessible code and content.
  • Authoring Tool Accessibility Guidelines: http://www.w3.org/TR/WAI-AUTOOLS/ (ATAG)
    Useful for projects that concern requirements for information management or the procurement of a new CMS.
  • User Agent Accessibility Guidelines: http://www.w3.org/TR/WAI-USERAGENT/ (UAAG)
    Intended for the developers of assistive technologies and browsers, these recommendations can help IAs understand the interplay between different sets of guidelines. For example, some information design problems are better handled by browsers and assistive technologies, while others are better handled by individual sites.

Conclusion

Accessibility audits and benchmarks remind us of the difficulties disabled people encounter when attempting to negotiate their way through today’s online media. Information architects need to think about representing pages of information as linear streams, not just wireframing a collection of adjacent menus and content.

Creating an accessible web experience requires the coordination of independent groups. Initiating, managing, and designing for accessibility starts with strategy and ends with site evolution, content creation, and quality assurance. As IAs we should be advocating, designing, and supporting teams that provide equal access to information, as well as easier access for our primary personas. We should be looking for practical, design-driven ways to make accessibility a consideration through every phase of a project, and not just an afterthought.

While accessibility requires expert web developers to maintain high levels of access (especially on larger sites), it still needs the help of IAs who understand the scope and constraints that lead to accessible design, and who are conscious of the duty to prevent discrimination when making information management decisions.

Resources

Introductions

“BBC’s My Web My Way”:http://www.bbc.co.uk/accessibility
“W3C Web Accessibility Initiative”:http://www.w3.org/WAI/

A to Z Site Indexes

“BBC”:http://www.bbc.co.uk/a-z/a.shtml
“Somerset County Council”:http://www.somerset.gov.uk/somerset/atoz/index.cfm?letter=a

Alternative Views on Content

“Google Blog: Web APIs, Mash Ups and Accessibility”:http://googleblog.blogspot.com/2007/02/web-apis-web-mashups-and-accessibility.html
“Google Video Help Center”:http://video.google.com/support/bin/answer.py?answer=26577
“RoboCal”:http://www.robocal.com/prod/robocal/main.php
“Semantic Maps and Meta-data Enhancing e-Accessibility in Tourism Information Systems, IEEE Computer Society”:http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/dexa/&toc=comp/proceedings/dexa/2005/2424/00/2424toc.xml&DOI=10.1109/DEXA.2005.176
“Tagging Multimedia Content”:http://www.viddler.com/

Communities

“Accessify Forum”:http://www.accessifyforum.com/
“Dublin Core Accessibility Metadata Community”:http://dublincore.org/groups/access/

Debates and Futures

“Accessibility Panel, UK”:http://www.isolani.co.uk/blog/access/BarCampLondon2AccessibilityPanelThoughts
“The Great Accessibility Camp-out”:http://accessites.org/site/2006/10/the-great-accessibility-camp-out/

Standards and Guidelines

“Introduction to WCAG Samurai Errata for Web Content Accessibility Guidelines (WCAG 1.0)”:http://wcagsamurai.org/errata/intro.html
“Web Standards Project”:http://www.webstandards.org/action/atf/


Editors’ Note


Our “podcast with Derek Featherstone”:http://www.boxesandarrows.com/view/straight-from-the19 marks the start of a theme for Boxes and Arrows. Accessibility guidelines are not only beneficial to those who need special affordances to experience our products more fully. Taking these recommendations and “web standards”:http://www.webstandards.org/ into consideration will guide what you build, strengthen its inherent structure, and help encourage the development of better products for everyone.

Over the next several months, we’re looking to further explore these ideas as they apply to designers and the designer-developer relationship. Contribute to the series by sending an idea. (Eds.)