Yahoo!'s multiple business units, each containing decentralized user experience teams, have a natural tendency to design different solutions to similar problems. Left unchecked, these differences would weaken the Yahoo! brand and produce a less usable network of products. Designers and managers have discussed "standards" as a way to solve this problem, but this standards content (often existing only in the memories of designers) has never been captured in a commonly accessible format.
Our first goal was to find a way to communicate standards for interaction design to increase consistency, predictability, and usability across Yahoo! with the ultimate intention of strengthening the brand. This aligned with the business goal of increasing both the number of return visits and the average number of products used per session. Our second goal was to increase the productivity of the design staff by reducing time spent on “reinventing the wheel.” If we were successful, other designers could re-use the solutions contained in the library, reducing development time.
We designed and built a repository for interaction design patterns, created a process for submitting and reviewing the content, and seeded the resulting library with a set of sample patterns. We organized the content to make it findable, structured the content so it was predictable, and tested and iterated the design of the user interface of the tool to make it usable. Throughout this process, we introduced incentives for participation for both the contributors and management to encourage submissions and support.
Our approach broke down into the following stages:
- Understanding and agreeing on the problem
- Developing a workflow
- Generating organizational buy-in (evangelizing)
- Selecting, designing, and building an application
- Using the pattern library as a body of standards
Understanding and agreeing on the problem
We made use of existing research.
We were lucky to have the results of a contextual inquiry conducted a few months previously with the Yahoo! design staff. The findings pointed out that the staff wanted a central place to pool their collective knowledge. They wanted shared interaction design solutions, but no one ever had the time to develop and document them.
We wrote a lightweight product requirements document (PRD).
We began by reviewing the research and drafting a lightweight requirements document. Once the outline was done and some thoughts were fleshed out, we had meetings with interaction designers and design managers to test our assumptions. Were we heading in the right direction? Did the proposed solution seem useful? Feedback was incorporated into the PRD.
Developing a workflow
Before we could build an application for managing the patterns, we needed to determine where the content would come from, how it would be reviewed and published, and who would maintain it. To that end, we designed a workflow noting the prerequisites for each step as well as the participants and their responsibilities. We vetted the proposed process with each user experience team before moving on to building the application. Wherever possible, we attempted to build “hooks” into Yahoo!’s existing design process. For example, we knew that new interaction design solutions are often identified during design reviews, so the step of “identify pattern” was added to our existing process.
Figure 1 – The pattern library workflow
We defined processes for communication.
We recognized that it would be useless to have a great library of content if no one knew about it, but at the same time we didn't want to email the designers about every new contribution. To solve this, we designed a communication roll-up: calls for authors, announcements of new patterns, notices of patterns needing review, and updates on the most recent pattern ratings would all be rolled into a single weekly email. In this way, the team would be aware of activity in the pattern library without being continually spammed.
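The weekly roll-up described above can be sketched as a small event queue that is flushed once per week. This is an illustrative sketch only; the class, event names, and messages are our assumptions, not the actual Yahoo! implementation.

```python
from collections import defaultdict

# Hypothetical event types for the roll-up (names are illustrative).
EVENT_TYPES = ["call_for_authors", "new_pattern", "needs_review", "rating_update"]

class DigestRollup:
    """Queue pattern-library events during the week, then emit one summary."""

    def __init__(self):
        self.events = defaultdict(list)  # event type -> list of messages

    def record(self, event_type, message):
        """Record a single notification instead of emailing it immediately."""
        if event_type not in EVENT_TYPES:
            raise ValueError(f"unknown event type: {event_type}")
        self.events[event_type].append(message)

    def flush(self):
        """Build the weekly email body and clear the queue."""
        sections = []
        for etype in EVENT_TYPES:
            if self.events[etype]:
                items = "\n".join(f"  - {m}" for m in self.events[etype])
                heading = etype.replace("_", " ").capitalize()
                sections.append(f"{heading}:\n{items}")
        self.events.clear()
        return "\n\n".join(sections)

digest = DigestRollup()
digest.record("new_pattern", "Pagination")
digest.record("needs_review", "Breadcrumbs")
body = digest.flush()  # one weekly email instead of separate notices
```

The design choice here mirrors the article's point: batching trades immediacy for a lower interruption cost, so the team stays informed without being spammed.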
Generating organizational buy-in (evangelizing)
We involved the contributors and consumers of the content.
We conducted a low-fidelity usability test on the draft UI. This, along with the contextual inquiry and the designers' involvement in defining the requirements and workflow, helped ensure that we built the right product for our audience.
We defined (and are still defining) incentives for contributors.
We recognized that the “warm, fuzzy feeling” that people get when contributing to the greater good would wear off once the designers recognized the amount of time writing a good pattern requires. To that end, we set out to create incentives for participation. Our ideas fell into three categories:
Raffles and contests. Shortly after releasing the pattern library application, we raffled off an iPod Mini. Every time a person authored, contributed to, or submitted a pattern for review, they received a virtual ticket. At the close of the raffle, a ticket was randomly picked. The raffle not only helped increase participation, it also generated buzz about the library.
Peer recognition. Presently, we're considering adding functionality so that users of the library can rate each pattern's usefulness. Once we know which patterns are the most useful, we can recognize their authors.
Performance evaluation. Perhaps the most compelling incentive is to write job descriptions so that contributing to the library is on each designer’s list of quarterly goals. We’re currently in the process of defining this and pitching it to the design management team.
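The raffle mechanics described above reduce to a simple weighted draw: one virtual ticket per qualifying action. A minimal sketch, with made-up names and ticket counts:

```python
import random

# Illustrative ticket tallies: each authored, co-authored, or submitted
# pattern earned one virtual ticket (names and counts are invented).
tickets_earned = {"alice": 3, "bob": 1, "carol": 2}

# Flatten into a pool with one entry per ticket, so more contributions
# mean better odds without guaranteeing a win.
pool = [name for name, n in tickets_earned.items() for _ in range(n)]

winner = random.choice(pool)  # the random pick at the close of the raffle
```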
We held training sessions.
We presented an “EZ-bake recipe” to the interaction designers that stepped them through the pattern-writing process and provided tips on how to write for their peers.
Figure 2 – Slides from the tutorial on writing effective patterns
We defined incentives for management.
We found that the best incentive for getting management buy-in was to align the project’s goals with stated business goals. For example, we were able to make the case that increased consistency across the network would increase the number of return visitors and the average number of products used per session. We also demonstrated to the Chief Product Officer how he and his staff could use the library when reviewing major products before release.
Selecting, designing, and building the repository
We determined the repository should:
- be scalable
- be customizable
- be easy to use
- encourage collaboration
- allow categorization
The primary decision was whether to build or buy. We looked into a few commercial applications, but the upfront costs and the inability to modify them easily as our needs change discouraged us from going that route. Because we had a server for the design group and some technical know-how, we decided that an open-source solution would be best for us.
Within the open-source community, there’s a myriad of programming languages and databases. Since we had a UNIX server running some internal apps using MySQL and since PHP was the Yahoo! standard, we focused on content management systems that matched those technologies, although we did consider applications written in other languages.
Some of the solutions we considered included:
- Blog applications (e.g. Movable Type)
- Open source CMSs (e.g. pMachine, PHPNuke, Drupal)
- Groupware (e.g. PHPCollab)
- Wikis (e.g. Tikiwiki)
Some things we thought about when choosing our CMS:
- How easy is it to update content?
- Does it support collaboration? Can it generate diffs or do rollbacks?
- How extensive are the classification tools? How many vocabularies are supported? Does it support parent/child relationships?
- How does it handle rights? Can we set different rights for contributors, editors, and administrators?
- How easy is it to customize and extend?
Ultimately, we chose Drupal because of its breadth of capabilities, powerful taxonomy, and extensibility.
We designed and tested the UI.
Using the requirements and workflow as our guide, we created wireframes of the pattern submission and retrieval application and conducted low-fidelity user tests with our end users. Free lunch was offered as an incentive for participation in the tests.
Figure 3 – The paper prototype used to test the pattern library tool
We structured the content to make it predictable.
We developed an input form for pattern creation so that a pattern’s contents would be structured and predictable. We surveyed pattern libraries on the web to devise a base set, and after some trial and error, settled on the following fields:
Figure 4 – A sample pattern
- Title. Usually the name of the problem, solution, or element type in question.
- Author. Each pattern has one principal author.
- Contributors. For when there are co-authors.
- Problem. Written in user-centered terms, i.e. what is the problem presented to the end user?
- Sensitizing example. A single screen shot to serve as the picture worth a thousand words. Additional images may be added to the other fields; this is the one that really needs to count.
- Use when. A statement to describe the context for the problem/solution pair.
- Solution. A prescriptive checklist of to-dos. We found that this format was the most easily consumable by our time-pressed audience.
- Rationale. A set of statements that reinforce the solution above. We separate all rationale information from the solution to make the solution easier to scan and consume. This field can also be used to summarize the “forces” that other pattern languages describe.
- Special cases. Known exceptions. Often these exceptions warrant their own patterns.
- Open questions. Unknowns. Useful for documenting areas that require further research.
- Supporting research. For linking to usability reports, audits, etc.
- Parent pattern. If this pattern is a specific solution to a broader pattern, this field is used for selecting its parent.
- Related Standards. For cross-linking to related patterns and visual standards. (See Using a Pattern Library as a Body of Standards.)
- Categories. Contains the pattern library’s four vocabularies to allow users to browse by category.
- Importance of adherence rating. The application computes the median of the submitted ratings. The visualization of the rating shows 0-5 bars.
- Comments. Notes and feedback from pattern’s consumers.
The fields required to define a pattern are the Title, Problem, Use when, and Solution fields. Other fields that aren’t filled out don’t show up on the pattern detail page.
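The field rules above (four required fields; empty optional fields hidden on the detail page) can be sketched as a small validation step. This is a dict-based sketch under our own assumptions; field names follow the list above, but the functions are illustrative, not the Drupal implementation.

```python
# Required fields, per the text above; the rest are optional.
REQUIRED_FIELDS = ("title", "problem", "use_when", "solution")

def validate(pattern):
    """Reject a submission that is missing any required field."""
    missing = [f for f in REQUIRED_FIELDS if not pattern.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {', '.join(missing)}")

def detail_page_fields(pattern):
    """Empty fields are simply omitted from the pattern detail page."""
    return {f: v for f, v in pattern.items() if v}

# An invented sample submission:
pattern = {
    "title": "Pagination",
    "problem": "The user needs to browse a long list of results.",
    "use_when": "The result set is too long to show on one page.",
    "solution": "Break results into pages; show the current position.",
    "rationale": "",  # left blank, so it will not be displayed
}
validate(pattern)
visible = detail_page_fields(pattern)
```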
We made the content findable.
We realized that as the pattern library grew, finding a solution to a given problem in the library would become increasingly difficult. To this end, we developed four vocabularies for classifying the patterns:
- Element type. A list of nouns that describe the “what” of the pattern. If the pattern describes an element such as a button, field, page, or module, you’ll find a term in this vocabulary for it.
- Task type. A list of verbs that describe the “how” of the pattern. If the pattern describes a method such as sorting, navigating, searching, or communicating, you’ll find a term in this vocabulary for it.
- Application type. Terms that distinguish among patterns that are intended for different applications such as for the web or a compiled application.
- Device type. Terms that differentiate between patterns for desktop computers and those for mobile phones, TVs, PDAs, cameras, etc.
These categories didn't spring forth from the forehead of Zeus; they emerged after studying sample content and by listing the content we anticipated. Several of the vocabularies that were initially suggested had to be scrapped. In particular, we found it was counter-productive to classify patterns by their product type, location, or language. In the future we may add additional vocabularies, for example to distinguish patterns that are relevant only to double-byte character sets.
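The four controlled vocabularies above amount to a classification scheme: each pattern is tagged with terms drawn only from the allowed lists, and users browse by term. A minimal sketch, assuming dict-based patterns; the example terms come from the list above, but the function names are ours:

```python
# Controlled vocabularies; terms are examples from the text, not exhaustive.
VOCABULARIES = {
    "element_type": {"button", "field", "page", "module"},
    "task_type": {"sorting", "navigating", "searching", "communicating"},
    "application_type": {"web", "compiled"},
    "device_type": {"desktop", "mobile", "tv", "pda", "camera"},
}

def classify(pattern, **terms):
    """Attach vocabulary terms, rejecting any term outside its vocabulary."""
    for vocab, term in terms.items():
        if term not in VOCABULARIES.get(vocab, set()):
            raise ValueError(f"{term!r} is not in the {vocab} vocabulary")
        pattern.setdefault("categories", {})[vocab] = term
    return pattern

def browse(patterns, vocab, term):
    """Return the patterns filed under a given vocabulary term."""
    return [p for p in patterns if p.get("categories", {}).get(vocab) == term]

p1 = classify({"title": "Sort a table"}, element_type="module", task_type="sorting")
p2 = classify({"title": "Search box"}, task_type="searching")
hits = browse([p1, p2], "task_type", "sorting")
```

Keeping the vocabularies closed (rather than free-form tags) is what makes browsing predictable as the library grows.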
Because most of the patterns submitted are individual articles, not extensive families, one of the challenges to date is creating a coherent “language” that ties the patterns together so that the collection is greater than the sum of its parts. The library’s editor attempts to group and cross-link patterns using broader (parent), narrower (child), sibling, and related relationships. Because of the large number of authors, creating these relationships can be arduous, however.
In addition to navigating the patterns by category or by their relationship to other patterns, we also present the contents in a number of lists:
- Table of contents – an alphabetical index of the broadest patterns with the narrower patterns shown indented below their parents
- Sortable index (planned)
- By title
- By author
- By rating
- What’s new
- Recently submitted
- Recently modified (planned)
- Recently commented upon (planned)
- Recently rated (planned)
- Review queue – shows the patterns under review
Figure 5 – Selections from the pattern index and review queue
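The table of contents described above is an alphabetical roll-up of the broadest patterns with their narrower children indented beneath them. A sketch under our own assumptions (titles and the flat parent-pointer representation are illustrative):

```python
# Invented sample data: each pattern points at its parent, or None if broad.
patterns = [
    {"title": "Search", "parent": None},
    {"title": "Breadcrumbs", "parent": "Navigation"},
    {"title": "Navigation", "parent": None},
    {"title": "Search Results", "parent": "Search"},
]

def table_of_contents(patterns):
    """Alphabetical index of broadest patterns, children indented below."""
    children = {}
    for p in patterns:
        if p["parent"]:
            children.setdefault(p["parent"], []).append(p["title"])
    lines = []
    for p in sorted(patterns, key=lambda p: p["title"]):
        if p["parent"] is None:
            lines.append(p["title"])
            for child in sorted(children.get(p["title"], [])):
                lines.append("    " + child)
    return "\n".join(lines)

toc = table_of_contents(patterns)
```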
We seeded the library with content.
We decided to launch the library with content for several reasons. First, we figured having a grand opening for “an empty room” wouldn’t be compelling. Second, creating the content up front allowed us to structure the documents appropriately and build the right classification methods. Third, it allowed us to debug the application. Lastly, it provided examples for other contributors to follow.
While the library was under development, we collected patterns using a simple Microsoft Word template. Designers filled out the templates, then emailed them to the editor. These patterns were ported into the content management system in a relatively static format. When the pattern application was up and running, the content was re-ported into the new forms. If this process taught us nothing else, it was that Microsoft Word and e-mail are terrible groupware solutions. We did, however, collect a half-dozen patterns that we were able to include at launch, and it wasn't long before additional contributions began to roll in.
Using a pattern library as a body of standards
Our goal wasn’t to simply gather a body of solutions to common problems and have it sit on a dusty corner of our intranet. Instead, these patterns were meant to have some teeth. If solutions were recognized as being “The Yahoo! Way,” then we needed to ensure that they would be consistently applied across Yahoo! products.
We decided on a ratings scale.
In order for the library of interaction design patterns to serve as Yahoo!'s book of Interaction Design Standards, the patterns needed to be rated so that expectations for designer compliance could be set.
We looked at several possible ratings:
- Importance of adherence
- Strength of evidence
- Quality / Usefulness / Clarity
Both “importance of adherence” and “strength of evidence” were borrowed from the standards put together by the National Cancer Institute and available at http://usability.gov/guidelines/index.html.
We settled on "importance of adherence" as our only rating. Its purpose is to describe how important it is for a designer to adhere to the pattern when designing Yahoo! products. In a sense, it answers the question, "How important is this behavior to the Yahoo! brand?"
We abandoned "strength of evidence" as a rating after consulting with the Design Research team at Yahoo!. The design research group was at a loss for how the patterns could be evaluated against existing evidence (both from studies conducted at Yahoo! and research published on the web) in a systematic and affordable way.
We’re still considering a rating for quality or usefulness. This could be used to reward authors with community recognition for their well-crafted (and readable) patterns.
We quickly found that the ratings were ineffective unless the designers (and reviewers) knew how to interpret them. A 5-star system with “love it/hate it” describing the two ends of the spectrum wasn’t going to cut it. We came up with the following decision tree to determine what rating each pattern received.
Figure 6 – The pattern review decision tree
This common set of criteria helped normalize the ratings. Ratings that are all over the board (some 1-bar ratings and some 4-bar ratings, for example) mark a pattern as "contentious," and its median rating is not exposed in the application. Our current algorithm permits votes that are one bar above or below the median, and up to one vote that is two bars above or below. We've yet to have a contentious pattern; if we do, the plan is to use our regular monthly meeting to come to a consensus (or at least give those with outlying ratings a chance to be heard). Once an agreement is reached, votes can be amended and the median rating will appear in the application.
We currently collect votes from a team of about two dozen reviewers, of which about a dozen are active. Once nine votes have been entered for a given pattern, the pattern's median rating is exposed. Users of the library can see who has rated each pattern, but the ratings given by specific individuals are kept hidden. Both of these strategies were put into place to reduce groupthink.
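The exposure rule above (hide until nine votes; hide when contentious; otherwise show the median) can be sketched as a short function. A minimal sketch assuming integer 0-5 bar votes; the function and constant names are ours, not the application's.

```python
from statistics import median

MIN_VOTES = 9  # the median rating stays hidden until nine reviewers have voted

def exposed_rating(votes):
    """Return the median 0-5 bar rating, or None if hidden or contentious.

    Per the rule above, a pattern is contentious when any vote is more than
    two bars from the median, or more than one vote is exactly two bars away.
    """
    if len(votes) < MIN_VOTES:
        return None  # too few votes; keep the rating hidden
    m = median(votes)
    deviations = [abs(v - m) for v in votes]
    contentious = any(d > 2 for d in deviations) or sum(d == 2 for d in deviations) > 1
    return None if contentious else m

clustered = exposed_rating([3, 3, 4, 4, 4, 4, 4, 5, 5])  # tight spread
scattered = exposed_rating([1, 1, 4, 4, 4, 4, 4, 4, 4])  # all over the board
```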
We assembled a review team.
We initially nominated a group of reviewers from different business units and from different disciplines (interaction design, visual design, research). We found, unsurprisingly, that the interaction designers were the most motivated reviewers. In the future we hope to tie a designer's membership in this group more closely to his or her quarterly objectives. In this way, each reviewer will have more incentive to participate and each design director will have more say in who participates.
We continue to avoid being labeled as the “standards police.”
The ratings themselves are not the final word on compliance; they merely show the expectations of the review team. The product team and the design reviewers have the responsibility of interpreting the standards during design review.
We use design reviews to test assumptions about the presented solutions, to inform the designers of new patterns, and to facilitate close team collaboration and the discussion of emerging standards. We have consciously put ourselves in the position of information broker or facilitator rather than design cop. This approach has contributed to wider acceptance of the process and a marked improvement in the quality of the design work. As a result, we’ve enjoyed watching as consistent design solutions leapfrog from group to group.
We decided to separate out visual design and code from the pattern library.
The library of interaction design patterns is only one part of a three-pronged strategy to capture and communicate standards for Yahoo!. We are also collecting standards for visual design and code samples into their own libraries. We’ve kept these three initiatives separate from each other for several reasons.
First, the standards for interaction design, visual design, and code change at different rates. For example, the visual style for a button may change more frequently than a solution for paginating search results.
Second, they do not necessarily map to each other. For example, a pattern for Menu Item Order may not require a corresponding visual standard and there may be a dozen visual standards for typography that do not map to any one interaction design pattern.
Third, the content for interaction, visual design, and code repositories comes from different sources and the reviewers of this content have different expectations for compliance:
- The interaction design patterns are more of a grass-roots effort, coming mainly from the group of interaction designers at Yahoo! (bottom-up). This is in part due to the vast number of contexts in which the solutions are needed and that the central standards group is too small to capture solutions to such a wide variety of problems. The interaction design patterns are rated by a group of representative interaction designers.
- The visual design standards and assets are centrally managed (top-down) and are designed, written, and edited by a central group. These are tightly managed to allow the stewards of the Yahoo! brand to more easily shape Yahoo!’s online brand identity. The visual standards are vetted in design review but are essentially dictated by the creative director.
- The sample code is contributed by Yahoo!’s web development group (bottom-up) but best practices for writing code are centrally managed (top-down).
Our plan is to maintain these repositories separately but ensure they are heavily cross-linked.
Current activities and future plans
We’re currently projecting 10 – 15 new patterns per month over the next year to add to the sixty patterns currently in the library. Meanwhile, we’re collecting a list of enhancements for the pattern library application and designing and building the repository for visual standards. After the visual standards tool is in place, we’ll work with engineering on the best solution for linking these two tools with code samples. Ultimately, we plan on rolling out toolkits containing approved visual assets and code that conform to the visual and interaction standards to further reduce development time and aid under-resourced business units.
The pattern library allowed our small, centralized group to tap into the broad expertise of the Yahoo! design staff. What would have been impossible to write (authoritatively) by a small team is now being contributed to and reviewed by an expert staff. We were able to achieve this by understanding and agreeing on the problem, building a workflow that fit with the existing design process, generating buy-in by creating incentives for contributors, and by carefully designing and building an application with attention to user feedback.
We were then able to convert this library of patterns into a workable set of standards by agreeing on an appropriate rating scale and by assembling a representative group of reviewers who rate the content according to the same criteria.
Ultimately, we expect that the pattern library will result in a strengthened Yahoo! brand and a more efficient design staff.
This paper, our slides, and printable versions of selected figures are available at http://leacock.com/patterns.