UI Pattern Documentation Review

Written by: Patrick Stapleton

Introduction

User interface (UI) patterns have the potential to make software development more efficient. The prospect of such efficiency gains has led individuals and organizations to take an interest in UI patterns as a way to increase quality while reducing the costs of software development.

The very nature of UI patterns requires that they be familiar to end-users. An individual UI pattern is a discrete, repeatable unit of user experience. I refer to a collection of patterns as a library.

In many cases, less proprietary patterns are more useful in solving a design problem because they can be implemented more uniformly across platforms. This characteristic, along with the efficiency gains, makes patterns an excellent opportunity for software companies to come together and promote UI patterns to the wider development community.

Producing a common pattern library, however, implies that the patterns presented are, at the very least, consistently documented and, most probably, presented within a single classification system. Currently, though, patterns are classified and documented in varying ways across publishers, with no clear standard evident.

The problem

To date, the most common approach to propagating a single user experience standard is the development of UI guidelines and principles documentation within an organization. Development teams — usually incorporating a user experience specialist — then reference this documentation during implementation and upgrade processes.

However, as the number of systems within an organization grows, so does the effort needed to maintain the quality and consistency of the user experience. For many organizations, it is now impossible to assign much, if any, of a user experience specialist’s time to all implementation efforts, and experience has shown that the UI guidelines and principles approach to propagating a single user experience standard does not scale well.

There are two common issues, both major.

The first issue is ensuring developers are familiar with all the principles and guidelines.

Documentation that fully describes a UI standard is, by its nature, extremely detailed and complex. Getting developers to know all of this information intimately is an ongoing and often unwinnable battle.

The second major issue is that the application of guidelines and principles can be open to wide interpretation.

Requiring developers to combine guidelines and apply principles together to create a complete UI can be inefficient. This synthesis process can result in widely varying solutions to a single design problem across teams — especially when working with widely distributed and possibly culturally diverse groups. Removing these variances to create a more consistent user experience requires rework.

The solution

UI patterns mitigate, to a great extent, the problems of weight and interpretation experienced with the principles and guidelines documentation approach of the past. In essence, patterns can be seen as prepackaged solutions based on guidelines and principles.
Patterns and pattern libraries are more convenient for developers because they solve common higher-level design problems without the need for deep knowledge of often-complex guidelines and principles documentation. Also, they implement best practices, so developers don’t synthesize what are often “slightly original” solutions that would need to be reworked later.

Much of the value of a pattern to the developer lies in its less granular, more physical nature. Principles of good UI design dressed up as UI patterns add little value over traditional guidelines and principles documentation, as seen in many of the UI patterns described in The Design of Sites; examples such as “Low Number of Files” — while important as design principles or guidelines — do not deliver a usable UI component.

Creating the patterns in the first place is also important. The guidelines and principles that form the foundation of patterns still need to be developed before any patterns themselves can be developed.

Integrating UI patterns

Integrating UI patterns into the culture of software development is, to a large extent, still in its early stages. Next-generation development tools that implement patterns natively — such as the proprietary tools being developed by enterprise software companies — are now, or will soon be, in the hands of developers around the world.

Embedded drag-and-drop UI patterns hold the promise of empowering developers to create better user interfaces, faster — unsupervised by user experience specialists. While this may strike fear in the hearts of many a user experience specialist, issues of scale dictate such a pragmatic approach. Be aware, though, that they can also perpetuate problems if the UI patterns implemented are out of sync with end-user expectations.

Why standardize UI patterns?

Currently, there is no recognized standard for the classification or documentation of UI patterns, as seen by browsing through pattern libraries from:

  1. Martin Welie’s UI patterns
  2. Jennifer Tidwell’s UI Design Patterns
  3. Sari Laakso’s User Interface Design Patterns
  4. The Design of Sites: Patterns, Principles and Processes for Crafting a Customer-Centered Web Experience by van Duyne, Landay and Hong.
  5. Yahoo Design Pattern Library

The variety isn’t surprising, since applying the pattern concept to user experience design is a relatively recent phenomenon. However, the successful introduction of a single classification and documentation standard could significantly increase the value of a UI pattern library to developers by…

  • Reducing confusion among pattern versions across collections. Not surprisingly, many of the same patterns exist across collections. A standard classification system (discussed below) can help developers make sense of both these patterns and their different versions in collections across the web and in paper publications.
  • Promoting development of net new UI patterns. A clear classification taxonomy is likely to make the “holes” in the current crop of pattern libraries more apparent, which in turn hopefully will increase the pace of development of new UI patterns.
  • Providing a standard UI pattern interface. As the number of patterns increases, pattern search tools will become more important. A standard classification and documentation approach will enable developers to quickly display their UI options.
  • Promoting UI pattern adoption. A clear classification taxonomy is likely to have the effect of making patterns easier to find and in turn increase their use.

Problems with the solution: UI pattern classification approaches

The following is a high-level analysis and discussion of the classification approaches of the previously mentioned UI pattern collections. Each collection is mapped and discussed from a classification and documentation perspective.

Martin Welie’s patterns

Classification Analysis

Patterns in Interaction Design

Figure 1. Classification map of Welie’s patterns

Welie divides the patterns into three delivery methods: Web design patterns, GUI design patterns, and mobile design patterns. Within the web design patterns channel (the focus of this document), the patterns are categorized into ten groups based on a mix of content and functional subjects.

Documentation Approach

Figure 2. Documentation map of Welie’s approach

Welie’s documentation approach is simple, with a focus on visual elements to explain the function of the pattern. It can be broken into three main parts:

  • Description: This area of the documentation provides the name and image to describe the pattern.
  • Rationale: This area provides a description of the problem that is solved by the pattern, how it works, and the scope of its use.
  • Associations: This area provides links to other patterns related to the current pattern.

Jennifer Tidwell’s patterns

The following is a map of Jennifer Tidwell’s UI Design Patterns.

Figure 3. Classification map of Tidwell’s patterns

Unlike Welie, Tidwell does not take into account different delivery methods. The eight categories she does specify look to be based on functional subject areas only.

Sari Laakso’s patterns

The following is a map of Sari Laakso’s UI patterns.

Figure 4. Classification map of Laakso’s patterns

Like Tidwell, Laakso does not differentiate between delivery methods; he bases all seven of his categories on functional subject areas.

The Design of Sites’ patterns

The following is a map of the patterns presented in “The Design of Sites.”

Figure 5. Classification map of the patterns in The Design of Sites

The most extensive pattern collection of the four sampled, The Design of Sites does not specify delivery methods, and, in some cases, the items presented could be regarded as design guidelines or principles rather than patterns. Twelve categories are presented, mixing content and functional subjects.

Summarizing the classification types

From this analysis, three main types of classification are present — content subject, functional subject, and delivery platform.

Content subject classifications normally specify an application genre (for example, ecommerce and supply chain management). Examples of content subject based classifications can be found in the Design of Sites collection under “Site Genres” and in Welie’s collection under “Site Types.”

Functional subject classifications are based on a logical breakup of functionality (for example, shopping cart and two-panel selector). This is the most prevalent classification type and is found in all the collections sampled.

Delivery method is used to describe the platform on which a pattern has been designed to operate. This classification type opens up the possibility for unique patterns to be developed for the same subject classifications across platforms. This classification type has the potential to provide more resolution for developers looking to offtake a pattern within a specific UI delivery platform such as mobile, desktop, or web.

Based on the publicly available pattern libraries today, there is no clear indication as to whether “delivery method” is a valid classification type. An argument could be made that binding a pattern to a specific technology will reduce the life of the pattern as platforms develop. However, the timelessness of a pattern is of little consequence to developers whose primary goal is product delivery rather than pattern lifecycle.

Another classification type: Level

This author would like to include an additional classification type: Level.

The level classification would further divide patterns into the following areas of concern:

  1. Navigation architecture: Patterns relating to the navigation of content within an application
  2. Screen architecture: Patterns which position functionality and content within a screen
  3. Site furniture: Patterns for formatting functionality and content

In the case of the collections previously reviewed, the great majority of patterns would be classified as falling under the “site furniture” level type. However, it is this author’s view that considerable potential remains to develop patterns within the proposed navigation architecture and screen architecture level types.

A proposed classification system

Cropped version of the classification system

The above diagram describes a potential pattern library classification hierarchy. In this case, client (delivery method) classification nodes are presented at the top of the tree, similar to the Welie collection; the proposed new level classification nodes are added above subject.

Content and functional subjects would be implemented as tags because these classifications would occur across levels.
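As a rough illustration only, the sketch below models how a single pattern entry might carry this hierarchy-plus-tags classification. It is written in TypeScript with entirely hypothetical names; it is a sketch of the proposal above, not the data model of any existing pattern library.

    // Hypothetical data model for the proposed classification.
    // Delivery method and level are hierarchy nodes; content and functional
    // subjects are cross-cutting tags, since they occur across levels.
    type DeliveryMethod = "web" | "desktop" | "mobile";
    type Level = "navigation architecture" | "screen architecture" | "site furniture";

    interface PatternEntry {
      name: string;
      deliveryMethod: DeliveryMethod;
      level: Level;
      contentSubjects: string[];    // e.g. ["ecommerce"]
      functionalSubjects: string[]; // e.g. ["checkout"]
    }

    const example: PatternEntry = {
      name: "Shopping Cart",
      deliveryMethod: "web",
      level: "site furniture",
      contentSubjects: ["ecommerce"],
      functionalSubjects: ["checkout", "selection"],
    };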

Why have a classification hierarchy — aren’t filters or tags more useful?

In many cases, being able to filter by classification node as required is more flexible than drilling down through a preset hierarchy. However, a preset classification tree is also useful to:

  • Automate the generation of URLs to enable cross-linkages within the UI pattern library (see the sketch after this list).
  • Provide a simple drill-down experience for end users who have no specific problem to solve but simply wish to browse to learn or to generate ideas.
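As a minimal sketch of the first point, and assuming the classification fields introduced above, a function like the following (hypothetical names throughout) could derive a stable URL for each pattern directly from its classification nodes, so cross-links within the library never need to be maintained by hand.

    // Sketch: generate a URL such as /web/site-furniture/shopping-cart
    // from a pattern's classification nodes. Names are illustrative only.
    interface ClassifiedPattern {
      name: string;
      deliveryMethod: string; // e.g. "web"
      level: string;          // e.g. "site furniture"
    }

    function slugify(text: string): string {
      return text.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");
    }

    function patternUrl(p: ClassifiedPattern): string {
      return "/" + [p.deliveryMethod, p.level, p.name].map(slugify).join("/");
    }

    // patternUrl({ name: "Shopping Cart", deliveryMethod: "web", level: "site furniture" })
    // => "/web/site-furniture/shopping-cart"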

UI pattern documentation proposal

The value to developers of standardized UI pattern documentation is a single interface for search tools. Such tools hold the potential to streamline the uptake of UI patterns by developers with specific problems to solve, in a world with hundreds and potentially thousands of UI patterns to choose from.

UI patterns are by their nature visual. Strong support for pictorial content therefore seems an obvious requirement, and it reduces the need for long verbal descriptions that add little value next to their visual equivalents.
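To make the proposal concrete, here is a rough sketch of what a standardized, machine-readable documentation record might look like, loosely following the three parts observed in Welie’s approach (description, rationale, associations) plus the classification fields proposed earlier. All field names are assumptions for illustration; a shared schema along these lines is what would give search tools a single interface to work against.

    // Hypothetical standardized documentation record for a UI pattern.
    interface PatternDoc {
      name: string;
      images: string[];          // pictorial content first; prose kept short
      problem: string;           // the design problem the pattern solves
      solution: string;          // how the pattern works
      useWhen: string;           // scope of use
      relatedPatterns: string[]; // associations to other patterns
      deliveryMethod: string;    // classification: delivery platform
      level: string;             // classification: level
      subjectTags: string[];     // classification: content/functional subjects
    }

    // A simple search tool becomes possible once every library shares the schema.
    function findPatterns(
      library: PatternDoc[],
      query: { level?: string; tag?: string }
    ): PatternDoc[] {
      return library.filter(
        (p) =>
          (!query.level || p.level === query.level) &&
          (!query.tag || p.subjectTags.includes(query.tag))
      );
    }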

Photos for interaction

Written by: Milan Guenther

When developing user interfaces, designers increasingly use custom graphical elements. As the web browser becomes a basic technology for software interfaces, more and more elements derived from graphic and web design are replacing the traditional desktop approaches to the concrete design of human-computer interfaces.

In the near future, this development will become even more relevant. The barrier between web pages and desktop software is beginning to disappear, and modern rich-client user interface technologies such as Silverlight/WPF, Air, or Java FX enable designers to take control of the whole user experience of a software product. Style guides for operating systems like MacOS or Windows become less important because software products are available on multiple platforms, incorporating the same custom design independently of OS-specific style guides. Software companies and other parties involved are beginning to use the power of a distinct visual design to express both their brand identity and custom interactive design solutions to users.

While this implies a new freedom for designers working in the field of interactive software products, it heightens the importance of visual design in the design of user interfaces. Designers working on concrete graphic solutions for a specific interface are breaking away from established standards defined by a software vendor. It is now the responsibility of those user interface designers to choose graphical elements wisely to make a product’s interaction principles visible and usable.

Elements of interactive visual design

Following the roots of visual design in print and online communication, the design of a visually compelling and functional application must take into account different requirements, even though it uses the same methods to realize its goals: a dynamic visualization of the interactive product in the form of text, images, and colors. In contrast to pure one-way communication design, which strives to create identity and media, the main goal of such a design process for interactive products is much closer to product or industrial design — namely the creation of a product that serves the user in an optimal way. It requires strong collaboration with the disciplines of interaction design, software development, and product management.

The role of photography in software user interfaces

Photography presents both challenges and opportunities as a graphical element in user interface design. I chose photography as an example of a classic communication design instrument, but the ideas are also applicable to typography, illustration, motion design, graphics, and the like. One important aspect of these thoughts is the required collaboration between the different design disciplines involved in creating a user experience, and how to optimize team performance for the most valuable ideas and outcomes.

Case 1: Photography as content

In software applications, photography is in most cases used as a content element, since photos express situations of human life very well and thus are well suited to capture and represent a certain message. The images have a semantic meaning, communicating information to the viewer and user of the respective web or software application.

Examples of this type of application can be found not only in private photo collection software such as iPhoto but also in enterprise content management solutions for web sites and product catalogues, or in the web shop itself. To the user, the photo is not an element of decoration or design; it is the actual content, or a part of it.

On the visual design side, the challenge is to present this content in a way that makes it visible and reveals context and meaning. Photographic content tends to come to the fore due to its strong graphical impact, so other elements should be designed to support that effect and not to compete with it for the viewer’s attention.

The challenge of representing imagery content elements well in a user interface is often to provide adequate metadata-driven tools that allow enhancing images with meaning; take tagging people on Facebook as an example, which turns photos into something findable. Finding a meaningful visual representation of photographic content and this data is a common challenge for visual design and information architecture.

Case 2: Photography as design element

While the use of photography as a design element in user interfaces is rather new, there is a long tradition of using it this way in advertising-related online media. This treatment as a design element follows the rules of brand communication and takes photography as an integral part of the web site design.

But contrary to its usage as a content element, the image is used in web design as a medium to communicate a message to the user in order to create a certain context for the real content. Some sites, such as those of financial institutions or software suppliers, work with stock-like photography showing people or buildings, while other businesses can combine site content and corporate communication in one image, as on fashion sites.

Benetton Web Site

Benetton uses the photo on their home page to convey both a product and a brand message to visitors. The photo is the focus, but it is perceived more as a visual expression of emotion than as actual site content. The web design uses the photo the way an advertisement would: it is part of the site’s visual design and has been chosen by the designers. The product, derived from the site’s content, is turned into the medium that makes an impression on the visitor.

Photography in interactive media is often a trigger for engagement and interaction. Interaction designers working on the product’s interaction flows can thus provide visual designers with key information to select and apply visual elements, in order to start the conversation, and keep it alive.

Case 3: Photography in software UI design

Unlike other digital products, the visible part of software usually makes no significant use of photography as a means of communication design. Today’s desktop software interfaces consist of text, rectangular areas, and icons, along with a lot of transparency or 3D effects. Where it is not a necessary content element, photography tends to appear only in the splash screens of desktop applications.

In web interfaces, static images in header bars are quite common, a result of those applications’ “hybrid” character, somewhere between a software product and a web site. In most cases, the photo serves as a decorative element with no semantic meaning and is thus reduced to a very small amount of screen space; it is not important for the product’s original purpose. This is done in order to provide as much space as possible for the informational content that is useful to the user.

SAP Enterprise Portal

The image above shows SAP’s enterprise portal product in a standard visual design. The small photo showing a bridge in the header bar is part of the UI design, while the images at the bottom are content elements related to the text messages.

As in web design, the image is used here as an element of design, but it loses all its visual power due to its cramped position in a design that puts all emphasis on the representation of information. The “mise en scène” of the interface suffers from the poor integration of the photographic element, totally separated from all information. Its meaning in the application context is reduced to a vague bridge metaphor referring to the function of a portal.

The best of both worlds: towards a new quality

With every release, software providers take a step toward a custom graphical representation and improve the visual design quality of their products. To take real advantage of photography as a medium, it needs to be treated differently than it is today in the software industry.

At the same time, a lot of effort is being made to make applications more “shiny and glossy,” to better imitate real-world structures on the screen. Sometimes, as in current reporting tools for business intelligence, this additional glitter reduces the visual perception of information instead of enhancing it.

The following examples and recommendations are not always easy to follow, because meaningfully integrating this medium into a UI design centered on representing information and providing an efficient tool is a difficult task. Nonetheless, visual elements such as photography have the power to convey a message instantly and powerfully to the user, and to establish a visual identity. Designers should use these possibilities to direct the user’s attention in support of a holistic interaction design, not to distract her with decorative elements and visual clutter.

Examples for photography in interactive applications

Designklicks

This example screenshot shows Designklicks (now seen.by), a German website that collects and tags user-generated imagery. Just like Flickr and other photo-centric web sites, the images are the focus of the design and are visually strictly separated from other design elements like icons, logos, buttons, and links. For a visual representation of the complex information architecture, the site allows the user to sort and present the content in different ways, from a simple grid to a navigable 3D space.

Space by the Barbarian Group for Getty

These screens are taken from an art project for gettyImages, done by the Barbarian Group. It uses widescreen photos to build a three-dimensional flow of cascaded rooms, connected to each other by graphical signage elements appearing in the images.

Société Générale Customer Portal

The bank Société Générale uses a photo as the main art on their web site, emphasizing the fact that they address everyone with their services. The main navigation appearing on the start page is embedded into the photo, but at the same time arranged in a clearly separated layer above the image.

VDW Fine Art Website

Photography is the main design element of Van De Weghe Fine Art, an art gallery in New York. All graphic design elements remain very restrained, while the full-screen photo is used to create a virtual room for information and interaction.

Take the blinkers off, and think about experiences as a whole

People in the roles of information architects or interaction designers tend to concentrate on their part of the job and leave subsequent visual decisions to the graphic or visual designers, which is of course always a good way to start. Nevertheless, all designers (including the two disciplines mentioned before) should be able to actively think about and contribute to the concrete, sensual appearance of the final product, since this is what design is all about.

So why post this on a site dedicated to the “design behind the design”? Because interaction designers and information architects have become strong conceptual thinkers, driving an experience in terms of its concept as well as its soul. Visual design should enhance and implement this vision, which is in fact in most cases the contrary of “making things pretty.”

Recommendations for photography in next-generation interfaces

  • Integrate the images into the interaction design. This can be achieved by making areas responsive to user behaviour, enhancing their function from visual elements to instruments of interaction (a minimal sketch follows this list). Due to its realistic and nonverbal nature, photography can be equally or more powerful than icons, buttons, or other classic interface elements.
  • Work with screen space. Place images so that they have a real impact on the overall appearance instead of putting them into small, banner-like screen areas.
  • Photography invokes an emotional reaction and can create a certain ambiance more easily than other media. Use pictures that make the user feel comfortable and that are adequate to the application context.
  • Clarity, structure, movement, separation, union – photos can convey messages instantly to the viewer by means of blur, motion, composition, and of course the motif. Work with these as design elements.
  • If used as a content element, think about alternatives to simply placing photography on a grid. There are many possibilities for making images “tangible” to the user. Think of multiple layers, movable objects, or 3D approaches.
  • Keep the subject of the application and the nature of the content in mind while designing. Choose photos that convey a real meaning and make sense in the application context. Avoid standard (stock) images or those with only decorative function. Prefer custom-made images tailored to your intentions and requirements.
  • Combine and integrate all elements to create a holistic interface design where all visual elements work together and make the interface.
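As a minimal sketch of the first recommendation — and nothing more than that — the browser snippet below (TypeScript, with hypothetical element ids and a hypothetical target URL) shows one way a photo can be promoted from decoration to an instrument of interaction: it responds to hover by revealing contextual navigation, and it treats a click as a real navigation action.

    // Minimal sketch; element ids, class names, and the target URL are
    // placeholders, not taken from any of the sites discussed above.
    const photo = document.getElementById("hero-photo");

    if (photo) {
      // Reveal contextual navigation when the user engages with the image.
      photo.addEventListener("mouseenter", () => {
        photo.classList.add("is-active");
        document.getElementById("photo-navigation")?.classList.remove("hidden");
      });

      // Treat a click on the photo as navigation, not decoration.
      photo.addEventListener("click", () => {
        window.location.href = "/collection/spring";
      });
    }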

See also:

Interactive Identity: Bridging Corporate Identity and Enterprise IT
Visible Narratives: Understanding Visual Organization by Luke Wroblewski
10 ways by gettyImages
seen.by

Coming soon:
Part II – Typography in User Interface Design

All It’s Cracked Up To Be

Written by: Chris Baum

When the Web first emerged, there was a hole in the user experience world. Many people practiced interaction design, but their community was shoe-horned into those of other disciplines like IA, HCI, and Usability. At the time, most of the UX community felt like that was sufficient.

Then the Web started to change, and the conversation with it. That hole seemed all the more gaping. The “IxDA”:http://www.ixda.org/ was formed in 2003 and fit nicely into a space amongst those other communities of practice. The IxDA discussion list was (and still is) an interesting place to be; it nurtured conversations different from those happening elsewhere. They were all about interaction design, and that purity lent a focus that took the ideas of Web 2.0 and ratcheted up the thinking several notches.

As they reached a certain concentration point, it was obvious to the IxDA faithful that a conference was a logical next step. All those other major practices in the UX community of practice had their own events. The lack of an interaction design gathering during the conference gauntlet was quite obvious, but who had time to put on the show?

So they set out to design a conference that was completely about interaction design, for interaction designers, and designed by interaction designers.

I attended that first conference, “Interaction 08”:http://interaction08.ixda.org, last year in Savannah. The event was fantastic. The partnership with the “Savannah College of Art & Design”:http://www.scad.edu/ (SCAD) raised the bar, giving the conference unusually deep connections into the community. That’s interaction design.

The speakers were experienced and incredibly varied, covering the hallowed ground (Alan Cooper, Bill Buxton) and the vanguard (Bill DeRouchey, Matt Jones, David Armano), all the while adding some key folks from other disciplines (Jared Spool, Malcolm McCullough, Chris Conley). See “last year’s recordings”:http://interaction08.ixda.org/videos.php. Even though they are one year old, you will find something inspiring.

Your inspiration might be:
* Bill Buxton’s admonishment to throw away five designs before keeping one
* Matt Jones telling the world how they created Dopplr, making great design sound like no big deal
* Sigi Moeslinger’s images of the NYC subway car that she designed so that kids are not able to climb the hand rails

All of the sessions were filmed and placed online within days of the conference ending. (Though, as an IA, I have to protest that the slides were often not given sufficient treatment.)

Interaction08 IRL: Wayfinding arrows from Savannah
(c) L. Halley as posted on “flickr”:http://flickr.com/photos/lanehalley/2254195910/

We constantly came across little touches that you would never expect from a conference. A great example of this was the arrows chalked on the sidewalk between the three buildings that housed the conference sessions. Color-coded by track (as shown on your badge), they were incredibly useful and, like Savannah, quite charming.

No, not everything was perfect, but that should never be the expectation. I challenge you to find a better-run example of a conference’s initial event, especially one that was planned for 250 people but had 400 show up. And placing the conference in the small but interesting and cozy hamlet of Savannah, GA, was a stroke of genius. Interaction 08 had an intimate air. Plenty of distractions allowed us to escape from the geek talk, but the city didn’t pull you away as most do.

Vancouver is a great town, but I hope that the IxDA will consider doing something similar to Savannah next year.

Why am I talking about this now? Well, to be honest, I feel like I haven’t done enough to let people know how great the conference was. Here we are, two weeks from “Interaction 09”:http://interaction09.ixda.org/program.php in Vancouver, and I feel nostalgic about last year and realize how much I will miss being able to go this year.

Even if you can’t don your superhero costume and get to Interaction 09 this year (which you should do if you can), think about it next year. Interaction is fresh, vibrant, and takes a usefully different perspective on the issues we encounter every day. I know I’ve designed differently because of that experience.

Keep an eye out on Boxes and Arrows as we cover the conference in Vancouver. Whitney Hess (“twitter”:http://twitter.com/whitneyhess/ “website”:http://whitneyhess.com/blog/) will be there to keep an eye out for the nuggets of wisdom. After the conference, she’ll provide a full report.

Viva la Interaction 09, and Happy Conference Season to all!

Flowmaps and Frag-Grenades, Part 2

Written by: Bryce Glass

I’d like to talk specifics a bit. I’m sure there will be some readers at B&A who aren’t gamers, and probably even more who haven’t played Halo—so my apologies to those folks—but… describe in some detail exactly what you contributed to the finished product.

When I look at Halo 3, what ‘pieces’ of the experience did you work on?

I worked on the IA, navigation and screens for the game shell; the social design for the game for systems such as the party system, matchmaking systems and sharing systems; on rewards systems such as the stats, medals and experience ratings; and also on how that user experience extended to the web through Bungie.net. I also worked on the theater features such as film clips and screenshots, and on the Forge “in-game” UI. My compatriot David Candland handled the in-game HUD in addition to collaborating with me on the design, look and feel for the overall UI and specifically handling the visual design for the game. Aaron Lemay was the art and graphic design lead for our team, including Bungie.net. Max Hoberman was the lead for the entire multiplayer and UI team during the planning stage of the project.

The information architecture and navigation includes all of the screens and flow to support the game experience outside of the game—we refer to this as the “Game Shell” UI. With Halo 3 we started by identifying what the “core game experiences” would be for the game and grouping them into “modes”.

These modes were:

  • Campaign: The story mode where players play through an adventure either solo or cooperatively.
  • Matchmaking: Players are matched with other players over the internet based on similar skills or experience and based on game preferences to play games that are controlled by Bungie matchmaking.
  • Custom Games: Players set their own game rules and maps in a player-hosted game lobby.
  • Forge: Players can customize maps to play in Custom Games or to share with the community.
  • Theater: Players can view films from any game mode and take screenshots.

Do these modes then inform the IA of the shell?

Grouping the experiences as modes allowed us to start with a foundation for the overall player experience and a baseline for the information architecture. Each of the modes supports many options, but these five modes have unique characteristics that support a “focused” player experience within the mode over a period of time. With the priority that “everything is social,” each of these modes is designed to support from 4 to 16 players either locally, on System Link, or over Xbox Live, so we gave each of these modes its own “lobby” where players could gather to share the experience.

In addition to focusing the core experiences in the game, this lobby system sets up the infrastructure for our party system. In Halo a “party” is a group of players that gather to play together, particularly over Xbox LIVE. The party leader is the player who makes decisions for what the party will do together, and the system allows players to stay together and do anything they want without breaking up. In Halo 2 this was termed the “virtual couch”…

Yeah, I recall that H2 was really revolutionary at the time—made it so easy to form a group and hang out for the night…

It’s like sitting on the couch together—if you decide you want to switch from one game mode to another on Xbox LIVE you can do it together just like if you were sitting on the couch with your friend. This is a very big deal on consoles because many online systems do not have this flexibility and it is not always easy to get together and stay together online.

The end result was a fairly simple information architecture for our game shell. Each mode has a lobby. Within the lobby, the specific options are contextual to the game mode. For example, in Campaign the main options are to select a level or difficulty for the story, whereas in Custom games the main options are to select a game type or map to play. The lobbies themselves are “locations” for players to gather into a party and play together and once players are together they can easily switch modes from within the lobby system to travel together to try a different mode. For example, a party of players may decide to customize a map together in Forge, then switch over to Custom Games to play on the map they just created.

The other major areas for player experience are community, personal identity, sharing, and settings. These are very much tied to a player’s personal profile and so in the information architecture these are all presented in a global menu that can be accessed anytime by pressing the “Start” button. The menu is always tied to the identity of the player who presses the button.

Regarding navigation and orientation, our goal was that the player always understands where they are in the game and that menus are in most cases only a couple of levels deep, with the player only a few clicks away from a core location. Another benefit of grouping the experiences into modes is that the main experiences for the game are easily discoverable from the main menu.
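For readers who prefer to see structure as data, the sketch below restates the shell information architecture just described — modes, per-mode lobbies with contextual options, and a global profile-bound menu — as a small TypeScript model. It is purely illustrative: these are not Bungie’s types or code, and all names are assumptions.

    // Illustrative model of the described game shell IA; not actual game code.
    type Mode = "campaign" | "matchmaking" | "customGames" | "forge" | "theater";

    interface Lobby {
      mode: Mode;
      options: string[];   // contextual to the mode, e.g. level/difficulty for
                           // Campaign, or game type/map for Custom Games
      maxPlayers: number;  // each lobby supports roughly 4 to 16 players
    }

    interface GameShell {
      lobbies: Record<Mode, Lobby>;
      currentMode: Mode;    // the party switches modes together ("virtual couch")
      globalMenu: string[]; // community, identity, sharing, settings (via Start)
    }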

What kind of process did you follow?

The overall timeline for game development was “pre-production,” where the studio teams plan what they want to do for the project and evaluate scope, then “production,” where we execute on the design. At the end of pre-production each team submits an overall design document to the leadership group and the project features are approved. For the interface and experience this was a pretty detailed document covering the overall information architecture and screens for the game. This is similar to a product requirements document, but in the games world these are design documents. Over the course of the project the design evolved in some places or was scaled back in others. A great idea may be recognized well into production and is never discarded automatically, but anything new that is proposed during production is weighed against other features that are in development.

Regarding design process, we targeted the foundation first: the information architecture and systems that would support the different features in the game, as well as the overall guiding principles for the game. This allowed us to understand where everything fit.

Then we tackled the major features based on scope and dependencies. Each of these “major features” would cover many areas of the game. For example, the lobby system would provide the foundation for many other features and was also a dependency in supporting the overall IA for the project. It included the “shell” for the interface, the player roster that shows who is in your game lobby and the core navigation for the information architecture. For each major “feature” set, I would put together a proposal for the feature using screen flow “posters” that outlined flow and also detailed screen requirements. We would then review these proposals with the team members that had an interest in the UI. From there we would refine and build out detailed design documents to support the development. Once the feature was built and in the game we would verify that the features were working according to specification through in-game testing.

We also had great support from the Microsoft usability lab. User researchers were part of our review process and provided heuristic analysis of the proposed designs, and also supported usability testing for both the early “prototype” ideas and later with the actual game.

Would your design artifacts look totally familiar to most practicing Interaction Designers? Wireframes, flows, that kinda thing?

Absolutely. The format I found most useful was poster flows. These are large-format posters with detailed wireframe screens, navigation, and flow decisions for a feature area. These would include detailed specs and use cases for specific features near the screen or decision point on the poster where they were relevant. I would print these out and post them on the wall near the UI pit, and also post them internally as PDF documents.

The posters allowed everyone in the studio to get an overview of the feature by reviewing the printed poster on the wall, and the engineers and QA team would use the PDF version as the spec while developing and testing the feature. I preferred this format because it outlined “the big picture” graphically, so it was easy to collaborate and refine as a team. It was also easier to update than a detailed 50-page Word document. In many cases, the poster on the wall would be the most up-to-date spec because—as we were developing the feature—our team collaborated to work through issues together using the printed posters, and we would update the poster specification with markers as we refined the direction. The QA team called the poster wall the “wall of truth.”

I also put together design documents for the main feature areas such as matchmaking, the party system, sign-in and profile, etc. These were Word documents with detailed specs, or in some cases Excel spreadsheets. The Word documents started with an IA diagram and an overview of how the feature worked in context with the core shell UI, and then outlined specific specifications for each feature. Early in the project I also had wireframe “prototypes” in PowerPoint to walk through certain use cases to explain an idea and get feedback.

Did you do any prototyping of concepts? And how about tools in general? Does Bungie have proprietary tools for screen design and prototyping?

We conducted rough prototyping during planning to test our concepts in a usability lab or to get feedback on concepts, and we also put together a polished Director demo to present the final interface proposal to the team at the end of the pre-production phase.

On the rough prototyping we worked with Randy Pagulayan and John Hopson from the Microsoft Game Studios User Research group to test the concepts in the usability lab. We put together a script for the prototype, then I created wireframe screens in Illustrator and John coded the screens into a prototype so that test subjects could use an Xbox 360 controller to navigate it. Randy, John, and our team spent about three weeks running the prototype through tests and then rapidly iterating on ideas for matchmaking, the core game shell interface, and the party system.

The content was all fakery—I think we called the game in the prototype “Mecha”—but it was designed to confirm the fundamental direction for our user experience. The lab setup and process were top notch, and I really have to give props to the Microsoft usability team. The process helped us refine our thinking and have confidence in the information architecture and core navigation. In fact, the final prototype from those sessions is very close to what we shipped in the final game.

David Candland, Max Hoberman and I then put together a polished demo in Director that was scripted to run through the main use cases for our proposed interface direction. We used this to present our proposed direction to the team and Max and the leads used this to evaluate the direction, gather feedback and reach consensus on feature sets and final direction as we moved into production.

Thanks, Colm!

Note: shortly after Halo 3 shipped, Colm left Bungie to work with Max Hoberman at Certain Affinity, a game design and development company based in Austin, TX.

Flowmaps and Frag-Grenades, Part 1

Written by: Bryce Glass

By any measure, Halo 3 is one of the most wildly successful consumer software interfaces in recent memory: more than 1 million players played the game in its first 24 hours on Xbox Live; over 8 million copies sold to date; and “over 100,000 pieces of user generated content being uploaded daily […] 30 percent higher than YouTube on a daily basis.” It’s probably safe to say that more cumulative man-hours have already been spent in Halo gaming lobbies than in Microsoft Word! But H3 is distinguished for another reason, too. It’s one of the earliest—and definitely one of the highest-profile—mass-market video games to benefit from the contributions of a dedicated interaction designer.

Colm Nelson was the interaction designer for Halo 3 and has been a working UX designer since 2000. Before joining Bungie (the Studio that produces the Halo series), Colm’s background was largely in Internet consumer applications, with a heavy bent toward entertainment software. Colm’s experience is unique, but it’s part of a growing trend in the gaming industry toward employing UX professionals. Colm would like to see this trend continue, and was gracious enough to speak about it with us, and share some insight into the intersection between his ‘traditional’ UX background and his job duties at Bungie.

Hi Colm—I’d like to thank you for taking the time out to speak to the B&A community. Given the audience here, I thought this emerging trend—this matriculation of interaction designers into the gaming world—is something that folks would want to know more about…

Online systems that facilitate player experiences around social interaction, custom content sharing, and online communities have received a lot of attention from both the gaming press and fans, and they are definitely a hot trend in gaming. The gaming press has even begun to draw comparisons between these features and YouTube, MySpace, and Facebook. My observation is that developers that are offering more features in [the] user experience around the game are seeing more of a need to specialize and fill roles specifically around user experience and interface design.

Games with success in these areas have generally done a good job developing a solid feature set and matching the social goals of gameplay with the accessibility and usability of the features. Ultimately these features add to the longevity of a game’s popularity, which translates directly to sales. I think as a result there are more opportunities for traditional interaction designers in the games business.

I’ve met developers that are actively recruiting from traditional software interaction design to take ownership of these features and if you look around you’re starting to see postings for UI designers—both Bungie and Blizzard are actively recruiting interaction designers and experience designers. There are also studios that are championing player experience research and design such as XEOdesign, Inc.

But I also think that if you look around you’ll see that it’s not as clearly defined a role in all game companies as it is in traditional software, so I think as a trend it’s fairly early. My impression is that in many game companies the interface and experience design is handled by either designers or artists who are also responsible for the overall game design. The good news is that if you are an interface designer with a passion for games, there are definitely opportunities out there.

Let’s start at the beginning. I actually remember seeing the job req. at Bungie that you filled … it even used the term ‘Interaction Designer.’ My jaw almost dropped—design jobs in the gaming industry typically focus on character design, level design, gameplay and mechanics. How did Bungie ‘catch religion’ about strong interaction design? About paying attention not just to the core gaming experience, but also all of that scaffolding that gets you into the game? The experience around the game?

Yeah, I had the same reaction when I saw the posting. I’d been looking for opportunities in the games industry for some time and had not seen any positions related to interaction design, so when I saw the posting I was amazed.

The guy that hired me, Max Hoberman, was the online, UI, and multiplayer design lead for Halo 2. Max and the team at Bungie are really passionate about the user experience around the game and also about usability. It’s just part of the culture of the studio. You can see the results in the design of the party system and the matchmaking system from Halo 2. Heading into Halo 3 there was plenty of ambition for the social experience and the features around the game, so the team decided to hire a dedicated interaction designer.

And how did you get the job? 😉

As soon as I saw the position I put together a portfolio and cover letter that said I wanted to help Bungie in their quest for world domination. I managed to get a phone interview with Max, which went OK. His feedback was that he enjoyed our conversation but if we had a second conversation he expected me to be more critical with my observations about what could be improved from Halo 2 and Bungie.net. This was on a Friday. The “if” felt pretty dicey to me so I decided to be proactive.

I worked all weekend on a concept document on ideas to improve Halo 2 and fired it off on Sunday night at 3am. I wasn’t sure how it would be received, but it paid off because I got an invitation to visit Bungie for an interview. I flew to Seattle to meet the team for a full-day interview and was really impressed with the energy and passion that they had for design and the experience around the game. It was a lot of fun—I was also passionate, and the interview felt like a series of brainstorming sessions as we discussed problems and ideas and how we might solve them. I guess it went pretty well because they offered me the job!

Describe the development team to me. I (like you, before your time at Bungie) come from a web & consumer applications background with roles like Product Manager, Project Manager, Developers, Designers, Researchers. Is game development roughly the same? How were you situated on the team?

There are similarities. It is still software design so all of the practical considerations still apply—you need to manage the project well in order to succeed and you need the resources to make it all come together. Producers, engineers, designers, researchers and QA all play a role on the team. Producers at Bungie are roughly equivalent to project managers from my previous experience, although I think the producer role varies quite a bit across studios. But at the same time you have cinematics, art, modeling and animation that are also core to the project.

There’s really not a “product manager” role, at least at Bungie. The team makes pitches for the game, the leads of the studio then decide what will be greenlit for production, and the team leads propose and drive feature sets for the project. It’s a very collaborative process and it is driven by the leads of the various disciplines. An example is that in designing the online experience and interface plans we solicited feedback, then proposed features and prototyped “proofs of concept” in order to land on the feature sets that would be developed for Halo 3.

Was there a bit of culture shock moving into the gaming world? Did folks on the team generally ‘get’ what you were brought onboard to achieve?

Yeah, there was a bit of culture shock for me. Mainly because some of the tech, process and roles on the team were new to me. As far as people getting my role, I’d say it was about the same as what an interface designer typically encounters when joining a new team. Definitely the core team responsible for interface and social design had clear goals for how the interface design process would work and understood what I was tackling—we tackled it together as a team. I was really surprised at how important interface design and usability was to the entire team—it was awesome! And at a higher level, even if all the folks didn’t get the details about process, they were supportive and as a rule folks at Bungie are really good at giving feedback on concept proposals and contributing ideas.

[Stay tuned for another installment of Colm Nelson, designer and gamer.]