Photos for interaction

Written by: Milan Guenther

When developing user interfaces, designers increasingly use custom graphical elements. As the web browser becomes a basic technology for software interfaces, more and more elements derived from graphic and web design are replacing the traditional desktop approaches to the concrete design of human-computer interfaces.

In the near future, this development will become even more relevant. The barrier between web pages and desktop software is beginning to disappear, and modern rich-client user interface technologies such as Silverlight/WPF, AIR, or JavaFX enable designers to take control of the whole user experience of a software product. Style guides for operating systems like Mac OS or Windows become less important because software products are available on multiple platforms, incorporating the same custom design independently of OS-specific style guides. Software companies and other parties involved are beginning to use the power of a distinct visual design to express both their brand identity and custom interactive design solutions to their users.

While this implies a new freedom for designers working in the field of interactive software products, it also increases the importance of visual design in user interface design. Designers working on concrete graphic solutions for a specific interface are breaking away from established standards defined by a software vendor. It is now the responsibility of those user interface designers to choose graphical elements wisely to make a product’s interaction principles visible and usable.

Elements of interactive visual design

Following the roots of visual design in print and online communication, the design of a visually compelling and functional application must take into account different requirements, even though it uses the same means to realize its goals: a dynamic visualization of the interactive product in the form of text, images, and colors. In contrast to pure one-way communication design striving to create identity and media, the main goal of such a design process for interactive products is much closer to that of product or industrial design: the creation of a product that serves the user in an optimal way. It requires strong collaboration with the disciplines of interaction design, software development, and product management.

The role of photography in software user interfaces

Photography presents both challenges and opportunities as a graphical element in user interface design. I chose photography as an example of a classic communication design instrument, but the ideas are also applicable to typography, illustration, motion design, graphics, and the like. One important aspect of these thoughts is the required collaboration between the different design disciplines involved in the creation of a user experience, and how to optimize team performance for the most valuable ideas and outcomes.

Case 1: Photography as content

In software applications, photography is in most cases used as a content element, since photos express situations of human life very well and are thus well suited to capture and represent a certain message. The images have a semantic meaning, communicating information to the viewer and user of the respective web or software application.

Examples of this type of application can be found not only in private photo collection software such as iPhoto but also in enterprise content management solutions for web sites and product catalogues, or in web shops themselves. To the user, the photo is not an element of decoration or design; it is the actual content, or a part of it.

On the visual design side, the challenge is to present this content in a way that makes it visible and reveals context and meaning. Photographic content tends to come to the fore due to its strong graphical impact, so other elements should be designed to support that effect and not to compete with it for the viewer’s attention.

The challenge of representing image content well in a user interface often lies in providing adequate metadata-driven tools that allow images to be enhanced with meaning; take tagging people on Facebook as an example, which turns photos into something findable. Finding a meaningful visual representation of photographic content and this data is a common challenge for visual design and information architecture.

Case 2: Photography as design element

While the use of photography as a design element in user interfaces is rather new, there is a long tradition of using it this way in advertising-related online media. This treatment as a design element follows the rules of brand communication and takes photography as an integral part of the web site design.

But contrary to its usage as a content element, the image is used in web design as a medium that communicates a message to the user in order to create a certain context for the real content. Some sites, such as those of financial institutions or software suppliers, work with stock-like photography showing people or buildings, while other businesses can combine site content and corporate communication in one image, as fashion sites do.

Benetton Web Site

Benetton uses the photo on their home page to convey both a product and a brand message to visitors. The photo is the focus, but it is perceived more as a visual expression of emotion than as actual site content. The web design uses the photo the way an advertisement would: it is part of the site’s visual design and has been chosen by the designers. The product, derived from the site’s content, is turned into the medium that makes an impression on the visitor.

Photography in interactive media is often a trigger for engagement and interaction. Interaction designers working on the product’s interaction flows can thus provide visual designers with key information to select and apply visual elements, in order to start the conversation, and keep it alive.

Case 3: Photography in software UI design

Unlike in other digital products, the visible part of software usually makes no significant use of photography as a means of communication design. Today’s desktop software interfaces consist of text, rectangular areas, and icons, along with a lot of transparency or 3D effects. Unless they are a necessary content element, photos are used only in the splash screens of desktop applications.

In web interfaces, static images in header bars are quite common, a result of those applications’ "hybrid" character between software product and web site. In most cases, the photo serves as a decorative element with no semantic meaning and is thus confined to a very small area of the screen; it is not important for the product’s original purpose. This is done in order to provide as much space as possible for the informational content that is useful to the user.

SAP Enterprise Portal

The image above shows SAP’s enterprise portal product in a standard visual design. The small photo showing a bridge in the header bar is part of the UI design, while the images at the bottom are content elements related to the text messages.

As in web design, the image is used here as a design element, but it loses all its visual power due to its cramped position in a design that puts all emphasis on the representation of information. The "mise en scène" of the interface suffers from the poor integration of the photographic element, totally separated from all information. Its meaning in the application context is reduced to a vague bridge metaphor referring to the function of a portal.

The best of both worlds: towards a new quality

With every release, software providers take a step towards a custom graphical representation and improve the visual design quality of their products. To take real advantage of photography as a medium, it needs to be treated differently than it is treated today in the software industry.

At the same time, a lot of effort is being made to make applications more "shiny and glossy" and to better imitate real-world structures on the screen. Sometimes, as in current reporting tools for business intelligence, this additional glitter reduces the visual perception of information instead of enhancing it.

The following examples and recommendations are not always easy to follow, because meaningfully integrating this medium into a UI design centered on representing information and providing an efficient tool is a difficult task. Nonetheless, visual elements such as photography have the power to convey a message to the user instantly and powerfully and to establish a visual identity. Designers should use these possibilities to direct the user’s attention in support of a holistic interaction design, not to distract her with decorative elements and visual clutter.

Examples of photography in interactive applications

Designklicks

This example screenshot shows Designklicks (now seen.by), a German website that collects and tags user-generated imagery. Just like Flickr and other photo-centric web sites, the images are the focus of the design and are visually strictly separated from other design elements like icons, logos, buttons, and links. To give the complex information architecture a visual representation, the site allows the user to sort and present the content in different ways, from a simple grid to a navigable 3D space.

Space by the Barbarian Group for Getty

These screens are taken from an art project for gettyImages, done by the Barbarian Group. It uses widescreen photos to build a three-dimensional flow of cascaded rooms, connected to each other by graphical signage elements appearing in the images.

Société Générale Customer Portal

The bank Société Générale used a photo as the main art on their web site, emphasizing the fact that they address everyone with their services. The main navigation appearing on the start page is embedded into the photo, but at the same time arranged in a clearly separated layer above the image.

VDW Fine Art Website

Photography is the main design element of Van De Weghe Fine Art, an art gallery in New York. All graphic design elements remain very reduced while the full screen photo is used to create a virtual room for information and interaction.

Take the blinkers off, and think about experiences as a whole

People in the roles of information architects or interaction designers tend to concentrate on their part of the job and leave subsequent visual decisions to the graphic or visual designers, which is of course a good way to start. Nevertheless, all designers (including the two disciplines just mentioned) should be able to actively think about and contribute to the concrete, sensual appearance of the final product, since this is what design is all about.

So why post this on a site dedicated to the "design behind the design"? Because interaction designers and information architects have become strong conceptual thinkers, driving an experience in terms of its concept as well as its soul. Visual design should enhance and implement this vision, which is in fact in most cases the contrary of "making things pretty."

Recommendations for photography in next-generation interfaces

  • Integrate the images into the interaction design. This can be achieved by making image areas responsive to user behaviour, enhancing the image’s function from a visual element to an instrument of interaction (see the sketch after this list). Due to its realistic and nonverbal nature, photography can be equally or more powerful than icons, buttons, or other classic interface elements.
  • Work with screen space. Place images in a way that they have a real impact on the overall appearance instead of putting them into small banner-like screen areas.
  • Photography invokes an emotional reaction and can create a certain ambiance more easily than other media. Use pictures that make the user feel comfortable and that are appropriate to the application context.
  • Clarity, structure, movement, separation, union – photos can convey messages instantly to the viewer by means of blur, motion, composition, and of course the motif. Work with these as design elements.
  • If used as content element, think about alternatives to simply placing photography on a grid. There are a lot of possibilities to make images "tangible" to the user. Think of multiple layers, movable objects, or 3D approaches.
  • Keep the subject of the application and the nature of the content in mind while designing. Choose photos that convey a real meaning and make sense in the application context. Avoid standard (stock) images or those with only decorative function. Prefer custom-made images tailored to your intentions and requirements.
  • Combine and integrate all elements to create a holistic interface design in which all visual elements work together to form the interface.
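
A minimal sketch of the first recommendation above, in plain XHTML and CSS: the photo area itself responds to the user, revealing a navigation target only while she engages with the image. The file name, IDs, and link target are purely illustrative, not taken from any of the sites discussed here.

    <div id="hero-photo">
      <a href="/collection"><img src="collection.jpg" alt="Customers browsing the new collection" /></a>
      <p id="photo-caption">Explore the collection</p>
    </div>

    <style type="text/css">
      /* The caption stays out of the way until the visitor engages with the photo,
         so the image acts as a trigger for interaction rather than as decoration. */
      #photo-caption { visibility: hidden; }
      #hero-photo:hover #photo-caption { visibility: visible; }
    </style>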

See also:

Interactive Identity: Bridging Corporate Identity and Enterprise IT
Visible Narratives: Understanding Visual Organization by Luke Wroblewski
10 ways by gettyImages
seen.by

Coming soon:
Part II – Typography in User Interface Design

All It’s Cracked Up To Be

Written by: Chris Baum

When the Web first emerged, there was a hole in the user experience world. Many people practiced interaction design, but their community was shoe-horned into those of other disciplines like IA, HCI, and Usability. At the time, most of the UX community felt like that was sufficient.

Then the Web started to change, and the conversation with it. That hole seemed all the more gaping. The “IxDA”:http://www.ixda.org/ was formed in 2003 and fit nicely into a space amongst those other communities of practice. The IxDA discussion list was (and still is) an interesting place to be; it nurtured conversations different from those happening elsewhere. They were all about interaction design, and that purity lent a focus that took the ideas of Web 2.0 and ratcheted up the thinking several notches.

As the community reached critical mass, it was obvious to the IxDA faithful that a conference was the logical next step. All those other major practices in the UX community had their own events. The lack of an interaction design gathering during the conference gauntlet was quite obvious, but who had time to put on the show?

So they set out to design a conference that was completely about interaction design, for interaction designers, and designed by interaction designers.

I attended that first conference, “Interaction 08”:http://interaction08.ixda.org, last year in Savannah. The event was fantastic. The partnership with the “Savannah College of Art & Design”:http://www.scad.edu/ (SCAD) raised the bar, giving the conference unusually deep connections into the community. That’s interaction design.

The speakers were experienced and incredibly varied, covering the hallowed ground (Alan Cooper, Bill Buxton) and the vanguard (Bill DeRouchey, Matt Jones, David Armano), all the while adding some key folks from other disciplines (Jared Spool, Malcolm McCullough, Chris Conley). See “last year’s recordings”:http://interaction08.ixda.org/videos.php. Even though they are a year old, you will find something inspiring.

Your inspiration might be:
* Bill Buxton’s admonishment to throw away five designs before keeping one
* Matt Jones telling the world how they created Dopplr, making great design sound like no big deal
* Sigi Moeslinger’s images of the NYC subway car that she designed so that kids are not able to climb the hand rails

All of the sessions were filmed and placed online within days of the conference ending. (Though, as an IA, I have to protest that the slides were often not given sufficient treatment.)

Interaction08 IRL: Wayfinding arrows from Savannah
(c) L. Halley as posted on “flickr”:http://flickr.com/photos/lanehalley/2254195910/

We constantly came across little touches that you would never expect from a conference. A great example of this was the arrows chalked on the sidewalk between the three buildings that housed the conference sessions. Color-coded by track (as shown on your badge), they were incredibly useful and, like Savannah, quite charming.

No, not everything was perfect, but that should never be the expectation. I challenge you to find a better-run example of a conference’s initial event, especially one that was planned for 250 people but drew 400. And placing the conference in the small but interesting and cozy hamlet of Savannah, GA, was a stroke of genius. Interaction 08 had an intimate air. Plenty of distractions allowed us to escape from the geek talk, but the city didn’t pull you away as most do.

Vancouver is a great town, but I hope that the IxDA will consider doing something similar to Savannah next year.

Why am I talking about this now? Well, to be honest, I feel like I haven’t done enough to let people know how great the conference was. Here we are, two weeks from “Interaction 09”:http://interaction09.ixda.org/program.php in Vancouver, and I feel nostalgic about last year, realizing how much I’ll miss being able to go this year.

Even if you can’t don your superhero costume and get to Interaction 09 this year (which you should do if you can), think about it next year. Interaction is fresh, vibrant, and takes a usefully different perspective on the issues we encounter every day. I know I’ve designed differently because of that experience.

Keep an eye on Boxes and Arrows as we cover the conference in Vancouver. Whitney Hess (“twitter”:http://twitter.com/whitneyhess/ “website”:http://whitneyhess.com/blog/) will be there to keep an eye out for the nuggets of wisdom. After the conference, she’ll provide a full report.

Viva la Interaction 09, and Happy Conference Season to all!

Flowmaps and Frag-Grenades, Part 2

Written by: Bryce Glass

I’d like to talk specifics a bit. I’m sure there will be some readers at B&A who aren’t gamers, and probably even more who haven’t played Halo—so my apologies to those folks—but… describe in some detail exactly what you contributed to the finished product.

When I look at Halo 3, what ‘pieces’ of the experience did you work on?

I worked on the IA, navigation and screens for the game shell; the social design for the game for systems such as the party system, matchmaking systems and sharing systems; on rewards systems such as the stats, medals and experience ratings; and also on how that user experience extended to the web through Bungie.net. I also worked on the theater features such as film clips and screenshots, and on the Forge “in-game” UI. My compatriot David Candland handled the in-game HUD in addition to collaborating with me on the design, look and feel for the overall UI and specifically handling the visual design for the game. Aaron Lemay was the art and graphic design lead for our team, including Bungie.net. Max Hoberman was the lead for the entire multiplayer and UI team during the planning stage of the project.

The information architecture and navigation includes all of the screens and flow to support the game experience outside of the game—we refer to this as the “Game Shell” UI. With Halo 3 we started by identifying what the “core game experiences” would be for the game and grouping them into “modes”.

These modes were:

  • Campaign: The story mode, where players play through an adventure either solo or cooperatively.
  • Matchmaking: Players are matched with other players over the internet based on similar skills or experience, and based on game preferences, to play games that are controlled by Bungie matchmaking.
  • Custom Games: Players set their own game rules and maps in a player-hosted game lobby.
  • Forge: Players can customize maps to play in Custom Games or to share with the community.
  • Theater: Players can view films from any game mode and take screenshots.

Do these modes then inform the IA of the shell?

Grouping the experiences as modes allowed us to start with a foundation for the overall player experience and a baseline for the information architecture. Each of the modes supports many options, but these 5 modes have unique characteristics that support a "focused" player experience within the mode over a period of time. With the priority that "everything is social," each of these modes is designed to support from 4 to 16 players, either locally, on System Link, or over Xbox Live, so we gave each mode its own "lobby" where players can gather to share the experience.

In addition to focusing the core experiences in the game, this lobby system sets up the infrastructure for our party system. In Halo a "party" is a group of players that gather to play together, particularly over Xbox LIVE. The party leader is the player who makes decisions for what the party will do together, and the system allows players to stay together and do anything they want without breaking up. In Halo 2 this was termed the "virtual couch"…

Yeah, I recall that H2 was really revolutionary at the time—made it so easy to form a group and hang out for the night…

It’s like sitting on the couch together—if you decide you want to switch from one game mode to another on Xbox LIVE you can do it together just like if you were sitting on the couch with your friend. This is a very big deal on consoles because many online systems do not have this flexibility and it is not always easy to get together and stay together online.

The end result was a fairly simple information architecture for our game shell. Each mode has a lobby. Within the lobby, the specific options are contextual to the game mode. For example, in Campaign the main options are to select a level or difficulty for the story, whereas in Custom games the main options are to select a game type or map to play. The lobbies themselves are “locations” for players to gather into a party and play together and once players are together they can easily switch modes from within the lobby system to travel together to try a different mode. For example, a party of players may decide to customize a map together in Forge, then switch over to Custom Games to play on the map they just created.

The other major areas for player experience are community, personal identity, sharing, and settings. These are very much tied to a player’s personal profile and so in the information architecture these are all presented in a global menu that can be accessed anytime by pressing the “Start” button. The menu is always tied to the identity of the player who presses the button.

Regarding navigation and orientation, our goal was that the player always understands where they are in the game and that menus are in most cases only a couple of levels deep. In most cases the player is only a few clicks away from a core location. Another benefit of grouping the experiences into modes is that the main experiences for the game are easily discoverable from the main menu.

What kind of process did you follow?

The overall timeline for game development was "pre-production," where the studio teams plan what we want to do for the project and evaluate scope, then "production," where we execute on the design. At the end of pre-production each team submits an overall design document to the leadership group and the project features are approved. For the interface and experience this was a pretty detailed document covering the overall information architecture and screens for the game. It is similar to a product requirements document, but in the games world these are design documents. Over the course of the project the design evolved in some places or was scaled back in others. A great idea may be recognized well into production and is never discarded automatically, but anything new that is proposed during production is weighed against other features that are in development.

Regarding design process, we targeted the foundation first: the information architecture and systems that would support the different features in the game, as well as the overall guiding principles for the game. This allowed us to understand where everything fit.

Then we tackled the major features based on scope and dependencies. Each of these “major features” would cover many areas of the game. For example, the lobby system would provide the foundation for many other features and was also a dependency in supporting the overall IA for the project. It included the “shell” for the interface, the player roster that shows who is in your game lobby and the core navigation for the information architecture. For each major “feature” set, I would put together a proposal for the feature using screen flow “posters” that outlined flow and also detailed screen requirements. We would then review these proposals with the team members that had an interest in the UI. From there we would refine and build out detailed design documents to support the development. Once the feature was built and in the game we would verify that the features were working according to specification through in-game testing.

We also had great support from the Microsoft usability lab. User researchers were part of our review process and provided heuristic analysis of the proposed designs, and also supported usability testing for both the early “prototype” ideas and later with the actual game.

Would your design artifacts look totally familiar to most practicing Interaction Designers? Wireframes, flows, that kinda thing?

Absolutely. The format I found most useful was the poster flow. These are large-format posters with detailed wireframe screens, navigation, and flow decisions for a feature area. They include detailed specs and use cases for specific features near the screen or decision point on the poster where they are relevant. I would print these out and post them on the wall near the UI pit, and also post them internally as PDF documents.

The posters allowed everyone in the studio to get an overview of the feature by reviewing the printed poster on the wall, and the engineers and QA team would use the PDF version as the spec while developing and testing the feature. I preferred this format because it outlined "the big picture" graphically, so it was easy to collaborate and refine as a team. It was also easier to update than a detailed 50-page Word document. In many cases, the poster on the wall would be the most up-to-date spec because—as we were developing the feature—our team collaborated to work through issues together using the printed posters, and we would update the poster specification with markers as we refined the direction. The QA team called the poster wall the "wall of truth".

I also put together design documents for the main feature areas such as matchmaking, the party system, sign-in and profile, etc. These were Word documents with detailed specs, or in some cases Excel spreadsheets. The Word documents started with an IA diagram and an overview of how the feature worked in context with the core shell UI, and then outlined specific specifications for each feature. Early in the project I also had wireframe "prototypes" in PowerPoint to walk through certain use cases, to explain an idea and get feedback.

Did you do any prototyping of concepts? And how about tools in general? Does Bungie have proprietary tools for screen design and prototyping?

We conducted rough prototyping during planning to test our concepts in a usability lab or to get feedback on concepts, and we also put together a polished director demo to present the final interface proposal to the team at the end of the pre-production phase.

On the rough prototyping we worked with Randy Pagulayan and John Hopson from the Microsoft Game Studios User Research group to test the concepts in the usability lab. We put together a script for the prototype, then I created wireframe screens in Illustrator and John coded the screens into a prototype so that test subjects could use an Xbox 360 controller to navigate it. Randy, John, and our team spent about three weeks running the prototype through tests and then rapidly iterating on ideas for matchmaking, the core game shell interface, and the party system.

The content was all fakery (I think we called the game in the prototype "Mecha"), but it was designed to confirm the fundamental direction for our user experience. The lab setup and process were top notch, and I really have to give props to the Microsoft usability team. The process helped us refine our thinking and gain confidence in the information architecture and core navigation. In fact, the final prototype from those sessions is very close to what we shipped in the final game.

David Candland, Max Hoberman and I then put together a polished demo in Director that was scripted to run through the main use cases for our proposed interface direction. We used this to present our proposed direction to the team and Max and the leads used this to evaluate the direction, gather feedback and reach consensus on feature sets and final direction as we moved into production.

Thanks, Colm!

Note: shortly after Halo 3 shipped, Colm left Bungie to work with Max
Hoberman at Certain Affinity, a game design and development company
based in Austin, TX.

Flowmaps and Frag-Grenades, Part 1

Written by: Bryce Glass

By any measure, Halo 3 is one of the most wildly-successful consumer software interfaces in recent memory: more than 1 million players played the game in its first 24 hours on Xbox Live; over 8 million copies sold to date; and “over 100,000 pieces of user generated content being uploaded daily […] 30 percent higher than YouTube on a daily basis.” It’s probably safe to say that more cumulative man-hours have already been spent in Halo gaming lobbies than in Microsoft Word! But H3 is distinguished for another reason, too. It’s one of the earliest—and definitely one of the highest-profile—mass-market video games to benefit from the contributions of a dedicated interaction designer.

Colm Nelson was the interaction designer for Halo 3 and has been a working UX designer since 2000. Before joining Bungie (the Studio that produces the Halo series), Colm’s background was largely in Internet consumer applications, with a heavy bent toward entertainment software. Colm’s experience is unique, but it’s part of a growing trend in the gaming industry toward employing UX professionals. Colm would like to see this trend continue, and was gracious enough to speak about it with us, and share some insight into the intersection between his ‘traditional’ UX background and his job duties at Bungie.

Hi Colm—I’d like to thank you for taking the time out to speak to the B&A community. Given the audience here, I thought this emerging trend—this matriculation of interaction designers into the gaming world—is something that folks would want to know more about…

Online systems that facilitate player experiences around social interaction, custom content sharing, and online communities have received a lot of attention from both the gaming press and fans, and are definitely a hot trend in gaming. The gaming press has even begun to compare these features to YouTube, MySpace, and Facebook. My observation is that developers who offer more features in the user experience around the game are seeing more of a need to specialize and fill roles specifically around user experience and interface design.

Games with success in these areas have generally done a good job developing a solid feature set and matching the social goals of gameplay with the accessibility and usability of the features. Ultimately these features add to the longevity of a game’s popularity, which translates directly to sales. I think as a result there are more opportunities for traditional interaction designers in the games business.

I’ve met developers that are actively recruiting from traditional software interaction design to take ownership of these features and if you look around you’re starting to see postings for UI designers—both Bungie and Blizzard are actively recruiting interaction designers and experience designers. There are also studios that are championing player experience research and design such as XEOdesign, Inc.

But I also think that if you look around you’ll see that it’s not as clearly defined a role in all game companies as it is in traditional software, so I think as a trend it’s fairly early. My impression is that in many game companies the interface and experience design is handled by either designers or artists who are also responsible for the overall game design. The good news is that if you are an interface designer with a passion for games, there are definitely opportunities out there.

Let’s start at the beginning. I actually remember seeing the job req. at Bungie that you filled … it even used the term ‘Interaction Designer.’ My jaw almost dropped—design jobs in the gaming industry typically focus on character design, level design, gameplay and mechanics. How did Bungie ‘catch religion’ about strong interaction design? About paying attention not just to the core gaming experience, but also all of that scaffolding that gets you into the game? The experience around the game?

Yeah, I had the same reaction when I saw the posting. I’d been looking for opportunities in the games industry for some time and had not seen any positions related to interaction design, so when I saw the posting I was amazed.

The guy that hired me, Max Hoberman, was the online, UI, and multiplayer design lead for Halo 2. Max and the team at Bungie are really passionate about the user experience around the game and also about usability. It’s just part of the culture of the studio. You can see the results in the design of the party system and the matchmaking system from Halo 2. Heading into Halo 3 there was plenty of ambition for the social experience and the features around the game, so the team decided to hire a dedicated Interaction Designer.

And how did you get the job? 😉

As soon as I saw the position I put together a portfolio and cover letter that said I wanted to help Bungie in their quest for world domination. I managed to get a phone interview with Max, which went OK. His feedback was that he enjoyed our conversation but if we had a second conversation he expected me to be more critical with my observations about what could be improved from Halo 2 and Bungie.net. This was on a Friday. The “if” felt pretty dicey to me so I decided to be proactive.

I worked all weekend on a concept document on ideas to improve Halo 2 and fired it off on Sunday night at 3am. I wasn’t sure how it would be received, but it paid off because I got an invitation to visit Bungie for an interview. I flew to Seattle to meet the team for a full-day interview and was really impressed with the energy and passion that they had for design and the experience around the game. It was a lot of fun—I was also passionate, and the interview felt like a series of brainstorming sessions as we discussed problems and ideas and how we might solve them. I guess it went pretty well because they offered me the job!

Describe the development team to me. I (like you, before your time at Bungie) come from a web & consumer applications background with roles like Product Manager, Project Manager, Developers, Designers, Researchers. Is game development roughly the same? How were you situated on the team?

There are similarities. It is still software design so all of the practical considerations still apply—you need to manage the project well in order to succeed and you need the resources to make it all come together. Producers, engineers, designers, researchers and QA all play a role on the team. Producers at Bungie are roughly equivalent to project managers from my previous experience, although I think the producer role varies quite a bit across studios. But at the same time you have cinematics, art, modeling and animation that are also core to the project.

There’s really not a “product manager” role, at least at Bungie. The team makes pitches for the game, the leads of the studio then decide what will be greenlit for production, and the team leads propose and drive feature sets for the project. It’s a very collaborative process, driven by the leads of the various disciplines. For example, in designing the online experience and interface plans, we solicited feedback, then proposed features and prototyped “proofs of concept” in order to land on the feature sets that would be developed for Halo 3.

Was there a bit of culture shock moving into the gaming world? Did folks on the team generally ‘get’ what you were brought onboard to achieve?

Yeah, there was a bit of culture shock for me. Mainly because some of the tech, process and roles on the team were new to me. As far as people getting my role, I’d say it was about the same as what an interface designer typically encounters when joining a new team. Definitely the core team responsible for interface and social design had clear goals for how the interface design process would work and understood what I was tackling—we tackled it together as a team. I was really surprised at how important interface design and usability was to the entire team—it was awesome! And at a higher level, even if all the folks didn’t get the details about process, they were supportive and as a rule folks at Bungie are really good at giving feedback on concept proposals and contributing ideas.

[Stay tuned for another installment of Colm Nelson, designer and gamer.]

Prototyping with XHTML

Written by: Anders Ramsay

Illustrations by Leah Buley

If you design user experiences for standards-based websites and applications (i.e. those built with XHTML, CSS, and JavaScript), there are several great reasons for adding XHTML prototyping to your UX tool kit. Perhaps you’ve found that traditional wireframes just aren’t sufficient and are looking for more powerful ways to explore and communicate design solutions. Perhaps your current practice is based on the traditional waterfall model (i.e. first creating wireframes, which are handed off to creative, who hand off comps to tech, and so forth), and you want to explore more contemporary methodologies, such as agile and iterative development. Regardless, a great way to embark on that journey is to start prototyping with XHTML.

So what does it mean to prototype with XHTML? Essentially, it’s the process of using the XHTML itself, and related technologies, to evolve and define your design solution. And what does an XHTML prototype look like? While, as we’ll see, that depends on where you are in your prototyping process, an XHTML prototype generally looks like any other web page built with XHTML, with some links or features perhaps being non-functional. In other words, anything you can build with XHTML, from consumer websites to enterprise applications, you can also prototype with XHTML. As we’ll see, there are numerous advantages to this approach compared to designing with wireframes or other prototyping tools.

An Iterative Process

While prototyping with XHTML isn’t tied to a specific design process, iterative development seems to leverage its strengths most effectively. There are many reasons for this, but perhaps the most significant is that in both cases the prototype, and later the application itself, doubles as a specification. We’ll explore what that means in a bit, but first let’s walk through a suggested process for prototyping with XHTML. Let’s start with an overview of the larger design process:

In this (iterative) methodology, rather than design the entire application before starting to build it, one designs and builds a unit of the application and then uses what has been built to inform and serve as a starting point for other application units. As with other design methods, the design work begins with sketching, which plays a particularly important role relative to prototyping.

Sketching: A Freeform Question

The term ‘sketching’ refers here to any freeform exploration unconstrained by a specific technology. This includes the production of wireframes, which in this model are reframed, as it were, from specification artifact to refined sketch. When wireframes are thought of, and presented to stakeholders, as sketches, it’s more natural to discard them once the design has evolved beyond them, usually after a prototype equivalent has been produced. With the design team I work with, we’ve found that when prototyping with XHTML, wireframes often become superfluous, and it’s more effective to go directly from sketch to prototype.

Prototyping: A Concrete Response

Prototyping has a counterpoint relationship to sketching. To paraphrase Bill Buxton, while sketches ask a question—“Is this a good design idea?”—prototypes provide a response. By making the idea manifest, prototypes force upon it the concrete realities and user experience idiosyncrasies of the actual production technology and offer a crisp verdict on the quality of what you dreamed up in drawings.

The Prototype/Build Relationship

When prototyping with XHTML, especially in an iterative model, the build and prototype become very intertwined. If you’re prototyping a new application or product, the XHTML prototype is essentially a rough draft of the actual application. However, when updating the design of an existing application, the production version can serve as the starting point for the prototype of the new solution.

Three Integrated Layers: Structure, Presentation, Behavior

The model for XHTML prototyping is based on the best-practices model for actual site production: start by setting the structural foundation with XHTML, add a presentation layer with CSS, follow that with a behavioral layer using JavaScript, then iterate.

Let’s start by looking at the structural layer.

Structure: Set the Page Foundation

The first step in production of the XHTML prototype is to create a structural foundation. Similar to how we create a wireframe, we start by representing the main content areas on the page, except we do so with text-based XHTML markup.

If our sketch or wireframe shows a "My Account" page divided into areas for account options, account details, and account help, our XHTML simply marks up each of those main content areas by name. (We’re only displaying the relevant snippet of the XHTML here; a sketch of it follows.)
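
A minimal version of such a structural snippet might look like this (the element choices and ID names are illustrative assumptions, not prescribed markup):

    <div id="my-account">
      <h1>My Account</h1>
      <div id="account-options">Account options</div>
      <div id="account-details">Account details</div>
      <div id="account-help">Account Help</div>
    </div>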

Next, we add detailed content elements that have been defined, using the XHTML structure appropriate for the corresponding content.

For example, if our detailed sketch lists a set of account help topics, we’d represent that list as an unordered list (i.e. use the ul tag), as sketched below.
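
A small sketch of what that might look like, with hypothetical help topics standing in for the real content:

    <div id="account-help">
      <h2>Account Help</h2>
      <ul>
        <li><a href="#">How do I change my password?</a></li>
        <li><a href="#">How do I update my billing details?</a></li>
        <li><a href="#">How do I close my account?</a></li>
      </ul>
    </div>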

Continuing to add detailed content to the page, we have essentially produced a structured content inventory of the page. This serves as a foundation for the rest of the prototype production. While wireframes force us to represent a page’s information architecture within a specific layout, this is pure structure and hierarchy, and, in my opinion, represents the true information architecture of a web page.

By defining the information architecture directly in the XHTML, we can also easily address accessibility concerns, such as being cognizant of how users traversing the page with a screen reader will experience it, and ordering content blocks accordingly. Additionally, we can more easily define elements often overlooked when working with wireframes, such as the effective use of label tags in forms.
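
For example, a form in the prototype can carry explicit label associations from the start, so screen-reader behavior is designed rather than retrofitted (the field names here are placeholders):

    <form id="account-details-form" action="#" method="post">
      <p>
        <label for="display-name">Display name</label>
        <input type="text" id="display-name" name="display-name" />
      </p>
      <p>
        <label for="email">Email address</label>
        <input type="text" id="email" name="email" />
      </p>
    </form>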

If one were to view the structural layer in a browser, it would essentially look like an unstyled web page, and would not be interesting to look at. Just as building foundations are not known for their aesthetic qualities, but instead for the impact their quality has on the building they support, so too will the quality of the page structure significantly impact the overall quality of the web page. In fact, that absence of style is a key advantage of working with XHTML.

Evolving the Presentation Layer

With a page structure in place, we are ready to focus on how content will be presented. Looking back at our sketches, we’ve already explored some layout concepts, which we can begin to apply to our content structure. The way that look and feel is developed and applied will vary widely from team to team. While you may choose to do your initial exploration of look and feel with design comps, especially if you are also developing an overall brand, it’s worthwhile to redefine comps similarly to how we previously redefined wireframes. Just like wireframes are great as sketches, design comps are great for initial exploration of look and feel. But the practice of fully developing the presentation layer away from the actual technology, and then cutting it up and applying it wholesale to a web page is like wallpapering a façade onto a building. It’s impossible to be aware of all the dynamic aspects of a web page when working in static illustration software. However, when prototyping with XHTML, you can leverage the power of rendering your design in the same way that it will be seen by users, and incrementally evolve page presentation based on this immediate and rich feedback.

Issues that don’t easily reveal themselves when working in illustration software will often be obvious. This includes issues related to your design and the browser viewport, from the basic question of whether the design should center itself in the browser window, to more advanced issues such as how to design for different window sizes and screen resolutions. For example, at small window sizes, is it okay if some content disappears out of view, or should the design adapt to the window size? When look and feel is designed solely with illustration software, questions like this often go unexplored, to the detriment of the user experience.
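
As a concrete illustration, a few lines of CSS in the prototype force exactly these decisions out into the open (the #page ID and pixel values are illustrative assumptions):

    /* Centering the design and deciding how it behaves in narrow windows
       are questions the prototype makes you answer early. */
    #page {
      max-width: 960px;   /* the layout narrows with the window ... */
      min-width: 600px;   /* ... down to a floor, below which it scrolls */
      margin: 0 auto;     /* centered in the viewport */
    }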

Adding Behavior: Unreinventing the Wheel

When prototyping with XHTML, you are designing within the larger ecosystem of the web, which effectively becomes your always-up-to-date UI library. Instead of laboring over the design of a detailed piece of functionality, start by letting Google inform you whether anyone else has designed and built something similar, and then use that as the starting point for your solution. This can include anything from date-pickers to web widgets to whatever cutting-edge UI idea was just created. Additionally, prototyping with XHTML makes it easy to incorporate and simulate Web 2.0 functionality, such as embedded widgets and syndication. If you don’t know JavaScript, or whatever technology is being used, you can collaborate with your developer on integrating the solution. Of course, you’re not going to find a solution for all your design needs online. In those cases, go back to sketching and collaborating with your team.
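
As a sketch of how little script a prototype often needs, here is plain JavaScript (reusing the hypothetical account-help structure from earlier) that fakes an expandable help section well enough for users to react to it:

    <script type="text/javascript">
      // Toggle the help-topic list when its heading is clicked. In the
      // prototype this is enough to test whether users notice and understand
      // the interaction; production code can swap in a richer widget later.
      window.onload = function () {
        var help    = document.getElementById("account-help");
        var heading = help.getElementsByTagName("h2")[0];
        var topics  = help.getElementsByTagName("ul")[0];
        heading.onclick = function () {
          topics.style.display = (topics.style.display === "none") ? "" : "none";
        };
      };
    </script>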

Iteration: Discovery, Evolution

The true power of prototyping really emerges during iteration. This is when users can interact with your prototype. On a recent project, we sketched out a solution in which users could drag videos from a library onto a playlist. Looking at the static illustrations, it seemed a simple and elegant idea. But when users were able to interact with the solution, dragging and dropping video thumbnails, they found that it was a pretty tedious activity, especially for large numbers of videos. In other words, the prototype allowed us to discover a design problem that went unnoticed when looking at a wireframe.

And therein lies a core problem with using static artifacts to communicate interactive solutions; they effectively force the user to prototype the solution in their imagination, where all solutions seem to function in glorious perfection. With XHTML, we minimize the cognitive leap that users need to make, allowing them to instead experience and respond to something nearly identical to the actual solution.

Once users provide feedback and the team begins work on the next iteration, another measure of the quality of the prototyping methodology comes into play: how rapidly are you able to iterate? The longer an iteration takes, the less valuable your prototype. When prototyping with XHTML, iterations can be incredibly fast, first because the prototype can be easily presented to users, since it’s usually just a question of posting your files and sending out a URL. Second, because XHTML is text-based, iterations such as text changes or basic functional updates can often be completed in just a few minutes. More advanced design updates usually don’t take more than a few hours of actual production time.

How XHTML Can Double as a Specification

One of the most powerful aspects of XHTML is that it is self-describing. The same XHTML markup that tells a browser what to display can also double as a specification for a developer. For example…

Each piece of markup reads as a statement about the content it encloses:

  • the element that opens the header content block: “This is the start of the header content block.”
  • the element wrapping the product name, “XYZ Application”: “display the product name, which should link to the homepage.”
  • the element wrapping “Signed in as Jane Smith (Editor)”: “display user information, including the user’s role (or set of application permissions).”

A sketch of markup that reads this way follows.
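
Putting those readings back into markup form, a minimal sketch (the tag choices and ID names are illustrative assumptions) might be:

    <div id="header">
      <!-- "This is the start of the header content block." -->
      <h1 id="product-name"><a href="/">XYZ Application</a></h1>
      <!-- "Display the product name, which should link to the homepage." -->
      <p id="user-info">Signed in as Jane Smith <span class="role">(Editor)</span></p>
      <!-- "Display user information, including the user's role." -->
    </div>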

In buzzword-speak, the practice we are applying here is writing semantically meaningful markup, which means we are selecting tags and naming our IDs and Classes such that they communicate the meaning and function of the content they enclose.

Annotations Visible Only to Those Who Care About Them

Another advantage of using XHTML as a specification is that IDs and Class names can double as annotation references. In other words, the annotations for the content block with the ID “account-options” would appear under the heading “Account Options” in your specification.

Rather than obscuring and cluttering a page design by placing annotation callouts on top of it (a common practice when using wireframes that can confuse and distract non-technical viewers), references appear only in the markup view, for the developers who are interested in seeing them. And since the XHTML file itself is so richly informative, the actual annotations tend to be only short bullet points.

More Standards, Less Noise

One of the biggest problems with wireframes is the lack of a standardized notation. In other words, my wireframes certainly don’t look anything like your wireframes. This means that the visual designers and developers who use our wireframes are continually relearning how to interpret our work, which creates noise between author and reader. To compensate for the lack of a standard, we have to create highly detailed wireframes, with often lengthy annotations that explain what our wireframes mean and how the elements in them work. These, in turn, are collected in large specification documents that are usually so labor-intensive they become impossible to maintain. Once they are no longer kept up to date, the team stops trusting and relying on them as the design specification, which leads to all kinds of bad things happening.

In contrast to wireframes, XHTML is a standardized notation: anyone who knows XHTML can read your document. More importantly, it is a language spoken fluently by a key audience of your design documents, the developers. And those who don’t know or care about XHTML can view the part they do care about, the page design, by opening the document in a browser.

Using a standardized notation also means you are not confined to specialized wireframing or prototyping software; you can use anything from a simple text editor to the full range of tools available for editing XHTML files. And the compact syntax of XHTML, particularly compared to verbose wireframe annotations, combined with the fact that you are just typing in a text file and leaving the visuals to the browser, allows you to work rapidly and efficiently.

A Small Amount of Knowledge Goes a Long Way

If you’re new to XHTML, you’ll discover that a small amount of knowledge goes a long way. Spend just a few hours following any of the innumerable online tutorials and you’ll be writing XHTML markup in no time. (Two great places to start are htmldog.com and w3schools.com.) Better yet, rather than investing time in learning the UX tool du jour, you deepen your understanding of the technology that realizes your design.

Dividing and Conquering

The redefining of a wireframe from a blueprint to a sketch has a domino effect on who does what and when in evolving the page or application design. After a rough page design has been sketched out, rather than have one team member toil away in isolation, wireframing detailed representations of each page design, this model takes a divide-and-conquer approach. On the team I work with, I might produce an initial cut of the XHTML and some of the CSS, while other team members build on that, updating the XHTML, adding more advanced CSS, as well as JavaScript. If the team as a whole conceives of a solution, why not also have the team as a whole design it? In other words, rather than creating one person’s vision of a team’s solution, why not have the entire team contribute their particular expertise? When working with XHTML, we can use the tight integration of CSS and JavaScript to allow team members to contribute their dimension of the design via a set of integrated artifacts.

Where To Go From Here

This has, of course, been a mere whetting of the appetite for anyone interested in prototyping with XHTML. If you are interested in exploring the methodology further, particularly if you currently follow a traditional waterfall-oriented process, I recommend a many-small-steps approach. In other words, prototype the methodology itself, working with your team on a small project, and then building on that. If your experience is anything like mine, you’ll find it an incredibly powerful addition to your UX toolbox — a more effective way to straddle that proverbial divide between user experience and technology.