The two weaknesses I see in Garrett’s model are:
- Garrett sees the web as having two dimensions: “web as software interface” and “web as hypertext system.” But there’s also the “web as interactive multimedia,” which focuses on sensory richness and immersiveness.
- The “surface” layer of his model—the “look” in the “look-and-feel” of the actual interface as Garrett puts it—only involves visual design. But that ignores the possibility of involving additional senses, from Microsoft Entourage’s audio feedback to the force-feedback joysticks used by video-gamers.
To be fair, I’m looking beyond the web to a model that handles a broader context, including software, interactive CD-ROMs (for those who remember them from the early 1990s), video games, and other interactive products. But even within the web alone, ignoring the “experiential” elements of user experience seems to be a serious omission.
Before walking through my expansion (80kb PDF) of Garrett’s model, let’s first take a more extensive look at the critiques I’ve summarized above.
Why interactive multimedia?
As Garrett accurately points out, the web is a convergent medium, and that convergence has caused much confusion among user experience professionals.
Originally conceived of as a hypertextual information space, the web quickly added functionality, drawing in those from a software background. But just as quickly, the web also drew those using it to provide rich experiences reminiscent of the “new media” of the early 1990s—interactive multimedia CD-ROMs.
Needless to say, bandwidth constraints quickly posed difficulties, but as early as 1996, I was designing movie promotion sites for which the primary goal wasn’t to enable online ticket purchases, nor to provide detailed information about the movie, but rather to give visitors a taste (or an “experience” as it were) of the movie in an effort to inspire them to go to theaters to see it. This is just as true of recent, highly visited movie sites, such as “The Mummy Returns” or “Shanghai Knights.” So while usability puritans may shudder, these sites clearly fulfill the interests of both their visitors and the studios that build them.
Likewise, while the numbers of interaction designers and information architects have grown, so have the numbers of “interactive designers,” people like top-rated Flash/multimedia designer Hillman Curtis, and the readers of eDesign. Is it really sensible to exclude them from the field of user experience?
Much of the argument over what the “right” kind of website is stems from people’s failure to appreciate that, as a medium, the web encompasses more than just the specific aspect they’re most comfortable with—and from a failure to appreciate that users might be interested in more than one type of experience. It’s a question of finding an appropriate balance among these three types of experiences.
Too often sensory richness is seen as fluff that distracts from functionality and understanding—witness the disdain expressed toward Flash by some. But this misses the point about how sensorial design1, when used well, can expand the palette of tools used in task-oriented and information-oriented design.
For example, public radio’s “Marketplace” uses musical cues when reporting the day’s stock market results. When the market’s down, listeners hear a glum version of “Stormy Weather.” When the market’s up, it’s a jaunty “We’re in the Money.” Back in the heady days of the late 1990s, new stock market records were accompanied by an additional sonic overlay of cheering. Regular listeners can instantly know the day’s results before the announcer delivers the specific figures.
Obviously with a radio program, such audio cues are unsurprising. But they can also be useful in interfaces. Microsoft’s Entourage uses a set of different chimes to indicate when it’s checking a mail account and if any mail has been received. Such “ambient feedback” is extremely useful when Entourage is left running in the background, checking mail periodically throughout the day.
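Setting Entourage’s actual behavior aside, this kind of ambient feedback boils down to a small mapping from background events to distinct, recognizable cues. The sketch below illustrates the idea in Python; the event names and sound files are invented for illustration and are not drawn from any real mail client.

```python
# Hypothetical sketch of "ambient feedback": a background mail checker
# signals its state with distinct audio cues rather than visual alerts.
# Event names and sound files are invented for illustration only.

AMBIENT_CUES = {
    "checking": "soft_tick.wav",     # quiet cue: a mail check has started
    "new_mail": "bright_chime.wav",  # distinct cue: something arrived
    "no_mail": "low_thud.wav",       # subtle cue: nothing new
    "error": "warble.wav",           # attention cue: the check failed
}

def cue_for(event: str) -> str:
    """Return the sound to play for a mail-check event,
    falling back to silence for unknown events."""
    return AMBIENT_CUES.get(event, "silence.wav")

def check_mail(new_count: int, failed: bool = False) -> str:
    """Simulate one background check and pick the matching cue."""
    if failed:
        return cue_for("error")
    return cue_for("new_mail") if new_count > 0 else cue_for("no_mail")
```

The design point is that each state gets a cue different enough to be identified without looking, so the application can stay in the background while still keeping the user informed.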
Sometimes sensory richness is an appropriate goal in its own right. Most video-gamers have invested in speakers to enhance their experience, and force-feedback joysticks are not uncommon. Hardcore devotees of auto racing and flight simulation games are known to use full wheel-and-pedal systems and gaming chairs that use low-frequency sounds to create gut-thumping tactile sensations. Now the latest cutting-edge gaming technology involves 3D glasses. Is all this necessary to play the games? Certainly not, but gamers spend the money on the gear to enhance the immersive aspect of their experience.
A similar example of the appeal of immersiveness as an end unto itself was the original Myst, which was enormously popular with people who had never played video games before and weren’t particularly interested in solving Myst’s puzzles. Rather, they were content to wander through an entrancing environment.
But more sensory richness doesn’t necessarily mean better. Zen gardens provide rich experiences with subtlety and minimalism. As with all design, appropriateness to the context is the key. For example, much of the Flash on the web today is being used inappropriately, which is precisely the problem.
Expanding the model into the third dimension
So how do we approach the creation of this third dimension of user experience? Garrett’s five-layer “strategy to surface” model (described in this sample book chapter, 220kb PDF) holds up quite well. To quickly summarize (at the risk of oversimplification):
- The visible components of a site, software, or product make up its “surface.”
- The “skeleton” organizes these visible components.
- The skeleton is the concrete implementation of the underlying conceptual “structure” that organizes the overall features and functionality.
- The features and functionality to be included in the conceptual structure are determined by the “scope” of the product.
- The “strategy,” which incorporates both the creator’s goals and users’ needs, determines what’s in scope and what’s not.
But in extending this model, the difficulty is that unlike the other two dimensions, sensory richness involves a wide variety of fields, including writing, graphic design, filmmaking, animation, motion graphics, sound design, and musical scoring. These fields don’t always have descriptive terms that neatly separate their design processes into layers, and the terms that do exist vary widely. Consequently, I’ve had to adapt terms in an effort to find descriptions that fit the equivalent stages of the design process across these various fields. Since these terms may be used differently within a specific field, I’ll try to be clear about how I’m using them.
(A final note: several of the examples mentioned are drawn from non-interactive media. I’ve done this because they’re more familiar and more clearly articulated within their traditional contexts.)
At the first stage, strategy, the approach isn’t much different from task- or information-oriented design processes. Business, creative, and other goals of the creators are combined and balanced with the needs and desires of the users/audience.
Since sensory richness often involves the “creative” fields, this stage is often referred to as developing the “artistic vision.” But despite that name, creators are often keenly aware of their audiences, especially in commercial endeavors.
For example, advertising agencies often employ “account planners,” whose methods are similar to those of user researchers, and whose goal is to get inside the heads of targeted audiences so that the agency can craft an advertisement that resonates. Account planning was essential to the creators of the famous—and highly effective—“Got Milk?” advertising campaign because they discovered that consumers only really thought about milk when they ran out of it.2 That insight became the foundation of the campaign.
This is not that different from developing the strategic direction of a site, software, or product by doing task analysis to determine what users are trying to accomplish and/or research to understand their mental models of tasks or content.
With the strategic goals in mind, the creative brief defines the intended experiential and/or emotional aspects to be evoked. While those with a graphic design background often use “creative brief” as roughly synonymous with a project definition document, I’m using it in a narrower sense: as the sensory equivalent of the functional specification, which spells out supported tasks, and the content requirements, which define the informational needs of a project.
This is the point where fundamental choices are made about which particular medium to use—i.e., whether the intended experience is best conveyed by visuals, by sound, by motion, etc.
Likewise, there are often decisions about the conceptual approach, genres, metaphors, and imagery to be used. For example, the “Got Milk?” team decided to use a comic touch, highlighting characters who are stuck with a mouthful of food and no milk, to make the ads more memorable.
For sites, software, and interactive products, this is where brand strategy intersects with user experience to ensure that both reinforce each other. Just as a functional specification may be constrained by technology choices, or content requirements may be affected by available information, the creative brief may also need to work within existing branding strategies and corporate identity guidelines.
As the rough idea comes into focus, the choreography of interactive multimedia elements coincides with the interaction design and information architecture. Borrowed from dance, choreography seems an apt term for the activities of designing and structuring the overall elements into a seamless, unified whole that supports the intended effect.
At this stage, graphic designers will often create “mood boards”—collages of images illustrating the sentiments, feelings, or emotions that the product should evoke. Typically, the first thumbnail sketches outlining potential ideas for specific design directions are also developed.
Likewise, filmmakers, animators, and motion graphic artists often use storyboarding as a way to map out sequences to ensure that they flow together.
For writers, this stage involves the basic structuring of a story, whether it’s the nonfiction outline or the story arc of fiction. Or in video-gaming, it may be the creation of the environment in which the game player’s “story” will occur. Needless to say, these activities can overlap with those of an information architect.
All involve designing at the conceptual level, just as the interaction designer structures task flows or the information architect arranges content into top-level sections.
The work at this stage is similar to the previous one, but the focus now shifts to a finer level of detail: typically the design of individual screens or sequences. It’s similar to the shift from the more conceptual interaction design and information architecture to the more concrete interface design and navigation design.
The theatrical term mise-en-scène—usually translated as the “arranging of the scene”—captures this sense of arranging specific elements to evoke expressive qualities such as mood, style, and feeling.
For example, this is where composers make choices about specific instrumentation, which can greatly affect how listeners will react to the basic melody and harmony. Imagine Rimsky-Korsakov’s “Flight of the Bumblebee” played on the tuba instead of the traditional clarinet.
Likewise, filmmakers and animators will plan out specific camera angles, lighting, costuming, and set decoration to reinforce the script’s intended effect for a scene. A masterful example comes from the climactic scene in the film noir “The Third Man,” in which the protagonist, Holly Martins, finally catches up to the monstrous black marketeer Harry Lime (Martins’ oldest friend), who is on the run from the police. So far Martins has refused to believe the charges against Lime, who arrives wearing a black overcoat as they board the gigantic Vienna Ferris wheel. As Lime dissembles, nearly convincing Martins, he doffs the coat, revealing a gray suit. Outside, the Ferris wheel’s spinning structure, further distorted by the tilted camera angles, mirrors Martins’ inner turmoil. But as Lime reveals his true colors, he puts on the black overcoat again while threatening to kill Martins. Pointed out like this, these cues sound a bit heavy-handed, but while watching the movie, one is only subconsciously aware of their effect.
Movies are obviously among the most highly controlled experiences, but similar techniques are used on sites, software, and products as well: to strike the right tone in support of brand personality, to subtly direct a user’s attention to reinforce content hierarchies, or to highlight user interface components relevant to the task at hand.
Graphic designers traditionally use “comps,” ranging from rough sketches to almost-finished dummy layouts, to work out these arrangements on a specific screen. Since this overlaps with information design (in the broad Tuftean sense of designing the presentation of information for understanding) as well as user interface design—both of which also occur at this stage—there has often been tension in this area. It’s the classic “who owns the wireframe” argument among visual designers, information architects, and user interface designers.
Finally, all the beneath-the-surface work is expressed in the tangible interface of the site, software, or product: in essence, its skin. “Skin” is, in fact, the term used by products that allow users to substitute their own customized skins over the interface’s skeleton, which itself doesn’t change.
Most commonly, this skin primarily involves visual design, but as I’ve discussed previously, it’s better to think more broadly about sensorial design as part of the overall look-and-feel.
And yes, looks do count. A recent study on website credibility found 46.1 percent of those surveyed mentioned the site’s appearance in assessing it—far more than any other factor. (The next closest factor, information design/structure, was mentioned only 28.5 percent of the time.)
This actually isn’t surprising. The service industry has long recognized that consumers often use “tangibles” (neatness, friendliness, etc.) to make a shorthand evaluation of the service itself, particularly when the quality of the service is difficult to evaluate. (For example, can you really tell how good a job your accountant did on your taxes?) Astute businesses make use of surface qualities. For example, in a bit of real-world theater, Avis has supervisors at its car rental counters wear little headsets. Not because they are needed, but because Avis found customers were reassured to see that someone was in charge, with the headsets providing the cue that a supervisor was present.
This concern with appearances is true even of a site as starkly utilitarian as Google, which uses its playful—and often played with—logo and its “I’m Feeling Lucky” search button to reinforce its friendly, slightly quirky brand personality.
Toward a holistic view of user experience
Garrett provides a useful foundation for trying to bring some order to the various concepts being used to describe the user experience development process. I hope my expanded model will do the same for an important dimension that has been overlooked. I welcome suggestions on how to improve this model.