Expanding the Approaches to User Experience

Written by: George Olsen
Jesse James Garrett’s “The Elements of User Experience” diagram (17kb PDF) has become rightly famous as a clear and simple model for the sorts of things that user experience professionals do. But as a model of user experience it presents an incomplete picture with some serious omissions—omissions I’ll try to address with a more holistic model.

The two weaknesses I see in Garrett’s model are:

  • Garrett sees the web as having two dimensions: “web as software interface” and “web as hypertext system.” But there’s also the “web as interactive multimedia,” which focuses on sensory richness and immersiveness.
  • The “surface” layer of his model—the “look” in the “look-and-feel” of the actual interface as Garrett puts it—only involves visual design. But that ignores the possibility of involving additional senses, from Microsoft Entourage’s audio feedback to the force-feedback joysticks used by video-gamers.

To be fair, I’m looking beyond the web to a model that handles a broader context, including software, interactive CD-ROMs (for those who remember them from the early 1990s), video games, and other interactive products. But even within the web alone, ignoring the “experiential” elements of user experience seems to be a serious omission.

Before walking through my expansion (80kb PDF) of Garrett’s model, let’s first take a more extensive look at the critiques I’ve summarized above.

Why interactive multimedia?
As Garrett accurately points out, the web is a convergent medium, and that convergence has caused much confusion among user experience professionals.

Originally conceived of as a hypertextual information space, the web quickly added functionality, drawing in those from a software background. But just as quickly, the web also drew those using it to provide rich experiences reminiscent of the “new media” of the early 1990s—interactive multimedia CD-ROMs.

Needless to say, bandwidth constraints quickly posed difficulties, but as early as 1996, I was designing movie promotion sites for which the primary goal wasn’t to enable online ticket purchases, nor to provide detailed information about the movie, but rather to give visitors a taste (or an “experience” as it were) of the movie in an effort to inspire them to go to theaters to see it. This is just as true of recent, highly visited movie sites, such as “The Mummy Returns” or “Shanghai Knights.” So while usability puritans may shudder, these sites are clearly fulfilling the interests of the visitors and of the studios building these sites.

Likewise, while the number of interaction designers and information architects has grown, so has the number of “interactive designers”—people like top-rated Flash/multimedia designer Hillman Curtis, and the readers of eDesign. Is it really sensible to exclude them from the field of user experience?

Much of the argument over what the “right” kind of website is stems from people’s failure to appreciate that, as a medium, the web encompasses more than just the specific aspect they’re most comfortable with—and a failure to appreciate that users might be interested in more than one type of experience. It’s a question of finding an appropriate balance among these three types of experiences.

Too often sensory richness is seen as fluff that distracts from functionality and understanding—witness the disdain expressed toward Flash by some. But this misses the point about how sensorial design [1], when used well, can expand the palette of tools used in task-oriented and information-oriented design.

For example, public radio’s “Marketplace” uses musical cues when reporting the day’s stock market results. When the market’s down, listeners hear a glum version of “Stormy Weather.” When the market’s up, it’s a jaunty “We’re in the Money.” Back in the heady days of the late 1990s, new stock market records were accompanied by an additional sonic overlay of cheering. Regular listeners can instantly know the day’s results before the announcer delivers the specific figures.

Obviously with a radio program, such audio cues are unsurprising. But they can also be useful in interfaces. Microsoft’s Entourage uses a set of different chimes to indicate when it’s checking a mail account and if any mail has been received. Such “ambient feedback” is extremely useful when Entourage is left running in the background, checking mail periodically throughout the day.

Sometimes sensory richness is an appropriate goal in its own right. Most video-gamers have invested in speakers to enhance their experience, and force-feedback joysticks are not uncommon. Hardcore devotees of auto racing and flight simulation games are known to use full wheel and pedal systems and gaming chairs that use low-frequency sounds to create gut-thumping tactile sensations. Now the latest cutting-edge gaming technology involves 3D glasses. Is all this necessary to play the game? Certainly not, but gamers spend the money on the gear to enhance the immersive aspect of their experience.

A similar example of the appeal of immersiveness as an end unto itself was the original Myst, which was enormously popular with people who’d never played video games before and weren’t particularly interested in solving Myst’s puzzles. Rather, they were content to wander through an entrancing environment.

But more sensory richness doesn’t necessarily mean better. Zen gardens provide rich experiences with subtlety and minimalism. As with all design, appropriateness to the context is the key. For example, much of the Flash on the web today is being used inappropriately, which is precisely the problem.

Expanding the model into the third dimension
So how do we approach the creation of this third dimension of user experience? Garrett’s five-layer “strategy to surface” model (described in this sample book chapter, 220kb PDF) holds up quite well. To quickly summarize (at the risk of oversimplification):

  • The visible components of a site, software, or product make up its “surface.”
  • The “skeleton” organizes these visible components.
  • The skeleton is the concrete implementation of the underlying conceptual “structure” that organizes the overall features and functionality.
  • The features and functionality to be included in the conceptual structure are determined by the “scope” of the product.
  • The “strategy,” which incorporates both the creator’s goals and users’ needs, determines what’s in scope and what’s not.

But in extending this model, the difficulty is that unlike the other two dimensions, sensory richness involves a wide variety of fields, including writing, graphic design, filmmaking, animation, motion graphics, sound design, and musical scoring. These fields don’t always have descriptive terms that neatly separate their design processes into layers, and the terms that do exist vary widely. Consequently, I’ve had to adapt terms in an effort to find descriptions that fit the equivalent stages of the design process across these various fields. In each case, since these terms may be used differently in a specific field, I’ll try to be clear about how I’m using them.

(A final note: several of the examples mentioned are drawn from non-interactive media. I’ve done this because they’re more familiar and more clearly articulated within their traditional contexts.)

Artistic vision
At this stage, the approach isn’t much different from task- or information-oriented design processes. Business, creative, and other goals of the creators are combined and balanced with the needs and desires of the users/audience.

Since sensory richness often involves the “creative” fields, this stage is often referred to as developing the “artistic vision.” But despite that name, creators are often keenly aware of their audiences, especially in commercial endeavors.

For example, advertising agencies often employ “account planners,” whose methods are similar to those of user researchers, and whose goal is to get inside the heads of targeted audiences so that the agency can craft an advertisement that resonates. Account planning was essential to the creators of the famous—and highly effective—“Got Milk?” advertising campaign because they discovered that consumers only really thought about milk when they ran out of it. [2] That insight became the foundation of the campaign.

This is not that different from developing the strategic direction of a site, software, or product by doing task analysis to determine what users are trying to accomplish and/or research to understand their mental models of tasks or content.

The creative brief
With the strategic goals in mind, the creative brief defines the intended experiential and/or emotional aspects to be evoked. While “creative brief” is often used by those with a graphic design background as somewhat synonymous with a project definition document, I’m using it in a narrower sense: as the sensory equivalent of what the functional specification does in spelling out supported tasks and what the content requirements do for the informational needs of a project.

This is the point where fundamental choices are made about which particular medium to use—i.e., whether it is best conveyed by visuals, by sound, by motion, etc.

Likewise, there are often decisions about the conceptual approach, genres, metaphors, and imagery to be used. For example, the “Got Milk?” team decided to use a comic touch, highlighting characters who are stuck with a mouthful of food and no milk, to make the ads more memorable.

For sites, software, and interactive products, this is where brand strategy intersects with user experience to ensure that both reinforce each other. Just as a functional specification may be constrained by technology choices, or content requirements may be affected by available information, the creative brief may also need to work within existing branding strategies and corporate identity guidelines.

Choreography
As the rough idea comes into focus, the choreography of interactive multimedia elements coincides with the interaction design and information architecture. Borrowed from dance, choreography seemed an apt term for the design and structuring of the overall elements so that they form a seamless, unified whole that supports the intended effect.

At this stage, graphic designers will often create “mood boards”—a collage of images illustrating the sentiments, feelings, or emotions that the product should evoke. Typically, the first thumbnail sketches outlining potential ideas for specific design directions are also developed.

Likewise, filmmakers, animators, and motion graphic artists often use storyboarding as a way to map out sequences to ensure that they flow together.

For writers, this stage involves the basic structuring of a story, whether it’s the nonfiction outline or the story arc of fiction. Or in video-gaming, it may be the creation of the environment in which the game player’s “story” will occur. Needless to say, these activities can overlap with those of an information architect.

All involve designing at the conceptual level, just as the interaction designer structures task flows or the information architect arranges content into top-level sections.

Mise-en-scène
The work at this stage is similar to the previous one, but the focus now shifts to a finer level of detail—typically the design of individual screens or sequences. It’s similar to the shift from the more conceptual interaction design and information architecture to the more concrete interface design and navigation design.

The theatrical term mise-en-scène—usually translated as the “arranging of the scene”—captures this sense of arranging specific elements to evoke expressive qualities such as mood, style, and feeling.

For example, this is where composers make choices about specific instrumentation, which can greatly affect how listeners will react to the basic melody and harmony. Imagine Rimsky-Korsakov’s “Flight of the Bumblebee” played on the tuba instead of the traditional clarinet.

Likewise, filmmakers and animators will plan out specific camera angles and lighting, costuming, and set decoration to reinforce the script’s intended effect for a scene. A masterful example comes from the climactic scene in the film noir “The Third Man,” in which the protagonist, Holly Martins, finally catches up to the monstrous black marketer, Harry Lime (Martins’ oldest friend), who is on the run from the police. So far Martins has refused to believe the charges against Lime, who arrives wearing a black overcoat as they board the gigantic Vienna Ferris wheel. As Lime dissembles, nearly convincing Martins, he doffs the coat, revealing a gray suit. Outside, the Ferris wheel’s spinning structure, further distorted by the tilted camera angles, mirrors Martins’ inner turmoil. But as Lime reveals his true colors, he puts the black overcoat on again while threatening to kill Martins. When pointed out, these cues sound a bit heavy-handed, but when watching the movie, one is only subconsciously aware of their effect.

While movies are obviously among the most highly controlled experiences, similar techniques are used on sites, software, and products as well to strike the right tone in support of brand personality, subtly direct a user’s attention to help reinforce content hierarchies, or highlight user interface components relevant to the task at hand.

Graphic designers traditionally use “comps,” ranging from rough sketches to almost-finished dummy layouts, to work out these arrangements on a specific screen. Since this overlaps with information design (in the broad Tuftean sense of designing the presentation of information for understanding) as well as user interface design—both of which also occur at this stage—there’s often been tension in this area. It’s the classic “who owns the wireframe” argument among visual designers, information architects, and user interface designers.

Skin
Finally, all the beneath-the-surface work is expressed in the tangible interface of the site, software, or product—in essence, its skin. “Skin” is, in fact, the term used by products that allow users to substitute their customized skins over the interface’s skeleton, which itself doesn’t change.

Most commonly, this skin primarily involves visual design, but as I’ve discussed previously, it’s better to think more broadly about sensorial design as part of the overall look-and-feel.

And yes, looks do count. A recent study on website credibility found 46.1 percent of those surveyed mentioned the site’s appearance in assessing it—far more than any other factor. (The next closest factor, information design/structure, was mentioned only 28.5 percent of the time.)

This actually isn’t surprising. The service industry has long recognized that consumers often use “tangibles” (neatness, friendliness, etc.) to make a shorthand evaluation of the service itself, particularly when the quality of the service is difficult to evaluate. (For example, can you really tell how good a job your accountant did on your taxes?) Astute businesses make use of surface qualities. For example, in a bit of real-world theater, Avis has supervisors at its car rental counters wear little headsets, not because they’re needed, but because Avis found customers were reassured to see someone was in charge, with the headsets providing the cue that a supervisor was present.

This concern with appearances is true even with a site as starkly utilitarian as Google, which uses its playful—and often played with—logo and its “I’m Feeling Lucky” search button to reinforce its friendly, slightly quirky, brand personality.

Toward a holistic view of user experience
Garrett provides a useful foundation for trying to bring some order to the various concepts being used to describe the user experience development process. I hope my expanded model will do the same for an important dimension that has been overlooked. I welcome suggestions on how to improve this model.

End Notes

  1. Thanks to Nathan Shedroff, who, to my knowledge, first used the term in regard to digital media.
  2. Jon Steel, one of the creators of the “Got Milk?” campaign, provides a case study in his book “Truth, Lies and Advertising.”

George Olsen was a co-founder of Boxes and Arrows, and is a senior interaction designer with Yahoo!. Previously he was an information architect with The Capital Group and principal of Interaction by Design. He has done award-winning work for a variety of companies, from dotcom start-ups to Hollywood studios, such as Disney, to Fortune 500 companies, including Nestle and Transamerica. He’s taught at UCLA Extension, and written about and spoken at numerous conferences about user experience design issues. He muses about user experience from time to time.

23 thoughts on “Expanding the Approaches to User Experience”

  1. Good ‘additive’ observations for a very specific scenario-set.

    I would hope that we can eventually come up with a more ‘exacting’ term to align with the distinct scenario-set represented by this article than ‘experience’ (perhaps ‘deep experience’). As noted in “The Experience Economy”, every interaction a business has with an individual is an experience and the majority of them are not online and yet still ‘should’ require the involvement of the types of skills/activities alluded to by the model.

    While George suggests a way to add to the model by inclusion of another ‘narrow’ perspective, I see opportunities to add to the model due to more ‘broad’ perspectives.

  2. forgive this tangent, but, paula why do you use ‘scare quotes’ around so many words? it suggests an ironic detachment, and at times makes following your train of thought difficult.

    just say what you have to say!

  3. Excellent article! I’m going to be thinking about this more, but my first reaction is that it’s great to see the skills and practices of “immersion oriented” interactive design practices so clearly defined. This is a step in the right direction: towards a practice and nomenclature for interactive experiences that are useful, usable, and a pleasure to use.

    Paula, if Jesse and George had been a bit more overt in their focus and used the term “Computer Interactive Experience” instead of “User Experience”, would you be happier with the essay(s)? I’ve always thought that in this field the term “user” was assumed to mean “user of a computer system”… It seems to me that you are (re)defining “user” as “a person who interacts with an organization”. It’s an admirable definition, but I think most people in these parts are trying to have a discussion about computer interaction design, not customer relationship management (i.e., HCI not CRM). I’m not even sure that the topics raised in these diagrams are sufficient to cover those topics associated with CRM.


  4. it really irritates me to find the “surface” layer on the bottom of your diagram.
    I also don’t understand your assumption that content is information-oriented. wouldn’t it be better to say content is experience-oriented where receiving information in a special way would be one, but not the only part, of a possible content ?

    @jjg: saying the Flash technology doesn’t belong to the web is like saying fish don’t belong to the ocean. it is one of many creatures and manifestations of the ever-changing and evolving ocean called “www”…

  5. More to come, but I have 2 things right off:

    1. Why the censorship? Why was ‘fifa_mifa’s post removed? Agreed that it wasn’t eloquent etc, but it did raise an important point: ‘How does George have the time during the working day to be doing a back-and-forth on this discussion list’?

    2. I, and I’m sure I am not alone, for one work for a company that does real work. I shoulder quite a bit of responsibility and crank out a good amount of work. The IAs- and pretty much everyone else here- is not looking for the next blog to post to or the next book to write. We’re cranking out schematics/ wireframes/ use cases/ deliverable reviews/ design reviews/ etc… So, to re-iterate fifa_mifa’s point (and this time a little more eloquently), ‘George, how busy are you? And is replying to blogs/ discussions/ etc the most exhilarating thing you do in the day?’

    PS: Before flaming me and accusing me of double standards for writing this during the work-day, I’m currently sitting at an airport, connected via GPRS, waiting for my plane to leave

  6. George…

    I guess my concern was not so much of a personal attack (and my apologies if it came across as such), but more of a curiosity about what IAs- or call them what you will- are doing in the average work day.

    Honestly, is being an IA at a client’s side so to speak, a walk in the park compared to being an IA developer-side?

  7. I’ve been staring at these two diagrams (JJG’s and George’s revision) over the past few days and posed this question. What if the JJG user experience model was actually a model for the production of various interactive projects?

    I have been experimenting with this idea over the past few days. Though JJG makes it clear that his multi-layered diagram “does not describe a production process,” my thought is that maybe it could. I’ve incorporated the ideas presented by George Olsen in his article at Boxes and Arrows, which elegantly added to the user experience view and JJG’s diagram.

    In haste I have created some quick diagrams in OmniGraffle (less than perfect) and have assigned task lists to each of the items — possible pre-production tasks one could look at doing in each stage of production. They are available here: http://www.benry.net/blog/archives/2003_04.html#000314

    Feedback welcome as I continue to tread through these murky waters.

  8. As a hyphenated-to-the-point-of-exasperation designer-programmer-writer-whatever, George’s model has been a real gift.

    The way I’ve interpreted it, I’ve gone from a two-dimensional on-or-off state to having a three-dimensional continuous model.

    It’s helping me visualise sites as multi-dimensional entities (imagine vectors: 15 task, 30 immersion, 55 information) and considering how pages/sections can contribute to that.

    In the long term, I’m sure it will help me meet the divergent/convergent needs of different users.



    PS: Jesse’s diagram remains the one on my wall!
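    To make the vector idea above concrete, here is a minimal sketch of how a site could be modeled as points along the three dimensions discussed in the article. Everything in it is illustrative: the `ExperienceProfile` class, the page names, and the numeric weights are my own assumptions, not anything defined by George’s model or this thread.

```python
from dataclasses import dataclass

@dataclass
class ExperienceProfile:
    """Hypothetical weighting of a page across the article's three
    dimensions (task-, information-, and immersion-oriented).
    The numbers are illustrative, not a real metric."""
    task: float
    information: float
    immersion: float

    def normalized(self) -> "ExperienceProfile":
        """Rescale the weights so they sum to 100, preserving ratios."""
        total = self.task + self.information + self.immersion
        return ExperienceProfile(
            task=100 * self.task / total,
            information=100 * self.information / total,
            immersion=100 * self.immersion / total,
        )

def site_profile(pages: list[ExperienceProfile]) -> ExperienceProfile:
    """Average the page profiles to characterize the site as a whole."""
    n = len(pages)
    return ExperienceProfile(
        task=sum(p.task for p in pages) / n,
        information=sum(p.information for p in pages) / n,
        immersion=sum(p.immersion for p in pages) / n,
    )

# The vector from the comment above: 15 task, 55 information, 30 immersion,
# plus a hypothetical task-heavy checkout page for contrast.
home = ExperienceProfile(task=15, information=55, immersion=30)
checkout = ExperienceProfile(task=70, information=20, immersion=10)
overall = site_profile([home, checkout])
```

    Averaging is just one possible aggregation; the point is only that pages and sections can each contribute their own balance to the site’s overall position in the three-dimensional space.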

  9. The addition of the third dimension (“Immersion-oriented”) is a big step in the right direction. A few things come to mind:

    Significantly, the added “immersion-oriented” column is *not* a dimension. It’s not orthogonal to the two previous columns. Rather, all three are *parallel* layers or workflows. Both Jesse’s and George’s diagrams seem to present these three columns as representing different types of UE design. While I agree that emphasis varies by project, I think all three workflows are present and interacting in all projects.

    It seems to me that we could replace the terms Information-oriented, Task-oriented, and Immersion-oriented with more classic terms of Content, Function, and Form. This might not be quite what you had in mind, since there is already a horizontal dimension of Visual/Sensory Design.

    But I think it might make sense. In fact, I wonder if we might learn something here from the Rational Unified Process (software developers, God forbid!). RUP has a process diagram that presents a Y dimension of “workflows” and an X dimension of time (http://www.rational.com/products/whitepapers/100420.jsp). The workflows (requirements, analysis, design, implementation etc.) sound like the traditional phases of a waterfall approach, but because they are kept in an orthogonal dimension, they are allowed to interact. For instance, an implementation might reveal new requirements.

    So, if we thought of this new diagram as presenting three parallel design disciplines of Content, Function, and Form, it would explain how content, function, and form are really inextricably tied together. They can be treated as parallel rather than sequential processes. For instance, visual design can influence interaction, and interaction design can influence content, rather than the other way around as is usually thought. This is an issue I’ve always thought needed more attention.

  10. With all the copy editors listed on the credits for this site, I wonder how the misspelled word “distain” (correct spelling is “disdain”) got through?

  11. Glad people are finding it useful. As Christopher noted the model’s primarily intended for users’ experiences with websites, software, videogames etc. There’s definitely crossover with the larger issue of brand experience — i.e. every direct and indirect experience someone has with a company/organization — which Paula seems to be concerned about.

    But trying to come up with a grand unified theory of everything tends to be so abstract that it’s typically not very useful in practical application. So I opted to work from the bottom-up. Hopefully, this is a piece of the puzzle that can be integrated into a larger picture.

    For example, I think much of it can be extended into product design and development — which is where my interests are these days. As for designing a stapler, it’s probably less useful, simply because it’s overkill. The functional and sensory aspects are applicable (in a simple way), but staplers don’t have a lot of content.

    It’s more applicable to what I like to call “products with brains,” stuff like a cell phone, your VCR, medical imaging equipment, etc. The good/bad news is that there are going to be more and more smart products. There are definitely some things we can learn from industrial designers (who’ve dealt with functionality), but they’re less familiar with the sort of interaction that comes with software (nor, obviously, with content issues).

    The web is an especially convergent medium in the sense that it typically involves all three dimensions in a way that projects from each of the roots don’t. For example, traditional software applications focused on functionality, and while they dealt with data they didn’t have content, in the sense that a lot of websites do. Likewise, sensory/immersive qualities were rarely thought much about.

    Which is typical, I’d say. In most of the root professions, projects generally dealt with two of the dimensions at most, and usually one dimension was predominant. What makes our work hard is having to 1) involve more dimensions and 2) strike an appropriate balance among them.

    As far as what “interactive multimedia” refers to, while I suppose you could include applications and hypertext in a technical sense, I’m using it to refer to things like the CD-ROMs of the early 1990s (which were commonly referred to by that name). The difference is the emphasis on sensory/immersive qualities _and_ having interaction, which makes it different from a film or other multimedia presentations.

    As for the time dimension, since the diagram was already crowded, I dropped it because I thought it was implicit in moving from conception to completion. The time needed for each dimension and each step depends entirely on the particular project at hand.

    As for the notion of communication, one of the key things that’s still missing is content strategy, which is implicit in both my model and Jesse’s original, but not explored in detail. I made some nods in that direction, but I ran into the difficulty of muddying the waters by trying to include too much. That said, having been a journalist and knowing filmmakers, I think most of these non-fiction/fiction content strategy steps fit within the various dimensions of the model, which is why I didn’t feel the need to create a fourth dimension around content strategy.

    However, it’s true this model tends to focus more on what is said, rather than _how_ it’s said — i.e. rhetoric — as well as how that fits into the larger relationship between those having the “conversation.” Originally, I was going to wrap that into the model, but again doing so caused the model to be too confusing.

    But I’m still working on modeling that issue. There’s actually a lot of interesting thoughts from the service industry about this. Which makes sense, since really the computer is just taking the place of one of the participants in a service encounter. Stay tuned.

  12. Nicely done, George. This is the diagram I’ve been hoping you would produce for a long time now. I have two minor quibbles, one extremely minor and the other merely quite minor.

    First, the extremely minor one: I’m not sure what you gain by inverting the visual arrangement of the planes. It complicates side-by-side comparison of the models, loses the building-up-from-strategic-foundation idea that the original diagram evokes, and conveys no discernible advantage in return.

    Second, the somewhat less minor: It hardly seems fair to describe the Elements model as having “weaknesses” and “omissions” when one tries to apply it beyond the context for which it was devised. My diagram is explicitly about the Web, and it seems to me that Flash movies and CD-ROMs constitute something qualitatively quite different. Stretching the definition of the Web to include interactive multimedia renders that definition meaningless.

    Characterizing the model as falling short when applied beyond the scope of its original intent strikes me as rather like saying a screwdriver falls short because it does a poor job of pounding in nails. That said, I’m impressed with the elegance of your extension to the Elements model.

  13. I’m flattered Jesse. To answer your points:

    I inverted the planes mainly because when I’ve shown your original diagram to people, they’ve tended to be a little confused, because they tend to associate “moving down” with “moving forward through time” and didn’t necessarily get the “building up from the foundation” logic.

    But I don’t have strong feelings about this, and it would be interesting to see if having the planes explicitly labeled clarifies things. If it’s clear, I don’t mind revising the order of the planes.

    On the second point, I guess that’s where we have a fundamental disagreement about the nature of the web — I think immersive/interactive multimedia _is_ part of it. (As I mentioned in my article, in years past I did lots of movie sites where providing a “cool experience” was the primary focus.) But we can agree to disagree.

    That said, these extensions wouldn’t have been possible without your excellent base. If I see further, it’s because I’ve stood on the shoulders of giants…

  14. When I say the Web is different from Flash, I’m not saying the Web is better or more important or anything like that. Just different. When I refer to “the Web” I’m referring to a very specific set of technologies (namely, HTML delivered over HTTP) with a specific set of constraints. The Elements model was never intended to be applied in contexts where those constraints are absent, or where differing constraints exist.

    Now, there is a class of applications that I would characterize as “Web-like” — WinHelp and wizard interfaces spring immediately to mind — and I suppose the Elements model could be stretched to accommodate these without straining. But Flash and Director are rather further removed from HTML-over-HTTP than these cases, and the constraints involved seem different enough to merit consideration on their own grounds, rather than trying to shoehorn them into some definition of “the Web” that thereby becomes so broad as to be rendered meaningless.

    Talking about the user experience of interactive multimedia is fine and important and valuable and necessary, but mixing multimedia up with Web sites does a disservice to meaningful discussion of both classes of applications.

  15. Well it’s useful to hear how narrowly you define the Web because that’s not an assumption that’s clear in your model. It’s your model, so you’re free to define things as you wish, but I think many people would disagree that the Web is _only_ HTML-over-HTTP.

    However my assertion about interactive multimedia/immersion doesn’t rely on Flash or Director — which I mentioned mainly as clearly understood references, and which weren’t available and/or practical on some of the early movie sites I worked on. Heck, animated gifs were a big deal at the time.

    And back in 1996, sites like “The Spot” tried to create an immersive experience without even that, using just text and photos. I suppose you could call it hypertext, but it went beyond the sort of thinking I saw in “traditional” hypertext circles at the time (folks like Eastgate Systems), who were text-focused.

    Or in another lo-res example, sometime right after frames came out, Feed had one of the most innovative uses of them I’ve seen, in an essay by a feminist who was pursuing a career as a stripper. The reader had to “lift the dress” on each page to read the article itself. (In my oversimplified description it sounds crass, but the effect worked beautifully with the particular subject matter.) It was still graphics rather than animation or video, and the interaction was minimal, but it still created a memorably immersive experience.

    Besides, by the time of your diagram, there had been some (quite elaborate) dynamic HTML sites with interactive multimedia that technically were still just HTML-over-HTTP. (Or are JavaScript and CSS not part of the Web? And what about SMIL or SVG, which after all are W3C standards specifically for creating interactive multimedia for the Web, even if they never got widely supported.)

    So it’s not like the immersive dimension hasn’t been part of the Web for quite some time.

    Again I think we’ve just got a fundamental disagreement over the nature of the medium. What you seem to see as muddying the waters, I see as painting a complete picture. Not sure we’ll ever convince each other otherwise.

  16. Maybe this will help me get a handle on the intended application of your model, George: What is excluded from your definition of the Web? By what criteria?

  17. I’m not sure, but I think you may have misunderstood every single point I’ve raised. All I’m saying is that understanding the model entails understanding the context in which it was created.

    I don’t understand your hostility at the suggestion that some kinds of network applications aren’t Web sites. I don’t think everything on the Internet has to be labeled as “Web” in order to be valuable or important.

  18. I guess my definition of the Web would be pretty similar to my Mom’s (who I think is a pretty typical user): pretty much anything that gets displayed through a browser.

    This would include HTML, CSS, JavaScript, DHTML and back-end applications that enable functionality through the browser. It also includes the major plug-ins displayed through a browser: Flash (and Director), audio/video (QuickTime, Real, Windows Media Player). Obviously my Mom doesn’t have the technical language to describe it this way, but that’s what she refers to when she talks about “the Web.”

    She doesn’t distinguish between things on the Web that are site-ish and those that are application-ish and those that are experience-ish.

    Admittedly stuff presented via plug-ins can straddle the border between Web-based interactive multimedia and “passive presentations,” depending on how they’re used.

    (In short, the longer the viewing experience and the less potential the user has for actively interacting, the more I think these become akin to traditional media. On the other hand, that’s not much different from lengthy passages of text just being thrown up on the Web without being “translated” to be appropriate to the medium — for example, brochureware.)

    I admit it’s a broad definition, but again I’m arguing that we _should_ see things broadly. That’s what I thought was powerful about your original model — it showed the connections between traditional application development and hypertext systems, and it highlighted the convergent nature of the Web. I just think that convergence includes yet another dimension.

  19. I’m not sure how my work habits are relevant to the pros and cons of the model I’ve proposed.

    If I choose to take a little time out of my day to converse about this and am willing to stay late to ensure work gets done, what makes that a concern of yours?

    Yes, I do enjoy engaging with our community. I also enjoy my real work cranking out schematics / wireframes / use cases / deliverable reviews / design reviews etc. I just make time to do both.

  20. I must be missing something because I don’t see why this is relevant or why it had to be re-iterated. I also don’t see why your comments had to be so personal, Pramit.

    I don’t care where George writes comments from. He can write them from his Yoga class for all I care; I’m just glad he makes the effort to do so. The idea is we all learn something from participating in these discussions.

  21. I’ve been staring at these two diagrams (JJG’s and George’s revision) over the past few days and posed this question: What if the JJG user experience model was actually a model for the production of various interactive projects?

    I have been experimenting with this idea over the past few days. Though JJG makes it clear that his multi-layered diagram “does not describe a production process,” my thought is that maybe it could. I’ve incorporated the ideas presented by George Olsen in his article at Boxes and Arrows, which elegantly added to the user experience view and JJG’s diagram.

    In haste I have created some quick diagrams in OmniGraffle (less than perfect) and have assigned task lists to each of the items — possible pre-production tasks one could look at doing in each stage of production. They are available here: http://www.benry.net/blog/archives/2003_04.html#000314

    Feedback welcome as I continue to tread through these murky waters.

  22. I can’t speak for Jesse, but I definitely agree the three dimensions shouldn’t be seen as mutually exclusive. Rather, they’re particular endpoints of a multi-dimensional continuum — and as you’ve said, the question is how to find an appropriate balance among them.

    Particular types of projects at the endpoints were highlighted merely to help provide easy-to-understand examples, since traditionally they’ve each emphasized a particular dimension. But as you’ve said, any project normally involves all three.

    Which is why I actually moved away from using Content, Functionality and Form as names for these dimensions (after toying with this initially). I think CFF is a decent shorthand, just as long as we realize that it has the potential to confuse as well as clarify. (For example, a lot of UI design is using form to support functionality; the content may dictate the type of form to be used, etc.)

    It may have been a bit jargony, but with Task-, Information- and Immersive-Oriented, I was trying to shift the focus from how we do it — which I think is CFF’s weakness — to the user’s perspective. My own shorthand is more like Do, Know, Experience (or Sense/Feel, if you like).

    I’ve always seen them as parallel, intertwined processes, so if that wasn’t clear, that’s only due to my lack of clarity in the model. Perhaps it’s because I take it for granted that I didn’t highlight the interconnectedness more.
