Somehow, products, services, and systems need to respond to stimuli created by human beings. Those responses need to be meaningful, clearly communicated, and, in many ways, provoke a persuasive and semi-predictable response. They need to behave.
This basic definition of Interaction Design (IxD) illustrates the common threads between definitions crafted by esteemed designers Robert Reimann1 and Dan Saffer2 as well as the Interaction Design Association3.
It’s also important to note that Interaction Design is distinct from the other design disciplines. It’s not Information Architecture, Industrial Design, or even User Experience Design. It also isn’t user interface design. Interaction design is not about form or even structure, but is more ephemeral—about why and when rather than about what and how.
For any design discipline to advance, it needs to form what are known as foundations or elements. The creation of such semantics encourages:
* better communication amongst peers
* creation of a sense of aesthetic
* better education tools
There are other reasons, but for now these seem sufficient for a discussion about foundations.
What Are Foundations?
“Foundations” first came to my attention while I was preparing for the Master of Industrial Design program at the Pratt Institute in Brooklyn, NY. The program was built by Rowena Reed Kostellow based on her educational philosophy of foundations (as detailed in the book Elements of Design by Gail Greet Hannah4).
To Kostellow there were six elements that made up the foundations of Industrial Design: line, luminance & color, space, volume, negative space, and texture. Mixing and experimenting with these was at the heart of designing in the 3D form discipline. Students at Pratt explored these foundations in a year’s worth of studio classes. They would press boundaries and discuss relationships while critiquing abstract and real projects.
I’m not the only person ever to think about this issue, though I propose that we think about it differently. Dan Saffer, for example, in his book Designing for Interaction5 has a great chapter on what he calls the Elements of Interaction Design: Time, Motion, Space, Appearance, and Texture & Sound. Dan’s elements concentrate on what I would call the forms that carry interactions, but to me they are not the form of an interaction itself, except maybe time.
If there are indeed foundations of Interaction Design, they need to be abstracted from form completely and thus not have physical attributes at all.
Foundations of Interaction Design
“Time” makes interaction design different from the other disciplines of user experience (UX). It is the wrapper of our experience of an interaction, which must unfold over time.
But time is not a single foundation of Interaction Design; there are too many interrelated facets of time to be manipulated. And as we all learned, time is relative, it is fungible, and it exists on many axes at the same moment. Let us consider three time-related foundations of Interaction Design:
Interaction design is the creation of a narrative—one that changes with each individual experience with it, but still within constraints. For example, if I’m using an email client, I’m not going to turn on a stove burner during the process of writing an email.
Narratives have pacing. We experience that most clearly when we watch a movie. A great movie will have you coming out of a theater having never looked at your watch. Pace is also a part of interaction design, but in some cases a good experience may have you looking at your watch—hopefully not out of boredom, but because you need to know the current time so you can complete the goals of the interaction.
The way I think of pace in interaction design often correlates to how much I can do within any given moment: not just how much I can do, but how much I have to do before moving to the next moment. For example, when I’m buying something I can have a single, really long form where all of my checkout information is presented at once, or I can separate the different components of the checkout process into more discrete moments.
While it might take the same length of time to complete either experience because the number of form fields is the same, the experience of the pacing of these designs is quite different. Further, it has been argued that one long form is more efficient, and conversely that separating a form into chunks is more manageable. Maybe that means that designing for a positive total experience needs to consider things beyond efficiency for its own sake.
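The pacing contrast can be sketched in code. This is a minimal, illustrative model (the field names and step titles are hypothetical, not from the article): the same set of checkout fields is presented either as one long moment or as three smaller ones, so total effort is identical while pacing differs.

```typescript
// A "moment" of the interaction: a titled group of form fields.
type Step = { title: string; fields: string[] };

const fields = [
  "name", "email", "phone",
  "street", "city", "zip",
  "cardNumber", "expiry", "cvv",
];

// One long form: everything is due before the next moment.
const singlePage: Step[] = [{ title: "Checkout", fields }];

// Chunked: the same work, paced across smaller moments.
function chunk(all: string[], size: number, titles: string[]): Step[] {
  const steps: Step[] = [];
  for (let i = 0; i < all.length; i += size) {
    steps.push({ title: titles[steps.length], fields: all.slice(i, i + size) });
  }
  return steps;
}

const chunked = chunk(fields, 3, ["Contact", "Address", "Payment"]);

// Total effort is identical; only the pacing differs.
const totalFields = (s: Step[]) => s.reduce((n, st) => n + st.fields.length, 0);
```

Either structure collects the same nine fields; the design decision is purely about how the moments are paced.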
A simpler way that we design for time in interaction design is “reaction time.” How long does it take for the system to produce a reaction to an event? We’ve all seen our cursor change to an hourglass or the proverbial progress bars as we wait for the system to do what we asked, but there are other reaction time considerations.
Actions done in real time (synchronous) have a level of relationship to the moment, while actions that seem to happen in a black box and come back later (asynchronous) lack that relationship. However, because some systems take time, we need to be cognizant of how we communicate these different types of reactions.
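One well-known way to reason about how to communicate reaction time is the classic response-time thresholds popularized by Jakob Nielsen (roughly 0.1 second, 1 second, and 10 seconds). The sketch below maps those thresholds to feedback choices; the specific indicator names are illustrative, not prescribed by the article.

```typescript
// Map a system's expected reaction time (in seconds) to the kind of
// feedback the design should provide, following the classic
// 0.1s / 1s / 10s response-time guidance.
function feedbackFor(seconds: number): string {
  if (seconds <= 0.1) return "none";      // feels instantaneous; no indicator
  if (seconds <= 1) return "cursor";      // flow of thought intact; wait cursor at most
  if (seconds <= 10) return "spinner";    // attention wanders; show indeterminate progress
  return "progress-bar";                  // user will task-switch; show determinate progress
}
```

A long-running asynchronous action, for instance, would warrant a determinate progress bar (and ideally a way to cancel), while a synchronous action under a tenth of a second needs no indicator at all.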
Every major foundation element like time should probably have a “context” sub-element. What this means is that there is always something about the human being in the interaction that would change the course of the design itself. In the case of “time,” we cannot design any application without understanding and exploring the meaning of how much time a human will be spending in direct contact with the system.
How much time we spend with an application and how long we are in relationship to it inform our designs and also participate in the experience we create.
Alan Cooper & Robert Reimann in About Face 2.06 speak about the context of time as the concept of “posture.” There are four postures:
* Sovereign – an application that takes our full attention.
* Transient – applications in the periphery of our attention that call us for short moments.
* Daemonic – alerting systems that run in the background.
* Parasitic – a supporting interaction mode for both sovereign and transient applications.
Metaphor is a literary device that uses one well-understood object or concept to stand in, with qualification, for another concept that would be much more difficult to explain otherwise. The virtual nature of computers requires that we bring tangible metaphors to bear to help people understand the vagueness of it all. The type and number of metaphors we use directly impact the quality of a product and the emotional connection we have to it.
A favorite metaphor is the trash can or recycle bin (pick your OS). The idea that your files are waiting in some virtual “bin” or “can” so that, if you were mistaken, you can dig through the trash (ick!) and recover them is ingenious. Of course, you can always “empty” it, making whatever was inside irrecoverable. The metaphor works well for most people mainly because of its precision and its flexibility with respect to the real thing. In thinking about the qualities of the bin/can metaphors in Mac OS and Windows, one might wonder whether the “dirtiness” of a trash can makes us less likely to dig files out of it than to recover files from the recycle bin.
All metaphors break down at some point; where these metaphors break is in how we get things into them. We still use the term “delete” to express how we add something to the bin or can, yet we don’t delete things into our real trash cans, do we? Despite this breakdown, the metaphor is still tangible enough for us to grasp.
But sometimes metaphors go too far; they ask us to leap a chasm wider than our ability to imagine. The literal desktop seems to make sense, and it has been tried in the past. If I have a blotter, a file cabinet, an inbox, a calendar, etc. laid out quite beautifully on my screen, I can call my objects files, use a notepad, keep my messages in an inbox, and keep appointments on a calendar, right?
But metaphors appear to succeed best when they are imprecise and the user has to fill in the gaps from their own understanding. Thus, we have an adaptation of that desktop metaphor on our computers today.
The interaction designer needs to strike this balance, cautiously using the metaphors of their predecessors and building on top of them, so long as the original (maybe convention-setting) metaphor can withstand the new direction.
Working in tandem with metaphor, Abstraction relates more to the physical and mental activity that is necessary for an interaction to take place. I first started thinking about abstraction after reading an article by Jonas Lowgren7 on what he has termed “Pliability.” After reading the article and using the term a few times in talks and discussions, it occurred to me that Jonas was really speaking about how abstracted an interface is from the response of the product.
By most accounts almost everything on a PC is pretty abstracted because you have two primary interface points for input—mouse and keyboard. Some people have placed their monitor inside of some sort of touch device lowering the level of abstraction for some types of interactions, mainly drawing. Still, most of us type, point, click, and move the mouse around on the screen.
Let us focus on “mousing.” We look at a monitor where there is a cursor (an icon) that we were taught is related to the mouse. Without looking at the mouse (usually), we move it, and in whatever direction we move the mouse, the icon on the screen (usually an arrow) moves. Well, sorta. Right and left seem to work directly, but moving the mouse away from us moves the cursor up, and moving it toward us moves the cursor down, possibly playing on the metaphor of perspective.
Then, when we get the icon over a target, we click a button on the mouse. This is a strong level of abstraction: the mouse, monitor, and CPU work in unison to create a series of effects that communicate the connection between the three devices. But that connection is very abstract and must be learned.
Even in moused behaviors there are different levels of abstraction. My favorite comparison is between Google Maps and MapQuest. What makes Google Maps a success is that by mousing down and moving my arm I can change the focused area of the map. It has a very quick reaction time (see above), but the type of motion—moving my arm as if moving a piece of paper in my focused line of sight—is much less abstracted than in MapQuest, where you simply click on the border or on the map (assuming the correct mode is set). Now one might say that the click is easier (a less complex set of behaviors), but it is more abstracted, arguably less engaging, and definitely less accurate. This makes Google Maps (and its copycats) a much more pleasing and effective interaction.
Systems are becoming both highly complex and highly integrated into our lives. Many systems are shedding abstraction completely, and not always for the better, even as growing complexity increases the abstraction of information. This is why everyone is so fascinated with touch screens of late: they quickly reduce the level of abstraction required to interact with a computing device.
Other new and popular technologies will create challenges for the next wave of interaction designers. The expanding world of spatial gestures, RFID, and other near-field communication technologies creates interaction experiences that increase abstraction because there is no device to interface with directly. For these, we have not yet found metaphors as effective at guiding the user’s understanding of the abstraction as those we have for the mouse.
All good design disciplines have a form of negative space. In Architecture and Industrial Design, it is the hollowness or the space between solids. In Graphic Design, it is “white space”: what is left without color, line, or form—literally the white part of the paper to be printed on. Sound design uses silence, and lighting design looks at darkness.
So what is the negative of interaction?
There are many places where you can “lack” something, or, more accurately, there are many layers. Are we only talking about the product action? What about our action? What about the space in between either entity’s action?
Pause – A moment in time when no action is taking place by anything that is part of the interaction experience. Often in interaction design we try to fill these gaps, but maybe these gaps are useful.
Cessation of thought – What if doing nothing created a reaction from the system? Well, one student thought this up with BrainBall (http://w3.tii.se/en/index.asp?page=more&id=4) at Sweden’s Interaction Institute (http://w3.tii.se/en/). As you think less, the ball moves more.
Inactivity – The product doing nothing in reaction to an action may itself be a negative occurrence. This differs from pause: here, inactivity is the reaction to activity, as opposed to just a cessation of activity.
Well, whatever the negative space of interaction design is, it isn’t.
Intersection in Interaction
Unlike the form-creating design disciplines, interaction design is intricate in that it requires other design disciplines in order to communicate its whole. For that reason, interaction design is more akin to choreography8 or filmmaking than to music or costume making. The foundational elements above belong only to interaction design, or are re-defined to be explicitly for interaction design.
For example, the use of color is both an aesthetic tool and a functional tool that can enhance or detract from the communication of core interaction styles. Language and semiotics, as tools for communicating through yet another discipline (narrative, or storytelling), also come together to make for a better interaction experience. Further, for many experiences, information architecture is required for the preparation and arrangement of information before the interaction can be created.
As Dan Saffer points out (see above), motion, space, appearance, texture, and sound all make up the form and are used to create patterns of time, metaphor, abstraction, and negative space.
It is the interaction designer’s attempt to manipulate these four foundations that separates the practice from industrial design, architecture, graphic design, fashion design, interior design, information architecture, and communication design.
In the end, interaction design is the choreography and orchestration of these form-based design disciplines to create that holistic narrative between human(s) and the products and systems around us.
1Reimann, Robert. “So you want to be an Interaction Designer”
2Saffer, Dan. “A Definition of Interaction Design”
3Interaction Design Association. “What is Interaction Design?”
4Hannah, Gail Greet. Elements of Design: Rowena Reed Kostellow and the Structure of Visual Relationships. New York: Princeton Architectural Press, 2002.
5Saffer, Dan. Designing for Interaction: Creating Smart Applications and Clever Devices, New Riders, 2007.
6Cooper, Alan and Reimann, Robert, About Face 2.0, Indianapolis, IN, Wiley Publishing, Inc., 2003.
7Lowgren, Jonas. “Pliability as an experiential quality: Exploring the aesthetics of interaction design,” Artifact 12:1 (April 10, 2006): 55–66. (republished on the author’s website)
8Heller, David (NKA Malouf, David), “Aesthetics and Interaction Design: Some Preliminary Thoughts.” (ACM membership required), Interactions 12:5 (September-October 2005): 48-50.
Dave, on abstraction:
While not countering what you’ve said, I was left with the impression you believe it’s better to reduce abstraction. I’m not sure that should be a goal in itself; I would say it’s better to push the abstraction out to the right level for the given interaction.
For example, the very first cars used something akin to a boat tiller for steering. It had poor mechanical advantage, so the steering wheel was invented/implemented, which added a layer of abstraction. Similarly, if we were to blindly reduce the abstraction in an interface, it would look like a display panel from the Matrix. Or we’d all be assembler programmers (o; I like to think that there are three parties when it comes to abstraction: the user, the system, and the ‘tasks’. It’s the designer’s job to triangulate the design to meet the requirements and constraints of those three.
Also, I feel the level of abstraction needed is based on the user’s understanding of the system’s operation/capabilities as well as their experience in their domain. As that understanding increases, the abstraction can be reduced. My users often graduate from web to CLI when they achieve a level of experience and search for better flow.
It’s a complex topic and I’d really appreciate your thoughts on how to arrive at a balanced abstraction.
Great article btw – thanks, pauric
Oh! I agree that different contexts will require different levels of abstraction. Teleperception may be as abstracted as we get, but may lead to amazing interfaces for example.
I’m not sure about graduating to CLI. I’ve tried to make Enso (Humanized.com) part of my life and just couldn’t make it more useful than my current mouse-based gesturing systems that I’m so used to. But I like the steering wheel example.
Another way is the two-lever steering of a tank, where you direct the tracks to go forward or back and steer by increasing or decreasing the speed of the tracks on the left or right side, like using a wheelchair or rowing a boat/canoe/raft/kayak. To me that has more abstraction but also leads to more efficient and proper interaction.
But for digital systems, I do find in general so far that lowering the abstraction layer is better so long as that abstraction layer is designed well. Ergo MS Windows Mobile while capable of touch-screen interfaces is not nearly as good at it as the iPhone. It just wasn’t designed well to do it.
david — great piece
my favorite part: “Narratives have pacing. We experience that most clearly when we watch a movie. A great movie will have you coming out of a theatre having never looked at your watch. Pace is also a part of interaction design, but in some cases a good experience may have you looking at your watch–hopefully not out of boredom but because you need to know what time it is to complete the goals of the interaction.”
that part especially is awesome and important because it exactly smashes the paradigm of usability testing that consists of asking people to stand on their heads and timing them and putting the results in a spreadsheet … which tells you only that jane did it for 10 seconds and john did it for nine … never addressing whether they wanted to in the first place or whether they enjoyed it, etc.
it’s similar to those airport studies re baggage claim: it’s not how long it takes from the gate to your suitcase, it’s how it feels while you’re doing it
very nicely done.
Thanks for this article, David–it’s encouraging to be reminded that the human experience is at the heart of the design-centered disciplines you listed. Often the details of our workday obligations obscure that perspective (for me, at least).
Much like differing cultures have recognizable characteristics unique to their dance and their music, the “choreography and orchestration” of their interaction should also be designed according to their unique cultural characteristics. In short, the elements of the foundations differ from culture to culture.
I think your orientation to foundations needs to be considered separately for each cultural audience of interactors we are designing for. And this may not necessarily be across national boundaries. Even the differing levels in a given society need to be considered.
You spent some time discussing “abstract,” for example. Different groups have different ideas of what abstract is–or have different levels of comfort with the abstract. An easy illustration of this is Asian cultures’ and Western cultures’ thinking patterns and how elements onscreen match these patterns (thinking patterns can also involve “pace” as well). And different cultures have differing ways of looking at time, color, language, and semiotics, too.
Your discussion of metaphor and your mention of aesthetic are tied together when considering multiculturalism within the interaction. Bluntly, unfamiliar metaphors violate a cultural aesthetic. The user may then grant less agency to the interaction out of resentment or confusion, rendering the design ineffective. This may be especially true when one of the two cultures is more powerful or somehow has an advantage over the other. For instance, as we become more globalized, we Westerners need to consider this agency as our human capital stores are more and more diverse (okay, I’ll say it: outsourcing). Indeed, we can’t slow down the bus of capitalism in order to accommodate each and every user group. But if we want members of those other cultures to be viable resources at our technological level, we need to meet them half way with that technology. At least related to my comments here, that halfway point means responsibly infusing their cultural aesthetic into the interaction.
As you’ve said, in order to behave, the tech-based products and systems need to respond to human stimuli—with an amazing cultural array of humans using the technology, “behave” can mean an array of different things.
Regarding, abstraction and CLIs, we use a technique in our administration GUI that allows users to view the CLI equivalent of the action they are performing, which allows them to move from the more abstract GUI model to the less abstract CLI model. This is important because we know from research that our users tend to start first with the GUI to learn the concepts then transition to the CLI for power and control. Rather than fight this, we try to enable it.
Regarding abstraction in general, the problem I see is more practical than theoretical… good abstractions are really frickin’ hard to design. And bad abstractions are often worse than no abstraction. How many times have you been using a product with an abstraction intending to help you, then the abstraction breaks down in one place and BAM you have to consume all of the underlying complexity to make progress? Development tools do this all the time – you get a nice WYSIWYG builder that works 90% of the time, but to accomplish the other 10% means learning everything that the WYSIWYG builder was attempting to hide. On the other hand, I’ve had techies tell me that we should expect all of our users to know our product’s installation file structure, and if they don’t they have no business using our product. So abstractions are hard, but the alternative is too horrible to consider. =)
@laurie – I particularly liked that section as well, for the same reason. I think it’s rarely the case that time on task is a useful metric, and focusing on time on task can lead to bad design. People would rather spend 30 mins on a task where they always know where they are, what they’re going to do next, and exactly when they’re done, than spend 15 mins blindly trying random techniques that may or may not be working.
@David – I enjoyed the article, but I disagree with one small part of it. In the very last sentence you say, “In the end, interaction design is the choreography and orchestration of these form-based design disciplines to create that holistic narrative between human(s) and the products and systems around us.”
There’s a lot of nuance in the world of “design”, and I think it’s worthwhile to break “design” up into pieces to understand them individually as a learning technique. Where does information architecture end and user experience begin? Where does visual design end and interaction design begin? These questions might not have clear answers, but trying to tease the pieces apart makes it easier to understand how each contributes to the whole. But interaction design, IMO, does NOT sit at the top and choreograph the “holistic narrative”. It’s just one of many pieces contributing to it. To put it another way, if I knew a product’s ID and the design parts that were being choreographed, does it follow that I must know the “holistic narrative”? I think not.
This is probably semantic quibbling, but I bring it up because I think there’s a tendency for people to try to break “design” into pieces to better understand them… then forget that a) it’s the overlap between the pieces that are most interesting, and b) none of the pieces define the whole. I’m not saying that you’ve forgotten this, I suspect the sentence was just worded so that it made a broader point than you intended to make.
_But metaphors appear to succeed best when they are imprecise and the user has to fill in the gaps from their own understanding._
I think you can take that idea further. As you noted in different words, a metaphor is a map that captures certain correspondences between one domain and another. A metaphor only works if the user already understands one side of that correspondence. In terms of interaction design, the metaphor maps elements of a pre-existing mental model onto the internal computer model.
Going deeper, it is essential that the metaphor maps *some* features but not *all* of them. The balance between which elements are mapped and which aren’t is crucial. The elements where there *is* a correspondence are those that give the metaphor its evocative power; but it is the elements where there is no correspondence that highlight the usefulness of the interaction.
For instance, the Trash Can metaphor is so helpful to understanding because throwing something into a physical trash can corresponds to the operation of dragging a file to the on-screen image of a trash can. Similarly for taking something back out of the trash prior to emptying it. But there is nothing in the on-screen behaviour that corresponds to uncrumpling the paper once you’ve retrieved it from the trash, nor to the dirt stains on the paper. (I heard this example from someone else, most likely Donald Norman.)
If the application of a metaphor was completely accurate, there would be no advantage in the computerised interaction. The computerised interaction needs to reproduce the features of the metaphor that people want, but the value of computerising the task depends precisely in those features of the metaphor that are *not* reproduced.
Thanks for this thoughtful article, David. I agree with the foundations you listed, but was left wondering why you didn’t include a foundation for “goal,” i.e. why I undertook the design, and why anyone else would undertake the steps of the interaction. Fully understanding the context and mechanics of relevant goals seems fundamental to any interaction design, and constrains all of the other foundations you describe. Unless by foundations you mean the components of the interaction itself, or the philosophy of the art apart from the science.
Wow, I’m so flattered by the incredibly thoughtful discussion going on here based on my article. I want people to know that these types of amazing analytical responses are exactly why I write for good zines like B&A. It gives me a thrill to be engaged like this. I acknowledge that I do not have all the answers (I also acknowledge, thanx to my editor, that I had a 2500-word maximum, which is an extension from the usual for B&A).
Great comments from EVERYONE, and while people think they are disagreeing with me, I think what people are getting at is exactly the nuances that make IxD so compelling for me. It is so hard to get specific enough (especially in an article) to fill it all in.
Using the Twitter markup that Terry started, I’m going to start at the bottom and jump around from there.
Yes, I mean the components of the interaction design itself. “Goals & Motivations” are part of the context of the problem space, but do not make up the interaction. This is similar to how I separated out the “form” elements that Dan Saffer discusses in his book as foundational elements of IxD. All design has goals and motivations of the user that it must acknowledge, and they are integrated into the whole.
Nothing to say here. I think this is a great extension and clarification of what I am thinking as well. Thanx for doing that.
I love this clarification that cultural, sub-cultural, and even personal psychology will have a dramatic effect. This is why it is so key to have great models of your personas (see About Face 3.0) that try to include cultural and cognitive pieces. I think there has been great work about the social vs. the individual, for example between US and non-US cultures on a continuum.
Thanx for the CLI example. Aza Raskin is going to be speaking at IxDA Interaction 08 | Savannah in Feb 08 (interaction08.ixda.org) and while he isn’t speaking directly on one of his favorite topics, CLI, he will probably love to talk to anyone who is doing work or otherwise interested in CLIs.
Now to the meat. I think I both agree and disagree with you about the choreography issue. I see what you mean about the collaborative nature, but someone needs to be driving the vision. If all the design disciplines are equal partners, I feel that leads to a muddied vision. To me, all the design disciplines are working together towards the goal of creating and communicating interactions (in this space, anyway). So while I’m not trying to create a land grab here, I do see IxD standing at the conductor’s podium. Sometimes a conductor’s job is to make himself invisible, even make the composer invisible, and let the virtuoso take center stage. This may mean letting the ID or IA shine through, but to me someone has to lead by putting the total IxD narrative in the right light. It is like a director who is able to really let an amazing cinematographer, or a production designer, go to town. I mean, what would “The Fifth Element” be without Gaultier’s amazing costume design and the other production artists? I think this happens a lot with Apple’s products, which really make people feel that visual design is what matters most (the total form factor), but that is totally just a gateway “bluff.” It is the ease of adoption through the enticement of that gorgeous interface that leads you to the real meat of their products, and that play is key to the total interaction design being presented. I would even say there are usability weaknesses in that strategy that lead to better user experience and interaction design overall.
Ok, so I’m sitting here writing the slides for the course I’m teaching for SmartExperience.org (interaction design of rich web applications) that starts this Wed. (six 2-hr. classes). (Yes, that was an obvious placement advertisement.) And it occurred to me that in Terry’s comment above, he assumes that a CLI is less abstract than a WIMP (or GUI) interface.
I think I have to disagree. While typing something out is definitely easier from a Fitts’s Law perspective, so long as the number of keystrokes required is kept to a minimum, the act of describing your actions in non-direct linguistic form is actually more abstract than the act of navigating and pointing-and-clicking. I’m sure there is some give and take here, as some elements are more abstract and some are less in both interaction models, but I think the assumption that a CLI is less abstract is actually a false one, from my subjective, off-the-cuff analysis.
Maybe Aza can chime in here? I’ll poke him to the discussion.
David – I agree, I don’t think CLIs are inherently less abstract than GUIs. In fact, I might even go out on a limb and say that it’s probably the most common case that there is no real difference in abstraction level between the two, because usually both are based on the same model. Is describing your action textually more abstract or less abstract than the same action using pointing & clicking? I’m not sure… it might depend on the action.
But, in the product I work on, the CLI is definitely less abstract than the GUI. It’s the modern CLI equivalent of coding in binary! =) This is not by design; it’s actually a problem that we’re trying to solve, because our CLI is so close to our programming model that the learning curve is very steep. In fact, we’re adding a layer of abstraction on top of the CLI (a “command framework”) so people can do some useful stuff without learning the full CLI language. So my statement above is less a generic statement that CLIs are less abstract than GUIs, and more a statement about trying to assist users in moving between abstractions.
Regarding the choreography debate, first, “The Fifth Element” is an underrated guilty pleasure! Second, let me see if I can make my point differently and see if we still disagree. Building on the movie analogies, the recently released “Stardust” was by most accounts a very good movie AND a commercial failure. Why? Apparently the studio’s marketing team couldn’t figure out how to sell the film. There were terrible trailers, bad ads, bad posters, etc. People couldn’t figure out what the movie was trying to be from the marketing campaign. Was it a kid’s movie? LOTR? An action movie? A love story? And mostly, IMO, there was no emotion attached to the trailers… people couldn’t predict how the movie would make them feel. This happens a lot, but usually it’s because the MOVIE doesn’t know the answer to those questions, so the marketing does what it can with a bad product. (Aside – actually I think the opposite is very common… a bad movie with great marketing) In this case, the movie knew, but marketing didn’t. And, I might add, the director wasn’t happy (even before the movie released) with the marketing… so that part obviously wasn’t his call. So the question is: who was in charge of the “holistic narrative” between moviegoers and “Stardust”? The answer is clearly “lots of different people”. And there was definitely no single choreographer… and the breakdown of one part of the whole basically doomed the movie.
Now, you might argue that one person should have been in charge (and I might agree), but a) it’s unrealistic in my experience, and b) if one person is in charge it won’t be a “worker bee”. There’s no IxDer in the world who I would trust with choreographing my product’s holistic narrative… and there’s no UXer or technical architect or information architect or marketeer or anyone else who I’d trust either. All those roles are too specialized to be in charge. They each need to add their special skills to the whole, including IxD.
Wrapping up, in my first comment I said, “To put it another way, if I knew a product’s ID and the design parts that were being choreographed, does it follow that I must know the holistic narrative? I think not.” Now my analogy is, “If I sat and watched “Stardust” without knowing anything else, would I know the holistic narrative? No, because the holistic narrative starts long before someone sits down in the theater.”
With regard to, for instance, the MS mobile interface compared to the iPhone, I think Wittgenstein sums up the difference:
“Today the difference between a good and a poor architect is that a poor architect succumbs to every temptation and the good one resists it”
This might be a detour from your conversation, but I somehow got to thinking about this quote when I read the article and the discussion. The iPhone was a good idea to begin with and ended up being a great product because each link in the process was able to stay true to the original vision instead of succumbing to the temptation of adding new stuff.
That is actually, IMHO, the most important factor: the ability to carry an idea all the way through to completion without hesitating.
I always think in terms of frequency of use. The higher the frequency, the lesser the narrative, and vice versa. Therefore, in more complex interfaces (take Photoshop or 3ds Max as an example), it’s not really the storyline of the GUI that is the key factor.