As computers and digital devices increasingly insert themselves into our lives, they do so on an ever-increasing social level. No longer are computers merely devices for calculating figures, graphing charts, or even typing correspondence. When producers of the first personal computers initially launched them into the market over 20 years ago, they could think of no better use for them than storing recipes and balancing one’s checkbook. They couldn’t predict how deeply computers (and related devices) would seep into our lives.
Computers have enabled cultures and individuals to express themselves in new and unexpected ways, and have enabled businesses to transform how, where, when, and even what business they do. However, this rosy outlook has come at a price. Computers have become more frustrating to use. In fact, the more sophisticated the use, the application, the interface, and the experience, the more important it is for computers and other digital devices to integrate fluidly into our already-established lives without requiring us to respond to technological needs. Also, the more widespread these devices become, the more socially agile they need to be in order to be accepted.
Interfaces must:
- Be more aware of themselves.
- Be more aware of their surroundings and participants/audiences.
- Offer more help and guidance when needed, in more natural and understandable ways.
- Be more autonomous when necessary.
- Be better able to help build knowledge as opposed to merely processing data.
- Be more capable of displaying information in richer forms.
- Be more integrated into a participant’s workflow or information and entertainment processes.
- Be more integrated with other media.
- Adapt more automatically to behavior and conditions.
People default to behaviors and expectations of computers in ways consistent with human-to-human contact and relationships.
Ten years ago, when the computer industry was trying to increase sales of personal computers into the consumer space, the barrier wasn’t technological, but social. For the most part, computers just didn’t fit into most people’s lives. This wasn’t because they were lacking features or kilohertz, it was because they didn’t really do much that was important to people. It wasn’t until email became widespread and computers became important to parents in the education of their children that computers started showing up in homes in appreciable numbers. Now, to continue “market penetration,” we’ll need not just to add new capabilities, but to build new experiences for computers to provide that enhance people’s lives in natural and comfortable ways.
If you aren’t familiar with Cliff Nass and Byron Reeves’ research at Stanford, you should be. They showed (and published in their book The Media Equation) that people respond to computers as if they were other people. That is, people default to behaviors and expectations of computers in ways consistent with human-to-human contact and relationships. No one is expecting computers to be truly intelligent (well, except the very young and the very nerdy), but our behaviors betray a human expectation that things should treat us humanely and act with human values as soon as they show the slightest sophistication. And this isn’t true merely of computers, but of all media and almost all technology. We swear at our cars, we’re annoyed at the behavior of our microwave ovens, we’re enraged enough to protest at “corporate” behavior, etc. While on a highly intellectual level we know these things aren’t people, we still treat them as such and expect their behaviors to be consistent with the kind of behavior that, if it doesn’t quite meet Miss Manners’ standards, at least meets the standards we set for ourselves and our friends.
We should be creating experiences and not merely “tasks” or isolated moments in front of screens.
Experiences happen through time and space and reflect a context that’s always greater than we realize. Building understanding for our audience and participants necessarily starts with context, yet most of our experiences with computers and devices, including application software, hardware, operating systems, websites, etc., operate as if they’re somehow independent of what’s happening around them. Most people don’t make these distinctions. How many of you know people who thought they were searching the Web or buying something at Netscape five years ago? Most consumers don’t distinguish between MSN, Windows, Internet Explorer, AOL and email, for example. It’s all the same to them because it’s all part of the same experience they’re having. When something fails, the whole collection is at fault. It’s not clear what the specific problem might be because developers have made it notoriously difficult to understand what has truly failed or where to start looking for a solution.
We need to rethink how we approach helping people solve problems when we develop solutions for them. We need to realize that even though our solutions are mostly autonomous, remote, and specific, our audiences are none of these. They exist in a space defined in three spatial dimensions, a time, a context, and have further dimensions in play corresponding to expectations, emotions, at least five senses, and real problems to solve—often trivial ones, but real nonetheless.
Most of you probably create and use user profiles and scenarios during development to help understand your user base. These are wonderful tools, but I have yet to see a scenario that includes someone needing help. I’ve never seen a scenario with a truly clueless user who just doesn’t get it. Yet we’ve all heard the stories from the customer service people, so we know these people exist. When those users pull out the assembly instructions, the operating instructions, or even the help manual, these really don’t help, because they weren’t part of the scenario or within the scope of the project (the help system never gets the same consideration and becomes an afterthought). They may not be part of the “interface,” but they are part of the experience.
This is what it means to create delightful experiences, and is a good way of approaching the design of any products or services. What delights me is when I’m surprised at how thoughtful someone is, how nice someone is in an adverse situation, and when things unexpectedly go the way I think they should (which is most likely how I expect a person to act).
Think about how your audience would relate to your solution (operating system, application, website, etc.) if it were a person.
Now, I’m not talking about bringing back Bob. In fact, Bob was the worst approach to these ideas. He embodied a person visually and then acted like the least courteous, most annoying person possible. But this doesn’t just apply to anthropomorphized interfaces with animations or video agents. All applications and interfaces exhibit the characteristics that Nass and Reeves have studied. Even before Microsoft Word had Clippy—or whatever that little pest is called—it was a problem. Word acts like one of those haughty salesclerks in a pricey boutique. It knows better than you. You specify 10-point Helvetica but it gives you 12-point Times at every opportunity. It constantly and consistently guesses wrong on almost everything. Want to delete that line? It takes hitting the delete key three times if the line above it starts with a number, because of course it simply must be a numbered list you wanted. You were just too stupid to know how to do it. Interfaces like Word’s might be capable in some circumstances, but they are a terrible experience because they go against human values of courtesy, understanding and helpfulness, not to mention grace and subtlety.
So when you’re developing a tool, an interface, an application, or modifying the operating system itself, my advice throughout development and user testing is to ask yourself: what type of person is your interface most like? Is it helpful or boorish? Is it nice or impatient? Is it dumb or does it make reasonable assumptions? Is it something you would want to spend a lot of time with? Because, guess what, you are spending a lot of time with it, and so will your users.
I don’t expect devices to out-think me, think for me, or protect me any more than I expect people to in my day-to-day life. But I do expect them to learn simple things about my preferences from my behavior, just like I expect people to in the real world.
Human experiences as a model
When developers approach complex problems, they usually try to simplify them; in other words, “dumb them down.” This is usually a failure because they can’t, really, take the complexity out of life. In fact, complexity is one of the good things about life. Instead, we should be looking for ways to model the problem in human terms, and the easiest way to do this is to look at how humans behave with each other—the good behaviors, please. Conversations, for example, can be an effective model for browsing a database (a sketch follows below). This doesn’t work in every case, but it is a very natural (and comfortable) way of constructing a complex search query without overloading a user. And just because the values are expressed in words doesn’t mean they can’t correspond to query terms or numerical values. An advanced search page is perfectly rational and might accurately reflect how the data is modeled in a database, but it isn’t natural for people to use, making it uncomfortable for the majority, despite how many technologically aware people might be able to use it. There is nothing wrong with these check-box-laden screens, but there is nothing right about them either. We’ve just come to accept them.
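To make the conversational model above concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration (the option table, the build_query helper, the sample sentence); the only point is that plain-language choices can map directly onto structured query terms.

```python
# A minimal, hypothetical sketch of a "conversation as query" form.
# Each blank in a fill-in-the-sentence query offers a few plain-language
# choices; each choice maps to a structured filter a database understands.

OPTIONS = {
    "budget": {
        "as cheap as possible": {"price_max": 50},
        "mid-range": {"price_min": 50, "price_max": 200},
        "money is no object": {"price_min": 200},
    },
    "recipient": {
        "for myself": {},
        "as a gift": {"gift_wrap": True},
    },
}

def build_query(sentence_choices):
    """Turn the participant's plain-language sentence into query terms."""
    query = {}
    for blank, choice in sentence_choices.items():
        query.update(OPTIONS[blank][choice])
    return query

# "I'm looking for something [as a gift] and I'd like it [as cheap as possible]."
print(build_query({"recipient": "as a gift", "budget": "as cheap as possible"}))
# -> {'gift_wrap': True, 'price_max': 50}
```

The participant never sees a grid of checkboxes; they complete a sentence, and the sentence carries the same information the advanced-search page would have demanded.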
God is in the details
As Mies van der Rohe said, “God is in the details.” Well, these are the details and the fact that they’re too often handled poorly means that technological devices are ruled by a God that is either sloppy, careless, absent-minded, inhuman, or all of the above.
This isn’t terribly difficult but it does take time and attention. And we don’t need artificial intelligence, heads-up displays, neural nets, or virtual reality to accomplish it. There is a reason why my mother isn’t a fighter pilot—several, in fact. But the automobile industry in the U.S. spends tens of millions of dollars each year trying to develop a heads-up display for cars. That’s all my mother needs—one more thing to distract her from the road, break down someday, and scare her even more about technology and making a mistake. What we need are human values integrated into our development processes that treat people as they expect to be treated and build solutions that reflect human nature.
Everything is riding on this: expansion into new markets, upselling newer and more sophisticated equipment, solving complex organizational problems, reducing costs for customer service, reducing maintenance costs, reducing frustration, and (most of all) satisfying people and helping them lead more meaningful lives. Companies can no longer differentiate themselves on quality or tangibles. Instead, they try to differentiate themselves on “brand.” What marketers and engineers often don’t “get” is that the only way to differentiate themselves on brand is by creating compelling experiences with their products and services (and not the marketing around them). Niketown and the Apple Stores would never have succeeded—at least not for long—had they not been selling good product experiences. This isn’t the only reason the Microsoft store failed (a tourist destination for buying Microsoft-branded shirts and stationery really wasn’t meeting anyone’s needs), but it was part of it. Gateway, in comparison, has been much more successful, though they still aren’t getting it quite right.
The Apple Store is a good example. You can actually buy things and walk out with them (unlike the Gateway stores, which really disappoint customers by breaking this social assumption). What’s more, anyone can walk in, buy a DVD-R (they come only in 5-packs, though) and burn a DVD on the store equipment. Really, I’ve done it. I may be the only person who has ever taken Steve Jobs up on this offer, but it is a very important interaction because most people aren’t going to have DVD-R drives for a while—and neither are their friends. Most people don’t even have CD-R drives, but if they want to burn a DVD of their children’s birthday party to send to the grandparents, what else are they going to do? This recognition of their users’ reality is what made Apple’s approach legendary (not that it hasn’t been tarnished often). It’s not a technological fix, it’s not even an economic one. In this case, access is the important issue, and allowing people to walk in off the street, connect their hard drive or portable, and create something with their precious memories became the solution. It works because it supports our human values (in this case, sharing). It works because this is what you would expect of a friend or someone you considered helpful. This is not only a terrific brand-enhancing experience, it jibes with our expectations about how things should be, and that is what social and human values are all about.
This is not a crisis of technology or computing power, but one of imagination, understanding and courage. I would love to see designers create solutions that felt more human in the values they exhibited. This is what really changes people’s behaviors and opinions. Just wanting things to be “easy to use” isn’t enough anymore—if it ever was. If you want to differentiate your solution, if you want to create and manage a superior customer relationship, then find ways to codify all those little insights experts have, in any field, about what their customers need, desire, and know into behaviors that make your interfaces feel like they’re respecting and valuing those customers. This is the future of user experiences, user interfaces, customer relations and it’s actually a damn fine future.
For more information
Nathan Shedroff has been an experience designer for over twelve years. He has written extensively on the subject and maintains a website with resources on Experience Design at www.nathan.com/ed. He can be reached at .
Yes! This is exactly what we need to hear–a strong recognition and appraisal of a humanistic approach to designing new technology products. Nathan’s piece confirms that at the heart of digital design is really social interaction, human communication, which should embody fundamental human values, like trust, respect, courtesy, etc., and be expressed in the product itself.
Consideration for the totality of the human experience of a product, beyond isolated tasks, and the values therein, suggests a more promising way to integrate products into our daily lives. So they become true participants in our lives, not merely annoying things we have to deal with. Apple, Nike, Sony are all great examples, as Nathan pointed out.
A question is: how do we cultivate positive human values within interdisciplinary product development processes, so they become reflected in the products we design?
Thank you, Nathan
This isn’t so much a comment on this article as it is on this site, and the information architecture profession, as a whole.
When are we going to come up with something innovative? I don’t think I’ve read an article at Boxes and Arrows that was really the least bit interesting or even (heaven forbid!) a new idea. Let’s stop evangelizing the same tired message over and over again, and come up with something both new and useful.
You know, I think that MS really was trying to make Word “context-aware” and “be more autonomous” when it formats consecutive numbers into lists.
I mean, in this article there’s unfortunately not even *one* example of an actual product that does any of this stuff successfully and behaves itself. Is it possible that only real-world experiences are able to do it, like the Apple Store example?
We’re just waiting for you, Tad.
http://www.boxesandarrows.com/about/writeforus.php
Clifford Nass should definitely be required reading. Here’s a PDF of an essay he wrote with a colleague titled “Machines and Mindlessness: Social Responses to Computers”
http://hci.stanford.edu/cs147/readings/nass.pdf
And here’s an earlier paper on a similar topic:
http://hci.stanford.edu/cs147/readings/casa/index.html
And here’s an essay Nass and Reeves wrote on Perceptual Bandwidth for Communications of the ACM
http://www.iha.bepr.ethz.ch/pages/leute/zim/emopapers/reeves-Perceptual_Bandwidth.pdf
I have to credit Andrew Dillon for being the one to first turn me on to Nass and Reeves’ research:
http://www.gslis.utexas.edu/~adillon/
Now, all that said, I find Nathan’s essay, while right on, also, um, “well duh.” And I fear that, at least here on B&A, he’s merely preaching to the choir. I don’t think any of us would challenge the notion of better incorporating human values into our work.
And I think interface designers, HCI types, and the like, have been trying to for years. (I just read an essay Alan Kay wrote in 1989 on his work in interface design, which talks about stuff very much like what Nathan’s describing.)
But it’s not happening. And simply saying, “It must happen,” clearly isn’t going to get it to happen. For a whole boatload of reasons, this stuff is hard to implement.
Why?
Is it happening? I think the earlier comment on Microsoft was right on… we should look to Tenner’s “Why Things Bite Back” to understand the nature of the problem. MS thought they were being human-centered when they made those godawful menus that hide half the options and are always changing based on use. It sounds user-centered… show items that are used often, and hide those that aren’t. But the reality is that the software program looks extremely arbitrary and unpredictable to people who can’t remember what they used last, but sure as heck know what they need now and cannot find it. It’s no wonder people think of computers as humans… they are fickle and unpredictable.
The design problem is determining what predictable and faithful would look like. And it is far more complex than most folks think.
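To make the adaptive-menu mechanism described above concrete, here is a rough Python sketch. It is not Microsoft’s actual code; the class, the item list, and the slot count are all invented for illustration. It simply ranks items by how often they have been used and hides the rest, and the instability the comment complains about falls out directly: every click can reorder what is shown.

```python
from collections import Counter

ALL_ITEMS = ["Cut", "Copy", "Paste", "Paste Special", "Clear", "Select All"]

class AdaptiveMenu:
    """Hypothetical sketch of a frequency-adapted menu."""

    def __init__(self, visible_slots=3):
        self.usage = Counter()           # how often each item has been picked
        self.visible_slots = visible_slots

    def click(self, item):
        self.usage[item] += 1

    def visible_items(self):
        # Rank by usage; items the user never clicked can vanish,
        # even if they are exactly what the user needs right now.
        ranked = sorted(ALL_ITEMS, key=lambda item: -self.usage[item])
        return ranked[: self.visible_slots]

menu = AdaptiveMenu()
print(menu.visible_items())   # ['Cut', 'Copy', 'Paste']
menu.click("Paste Special")
menu.click("Select All")
print(menu.visible_items())   # the menu has silently rearranged itself
```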
About Microsoft Word: I used to teach classes on all MS Office products, and thinking back, everything I found annoying about Word was when it would try to take over and assume what you wanted next. I found it kind of amazing that after a while of indenting and adding a bullet, typing two letters in bold, then changing to a different font, Word would somehow remember all of this and soon enough I wouldn’t have to do any formatting myself; Word took care of it.
While sometimes annoying, like a little kid who starts off being a pain, then realizes how the world works and is able to communicate, some programs are hinting at ways to serve people better.
If any of you played Black & White, you may have seen some interesting AI going on. I just wonder how long it will be before that AI (a loose term) integrates itself into more traditional applications.
I find it ironic that an article which lays into Microsoft Word for:
“It knows better than you. You specify 10-point Helvetica but it gives you 12-point Times at every opportunity”
is hosted on a web page with hardcoded font sizes so that my attempt to increase the font size in my browser (IE5.5) to make it easier to read is completely ignored.
Physician, heal thyself.
Now now, don’t blame Nathan for our flaws. We provide the “large font version,” admittedly too small and ill placed; we’ll work on that soon (along with other tweaks that are fairly urgent… always in motion, B&A is). However, I can pretty much blame browser makers for the font resize problems, which is best illustrated here:
http://www.thenoodleincident.com/tutorials/box_lesson/font/index.html
But that’s not what we are here to discuss. Personally, I think the problem is the one we see in both MS and in the earlier comment: people always think there is an easy, true answer, and there isn’t. Humans are complicated; building for them is very, very hard. Very few get it right. No one gets it all the way right. But, as I’ve said before, critiquing is easier than making.
Nathan, in the book I’m writing, I’ve been dealing with a great many of these same themes. I’m arguing that one user need seems to subsume many others in our daily lives, and that need is simplification.
I further argue that an intervention that would go a very long way toward simplification is for interface developers to (voluntarily) adhere to a creed not too dissimilar from those bullet points with which you open your piece. The hope is that we would enter, with no apologies to Ray Kurzweil, an “age of social machines.”
Thanks for your insights; they’re helping me refine and extend my own positions even as I write this.
Oh, and Tad? We’ll keep shouting from the treetops, I’m willing to bet, until a healthy chunk of reality starts to reflect the values we hold so dear, and which you apparently regard as yesterday’s news.
Adam,
Careful there, what you’re after isn’t usually simplification but clarification. Richard Saul Wurman taught me that a long time ago. Life isn’t simple and when we try to simplify things, we usually destroy any meaning. What people really need is complexity represented clearly, not important information deleted.
Consider a map. Simplifying it means deleting everything but the most essential pieces (say, the roads to your house). But then you take away all of the context. If someone misses a street or isn’t sure whether they’ve passed it, there is no information to tell them. It is the full map that builds the context they need to judge distances, orient themselves, come from another direction, etc.–as long as the map is presented clearly enough for them to parse.
There are times when we need to simplify things but relatively few of them.
Tax forms are another example. The 1040EZ is simple but this means it can’t do very much (and, therefore, is only usable in the simplest cases). A “flat tax” is extremely simple but seen by many as not “fair” because it can’t represent the richness of tax exemptions and rates we often view as desirable.
Nathan
Mmm, maybe what might have helped that last comment hit home is itself a bit of context.
Yes, of course you are correct: simplification is a tool, clarity is the desired goal.
What the article’s saying resonates with the whole concept of branding as a relationship and with all Nathan himself has written about interactivity vs reactivity.
The emotional attachment we develop for products and the personality we tend to attribute to the quality of our interactions with them are in the end the most powerful vehicle to carry brand values and make them come to life. Think about MS Word’s infamous “paper clip” and Microsoft’s perceived brand image for example.
Interacting with somebody, whether the other one is a person, an organization or a software product, should be all about creating a “virtuous loop” where the two entities mutually evolve along the lines of the information that’s been shared. If the relationship lasts over time, trust is the final outcome.
Most so-called “interactive” products, software or websites, are actually merely “reactive” and stuck in sterile input-output loops. They fail at creating a real dialog with users and leave them repeating time and time again who they are and what they want and how they want it.
But there’s also another aspect to consider.
We are entering a world where interactivity will be channeled through new breeds of mobile connected devices that will be used in social contexts of use, far away from the traditional world of one user sitting alone in front of a computer screen.
The role of the user interface will soon be even more complex as it will need to mediate between the user, the data he/she’s accessing and the context of use itself, other human beings included.
Imagine receiving an MMS, a multimedia message, on your mobile handset, in a crowded bus. How should that message be delivered? How will it feel to look at your girlfriend’s pictures with other people around you? Will you feel more exposed to their judgment than if you were reading a text message nobody else is really able to see?
Or how many times have you been in a meeting only to get interrupted by somebody’s phone ringing? What were your thoughts? “Why didn’t he/she turn the thing off” probably.
I think Nathan’s article is hinting that in the future the phone’s interface should know, it should know where it is and act accordingly because that’s what we expect in a social context from anything we qualify as “intelligent”.
Right now more complexity is imposed on us, as WE have to remember to tell our devices where we are or how we feel and how they should behave.
Nathan has done what he’s always done best, let his ideas float like a kite and leave us running after the string.
Thanks Nathan, as inspiring as usual.
From the article, some themes come to mind.
1. The article talks about the human values of computers (being aware of themselves, aware of their surroundings and participants, more natural and understandable, more autonomous…) and of the interface (courtesy, understanding and helpfulness, not to mention grace and subtlety).
Drawing a parallel with humans, the computer’s overall values would be those of humankind, and the computer’s interface would be the human mind. But what happens with the physical representation of computers? What happens with the body? As we move to a more experience-focused design, the integration between the two aspects, virtual and physical (mind and body), is very important.
In my opinion, physical aspects should also support and empower human values, focusing on values such as adaptability to the space, mobility, expressiveness, choreography of movement, flexibility, agility…
2. To the lack of imagination, understanding, and courage I would add a lack of collaboration. In my opinion, understanding other professionals’ space of work is no longer enough; a more co-participative relationship is needed to deal with the experience as a whole.
3. To design the experience as a whole, we need to adjust our vision.
In my opinion, we need to take a step back and start to see design outcomes as stories formed by the sum of events (or the interaction of components that form events) rather than just stories formed by form + content + behaviour/time. This also suggests a change in design methodologies, from the study of users and their behaviours to a focus on situations and facts.
“The opportunities are boundless for those who are able to step back, adjust their vision and begin to make sense” – John Seely Brown, Director of Xerox PARC (1997)
Roberto Bolullo
bolullo.com/roberto
Wow, really interesting point. I’m certainly not one to advocate anthropomorphism but that doesn’t mean that the physical aspects of a computer’s behavior and appearance shouldn’t allow it to be expressive as well. It doesn’t have to have human appearance to have social characteristics.
> doesn’t mean that the physical aspects of a computer’s behavior and appearance shouldn’t allow it to be expressive as well. It doesn’t have to have human appearance to have social characteristics.
No, I was just trying to add to the article, not to set the virtual and physical aspects of the computer against each other.
What I mean to suggest is that a more integrative solution, one that supports and empowers human interaction, needs to be reached in the virtual (service/human mind) as well as the physical (form/human body) aspects of computers.
I don’t mean that computers have to look, behave, or appear human, just that they need to support and empower human interaction. Computers are just “enablers” of human needs and desires.
As we try to give a design solution focused on the experience we are creating, we need to better understand the different components that form situations (the sum of events) and how they integrate and interact with each other and with humans over time, so that we are able to create objects, spaces, tools… that affect humans’ everyday lives (activities and experiences).
Hence my points 2 and 3. Experiences need to be understood, integrated, and created as a whole.
Roberto Bolullo
bolullo.com/roberto
Thanks Nathan, I found this to be a *very* interesting article. I think it is fascinating to look at things from the ground up… and you’ve done a great job hypothesizing about new forms of interface and interaction.
Some of the things that popped to mind: Part of me wants to believe that there is an acknowledged line between what humans perceive as “human” and what we perceive as “non-human.” The points that people project human attributes onto the non-human are well taken, but I wonder about the value of making that line fade… would it just be to trick a user into thinking they are using a more human device/interface/etc.?
For me, personally, I rely on that separation between human and non-human (which sounds silly when I type it out) in all my interactions.
An example: Sprint PCS recently introduced a pseudo-female computerized “avatar” for their customer service number… and it is really hard to use (harder than the old “press zero for all other questions” cold, inhuman system), and it strikes me as just plain creepy. Am I supposed to think that there is a woman on the other end of the line? Because I sure KNOW that there isn’t. Perhaps this example is too literal, and in fact is just an example of a poor execution of a humanizing app… but it’s the first to come to mind.
So I guess there is a happy medium to be found between cold computer interactions and creepy, pseudo-human ones… plus, I find that I have frustrating and confusing interactions with a lot of PEOPLE as well as with a lot of devices 😉
Anyways, just some thoughts. Great thought-provoking article, though!
Tim
Tim,
You make a great point. I suspect that the line between human and non-human is really a pretty wide gray zone that has yet been explored. I haven’t heard/used the Sprint system (I’m an AT&T subscriber) but it reminds me of the recorded voice instructions in some subway systems–BART I think, but potentially others. When they first switched to human voice recordings for instructions, people assumed there was a real person present (a conductor or operator) and took more risks (running through doors at the last minute, etc.) since people thought there was someone in control and watching. The subway finally changed the instructions to a robot or computer-sounding one so that people wouldn’t have this impression and make this mistake.
I think a telephone response system with really thoughtful organization, navigation, and explanation would be, personally, more successful than one that tried to mimic humanity. However, Wildfire Communications (http://www.wildfire.com/) is one of the few companies with a service that does all of this successfully–with humor even.
Nathan
I think so.
What about the HOW?
This is a thought-provoking topic, and in the abstract sure to inspire some critical thinking about how interfaces work, but in practical terms there’s a gaping hole in the middle: How does one make an interface aware of itself and its environment, especially without relying on some form of artificial intelligence (read: tons of development work)?
You’re asking a computer to perform the tasks it is LEAST inclined to do. The work done so far in the direction of “making computers more like humans” has revealed, if anything, that tiny incremental steps in machine sentience require exponentially increasing amounts of R&D effort. Yet you claim that we can do this without reference to the AI field at all.
The solution must be so painfully simple that I can’t see it…Call me a pragmatist, but I think the real way UI ought to go is to educate decision makers about the importance of the user experience (many still don’t get it) and the balance of usability vs. feature-richness.
HOW…
Yes, that is a really difficult question.
A set of answers might come from a “diffuse intelligence” scenario and from the old idea that complex, intelligent-like behavior can “emerge” from combining apparently simple behavioral patterns.
MIT Media Lab’s “Things that think” research comes to mind (http://www.media.mit.edu/ttt).
Take the “the phone should know” scenario I was referring to above for example.
Let’s say the meeting room only “knows” what BASIC characteristics human beings usually associate with that kind of environment, for example expecting “silence” or “not to be disturbed”.
All it would take is for the room’s “intelligence” to tell the phone that (via Bluetooth or similar means), and the phone should only “know” how to adapt to those instructions… or where to find that information in an “always on” scenario.
Similarly, you could imagine your Personal Digital Tools (PDTs) surrounding you with info about yourself that could inform objects about certain expectations or needs you might have, even your mood.
Say that as you enter a room your PDTs will “inform” the other intelligent objects that you are hearing impaired and their interfaces will adapt to your specific needs.
From mixing these (apparently) simple behaviors you might have interactive objects that achieve some of the goals Nathan indicated in the article (more aware of themselves, of their surroundings and participants/audiences, offering more help, adapting automatically to behavior and conditions, etc.), and all of this would be completely transparent to the user.
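As a purely illustrative sketch of that scenario (Room, Phone, and the expectation keys are assumptions; nothing here is a real Bluetooth or ubiquitous-computing API), two very simple behaviors are enough to produce the adaptive result described:

```python
class Room:
    """A space that broadcasts the basic expectations people attach to it."""

    def __init__(self, name, expectations):
        self.name = name
        self.expectations = expectations      # e.g. {"silence": True}

    def broadcast(self, device):
        device.receive_context(self.expectations)


class Phone:
    """A device that adapts its behavior to whatever context it hears."""

    def __init__(self, owner_preferences=None):
        self.mode = "ring"
        self.owner_preferences = owner_preferences or {}

    def receive_context(self, expectations):
        # Open question from the thread: should the room override the owner?
        if expectations.get("silence") and not self.owner_preferences.get("always_ring"):
            self.mode = "vibrate"


meeting_room = Room("Meeting Room 2", {"silence": True})
phone = Phone()
meeting_room.broadcast(phone)
print(phone.mode)   # "vibrate" -- the phone adapted without being asked
```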
It is obvious that there are major issues down this path, such as:
– Technological complexity (how should objects communicate).
– Standardization related to the basic “patterns” (behaviors) these objects should communicate.
– Conflicts between contradicting information (should a room’s settings override my personal ones?)
Should our PDTs constantly communicate an AIM-like status to the environment that surrounds us?
Should that information be embedded in the object or be somewhere on the network?
There is a difference between emergent behavior that is perceived as intelligent and a system that responds the right way at the right time. The MIT work that you refer to (the hypercello) actually takes advantage of its unpredictability and limited control; it makes music that the performer didn’t explicitly specify, but is related to his input. It is limited by parameters, but impossible to control beyond a certain level (in fact, that’s the whole point of the system).
The product of the system in this case is music (I happen to think it’s a great use of the technique). Whether the resulting output “works” or not is subjective. An automated decision about whether a phone should ring audibly when you’re in a meeting has no margin for error.
It seems to me that what you are talking about is a machine-to-machine interface, not a human-machine interface. Having a device “read” one’s mood is pure sci-fi at this point; enabling devices to share preferences (as in the hearing-impaired example) is well within the realm of the possible. I think it’s a great idea that can come to fruition only with a lot of human-to-human interaction (agreeing on standards, opting for fewer but more compatible features, etc.).
A combination of simple behaviors CAN result in complex behavior, but one cannot precisely specify the resulting behavior.
I am all for a humanistic approach to interface design, but it’s the designer’s approach, not the machine’s.
Points well taken, Michael, but I fail to see the difference between the design of machine-to-machine and human-machine interfaces if they both happen to have an impact on human beings.
I do not expect (yet) my PDA or phone to know how I “feel” but if I set my instant messenger to “busy” I could soon start to expect my PDA or phone to know too.
If M2M communication can in fact act as an enabler for more humane user interfaces I think the design community should definitely be involved (the “lot of human-to-human interaction” you are referring to).
I sometimes feel that as a community we (I know I do) tend to be so fascinated with the design of the interface itself that we end up forgetting that the UI is not the point but only a necessary layer between humans and possible answers to their needs/desires/dreams.
I was trying to suggest that as more and more objects around us start to communicate the resulting ad-hoc networks might be interesting not only to channel functional benefits, but also to ease the increasingly complex interactions we end up having with those objects.
BTW, what you say: “A combination of simple behaviors CAN result in complex behavior, but one cannot precisely specify the resulting behavior” sounds to me like a very good definition of how humans tend to be… and that brings up again Tim Lynch’s point on how “human” a computer should feel.
The problem (too strong a word) I’ve always had with Nass’ research is that he has people interact with computers that behave socially (for example, as teachers) in the first place, which I believe stacks the deck in favor of finding a “social” human-computer interaction.
Compare this to, say, setting up a print driver, which has no comparable human-human interaction, and I wonder what type of results you’d get.
On the other hand, it’s not as simple as computers behaving socially to get users to respond socially; there are technical factors in play. Long ago some colleagues and I published an article on speech recognition and the effect of interlocutor style on the HCI, and found that system response time has a major effect on how users interact with the system, and their impressions of “sociability” of the computer.
For what it’s worth!