IA Summit 09 – Plenary

Written by: Jeff Parks

IA Summit theme music created and provided by BumperTunes™


IA Summit 2009 Podcasts

The IA Summit was held in Memphis, TN from March 20-22. Boxes and Arrows captured many of the main conference sessions (see schedule).


The IA Summit Closing Plenary

Jesse James Garrett delivers a passionate closing plenary at the 2009 IA Summit in Memphis, TN.

Jesse James Garrett is a noted figure in the IA community, not only for his groundbreaking book Elements of User Experience, but for the essay that galvanized the community in 2002, ia/recon.

In this IA Summit closing plenary, given without slides while wandering amidst the audience, Jesse examines what he has learned at the conference, shares his thoughts on the nature of the discipline and the practitioner, and gives bold, perhaps even shocking, advice for the future direction of information architecture.

The following is an outline of some of his key points; please download the audio or watch the video for the complete experience.

Looking Back

Jesse revisits the turbulence of the first IA Summit in Boston, lamenting that he does not see this same turbulence in the IA community right now. Warning that “the opposite of turbulence is stagnation,” he looks back at the Great Depression and compares our grandparents’ feelings of scarcity to the community’s continued reliance on categorization in its various guises (e.g. taxonomy, thesauri, etc.) for its identity.

Moving On

Thanking IA leaders and the organizations that have nurtured Information Architecture, he declares that it is time to move on from the past. Leaders in IA, including himself, are notable for what they say about their work, not for the work itself. He asks, “Do you know good IA when you see it?”

He is surprised that we don’t have schools of thought around IA. We have many ways to talk about our processes, but not about the “product of our work, a language of critique.” Until we can talk about the qualities of IA, we cannot judge the quality of the work.

No Information Architects

One of the desires of the IA community is to command respect. However, the field’s overall value will take time to manifest, only reaching critical mass when “someone from this room” ascends to be CEO of an organization, creates a culture that respects the user, and decimates the competition.

Jesse then puts forth his declaration that Information Architects and Interaction Designers do not exist. “There are, and only ever have been, User Experience Designers.”

He continues by breaking down UxD, examining how each element implied in the title illuminates his hypothesis – that the ephemeral and insubstantial CAN be designed, independent of medium and across media. The web is just clay, he implores, and we can use many materials to create experiences.

Synthesis & Cohesion

Engagement is paramount, within any medium and across media. “Designing with human experience as an explicit outcome and human engagement as a specific goal is unique in human history.”

The varieties of engagement (e.g. the senses, mind, heart, and body) and other elements that influence the experience (e.g. capabilities, context, constraints) create the environment in which we work. UxD produces experiences that cross all of these elements, and mapping these experiences is incredibly challenging. The main goal is to synthesize them and create cohesive experiences that honor them.

Discovery, not Invention

With perception covered by visual designers, sound designers, and industrial designers, cognition and emotion are the manifest destiny of IA. User experience is not about information, rather, it is always about people and how they relate to information.

By structuring the information, User Experience Designers structure the tools that humanity uses. And, as a result, we influence how people think and feel. The final result is that those tools, in turn, shape humanity. We should embrace that responsibility.

Jesse predicts that UxD will take its place among fundamental human crafts. He posits that we are discovering the realities of people, their tools, and experiences rather than inventing them. With only ten years under our belts, we’ve only just begun that discovery, and he hopes that there will always be more.


Transcript of the closing plenary address delivered March 22, 2009 at ASIS&T IA Summit 2009 in Memphis, TN.

This address was written to be read aloud. I encourage you to listen to audio or watch video of the address if possible.

I recognize that being chosen to deliver the closing plenary is an honor, and I do not intend to repay that kindness by giving you a product demo.

I will not be participating in five-minute madness this year. You may consider this my 45-minute madness.

This is a different kind of talk for me. First of all, I have no slides! I kind of feel like I’m working without a net here. I can’t throw in the occasional visual pun to keep you guys paying attention. Secondly, I have no idea how long this talk is. I just finished it just before this began, so basically when I’m out of things to say, I’ll stop talking. Hopefully that will be sooner than you expected, and not later. Third, I’ve decided not to take questions at the end of this talk. My preference would be that if you have questions, don’t pose them to me. Pose them to each other. Publicly, if you can.

So if I run short, we’ll just go straight into five-minute madness and then we’ll all get to the bar that little bit sooner.

Okay, now: first-timers, please stand up.

[audience applauds]

I don’t think we do enough to recognize the importance of new voices in this community, and at this event. Those of you who were here last year may recall my comments from five-minute madness last year, where it seemed like maybe I was a little bit too hard on the first-timers for not being more active participants. What I was really trying to do was scold the old-timers for not doing more to make the first-timers feel welcome, and so I hope that those of you who are first-timers this year have been made to feel welcome by this community.

Now, before you sit down, I want to apologize to all of you, because there’s a great big chunk of this talk that is not going to mean very much to you — because I’m a ten-timer and I’ve got some things to say to my fellow ten-timers. So I’ll just get that out of the way. I hope you’ve enjoyed the rest of the conference — and now you can sit down.

So yeah, in case you guys haven’t heard, this is the tenth IA Summit. I don’t know if word got around about that. This is my tenth IA Summit. Anyone who was at that first Summit will recount for you the strange energy in that room: academics and practitioners eyeing each other warily, skeptical of what the other had to contribute. There was turbulence. (Hi Peter!) But it was productive turbulence.

I can’t say I’ve seen much turbulence at these events since then. Which ought to make all of us nervous, because the opposite of turbulence is stagnation.

In his opening keynote, Michael Wesch quoted Marshall McLuhan: “We march backward into the future.” When I saw this quote, it reminded me of the old quip that generals are always fighting the last war — which is why I think we’ve been stagnating. What war is the field of information architecture fighting?

The war we still seem to be fighting is the war against information architecture itself as a valid concept, as a meaningful part of design practices.

Almost everything you see about the IA community and IA practices — the mailing lists, the conferences, the professional organizations, the process models, the best practice patterns — they’re all optimized to answer two questions: Is this stuff for real? And is it valuable? And the answer to both questions is always, invariably, an emphatic “yes”.

IA is real. And IA is good. And that’s what we all agree on: some IA is better than no IA. But is there such a thing as “bad IA”? I mean, is it possible for an information architecture professional to do a thorough, responsible job, following all the agreed-upon best practices, and still come up with a bad solution?

I don’t think anybody knows the answer to this question. Because we’re still fighting the last war. We’re still trying to defend the answer to that question: is IA good? Is IA valuable?

Now, if you are about my age (and most of you seem to be, which I’ll come back to in a minute), your grandparents grew up in the Depression. And if your grandparents are like mine, this was an experience that shaped their behavior for the rest of their lives. They save everything: any little bit of leftover food, or a loose scrap of fabric, or a button or a screw. They save everything, because the notion of scarcity was deeply imprinted on them when they were young and became such a fundamental part of their worldview that decades later they’re still hoarding all this stuff even though the Depression’s been over… well, it took a break anyway.

Here are some of the most common terms from past IA Summit programs: taxonomy, thesaurus, controlled vocabulary, metadata, faceted classification, navigation, content management — and then there was that one year with all the talks about tagging. Like my grandparents, we cling to these things because they are what saved us. They are the tools by which we proved that yes, IA is real, and it is valuable. But that war is over. We won. And now it’s time to move on, because those comfortable, familiar things represent only part of what information architecture can be.

So it’s time to leave the nest. Thank you, Lou and Peter. Thank you, library science. For getting us off to a great start. For giving us the tools and knowledge to win a place for IA in the world. There will still be a place for library science in IA, but it’s only a part of our larger destiny.

Thank you to ASIST. Thank you to Dick Hill, and Vanessa and Jan and Carlene. This field would not be where it is without your efforts at these events, year after year. But I’m curious — show of hands: who here has ever been to any ASIST event other than an IA summit? [audience raises hands] Who here is an ASIST member? [audience raises hands] A smattering at best. ASIST has been sort of a benevolent host organism for the incubation of IA, but the relationship between ASIST and IA beyond IA Summit hasn’t really gone anywhere.

Okay, I’m debating how to do this… Name the five best-known information architects. [audience calls out various names] Now: name a work of information architecture created by one of these people. [silence] Is that a sign of a mature profession?

The names you know are notable for what they say about their work, not for the work itself. They’re not known for the quality of their work (and I’m including myself in this category).

Moreover, do you know good IA when you see it? And can different people have different ideas about the qualities of a good solution or a bad one, based on their philosophical approach to their work?

One thing I’m really surprised we don’t have yet, that I had expected to see long before now, is the emergence of schools of thought about information architecture.

Will there ever be a controversial work of information architecture? Something we argue about the merits of? A work that has admirers and detractors alike?

We have lots of ways of talking about our processes. In fact, if you look back at these ten years of the IA Summit, the talks are almost all about process. And to the extent that we’ve had controversy, it’s been over questions of process: Is documentation necessary? If so, how much? Which deliverables are the right ones? Personas, absolutely essential, or big waste of time?

What we don’t have are ways of talking about the product of our work. We don’t have a language of critique. Until we have ways to describe the qualities of an information architecture, we won’t be able to tell good IA from bad IA. All we’ll ever be able to do is judge processes.

Another thing that you’ll notice from looking back over ten years of the Summit is that talks are ephemeral. I was at all those summits, and I remember maybe a tenth of what I saw — and I saw less than half of what was on the program. I’m known for being down on academia a lot of the time, but they do have one thing right: you have to publish in order to create a body of knowledge.

I think I’m pretty good at what I do. But you guys are going to have to take my word for it. Because you don’t know my work. You only know what I say about my work.

I think I’m pretty good at what I do. I hope I’m getting better. I hope that my best work is still ahead of me. But I’m not sure. And I’m not sure how I would know. I’ve been coming to the Summit for ten years, and I’ve been doing this work, in some form or another, for close to 15. And as I’ve watched my professional peers settle down, get married, start families, become managers, I’ve found myself wondering about creative peaks.

In the field of mathematics, they say that if you haven’t made a significant contribution by the age of 30, you never will. It’s a young person’s game. 33 is young to be publishing your first novel, but it’s old to be recording your first album.

When do information architects hit their creative peaks? Let’s assume that I’m at about the median age for this group. Just assume most of you are my age, and there are about as many older than me as younger than me.

Now, if I’m at about the median age for an information architect now, when will that change? Will the median age keep going up, as this group of people ages? Presumably, at some point I’ll be one of the oldest guys in the room.

Alternately, what if information architecture is something that you don’t really get good at until you’ve been doing it for 20 years? Then we really have something to look forward to, don’t we?

Here’s another thing I thought we’d be hearing more about by the time of the tenth IA Summit:

You guys heard of this thing called neuromarketing? Man, this stuff is cool. They take people, they hook them up to MRIs — you know, brainwave scanners — and then they show them TV commercials. And they look at what parts of their brains light up when they watch these TV commercials. Then they do a little bit of A/B testing, and they can figure out how to craft a TV commercial that will elicit things like a feeling of safety. Or trust. Or desire.

So yeah, my first reaction when I saw this stuff was: Wow, I gotta get my hands on some of that! We’ve only just scratched the surface of what we can do with eyetracking and the marketers have already moved on to braintracking! But then my second reaction was: Wait a minute. What are we talking about here? A process designed to elicit specific patterns of neural activity in users? Back in the 50s, they called that “mind control”!

Now in a lot of ways, we’re already in the mind control business. Information architecture and interaction design both seek to reward and reinforce certain patterns of thought and behavior. (Just ask anybody who’s tried to wrestle any 37signals app into functioning the way they want to work, instead of the way Jason Fried thinks they ought to be working.)

So there’s always been an ethical dimension to our work. But who’s talking about this stuff? Who’s taking it seriously?

I don’t hear anybody talking about these things. Instead, what everybody wants to talk about is power, authority, respect. “Where’s our seat at the table?” Well, you know, there are people who make the decisions you want to be making. They’re called product managers. You want that authority? Go get that job. Don’t ask them to give that authority to you.

“When are we going to get the respect we deserve?” I’ll tell you how it’s going to happen. Somebody in this room, right now, at some point in the future is going to be the CEO of some company other than a design firm. They’ll develop all of those right political and managerial skills to rise to that level of power. And they will institute a culture in their organization that respects user experience. And then they’re just going to start kicking their competitors’ asses. And then gradually it will happen in industry after industry after industry. That’s how it will happen. But it will take time.

I had the thought at one of these summits a few years ago that we would know we had really arrived as a profession when there were people who wanted to sell us stuff. Because, you see, I grew up in the United States, where you don’t exist unless you are a target market.

And here at this event this year we have companies like TechSmith and Axure and Access Innovations and Optimal Workshop. And we thank them for their support. But where’s Microsoft? Where’s Adobe? Where’s Omni?

We aren’t a target market for any but the smallest companies. The big ones still don’t understand who we are. We’re still a small community, struggling to define itself.

In 2002, in the wake of the last bubble burst, I wrote an essay called “ia/recon”. In that essay, I tried to chart what I saw as a way forward for the field out of the endless debate over definitions. In the essay, I drew a distinction between the discipline of information architecture and the role of the information architect, and I argued that one need not be defined by the other.

Seven years later, I can see that I was wrong. The discipline of information architecture and the role of the information architect will always be defined in conjunction with one another. As long as you have information architects, what they do will always be information architecture. Seems pretty obvious, right? Only took me seven years to figure out.

But that’s okay, because what is clear to me now is that there is no such thing as an information architect.

Information architecture does not exist as a profession. As an area of interest and inquiry? Sure. As your favorite part of your job? Absolutely. But it’s not a profession.

Now, you IxDA folks should hold off for a moment before Twittering your victory speeches — because there’s no such thing as an interaction designer either. Not as a profession. Anyone who claims to specialize in one or the other is a fool or a liar. The fools are fooling themselves into thinking that one aspect of their work is somehow paramount. And the liars seek to align themselves with a tribe that will convey upon them status and power.

There are no information architects. There are no interaction designers. There are only, and only ever have been, user experience designers.

I’d like to talk about each of these three words, in reverse order, starting with “design”. Now, this is a word that I have personally had a long and difficult history with. I didn’t like this word being applied to our work for many years. I thought it placed us in a tradition — graphic design, industrial design, interface design — where our work did not belong. I also saw the dogmatism endemic to design education as poisonous and destructive to a field as young as ours. I still find the tendency of “designers” to view all human creative endeavor through the narrow lens of their own training and experience to be contemptible and appallingly short-sighted.

But I’m ready to give up fighting against this word, if only because it’s easily understood by those outside our field. And anything that enables us to be more easily understood is something we desperately need.

Now, let’s talk about that word “experience”. A lot of people have trouble with this word, especially paired with the word “design”. “You can’t call it experience design!” they say. “How can you possibly control someone else’s experience?” they demand.

Well, wait a minute — who said anything about control? Treating design as synonymous with control, and the designer as the all-powerful controller, says something more about the way these designers think of themselves and their relationship to their work than it does about the notion of experience design.

“Experience is too ephemeral,” they say, “too insubstantial to be designed.” You mean insubstantial the way music is insubstantial? Or a dance routine? Or a football play? Yet all of these things are designed.

The entire hypothesis of experience design (and it is a hypothesis at this point) is that the ephemeral and insubstantial can be designed. And that there is a kind of design that can be practiced independent of medium and across media.

Now, this part makes a lot of people uncomfortable because they’re committed to the design tradition of a particular medium. So they dismiss experience design as simply best practices. “What you call experience design,” they say, “is really nothing more than good industrial design.” Or good graphic design. Or good interface design.

This “mediumism” resists the idea that design can be practiced in a medium-independent or cross-media way. Because that implies that there may be something these mediumist design traditions have been missing all along.

If our work simply recapitulates what has been best practice in all these fields all along, why are the experiences they deliver so astonishingly bad? And let’s face it, they are really bad.

One big reason for it has to do with this last word, one which I think has been unfairly maligned: the word “user”. You guys know the joke, right? There are only two industries in the world that refer to their customers as users. One is the technology business and the other is drug dealers. Ha ha, get it? Our work is just as dehumanizing as selling people deadly, addictive chemicals that will destroy their lives and eventually kill them! Get it? It’s funny because it’s true.

No, it’s not. I’m here to reclaim “user”. Because “user” connotes use, and use matters! We don’t make things for those most passive of entities, consumers. We don’t even make things for audiences, which at least connotes some level of appreciation. The things we make get used! They become a part of people’s lives! That’s important work. It touches people in ways most of them could never even identify. But it’s real.

Okay, time for another show of hands: who here has “information architect” or “information architecture” in your title, on your business card? Raise your hand. [audience raises hands] Almost as many as we had ASIST members.

Okay, now let me see those hands again. Keep your hand up if there is also someone in your organization with “interaction design” or “interaction designer” in the title.

[hands go down]

Almost every hand went down. I see one hand, two hands. Three, four… five.

This is what the interaction design community recognizes — and what the leadership of the IxDA recognizes in particular — that the IA community does not.

In the marketplace, this is a zero-sum game. Every job req created for an “interaction designer” is one less job req for an “information architect” and vice versa. And the more “interaction designers” there are, the more status and authority and influence and power accrues to the IxDA and its leadership.

They get this, and you can see it play out in everything they do, including refusing offers of support and cooperation from groups they see as competitors, and throwing temper tantrums about how other groups schedule their conferences. Meanwhile, the IAs are so busy declaring peace that they don’t even realize that they’ve already lost the war.

This territorialism cannot go on, and I hope the IxDA leadership sees an opportunity here for positive change. These organizations should be sponsoring each other’s events, reaching out to each other’s membership, working together to raise the tide for everyone.

There is no us and them. We are not information architects. We are not interaction designers. We are user experience designers. This is the identity we must embrace. Any other will only hold back the progress of the field by marginalizing an important dimension of our work and misleading those outside our field about what is most important and valuable about what we do. Because it’s not information, and it’s not interaction.

We’re in the experience business. User experience. We create things that people use.

To use something is to engage with it. And engagement is what it’s all about.

Our work exists to be engaged with. In some sense, if no one engages with our work it doesn’t exist.

It reminds me of an artist named J.S.G. Boggs. He hand-draws these meticulously detailed near-replicas of U.S. currency. It’s gotten him in trouble with the Secret Service a couple of times. They’re near-replicas — they’re not exact, they’re obviously fake. They’re fascinating and they’re delightful, in and of themselves, as objects.

But here’s the catch: For Boggs, the work isn’t complete until he gets someone to accept the object as currency. The transaction is the artwork, not the object that changes hands. As he sees it, his work is not about creating things that look like currency; it’s about using art as currency. It’s the use — the human engagement — that matters.

Designing with human experience as an explicit outcome and human engagement as an explicit goal is different from the kinds of design that have gone before. It can be practiced in any medium, and across media.

Show of hands: Who here is involved in creating digital experiences? [audience raises hands] Okay, hands down. Now: who’s involved in creating non-digital experiences? [audience raises hands] More hands than I thought.

Now, do we really believe that this is the boundary of our profession? And if we don’t, why are there so many talks about websites at conferences like this one?

Don’t get me wrong, I love the web. I hope to be working with the web in 10 years, in 20 years. But the web is just a canvas. Or perhaps a better metaphor is clay — raw material that we shape into experiences for people.

But there are lots of materials — media — we can use to shape experiences. Saying user experience design is about digital media is rather like saying that sculpture is about the properties of clay.

That’s not to say that an individual sculptor can’t dedicate themselves to really mastering clay. They can, and they do — just like many of you will always be really great at creating user experiences for the web.

But that does not define the boundary of user experience design. Where it really gets interesting is when you start looking at experiences that involve multiple media, multiple channels. Because there’s a whole lot more to orchestrating a multi-channel experience than simply making sure that the carpet matches the drapes.

We’ve always said we were in the multimedia business. Let’s put some weight behind that. Expanding our horizons in this way does not dilute our influence. It strengthens it.

So if we’re all user experience designers, and there are no more information architects, but there is still such a thing as information architecture, what does it look like?

Well, let’s take a closer look at engagement, and think about the ways we can engage people. What are the varieties of human engagement?

We can engage people’s senses. We can stimulate them through visuals, through sound, through touch and smell and taste. This is the domain of the traditional creative arts: painting, music, fashion, cooking.

We can engage their minds, get them thinking, reasoning, analyzing, synthesizing. This is where fields like scholarship and rhetoric have something to teach us.

We can engage their hearts, provoke in them feelings of joy and sadness and wonder and rage. (I’ve seen a lot of rage.) The folks who know about this stuff are the storytellers, the filmmakers, and yes, even the marketers.

And we can engage their bodies. We can compel them to act. This is the closest to what we’ve traditionally done: studying and trying to influence human behavior.

And that’s really about it. Or at least, that’s all that I’ve been able to think of: Perception, engaging the senses. Cognition, engaging the mind. Emotion, engaging the heart. And action, engaging the body.

Mapping out the interrelationships between these turns out to be a surprisingly deep problem. Every part influences every other part in unexpected ways. In particular, thinking and feeling are so tangled up together that we practically need a new word for it: “thinkfeel”.

There are a few other factors, sort of orthogonal to these, that influence experience:

There are our capabilities: the properties of our bodies, the acuity of our senses, the sharpness and flexibility of our minds, the size of our hearts. Our capabilities determine what we can do.

Then there are our constraints, which define what we can’t do. The limits on our abilities, whether permanent — someone who’s having a hard time reading because they have dyslexia — or temporary — someone who’s having a hard time reading because they’ve had five bourbons.

Finally, we have context. And I have to admit that I’m cheating a bit on this one because I’m packing a lot of different factors up into this one category. There’s the context of the moment: babies crying, dogs barking, phones ringing. (Calgon, take me away!) Then there’s personal context: the history, associations, beliefs, personality traits of that individual. And there’s the broad context: social, cultural, economic, technological.

But these three — capabilities, constraints, and context — are really just cofactors, shaping and influencing experience in those big four categories: perception, cognition, emotion, and action.

Our role, as user experience designers, is to synthesize and orchestrate elements in all of these areas to create a holistic, cohesive, engaging experience.

So how do we create user experiences that engage across all of these areas? Where can we look to for expertise? Where’s the insight? Where are the areas for further inquiry?

Perception is already pretty well covered. We’ve got visual designers and, sometimes, animators. In some cases we’ve got sound designers. We’ve got industrial designers, working on the tactile aspects of the products we create.

Action, again, is pretty much what we were doing already. I defined action as engagement of the body, which may sound strange to many of you when I say that we’ve really been doing this all along. But if you think about our work, when we talk about behavior, we are always talking about some physical manifestation of a user’s intention — even when that manifestation is as small as a click. (And the interaction designers claim to own behavior anyway, so I say let them have it.)

Because the real action is in these last two areas, cognition and emotion. This, to my mind, is the manifest destiny for information architecture. We may not have fully recognized it before because the phrase “information architecture” puts the emphasis on the wrong thing.

It’s never been about information. It’s always been about people: how they relate to that information, how that information makes them think, how it makes them feel, and how the structure of that information influences both things. This is huge, unexplored territory.

We must acknowledge that as user experience designers we have a broader place in the world than simply delivering value to businesses. We must embrace our role as a cultural force.

Here’s Michael Wesch quoting Marshall McLuhan again: “We shape our tools, and then our tools shape us.” Think about that for a second. “We shape our tools, and then our tools shape us.” When McLuhan said “we”, and when he said “us”, he was talking about the entire human race. But not everybody’s a shaper, right? The shapers are the people in this room, the people in this field. We shape those tools and then, the experiences that those tools create shape humanity itself. Think about the responsibility that entails.

I believe that when we embrace that role as a cultural force, and we embrace that responsibility, this work — user experience design — will take its place among the most fundamental and important human crafts, alongside engineering and architecture and all kinds of creative expression and creative problem solving disciplines.

At last year’s five-minute madness, I said that the experts who give talks at events like this one were making it up as they went along. But, I said, that’s okay, because we all are.

I take that back. We aren’t making it up as we go along. This is not a process of invention. This is a process of discovery.

What we are uncovering about people, about tools and their use, about experiences — it’s always been there. We just didn’t know how to see it.

This discovery phase is far from over. Ten years isn’t nearly enough time. There’s more that we can’t see than is apparent to us right now.

For my part, and for you as well, I hope there’s always more for us to discover together.

Thank you all very much.


Video by Chris Pallé and “The UX Workshop”:http://theuxworkshop.tv/
photo by “Jorge Arango”:http://www.flickr.com/photos/jarango/3382137521/
Thanks to Chris and Jorge.

These podcasts are sponsored by:

ASIS&T logo
The “American Society of Information Science & Technology”:http://asist.org/: Since 1937, ASIS&T has been THE society for information professionals leading the search for new and better theories, techniques, and technologies to improve access to information.

IA Summit 2009 logo
The “IA Summit”:http://www.iasummit.org: the premier gathering place for information architects and other user experience professionals.

The theme of the event this year, Expanding Our Horizons, inspired peers and industry experts to come together to speak about a wide range of topics, from practical techniques and tools to evolving practices for creating better user experiences.

The design behind the design
“Boxes & Arrows”:http://www.boxesandarrows.com: Since 2001, Boxes & Arrows has been a peer-written journal promoting contributors who want to provoke thinking, push limits, and teach a few things along the way.

Contribute as an editor or author, and get your ideas out there. “boxesandarrows.com/about/participate”:http://www.boxesandarrows.com/about/participate

Bringing Holistic Awareness to Your Design

Written by: Joseph Selbie

Gone, thankfully, are the days when the user experience and the user interface were an afterthought in the website design process, to be added on when programming was nearing completion. As our profession has increasingly gained importance, it has also become increasingly specialized: information design, user experience design, interaction design, user research, persona development, ethnographic user research, usability testing—the list goes on and on. Increased specialization, however, doesn’t always translate to increased user satisfaction.

My company conducted a best practice study to examine the development practices of in-house teams designing web applications—across multiple industries, in companies large and small. Some teams were large and highly specialized, while others were small and required a single team member to perform multiple roles. We identified and validated best practices by measuring user satisfaction levels for the applications each team had designed; the higher the user satisfaction scores for the application, the more value we attributed to the practices of the team that designed it.

We did not find any correlation between user satisfaction and the degree of team specialization, one way or the other: some teams with the most specialization did well, and some did poorly. What we did consistently observe among teams with high user satisfaction scores was one characteristic that stood out above all the others—what we began to call shared, holistic understanding. Those teams that achieved the highest degree of shared, holistic understanding consistently designed the best web applications. The more each team member understood the business goals, the user needs, and the capabilities and limitations of the IT environment—a holistic view—the more successful the project. In contrast, the more each team member was “siloed” into knowing just their piece of the whole, the less successful the project.

All of the members of the best teams could tell us, with relative ease, the top five business goals of their application, the top five user types the application was to serve, and the top five platform capabilities and limitations they had to work within. And, when questioned more deeply, each team member revealed an appreciation and understanding of the challenges and goals of their teammates almost as well as their own.

The members of the teams that performed less well not only tended not to understand the application as a whole, they saw no need to understand it as a whole: programmers did not care to know about users, user researchers did not care to know about programming limitations, and stakeholders were uninvolved and were considered “clueless” by the rest of the development team. These are blunt assessments of unfortunate team member attitudes, but we were surprised how often we found them to be present.

We also observed that the best teams fell into similar organizational patterns—even though there was a blizzard of differing titles—in order to explore and understand the information derived from business, users and IT. We summarized the organization pattern in the diagram below. We chose generic/descriptive titles to simplify the picture of what we observed. In many cases several people composed a small team, such as the “UI Developer(s)” or the “User Representative(s)”, often with differing titles. Also fairly common were very small teams where the same person performed multiple roles.

Holistic Awareness Fig 1

Fig. 1 — Teams tend to organize in similar patterns in response to the information domains they need to explore and understand

Even this simplified view of the development team reveals the inherent complexity of the development process. The best team leaders managed to not only encourage and manage the flow of good information from each information domain, but they also facilitated thorough communication of quality information across all the team members regardless of their domain. Here’s how they did it.

Five Key Ways to Promote Shared, Holistic Understanding

1. All team members—all—conduct at least some user research

Jared Spool once wrote that having someone conduct user research for you is like having someone take a vacation for you—it doesn’t have the desired effect! On the best teams, everyone, from programmers to stakeholders, participates to some degree in user research. A specialist on the team often organizes and schedules the sessions and provides scenario scripts or other aids, but everyone on the team takes part in the research and thus has direct contact with actual users. There is no substitute for direct contact with users. Interviewing living, breathing users, ideally in their own home or work place, makes a deeper impression than any amount of documentation can duplicate.

2. Team members participate in work and task flow workshops

Designing applications to support the preferred work and task flow of the users—providing the right information, in the right features, at the right time—is one of the hallmarks of applications that get high user satisfaction scores. The best teams devote enough whiteboard-style collaborative workshop time to exploring work and task flow (including actual users in the sessions when possible) until all team members truly understand all the steps, loops and potential failure paths of their users.

3. Team members share and discuss information as a team

A simple practice, but one which is often overlooked, is taking the time to share and discuss findings and decisions with the entire team. Too often teams communicate information of significant importance to the project through documentation alone or through hurried summaries. Even if it is not possible for the team to participate in all user research or in mapping out all work and task flow on a whiteboard together, at a minimum, the team should go through the results of these processes in sufficient detail and with sufficient time to discuss and understand what has been learned.

Direct participation is the most effective way to learn and understand. Full and relaxed discussion with team members is the second best. Reading documentation only is the least effective way for team members to retain and understand project information.

4. Team members prioritize information as a team

Not only is it important that all team members be familiar with information from all three domains (business goals, user needs, and IT capabilities), but it is especially significant that they understand the relative importance of the information—its priority.

My company developed what we call a “Features and Activity Matrix”, based on our own experience designing applications and on the practices we observed in our study. The features and activity matrix accomplishes two things:

  1. It forces teams to translate business goals, user input and IT capabilities into specific proposed features or activities that a user would actually see and use at the interface level. We have found that if an identified user need (for example, the need to know currency conversion values) isn’t proposed as a specific feature (say, a pop-up JavaScript-enabled converter, tied to a third-party database) then a potentially important user need either gets lost, or is too vague to design into the application.
  2. The features and activities matrix allows team members to prioritize the proposed features and activities from the perspective of their own domain through a process of numeric ranking. Business ranks according to the importance to the business, IT ranks according to “doability” (measuring budget, resources and schedule against each proposed feature and activity), and the user representatives rank according to their assessment of what will make users most satisfied.

The numeric ranking for each feature or activity, from each domain, is then averaged to arrive at a consensus prioritization of every feature and activity proposed for the application. If, as is usually the case, the team is already aware that they cannot do everything that has been requested or proposed, this is a very effective way to determine which features and activities are not going to make the cut and which ones have the highest importance.
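The consensus step above is simple arithmetic: average each feature’s per-domain rankings, then sort. A minimal sketch follows; the feature names and scores are hypothetical, made up to illustrate the 1-5 scale (5 = most important) the article describes.

```python
# Hypothetical rankings from the three domains: business, IT ("doability"),
# and user representatives. Scale is 1-5, 5 being most important.
features = {
    "currency converter": {"business": 4, "it": 2, "users": 5},
    "saved searches":     {"business": 3, "it": 5, "users": 4},
    "export to PDF":      {"business": 2, "it": 4, "users": 2},
}

def consensus(rankings):
    """Average the per-domain rankings for one feature."""
    return sum(rankings.values()) / len(rankings)

# Sort by consensus score, highest priority first; low scorers
# are the candidates that "don't make the cut."
prioritized = sorted(features, key=lambda f: consensus(features[f]), reverse=True)

for name in prioritized:
    print(f"{name}: {consensus(features[name]):.2f}")
```

Under these made-up numbers, “saved searches” ranks first (4.00) and “export to PDF” last (2.67), so the team would know where to cut if scope shrinks.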

Holistic Awareness Fig 2

Fig. 2 — Features and Activities Matrix. Note that we used a ranking scale of 1-5, 5 being the most important.

5. Team members design together in collaborative workshops

Once information from all three domains is gathered, analyzed, shared and prioritized, the remaining—and most powerful—practice to promote a shared, holistic understanding is to conduct wireframe-level, whiteboard-style, collaborative design sessions. Your session participants should include a solid representation of users, business and IT but not exceed roughly 12 people. In these workshops teams can work out together the basic layout, features and activities for most of the core screens needed for a project. These sessions can, and should, require multiple days. We have found that between 10 and 20 core screens can be considered, discussed, iterated and designed in 4-5 days of workshops.

Make sure everyone is fully engaged: business cannot be half committed, and IT cannot say that they will determine later if the screen can be built. All team members should be prepared to make real-time decisions in the collaborative design session. Inevitably there will be a few things that simply cannot be decided during the session, but the greater the shared, holistic understanding that already exists, the fewer things that will require processing and final decisions outside the session.

A well prepared collaborative design session both promotes and leverages the team’s shared, holistic understanding of the project. Even though they are time consuming, collaborative workshops eventually save time because the team ends up needing fewer iterations to refine and finish the design. Collaborative workshops also ensure higher quality; there are fewer missteps and errors down the line because of the shared understanding of the design. Finally, collaborative workshops create great buy-in from business and IT. It is far less likely that some—or all—of the project will undergo unexpected or last-minute changes if the goals and priority of features are clearly established during the design process.

Whatever the size and structure of your team, and no matter how many or how few specialists it has, your outcomes will be better if everyone shares a holistic understanding of the work at hand. Taking the extra time as a team to develop a shared holistic understanding pays off in greater efficiency in the long run—and most importantly—in greater user satisfaction and overall success!

Why We Call Them Participants

Written by: Dana Chisnell

It was not an easy recruit. Directors of IT are busy people. Oddly, they’re hard to get hold of. They don’t answer calls from strangers. They don’t answer ads on web sites. The ones who do answer ads on web sites we had to double-check by calling their company HR departments to verify they had the titles they said they did.

And now this.

“Hi! So we have some executives coming in tomorrow to observe the test sessions.” This was the researcher phoning. He was pretty pleased that his work was finally getting some attention from management. I would have been, too. But. He continued, “I need you to [oh yeah, the Phrase of Danger] call up the participants and move some of them around. We really want to see the experienced guy and the novice back-to-back because Bob [the head of marketing] can only come at 11:30 and has to leave at 1:00.”

“Sure,” I say, “we can see if the participants can flex. But your sessions are each an hour long. And they’re scheduled at 9:00, 10:30, 12:00, and 2:00. So I’m not quite clear about what you’re asking us to do.”

“I’m telling you to move the sessions,” the researcher says, “so the experienced guy is at 11:30 and the novice is at 12:30. Do whatever else you have to do to make it work.”

“Okay, let me check the availability right now while we’re on the phone,” I say. I pull up the spreadsheet of participant data. I can see that the experienced guy was only available at 9:00 am. “When we talked with Greg, the experienced guy, the only time he could come in was 9:00 am. He’s getting on a plane at 12:30 to go to New York.”

“Find another experienced guy then.” What?!

Five signs that you’re dissing your participants

You shake hands. You pay them. There’s more to respecting participants? These are some of the symptoms of treating user research participants like lab rats:

They seem interchangeable to you.


If you’re just seeing cells in a spreadsheet, consider taking a step back to think about the purpose and goals of your study.

You’re focused on the demographics or psychographics.


If it’s about segmentation, consider that unless you’re running a really large study, you don’t have a representative sample anyway. Loosen up.

Participants are just a way to deliver data.


You’ve become a usability testing factory, and putting participants through the mill is just part of your life as a cog in the company machine.

You don’t think about the effort it takes for a person to show up in your lab.


Taking part in your session is a serious investment. The session is only an hour. But you ask participants to come early. Most do. You might go over time a little bit. Sometimes. It’ll take at least a half hour for the participant to get to you from wherever she’s coming from. It’ll take another half hour for her to get wherever she’s going afterward. That’s actually more than two hours altogether. Think about that and the price of gas.

You don’t consider that these people are your customers and this is part of their customer experience.

You and your study make another touch point between the customer and the organization that most customers don’t get the honor of experiencing. Don’t you want it to be especially good?

They’re “study participants,” not “test subjects.”

Don’t forget that you couldn’t do what you do without interacting with the people who use (or might use) your organization’s products and services. When you meet with them in a field study or bring them into a usability lab, they are doing you a massive favor.

Although you conduct the session, the participant is your partner in exploration, discovery, and validation. That is why we call them “participants” and not “test subjects.” There’s a reason it’s called “usability testing” and not “user testing.” As we so often say in the introductions to our sessions, “We’re not testing you. You’re helping us evaluate the [product].”

Throw away your screener: Tips on recruiting humans

I’m not kidding. Get rid of your screener and have a friendly chat with your market research people. Tell them you’re not going to recruit to match the market segments anymore. Why not? Because they usually don’t matter for what you’re doing. In a usability test, you focus on behavior and performance, right? So recruit for that.

Focus on behavior, not demographics


Why, if you’re testing a web site for movie buffs, will selecting for household income matter? What you want to know is whether they download movies regularly. That’s all. Visualize what you will be doing in the session, and what you want to learn from participants. This should help you define what you absolutely require.

Limit the number of qualifiers


Think about whether you’re going to compare groups. Are you really going to compare task success between people from different sized companies, or who have multiple rental properties versus only one, or different education levels? You might if you’re doing a summative test, but if most of your studies are formative, then it’s unlikely that selecting for multiple attributes will make a big difference when you’re testing 5 or 6 people in each audience cell.

Ask open-ended questions


Thought you covered everything in the screener, but fakers still got into your study? Asking multiple-choice questions forces people to choose the answer that best fits. And smart people can game the questionnaire to get into the study because they can guess what you’re seeking. Instead, ask open-ended questions: Tell me about the last time you went on a trip. Where did you go? Where did you stay? Who made the arrangements? You’ll learn more than if you ask, Of your last three trips taken domestically, how many times did you stay in a hotel?

Learn from the answers


You get “free” research data when you pay attention to the answers to open-ended screening questions: people volunteer information about their lives, giving you more context in which to make decisions about your study design and the resulting data.

Flex on the mix


If you use an outside recruiting firm, ask to review a list of candidates and their data before anyone gets scheduled. You know the domain better than the recruiters do. You should know who will be in the testing room (besides you). You should make the trade-offs when there’s a question about how closely someone meets the selection criteria.

Remember, we’re all human, even your participants. These steps will help you respect the human who is helping you help your company design better experiences for customers.

Building the UX Dreamteam – Part 2

Written by: Anthony Colfelt

As we discussed in “part one”:http://www.boxesandarrows.com/view/building-the-ux, the skills in research, information architecture, interaction design, graphic design and writing define the recognized areas of User Experience design. However, there still remains much to discuss about what makes a UX team dreamy.

Each UX Dreamteam has a finely tuned mix of skills and qualities, as varied as the environments in which they operate. Part two will address whether a person has the right ‘hard’ skills and ‘soft’ qualities like communication style, creativity and leadership ability to fit your particular organizational context. We’ll also touch on the quality of an individual’s personality that may or may not complement the others on your team.

Personality

Perhaps the most important consideration in forming your Dreamteam is mixing the personalities of your superstars. As mom used to say, “It’s not just about how you look, it’s what’s inside that counts.” A candidate may look ideal on paper, but until you have them in front of you, talking and interacting, you won’t know if what is inside will be a fit. Your group spends almost as much time together as apart; its members need to respect and like one another to work well together. Personality typing tests hold the promise of quantifying the immeasurable, but you would be ill advised to use them as part of the interview process. Myers Briggs, DISC and plenty of others use various axes to measure the intrinsic tendencies of a person.

As cool as it sounds, the science is just not exact enough to act as the basis of any decision. This is not to say that these tests are not illuminating in their own right – they certainly foster greater understanding and empathy among teams. Generally speaking, though, people under pressure may answer personality tests as they think they should rather than honestly.

Collaboration is a big part of design best practice, and the ability to work well with teammates should be of paramount concern. Selflessness indicates that a candidate is a team player, as they seek to raise not only their own reputation but equally that of those with whom they work. Humility, humor and empathy are virtues particularly relevant to the creative industry and should be sought after in UX professionals. Each player on the Dreamteam accepts when they’re mistaken, keeps the others creatively entertained and feels for the users they serve.

As with any skill or quality we have already discussed and will explore in this article, finding the right personality type comes down to the classic answer: ‘it depends’. It depends on the personalities of existing Dreamteamsters, the type of work they do, and on the organization into which they must fit. There is no magic formula, but there is one thing to always avoid: toxicity. Morale and productivity can be totally undermined by a “toxic person”:http://bipolar.about.com/od/support/a/070315_toxic.htm. Having one aboard can turn your Dreamteam into a nightmare. So, do your homework to avoid inadvertently hiring them.

Screening Tips:

Look for signs of toxicity by asking about previous work places and their interactions with teammates. Did they voluntarily leave the last job? Do they mainly talk badly about their last workplace? Remember, a toxic person is often manipulative and they may seem great on the surface, so check references. If you misjudge a new hire and you realize you have a toxic person aboard, waste no time in jettisoning them, no matter how skilled they may seem.

Creative and Analytical qualities

Most jobs on the UX Dreamteam involve a level of creativity and analysis, but it’s a rare gem who is a rock-star operator in both modes. Visionaries and analysts are equally necessary, ensuring both great ideas and the ability to organize and actualize them.

A creative person doesn’t see a glass half empty or half full, but instead asks why it should be a glass at all. An ability to think laterally – meaning “to escape from a local optimum in order to move towards a more global optimum” (“Edward de Bono”:http://www.edwdebono.com/debono/lateral.htm) – is the talent from which innovation is born. A Dreamteam accesses their creativity readily and regularly to push beyond the obvious for an appropriately innovative solution. Ensure a proportion of creative genius in your Dreamteam to increase business success and thereby the team’s reputation.

Your analytical superstars can process vast amounts of information and distill it into a concise and cohesive experience for the user. They are methodical, account for every detail, and question inconsistencies. They grow solutions by breaking a system into its component parts, then creatively reassembling it in logical order. Good analysts are passionate and detail-oriented when identifying patterns in data and behavior.

Screening Tips:

Given how ideas are often difficult to credit to the interviewee, gauge creativity from the dialogue and candor during the interview. A truly imaginative person effortlessly surprises you with a fresh, off-beat approach to old problems. Responses to tangential or seemingly random questions can help illuminate this quality. If they can link the absurd back to realistic solutions coherently and with humor, you can be sure there’s creativity within. Analytical people are interested in details. Does your candidate flinch at the idea of auditing the content of a large information system? If they have done data analysis before, did they jump into it enthusiastically? How did it go?

Practitioner vs. Managerial qualities

Managerial qualities are confused with experience in most professions, and UX is no different. Experience correlates with peer respect, but respect is not all a manager must command. Peter Merholz talks of managers needing to be either "T"-shaped or "Bar"-shaped, referring to the profile of skills they possess. "T"-shaped people have a broad and shallow knowledge of most skills and go deep in only a few.

"Bar"-shaped people do not plumb the depths of any expertise. As “he says”:http://www.peterme.com/?p=580, they are all about the connections between disciplines, and being able to articulate the power of that integration. An "I" shape would indicate deep knowledge in just one or two areas. This profile suggests an awesome specialist practitioner (yes, there is an "I" in Dreamteam!).

Good bosses also quietly serve as coaches, therapists, facilitators, communicators, organizers and politicians. As leaders, they are comfortable setting an agenda for others to fulfill while inspiring the Dreamteam to meet or beat that agenda. Your luminary leader provides ‘air cover’, also known as ‘running interference’. By making space for their reports to work and fending off interfering people or tasks, the manager ensures the Dreamteam is focused, not randomized.

People who find less satisfaction in helping others to be effective are better placed as well-compensated senior practitioners. To presume that someone senior should be promoted into a management position is misguided. A manager’s UX skills are less important than their ability to co-ordinate a group of individuals and spot what your organization needs from them.

Screening Tips:

When seeking managerial talent, look for someone who will revel in the Dreamteam’s success, rather than their own. How have they "run interference" in the past? New managers sourced from within a team tend to have shown a knack for getting the best out of others prior to their promotion. This is known as "acting up", and setting it as a task for potential managers is a good test of their aptitude. If you’re looking for a practitioner, be sure they’re not fixated on being a manager, lest their ambitions undermine the effectiveness of your designated leader.

Strategic vs Tactical Ability

We all know guys who stand idly by, watching others do their work and wryly commenting, "You look after the details, I’m the big picture man.” Those who strategize with ‘blue sky’ ideas can raise the ire of people slaving at everyday tasks. Tactical skills are just as valuable as strategic. Each serves their purpose in envisioning and getting things done.

Conceiving an entire system and determining what both the business and users get out of it are the domains of big picture people. It is hard to imagine success without their vision to work toward. These people can be creative or analytical but find implementation a chore. They are typically well informed of industry trends and can forecast the future through them. While vision is an awesome asset, without attentive "small picture" work, it’s an apparition. Strategists think one to five years ahead and beyond and are good at depicting a vision.

Tactical people focus on day-to-day activity and on success in the one to six month timeframe. With the exception of think tanks, the organizational balance needs to skew toward small picture people in order to achieve success. Many startups and UX teams fail because of the inverse balance.

Screening Tips:

To find the detail-oriented, look for evidence of finishing products and a personal satisfaction in seeing all loose ends tied up. A strategic thinker will show evidence of helping others to see the wider context of what they’re doing, often through conceptual and architectural diagrams. Can they show you some? Also ask questions which illuminate how they’re plugged into where your organization’s industry and the wider UX field are headed.

Innies vs. Outies

In-house teams (aka "Innies") have needs different from those of external agencies that provide interface building/designing services or consultancy. An in-house team is working toward increasing profitability through UX. In many cases, the nature of projects does not change over time because there’s only one type of business to support. Exceptions exist, but in general those building in-house teams should discount candidates who need variety to thrive.

The in-house Dreamteam is also better suited to agile development methodologies, which rely heavily on face-to-face contact. Unless a consultant is able to work on-site for the duration of the agile project, they will not be able to fulfill some of the tenets surrounding ‘less documentation, more talking’. Aside from communicating an absent author’s intentions, documentation is a mechanism agencies use to cover themselves if a client claims poor diligence, and they won’t abandon it willingly.

Agencies don’t make much money from staff who aren’t comfortable playing the consultant role. Working under pressure, answering expertly on all subjects related and sometimes unrelated to the job requires a certain type of communication style and self-confidence. Agency staff (aka “outies”) must be broad-skilled and part salespeople to make their expertise and company’s value obvious to clients. This isn’t to say that these qualities aren’t good to have on the in-house UX Dreamteam, but they’re less critical to business success and can be compensated for in other ways.

Screening Tips:

Stack your in-house team with stars who are tactical: their willingness to roll up their sleeves, dig in and enjoy attacking a long-term goal is what you need. Strategic thinking is also attractive, but you may want to emphasize this in your management function, where vision is expected. Beware hiring those with purely "innie" experience for "outie" roles and vice versa. Outies may find innie work mundane, and innies can struggle in the faster-paced, higher-pressure outie workplace. Outies need political and sales savvy to navigate varied organizations and present value. Confidence, plausibility and magnetism will be obvious – you’ll want to hire them before they’ve shown you their ample skills. Be sure they have those too!

Organizational Contexts

Hiring managers generally consider organizational context subconsciously when preparing their Dreamteam, usually feeling out the candidate with gut instinct rather than concrete comparisons. It helps to abstract the organization into something you can test applicants for compatibility with – a “persona”:http://en.wikipedia.org/wiki/Persona, for instance – then you can envision a compatible teammate for that persona. Size, work processes, project types, employees, industry and brand, among other things, influence the organization’s personality.

Some organizations are process-driven and others are more free-form. Process ensures that work complies with a to-do list prescribing smooth running and/or best practice. The less experienced use process like new bicycle riders use training wheels. Some people flourish within a controlled environment. Others feel hampered or oppressed by it. What are the processes used within your organization? What unique characteristics do individuals who operate within them need to be happy and successful?

A Dreamteam’s size will impact the duties each superstar performs. Small organizations can have tasks similar in number to their larger counterparts, but spread them among fewer people. This inevitably means one person fulfilling multiple roles. The graphic designer might double as the interface-layer coder. The Information Architect may also be the researcher and writer. If you are in a small organization, a ‘gun’ specialist whose UX skills lie primarily in one area may not be a good fit.

Every workplace has a pace. Agile development or simply expeditious environments tend to be frenetic and mean working quickly. Some people don’t perform without time to pause, think, rework and perfect their work. Others will be frustrated if it takes a long time to get things done. They won’t always agree that crafting something perfectly, or documenting design thoroughly, is time well spent. Sometimes perfection is expected, but timescales remain fixed. In this case, experience and coping well with stress are consequential.

Screening Tips:

What kind of personality does your office have? Who would get along best with that person? Prepare to win the best fit by making a list of organizational attributes and qualities that will complement these. Agile methodologies should be coupled with experienced folk who are natural communicators; as should organizations without process to guide activities. A quiet consensus builder might suit a contentious office, etc. Use the example below to get you started – be creative and modify the attributes as you see fit.

Company Persona and Match

Here’s an example of how you might break down how a potential new team member might fit in with your organization:

Take the time to analyze what your Dreamteam needs and how well that fits potential talent.
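One way to make this breakdown concrete is as a simple scoring exercise. The sketch below is purely illustrative – the attribute names, complementary qualities, and fractional "fit" score are assumptions for the example, not a standard instrument – but it shows the shape of comparing a company persona against a candidate’s qualities.

```python
# Hypothetical sketch: scoring a candidate against a company "persona".
# Attributes, values, and complements are illustrative assumptions --
# adapt them to your own organization.

COMPANY_PERSONA = {
    "pace": "fast",         # frenetic, agile environment
    "process": "light",     # little formal process to lean on
    "team_size": "small",   # generalists must wear many hats
}

# Personal qualities that complement each organizational attribute.
COMPLEMENTS = {
    ("pace", "fast"): "comfortable under pressure",
    ("process", "light"): "self-directed",
    ("team_size", "small"): "broad-skilled generalist",
}

def match_score(candidate_qualities, persona=COMPANY_PERSONA):
    """Return the fraction of the persona's complementary qualities
    that the candidate demonstrates."""
    wanted = [COMPLEMENTS[(attr, value)] for attr, value in persona.items()]
    hits = sum(1 for quality in wanted if quality in candidate_qualities)
    return hits / len(wanted)

candidate = {"broad-skilled generalist", "comfortable under pressure"}
print(f"fit: {match_score(candidate):.0%}")  # prints "fit: 67%"
```

A real assessment would weight the attributes (pace mismatches may matter more than anything else) and would rest on interview evidence rather than labels, but even a rough grid like this forces the concrete comparisons that gut instinct skips.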

Where do we go from here?

Hiring UX staff is rarely easy, but now you can take a structured approach to identifying the skills and personal qualities your team needs within your organizational context. Like any craft, building the UX Dreamteam takes practice, and the occasional mistake leads to growth as a hiring manager. Even when you think you’ve mastered it, there is still an element of luck to contend with. You may be willing to compromise on skills and qualities for someone who just feels right, and your instincts shouldn’t be discounted. Allow them to inform your choices while thinking about the areas we’ve touched on to build the UX Dreamteam that will make your organization shine.

We Tried To Warn You, Part 2

Written by: Peter Jones

A large but unknowable proportion of businesses fail pursuing nearly perfect strategies.

In Part I of We Tried to Warn You, three themes were developed:

# Organizations as wicked problems,
# The differences of failure leverage in small versus large organizations, and
# The description of failure points

These should be considered exploratory elements of organizational architecture, from a communications information architecture perspective. While the organizational studies literature has much to offer about organizational learning mechanisms, we find very little about failure from the perspective of product management, management processes, or organizational communications.

Researching failure is similar to researching the business strategies of firms that went out of business (e.g., Raynor, 2007). They are simply not available for us to analyze: they are either covered-up embarrassments, or they become transformed, over time and at much expense, into “successes.”

In The Strategy Paradox, Raynor describes the “survivor’s bias” of business research, pointing out that internal data is unavailable to researchers for the dark matter of the business universe, those that go under. Raynor shows how a large but unknowable proportion of businesses fail pursuing nearly perfect strategies. (Going concerns often survive because of their mediocre strategies, avoiding the hazards of extreme strategies).

A major difference in the current discussion is that organizational failure as defined here does not bring down the firm itself, at least not directly, as a risky strategy might. But it often leads to complete reorganization of divisions and large projects, which should be recognized as a significant failure at the organizational level.

One reason we are unlikely to assess the organization as having failed is the temporal difference between failure triggers and the shared experience of observable events. Any product failure will affect the organization, but some failures are truly organizational. They may be more difficult to observe.

If a prototype design fails quickly (within a single usability test period), and a project starts and fails within 6 months, and a product takes perhaps a year to determine its failure – what about an organization? We should expect a much longer cycle from originating failure event to general acknowledgement of failure, perhaps 2-5 years.

There are different timeframes to consider with organizational versus project or product failure. In this case study, the failure was not observable until after a year or so of unexpectedly weak sales, with managers and support dealing with customer resistance to the new product.

However, decisions made years earlier set the processes in place that eventuated as adoption failure. Tracing the propagation of decisions through resulting actions, we also find huge differences in temporal response between levels of hierarchy (found in all large organizations).

Failures can occur when a chain of related decisions, based on bad assumptions, propagates over time. These micro-failures may have appeared at the time as “mere” communication problems.

In our case study, product requirements were defined based on industry best practices, guided by experts and product buyers, but excluding user feedback on requirements. Requirements were managed by senior product managers and were maintained as frozen specifications so that development decisions could be managed. Requirements came to be treated as if validated by their continuing existence and support from product managers. But with no evaluation by end users of embodied requirements – no process prototype was demonstrated – product managers and developers had no insight into the dire future consequences of product architecture decisions.

Consider the requisite timing of user research and design decisions in almost any project. A cycle of less than a month is a typical loop for integrating design recommendations from usability results into an iterative product lifecycle.

If the design process is NOT iterative, we see the biggest temporal gaps of all. There is no way to travel back in time to revise requirements unless the tester calls a “show-stopper,” and that would be an unlikely call from an internal usability evaluator.

In a waterfall or incremental development process, which remains typical for these large-scale products – usability tests often have little meaningful impact on requirements and development. This approach is merely fine-tuning foregone conclusions.

Here we find the seeds of product failure, but the organization colludes to defend the project timelines, to save face, to maintain leadership confidence. Usability colludes to ensure they have a future on the job. With massive failures, everyone is partly to blame, but nobody accepts personal responsibility.

The Roles of User Experience


Figure 1. Failure case study organization – Products and project timeframes.

As Figure 1 shows, UX reported to development management, and was further subjected to product and project management directives.

In many firms, UX has little independence and literally no requirements authority, and in this case was a dotted-line report under three competing authorities. That being the case, by the time formal usability tests were scheduled, requirements and development were too deeply committed to consider any significant changes from user research. With the pressures of release schedules looming, usability was both rushed and controlled to ensure user feedback was restricted to issues contained within the scope of possible change and with minor schedule impact.

By the time usability testing was conducted, the scope was too narrowly defined to admit any ecologically valid results. Usability test cases were defined by product managers to test user response to individual transactions, and not the systematic processes inherent in the everyday complexity of retail, service, or financial work.

* Testing occurred in a rented facility, and not in the retail store itself.
* The context of use was defined within a job role, and not in terms of productivity or throughput.
* Individual screen views were tested in isolation, not in the context of their relationship to the demands of real work pressures – response time, database access time, ability to learn navigation and to quickly navigate between common transactions.
* Sequences of common, everyday interactions were not evaluated.

And so on.

The product team’s enthusiasm for the new and innovative may prevent listening to the users’ authentic preferences. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable.

Many well-tested products have been released only to fail in the marketplace due to widespread user preference to maintain their current, established, well-known system. This is especially so if the work practice requires considerable learning and use of an earlier product over time, as happened in our retail system case. Very expensive and well-documented failures abound due to user preference for a well-established installed base, with notorious examples in air traffic control, government and security, medical / patient information systems, and transportation systems.

When UX is “embedded” as part of a large team, accountable to product or project management, the natural bias is to expect the design to succeed. When UX designers must also run the usability tests (as in this case), we cannot expect the “tester” to independently evaluate the “designer’s” work. With the same person in two opposing roles, the UX team reporting to product, and restricted latitude for design change (due to impossible delivery deadlines), we should consider this a design failure in the making.

In this situation, it appears UX was not allowed to be effective, even if the usability team understood how to work around management to make a case for the impact of its discoveries. The UX team may not have understood the possible impact at the time, only in retrospect after the product failed adoption.

We have no analytical or qualitative tools for predicting the degree of market adoption based on even well-designed usability evaluations. Determining the likelihood of future product adoption failure across nationwide or international markets is a judgment call, even with survey data of sufficient power to estimate the population. Because of the show-stopping impact of advancing such a judgment, it’s unlikely the low-status user experience role will push the case, even if such a case is clearly warranted from user research.

The Racket: The Organization as Self-Protection System

Modern organizations are designed to not fail. But they will fail at times when pursuing their mission in a competitive marketplace. Most large organizations that endure become resilient in their adaptation to changing market conditions. They have plenty of early warning systems built into their processes – hierarchical management, financial reports, project management and stage-gate processes. The risk of failure becomes distributed across an ever-larger number of employees, reducing risk through assumed due diligence in execution.

The social networks of people working in large companies often prevent the worst decisions from gaining traction. But the same networks also maintain poor decisions if they are big enough, are supported by management, and cannot be directly challenged. Groupthink prevails when people conspire to maintain silence about bad decisions. We then convince ourselves that leadership will win out over the risks; the strategy will work if we give it time.

Argyris’ organizational learning theory shows people in large organizations are often unable to acknowledge the long-term implications of learning situations. While people are very good at learning from everyday mistakes, they don’t connect the dots back to the larger failure that everyone is accommodating.

In what Argyris calls “double-loop learning,” the goal is to learn from an outcome and reconfigure the governing variables of the situation’s pattern to avoid the problem in the future. (Single-loop learning is merely changing one’s actions in response to the outcome.) Argyris’ research suggests all organizations have difficulties with double-loop learning; organizations build defenses against this learning because it requires confrontation, reflection, and change of governance, decision processes, and values-in-use. It’s much easier to just change one’s behavior.

What can UX do about it?

User experience/IA clearly plays a significant role as an early warning system for market failure. Context-sensitive user research is perhaps the best tool available for informed judgment of potential user adoption issues.

Several common barriers to communicating this informed judgment have been discussed:

* Organizational defenses prevent anyone from advancing theories of failure before failure happens.
* UX is positioned in large organizations in a subordinate role, and may have difficulty planning and conducting the appropriate research.
* UX, reporting to product management, will have difficulty advancing cases with strategic implications, especially involving product failure.
* Groupthink – people on teams protect each other and become convinced everything will work out.
* Timing – by the time such judgments may be formed, the timeframes for realistic responsive action have disappeared.

Given the history of organizations and the typical situating of user experience roles in large organizations, what advice can we glean from the case study?

Let’s consider leveraging the implicit roles of UX, rather than the mainstream dimensions of skill and practice development.

UX serves an Influencing role – so let’s influence

User experience has the privilege of being available on the front lines of product design, research, and testing. But it does not carry substantial organizational authority. In a showdown between product management and UX, product wins every time. Product is responsible for revenue, and must live or die by the calls it makes.

So UX should look to their direct internal client’s needs. UX should fit research and recommendations to the context of product requirements, adapting to the goals and language of requirements management. We (UX) must design sufficient variability into prototypes to be able to effectively test expected variances in preference and work practice differences. We must design our test practices to enable determinations from user data as to whether the product requirements fit the context of the user’s work and needs.

We should be able to determine, in effect, whether we are designing for a product, or designing the right product in the first place. Designing the right product means getting the requirements right.

Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence and well-timed communications.

UX practice must continue to develop user/field research methods sensitive to detecting nascent problems with product requirements and strategy.

UX is a recursive process – let’s make recursive organizations as well

User experience is highly iterative, or it fails too. We always get more than one chance to fail, and we’ve built that into our practices and standards.

Practices and processes are repeated and improved over time. But organizations are not flexible with respect to failure. They are competitive and defensive networks of people, often with multiple conflicting agendas. Our challenge is to encourage organizations to recurse (recourse?) more.

We should do this by creating a better organizational user experience. We should follow our own observations and learning of the organization as a system of internal users. Within this recursive system (in which we participate as a user), we can start by moving observations up the circle of care (or the management hierarchy if you will).

I like to think our managers do care about the organization and their shared goals. But our challenge here is to practice double-loop learning ourselves, addressing the root causes and “governing variables” of issues we encounter in organizational user research. We do this by reflecting systematically on patterns and improving processes incrementally, not just “fixing things” (single-loop learning).

We can adopt a process of socialization (Jones, 2007), rather than institutionalization, of user experience. Process socialization was developed as a more productive alternative to top-down institutionalization when introducing UX practices into an intact product development process.

While there is strong theoretical support for this approach (from organizational structuration and social networks), socialization is recommended because it works better than the alternatives. Institutionalization demands that an organization establish a formal set of roles, relationships, training, and management added to the hierarchy to coordinate the new practices.

Socialization instead affirms that a longer-term, better understood, and organizationally resilient adoption of the UX process occurs when people in roles lateral to UX learn the practices through participation and gradual progression of sophistication. The practices employed in a socialization approach are nearly the opposite (in temporal order) of the institutionalization approach:

# Find a significant UX need among projects and bring rapid, lightweight methods to solve obvious problems.
# Have management present the success and lessons learned.
# Do not hire a senior manager for UX yet; lateral roles should come to accept and integrate the value first.
# Determine UX need and applications in other projects. Provide tactical UX services as necessary, as an internal consulting function.
# Develop practices within the scope of product needs. Engage customers in the field and develop user and work domain models in participatory processes with other roles.
# Build organic demand and interest in UX. Provide consulting and usability work to projects as capability expands. Demonstrate wins and lessons from field work and usability research.
# Collaborate with requirements owners (product managers) to develop a user-centered requirements approach. Integrate usability interviews and personas into requirements management.
# Integrate with product development. Determine development lifecycle decision points and the user information required.
# Establish user experience as a process and organizational function.
# Provide awareness training, discussion sessions, and formal education as needed to fit the UX process.
# Assess and renew: staffing and building competency.

We should create more opportunities to challenge failure points and process breakdowns. Use requirements reviews to challenge the fit to user needs. Use a heuristic evaluation to bring a customer service perspective on board. In each of those opportunities, articulate the double-loop learning point. “Yes, we’ll fix the design, but our process for reporting user feedback limits us to tactical fixes like these. Let’s report the implications of user feedback to management as well.”

We can create these opportunities by looking for issues and presenting them as UX points, but in business terms, such as market dynamics, competitive landscape, feature priority (and overload), and user adoption. This will take time and patience, but then, it’s recursive. In the long run we’ll have made our case without major confrontations.

Conclusions

Scott Cook, Intuit’s founder, famously said at CHI 2006: “The best we can hope to bat is .500. If you’re getting better than that, you’re not swinging for the fences. Even Barry Bonds, steroids or not, is not getting that. We need to celebrate failure.”

Intelligent managers actually celebrate failures – that’s how we learn. If we aren’t failing at anything, how do we know we’re trying? The problem is recognizing when failure is indeed an option.

How do we know when a project so large – an organizational-level project – will go belly-up? How can something so huge and spectacular in its impact be so hard to call, especially at the time decisions are being made that could change the priorities and prevent an eventual massive flop? The problem with massive failure is that there’s very little early warning in the development system, and almost none at the user or market level.

When product development fails to respect the user, or even the messenger of user feedback, bad decisions about interface architecture compound and push the product toward an uncertain reception in the marketplace. Early design decisions compound by determining architectures, affecting later design decisions, and so on through the lifecycle of development.

These problems can be compounded even when good usability research is performed. When user research is conducted too late in the product development cycle, and is driven by usability questions related to the product and not the work domain, development teams are fooled into believing their design will generalize to user needs across a large market in that domain. But at this point in product development, the fundamental platform, process, and design decisions have been made, constraining user research from revisiting questions that have been settled in earlier phases by marketing and product management.

References

Argyris, C. (1992). On organizational learning. London: Blackwell.

Howard, R. (1992). The CEO as organizational architect: an interview with Xerox’s Paul Allaire. Harvard Business Review, 70 (5), 106-121.

Jones, P.H. (2007). Socializing a Knowledge Strategy. In E. Abou-Zeid (Ed.) Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research, pp. 134-164. Hershey, PA: Idea Group.

Raynor, M.E. (2007). The strategy paradox: Why committing to success leads to failure (and what to do about it). New York: Currency Doubleday.

Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.