User Experience Go Away


There is no UX for us

That’s right! I said it. For us (designers, information architects, interaction designers, usability professionals, HCI researchers, visual designers, architects, content strategists, writers, industrial designers, interactive designers, etc.) the term user experience design (UX) is useless. It is such an overgeneralized term that you can never tell if someone is using it to mean something specific, as in UX = IxD/IA/UI, or to mean something overarching all design efforts. In current usage, unfortunately, it’s used both ways. Which means when we think we’re communicating, we aren’t.

Of course there is UX for us

If I were going to define my expertise, I couldn’t give a short answer. Even when UX is narrowly defined, it includes interaction design (my area of deep expertise), information architecture (a past-life occupation), and some interface design. To do it well, one needs to know about research, programming, business, and traditional design such as graphic design as well. Once, to do web design you had to be a T-shaped person: someone who knows a little bit about many things and a lot about one thing. Imagine a programmer who also understands a bit about business models and some interface design. But as our product complexity grows, we need P- and M-shaped people: people with multiple deep specialties. To design great user experiences, you need to specialize in a combination of brand management, interaction design, human-computer factors, and business model design. Or you could be part of a team. The term UX was welcomed because we finally had an umbrella of related practices.

Of course, we don’t all belong to the same version of that umbrella. We all bring different focuses under the umbrella, different experiences, mindsets, and practices. While we can all learn from each other, we can’t always be each other.

But trouble started when our clients didn’t realize it was an umbrella and thought it was a person. And they tried to hire that person.

It isn’t about us

If there is any group for whom UX exists now more than ever, it is non-UXers. Until 2007, the concept of UX had been hard to explain. We didn’t have a poster child we could point to and say, “Here! That’s what I mean when I say UX.” But in June 2007, Steve Jobs gave us that poster child in the form of the first-generation iPhone. And the conversation was forever changed. No matter whether you loved, hated, or couldn’t care less about Apple, if you were a designer interested in designing solutions that meet the needs of human beings, you couldn’t help but be delighted when the client held up his iPhone and said, “Make my X like an iPhone.”

It was an example of “getting user experience right.” We as designers were then able to demonstrate to our clients why the iPhone was great and, if we were good, apply those principles in a way that let our clients understand what it took to make such a product and its services happen. You had to admit that the iPhone was one of the first complete packages of UX we have ever had. And it was everywhere.

Now five years later, our customers aren’t saying they want an iPhone any more. They are saying that they want a great “experience” or “user experience.” They don’t know how to describe it, or who they need to achieve it. They have no clue what it takes to get a great one, but they want it. And they’ll know it when they see it, feel it, touch it, smell it.

And they think there must be a person called a “user experience designer” who does what other designers “who we’ve tried before and who failed” can’t do. The title “user experience designer” is the target they are sniffing for when they hire. They follow the trail of user experience sprinkled in our past titles and previous degrees. They sniff us out, and “user experience” is the primary scent that flares their metaphorical nostrils.

It is only when they enter our world that the scent goes from beautiful to rank. They see and smell our dirty laundry: the DTDT (Defining The Damn Thing) debates, the lack of continuity of positions across job contexts, the various job titles, the non-existent and simultaneously pervasive education credentials, etc. There is actually no credential out there that says “UX.” None! Nada! Anywhere. There are courses for IxD, IA, LIS, HCI, etc. But in my research of design programs in the US and abroad, no one stands behind the term UX. It is amorphous, phase-changing, and too intangible to put a credential around. There are too many different job descriptions, all with the same title but each with different requirements (visual design, coding, and research being added or removed at will). Arguably it is also a phrase that an academic can’t get behind. There aren’t any academic associations for User Experience, so it’s not possible to be published under that title.

Without a shared definition and without credentialed benchmarks, user experience is snake oil. What’s made things even worse is the creation of credentialed/accredited programs in “service design,” which take all the same micro-disciplines of user experience and add the very well academically formed “service management,” which gives it academic legitimacy. This well-defined term is the final nail in the coffin, and shows UX to be an embattled, tarnished, shifty, and confusing term that serves no master in its attempt to serve all.

“User experience design” has to go

Given this experience our collaborators, managers, clients, and other stakeholders have had with UX, how can we not empathize with their confused feelings about us and the phrase we use to describe our work?

And for this reason UX has to go. It just can’t handle the complexity of the reality we are both designing for and of who is doing the designing. Perhaps the term “good user experience” can remain to describe our outcomes, but user experience designer can’t exist to describe the people who do the job of achieving it.

Abby Covert said recently that the term UX is muddy and confusing. Well, I don’t think the term “user experience” is confusing so much as it’s a term used to describe something that is very broad, but is used as if it were very narrow. There is a classic design mistake of oversimplifying something complex instead of expressing the complexity clearly. UX was our linguistic oversimplification mistake. We tried to make what we do easy to understand. We made it seem too simple. And now our clients don’t want to put up with the complexity required to achieve it.

Now that the term has been ruined (for a few generations anyway), we need to hone our vocabulary. It means we can’t be afraid of acknowledging the tremendous complexity in what we do, how we do it, and how we organize ourselves. It means that we focus on skill sets instead of focusing on people. It means understanding our complex interrelationships with all the disciplines formerly in the term UX. And we must understand that they are equally entwined with traditional design, engineering and business disciplines, communities, and practices as they are to each other.

So I would offer that instead of holding up that iPhone and declaring it great UX, you can still use it as an example of great design, but take the simple but longer path of patiently deconstructing why it is great.

When I used to give tours at the Industrial Design department at the Savannah College of Art and Design (SCAD), I would take out my iPhone and use it to explain why it was important that we taught industrial design, interaction design, and service design (among other things). I’d point to it and explain how the lines, materials, and colors all combined to create a form designed to fit in my hand, look beautiful on my restaurant table, and be recognizable anywhere. Then I would show the various ways to “turn it on” and how the placement of the buttons and the gesture of the swipe to unlock were just the beginning of how it was designed to map the customer’s perception and cognition, social behaviors, and personal narrative against how the device signalled its state, what it was processing, and what was possible with the device. And I explained that this was interaction design. Finally, I’d explain how all of this presentation and interaction were wonderful, but the phone also needed a service attached to it that allows you to make calls and buy music and applications, and that manages the relationships between content creators, license owners, and customers. And that was service design.

At no time do I use the term “user experience.” By the time I’m done, I have taught a class on user experience design and never uttered the term. The people come away with a genuine respect for all three disciplines explored in this example and see them as unique, collaborative practices that have to work intimately together. There is no hope left in them for a false unicorn who can singularly make it all happen.

Foundations of Interaction Design


Somehow, products, services, and systems need to respond to stimuli created by human beings. Those responses need to be meaningful, clearly communicated, and, in many ways, provoke a persuasive and semi-predictable response. They need to behave.

This basic definition of Interaction Design (IxD) illustrates the common threads between definitions crafted by esteemed designers Robert Reimann1 and Dan Saffer2 as well as the Interaction Design Association3.
It’s also important to note that Interaction Design is distinct from the other design disciplines. It’s not Information Architecture, Industrial Design, or even User Experience Design. It also isn’t user interface design. Interaction design is not about form or even structure, but is more ephemeral—about why and when rather than about what and how.

For any design discipline to advance, it needs to form what are known as foundations or elements. The creation of such semantics encourages:
* better communication amongst peers
* creation of a sense of aesthetic
* better education tools
* exploration

There are other reasons, but for now these seem sufficient for a discussion about foundations.

What Are Foundations?

“Foundations” first came to my attention while preparing for the Master of Industrial Design program at the Pratt Institute in Brooklyn, NY. The program was built by Rowena Reed Kostellow based on her educational philosophy of foundations (as detailed in the book Elements of Design by Gail Greet Hannah4).

To Kostellow there were six elements that made up the foundations of Industrial Design: line, luminance & color, space, volume, negative space, and texture. Mixing and experimenting with these was at the heart of designing in the 3D form discipline. Students at Pratt explored these foundations in a year’s worth of studio classes. They would press boundaries and discuss relationships while critiquing abstract and real projects.

I’m not the only person ever to think about this issue, though I propose that we think about it differently. Dan Saffer, for example, in his book Designing for Interaction5, has a great chapter on what he calls the Elements of Interaction Design: Time, Motion, Space, Appearance, and Texture & Sound. Dan’s elements concentrate on what I would call the forms that carry interactions, but to me they are not the form of an interaction, except maybe time.

If there are indeed foundations of Interaction Design, they need to be abstracted from form completely and thus not have physical attributes at all.

Foundations of Interaction Design


Time

“Time” makes interaction design different from the other disciplines of user experience (UX). It is the wrapper of our experience: an interaction lives over time.

But Time is not a single foundation of Interaction Design. There are too many interrelated facets of time to be manipulated. And as we all learned, time is relative; it is fungible; and it exists on many axes at the same moment. Let us consider three time-related foundations of Interaction Design:


Pacing

Interaction design is the creation of a narrative—one that changes with each individual experience with it, but still within constraints. For example, if I’m using an email client, I’m not going to turn on a stove burner during the process of writing an email.

Narratives have pacing. We experience that most clearly when we watch a movie. A great movie will have you coming out of a theater having never looked at your watch. Pace is also a part of interaction design, but in some cases a good experience may have you looking at your watch—hopefully not out of boredom, but because you need to know the current time so you can complete the goals of the interaction.

The way I think of pace in interaction design often correlates to how much I can do in any given moment. And not just how much I can do, but how much I have to do before moving to the next moment. For example, I can have a single really long form where all of my checkout information is presented in one presentation when I’m buying something, or I can separate different components of the checkout process into more discrete moments.

While it might take the same length of time to complete either experience because the number of form fields is the same, the experience of the pacing of these designs is quite different. Further, it has been argued that one long form is more efficient, and conversely that separating a form into chunks is more manageable. Maybe that means that the total positive experience needs to consider other things beyond efficiency for its own sake.
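The trade-off can be sketched in a few lines of JavaScript. This is a hypothetical illustration (the field names and the chunk size of three are made up, not from any real checkout): the same ten fields, presented as one moment or as four.

```javascript
// Hypothetical sketch: the same checkout fields, paced two ways.
var fields = ["name", "email", "address", "city", "zip",
              "card", "expiry", "cvv", "shipping", "confirm"];

// One long form: a single moment containing every field.
function singleForm(fields) {
  return [fields];
}

// A chunked form: the same fields split into smaller moments.
function chunkedForm(fields, perStep) {
  var steps = [];
  for (var i = 0; i < fields.length; i += perStep) {
    steps.push(fields.slice(i, i + perStep));
  }
  return steps;
}

// The total work is identical (ten fields either way);
// only the number of moments -- the pacing -- differs.
console.log(singleForm(fields).length);      // 1
console.log(chunkedForm(fields, 3).length);  // 4
```

Either pacing takes about the same time to complete; which one feels better depends on the goals of the interaction, not on the field count.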


Reaction time

A simpler way that we design for time in interaction design is “reaction time.” How long does it take for the system to produce a reaction to an event? We’ve all seen our cursor change to an hourglass or the proverbial progress bars as we wait for the system to do what we asked, but there are other reaction time considerations.

Actions done in real time (synchronous) have a level of relationship to the moment, while actions that seem to happen in a black box and come back later (asynchronous) lack that relationship. However, because some systems take time, we need to be cognizant of how we communicate these different types of reactions.
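One way to make that cognizance concrete is to pick the feedback mechanism from the expected reaction time. The sketch below is hypothetical; the 0.1-second / 1-second / 10-second thresholds come from classic response-time guidance, not from the article or any particular system.

```javascript
// Hypothetical sketch: pick how to communicate a reaction based on
// how long the system is expected to take to produce it.
function feedbackFor(expectedMs) {
  if (expectedMs < 100) {
    return "none";             // feels instantaneous; no signal needed
  } else if (expectedMs < 1000) {
    return "busy-cursor";      // brief synchronous wait (the hourglass)
  } else if (expectedMs < 10000) {
    return "progress-bar";     // long enough that progress must be shown
  }
  return "notify-when-done";   // asynchronous: let the user move on
}

console.log(feedbackFor(50));      // "none"
console.log(feedbackFor(3000));    // "progress-bar"
console.log(feedbackFor(120000));  // "notify-when-done"
```

The last branch is the asynchronous case from the paragraph above: once a reaction takes long enough, the honest design is to release the user rather than hold their attention.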


Context

Every major foundation element like time should probably have a “context” sub-element. What this means is that there is always something about the human being in the interaction that would change the course of the design itself. In the case of “time,” we cannot design any application without understanding and exploring the meaning of how much time a human will be spending in direct contact with the system.

How much time we spend with an application and how long we are in relationship to it inform our designs and also participate in the experience we create.

Alan Cooper & Robert Reimann, in About Face 2.06, speak about the context of time through the concept of “posture.” There are four postures:
* Sovereign – an application that takes our full attention.
* Transient – applications in the periphery of our attention that call us for short moments.
* Daemonic – alerting systems that normally run in the background.
* Parasitic – a support interaction mode for both sovereign and transient applications.


Metaphor

Metaphor is a literary device which uses one well-understood object or concept to represent, with qualification, another concept which would be much more difficult to explain otherwise. The virtual nature of computers requires that we bring tangible metaphors to bear to help people understand the vagueness of it all. What type and how many metaphors we use directly impact the quality of a product and the emotional connection we have with it.

A favorite metaphor is the trash can or recycle bin (pick your OS). The idea that your files are waiting in some virtual “bin” or “can” so that, if you were mistaken, you can dig through the trash (ick!) and recover them is ingenious. Of course, you can always “empty” it, making whatever was inside irrecoverable. The metaphor works well for most people mainly because of its impreciseness and flexibility with the real. In thinking about the qualities of the metaphors for a bin/can between Mac OS and Windows, one might wonder if the nature of a trash can’s “dirtiness” makes it less likely that we will dig files out than recover files from the recycle bin.

All metaphors break down at some point; where these metaphors break is in how we get things into them. We still use the term “delete” to express how we add something to that bin or can. We don’t delete things into our real trash cans, do we? Despite the breakdown of the metaphor (and every computer metaphor does break down at some point), it is still tangible enough for us to grasp.

But sometimes metaphors go too far. They require a leap wider than our ability to imagine. The literal desktop seems to make sense and has been tried in the past. If I have a blotter, a file cabinet, an inbox, a calendar, etc. laid out quite beautifully on my screen, I can call my objects files, use a notepad, keep my messages in an inbox, and keep appointments on a calendar, right?

But metaphors appear to succeed best when they are imprecise and the user has to fill in the gaps from their own understanding. Thus, we have an adaptation of that desktop metaphor on our computers today.

The interaction designer needs to strike this balance, cautiously using the metaphors of their predecessors and building on top of them, so long as the original (maybe convention-setting) metaphor can withstand the new direction.


Abstraction

Working in tandem with metaphor, abstraction relates more to the physical and mental activity that is necessary for an interaction to take place. I first started thinking about abstraction after reading an article by Jonas Lowgren7 on what he has termed “Pliability.” After reading the article and using the term a few times in talks and discussions, it occurred to me that Jonas was really speaking about how abstracted an interface is from the response of the product.

By most accounts almost everything on a PC is pretty abstracted, because you have two primary interface points for input—mouse and keyboard. Some people have placed their monitor inside some sort of touch device, lowering the level of abstraction for some types of interactions, mainly drawing. Still, most of us type, point, click, and move the mouse around on the screen.

Let us focus on “mousing.” We are looking at a monitor where there is a cursor (an icon) we were taught is related to the mouse. Without looking at the mouse (usually), we move it, and in whatever direction we move the mouse, the icon on the screen (usually an arrow) moves. Well, sorta. Right and left work as expected, but moving the mouse away from us moves the cursor up and moving it towards us moves the cursor down, perhaps playing on the metaphor of perspective.

Then when we get the icon over a target, we click a button on the mouse. This is a strong level of abstraction. The mouse, monitor, and CPU work in unison to create a series of effects that communicate the connection between the three devices. But the connection is very, very abstract and must be learned.

Even in moused behaviors there are different levels of abstraction as well. My favorite comparison is between Google Maps and MapQuest. What makes Google Maps a success is that by mousing down and moving my arm I can change the focused area of the map. It has a very quick reaction time (see above), but the type of motion—moving my arm as if moving a piece of paper in my focused line of sight—is much less abstracted than in MapQuest, which is to simply click on the border or on the map (assuming the correct mode is set). Now one might say that the click is easier (a less complex set of behaviors), but this is more abstracted, arguably less engaging, and definitely less accurate. This makes Google Maps (and copycats) a much more pleasing and effective interaction.

Systems are both becoming highly complex and highly integrated into our lives. Many systems are losing abstraction completely, and not always for the better, while complexity is increasing abstraction of information. This is why everyone is so fascinated with touch-screens of late. They quickly reduce the level of abstraction for interacting with a computer device.

Other new and popular technologies will create challenges for the next wave of interaction designers. The expanding world of spatial gestures, RFID, and other near-field communication technologies creates interaction experiences that increase abstraction because there is no device to interface with directly. For these, we have not found similarly effective metaphors to guide the user’s understanding of the abstraction as we have for the mouse.

Negative space

All good design disciplines have a form of negative space. In Architecture and Industrial Design, it is the hollowness or the space between solids. In Graphic Design, it is “white space”: what is left without color, line, or form, literally the white part of the paper to be printed on. Sound design uses silence, and lighting design looks at darkness.

So what is the negative of interaction?

There are many places where you can “lack” something, or, more accurately, there are many layers. Are we only talking about the product action? What about our action? What about the space in between either entity’s action?

Pause – A moment in time where no action is being taken by anything that is part of the interaction experience. Often in interaction design we try to fill these gaps, but maybe these gaps are useful.

Cessation of thought – What if doing nothing created a reaction from the system? Well, one student thought this up with BrainBall, at Sweden’s Interaction Institute. As you think less, the ball moves more.

Inactivity – Doing nothing, or the product doing nothing in reaction to an action may be a negative occurrence. This differs from pause, but in this case inactivity is the reaction to activity as opposed to just a cessation of activity.

Well, whatever the negative space of interaction design is, it isn’t.

Intersection in Interaction

Unlike form-creating design disciplines, interaction design is very intricate in that it requires other design disciplines in order to communicate its whole. For that reason, interaction design is more akin to choreography8 or film making than music or costume making. The foundational elements above only belong to interaction design, or are re-defined to be explicitly for interaction design.

For example, the use of color is both an aesthetic tool and a functional tool that can enhance or detract from communication of core interaction styles. Language and semiotics, as tools for communicating through another discipline, narrative or storytelling, also come together and make for a better interaction experience. Further, for many experiences, information architecture is required for the preparation and arrangement of information before the interaction can be created.

As Dan Saffer points out (see above), motion, space, appearance, texture, and sound all make up the form and are used to create patterns of time, abstraction, and metaphor.

It is the interaction designer’s attempt to manipulate these four foundations that separates the practice from industrial design, architecture, graphic design, fashion design, interior design, information architecture, and communication design.

In the end, interaction design is the choreography and orchestration of these form-based design disciplines to create that holistic narrative between human(s) and the products and systems around us.


1Reimann, Robert. “So you want to be an Interaction Designer”

2Saffer, Dan. “A Definition of Interaction Design”

3Interaction Design Association. “What is Interaction Design?”

4As captured in this recent book: Hannah, Gail Greet, Elements of Design: Rowena Reed Kostellow and the Structure of Visual Relationships, New York: Princeton Architectural Press, 2002.

5Saffer, Dan. Designing for Interaction: Creating Smart Applications and Clever Devices, New Riders, 2007.

6Cooper, Alan and Reimann, Robert, About Face 2.0, Indianapolis, IN, Wiley Publishing, Inc., 2003.

7Lowgren, Jonas. “Pliability as an experiential quality: Exploring the aesthetics of interaction design,” Artifact 12:1 (April 10, 2006): 55–66. (republished on the author’s website)

8Heller, David (NKA Malouf, David), “Aesthetics and Interaction Design: Some Preliminary Thoughts.” (ACM membership required), Interactions 12:5 (September-October 2005): 48-50.

HTML’s Time is Over. Let’s Move On.

“Ultimately, I don’t see a long-term future for HTML as an application development solution. It is a misapplied tool that was never meant to be used for anything other than distributed publishing.”

As the web finds users and builders demanding more and more richness, we need to re-evaluate the technology that 99% of it is built on. No matter how sophisticated our back ends get, the front ends seem to remain stagnant. Yes, HTML transformed to XHTML, but that is such a small step, and it is a problematic one when we consider the ever-present requirement of multi-browser, multi-platform, multi-device support.

Macromedia has been making an all-out push of what they call Rich Internet Applications and has been trying to get developers to make this their new front-end web-based technology standard. What went wrong? What went right? What other options are there? What are the real requirements that we as user experience designers face that all these technologies miss the boat on?


The web browser has changed the very shape of what it means to create applications with centralized or peer-to-peer shared repositories of structured and unstructured data. For most users, the web is a bank, a store, or an information-gathering tool. For others, the web has become their primary means of interacting with cross-enterprise and intra-enterprise applications.

What has made most of this possible is the tremendous re-architecting of server- and mainframe-based systems that are now able to communicate with each other through agreed upon standards (usually called web services), as well as the development of application servers that generate web browser-interpretable pages. Most of the time this is done solely in HTML, and that is the point of this article.

Examples of application servers are BEA WebLogic, IBM WebSphere, Microsoft’s Internet Information Server and the free Apache Tomcat. These are powerful systems because of the amount of logic that can be programmed into them, and because of their connectivity to complex databases. This logic remains resident on the server, and limits the amount of bandwidth required to send information to the end-user’s local machine for processing business logic or doing dynamic interface layouts.

The problem

Application servers send a combination of HTML, JavaScript, and Cascading Style Sheets (CSS) to the web browser. These combined technologies are called Dynamic HTML (DHTML) and are standardized around a Document Object Model (DOM). While the basic core of these technologies has remained consistent, the interpretation of them has not been standardized across all platforms and web browsers. For example, while Netscape 7 and Internet Explorer 6 both claim they support specific standards, the way they interpret these standards differs dramatically. Then there are issues with backwards compatibility, bandwidth, and other technology limitations. For example, how many people can say, “I’m only going to design my site for Netscape 7.0 and IE 6.0 for Windows (which Windows?), IE 5.5 and Netscape 7.0 for Mac (which Mac?), and Netscape 7.0 for Unix (which variety?)”? The truth is that no one with a conscience can be that specific. Netscape 6.2 is still the Netscape standard and, in many ways, is a far cry from Netscape 7.0. Even the Macintosh world is still not clearly aligned around a single browser.

Where does this leave us? Most companies developing what are commonly called thin clients use a “lowest common denominator” level of DHTML that is not able to take advantage of what few advances in DHTML technology there have been. It also leaves us with the issues mentioned above that don’t go away with any version of DHTML, and design issues beyond those, including:
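A concrete, hypothetical example of that divergence: even attaching a single event handler required branching on which event model the browser’s DOM exposed. The element objects below are stand-ins for real DOM nodes, so the sketch runs anywhere; the branches themselves reflect the real W3C (`addEventListener`) and Microsoft (`attachEvent`) models of the era.

```javascript
// Hypothetical sketch of lowest-common-denominator DHTML: one
// listener, three attachment paths, depending on the browser's DOM.
function addListener(el, type, handler) {
  if (el.addEventListener) {         // W3C DOM model (Netscape 6/7, Mozilla)
    el.addEventListener(type, handler, false);
    return "w3c";
  } else if (el.attachEvent) {       // Microsoft model (IE 5/6)
    el.attachEvent("on" + type, handler);
    return "microsoft";
  } else {                           // last resort: direct property assignment
    el["on" + type] = handler;
    return "legacy";
  }
}

// Stand-in "elements", each exposing one event model:
var w3cEl = { addEventListener: function (t, h, c) {} };
var msEl  = { attachEvent: function (t, h) {} };
var oldEl = {};

console.log(addListener(w3cEl, "click", function () {}));  // "w3c"
console.log(addListener(msEl,  "click", function () {}));  // "microsoft"
console.log(addListener(oldEl, "click", function () {}));  // "legacy"
```

Multiply this pattern across every event, style lookup, and DOM traversal in an application, and the coding and quality assurance cost of supporting multiple browsers becomes clear.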


Bandwidth

Bandwidth limits how much data can be downloaded to the client. Visual representations used to be the big problem, limiting graphics and the like, but today these issues are mostly under control due to better education of most web designers. Now, the bulk of the size of a web page in a web application is in the non-display code and in assets such as JavaScript and style information (CSS). In applications with large data sets, the end HTML size becomes so important to the overall performance of the application that reducing attributes in tags is a requirement.
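To put numbers on “reducing attributes in tags,” here is a hypothetical cell from a data table (the attribute values and report dimensions are made up): moving presentation out of inline attributes, which are repeated on every cell, shrinks the generated HTML substantially.

```javascript
// Hypothetical sketch: one table cell marked up two ways.
// Inline presentation attributes are repeated on every cell of a
// data table; a shared CSS class carries the same styling once.
var heavy = '<td width="120" align="right" bgcolor="#EEEEEE" class="num">42</td>';
var light = '<td class="num">42</td>';

console.log(heavy.length);  // 67 characters per cell
console.log(light.length);  // 23 characters per cell

// Across a hypothetical 1,000-row, 10-column report:
var saved = (heavy.length - light.length) * 1000 * 10;
console.log(saved);  // 440000 characters of markup avoided
```

The styling is identical either way; only the per-cell markup weight, and therefore the download size of the generated page, changes.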


Accessibility

The use of DHTML, or more importantly JavaScript, seriously impedes accessibility. Where it doesn’t impede accessibility, engineers will often need to add logic and variables to their code to handle the differences between browsers and platforms. This increases both the coding time and the quality assurance resources needed to accommodate accessibility.

Where is the application?

A thin client isn’t just an application; it is an application that runs inside another discrete application. The end result is that standards such as a menu bar or tool bar become redundant to the workings of the browser itself. Users get confused about which “File,” “Edit,” or “View” they should use. Users also insist on being able to use a “Back” button, which can cause page and link management issues, especially if you are trying to use frames to solve other problems.

Accessing the desktop

DHTML requires assisted technologies in order to gain access to the user’s desktop. Any sort of desktop registry information with any substance cannot be accomplished with DHTML. JavaScript doesn’t have access to the local file system or to the primary components of the browser system, such as Open and Save dialogues. For example, anyone who has tried to add an attachment to a web-based email program knows that you have to choose one file at a time, initiate the upload, and then repeat for each new file. Sometimes, being able to transfer data from the local system to the centralized one is of such primary importance that it must not only be done in bulk, but it must be able to be controlled both visually and logically. Another side of this issue is that standard GUI conventions like Drag and Drop and the Clipboard are not available between the desktop and the application in the web browser.

Technology solutions for weary designers

Now that we understand the problem set a bit, let’s talk about what is out there to help a weary and frustrated designer. This list is far from comprehensive, and a real programmer or system architect would probably evaluate these technologies in a different way. Some of these options run inside the browser while others do not.

Macromedia Flash MX

Probably the second most ubiquitous browser-based technology in the world (behind HTML), Flash, until recently, has been relegated to random acts of usability terror when used as a GUI front end. Most people think of it as an animation and game-making tool, and many “serious” business people think of it as eye candy that they will only consider for their custom marketing needs. The latest version of Flash is part of an initiative by Macromedia to lead both the back and the front end of middle-tier web development. By making Flash more accessible, easier to internationalize, and including widgets and other components, Macromedia hopes to make their product an HTML killer.

Upgrading to newer versions of Flash has become almost painless. Also, the footprint of the Flash runtime engine is relatively small compared to other similar plug-in player technologies. The footprint of the transferred media is also small due to several factors: all graphics and text are rendered as vector information, which is a lot smaller than traditional raster bitmaps like GIF or JPEG, and Flash applications can be streamed in components as they are needed, or components can be pre-loaded and stored for later use. Flash can also connect to the desktop, and uses a programming language based on ECMAScript (the standard for JavaScript).

What about back-end connections? Can an application server work with Flash? Yes! J2EE, JSP, ASP, and .NET can all work by using the Flash Remote Server to send data to and from Flash-based applications.

While Flash hits the mark with its new front-end capabilities, it misses it from a developer’s perspective. The overall development environment is still geared toward animation. The program uses a score in which you add cast members within a timeline – a method borrowed from traditional animation. But linear, timeline-based development just doesn’t make sense in an application environment.

One last positive note for Flash: other companies have begun developing tools for creating Flash Player files. Adobe is one, and a startup, Laszlo Systems, claims to have a tool geared specifically to applications. There is hope that either Macromedia or third-party companies will come up with a robust developer solution.


Curl

Developed at MIT and now owned by a privately held corporation, Curl was created as an alternative to HTML. It requires its own runtime engine in order to interpret its proprietary files. The Curl site has a great demo showing an enterprise business application running. It is clear that there are definite widget controls and customization abilities that lend a desktop-application feel, even though much of the logic lives on the server.

Curl lacks maturity and support. The demos are interesting, but they are just that: demos. Curl also lacks end-user support. In an enterprise environment you might be able to dictate the use of a plug-in that isn’t widespread, but in a cross-enterprise environment this becomes increasingly difficult. The runtime environment – which they call Surge – is currently available only on Windows, another limiting factor.

Java 2 (with Swing)

Well, we can always use Java, right? Java is a great cross-platform, cross-browser, just-in-time development language. Java 2 and Swing even provide significant APIs between the applet and the operating system, as well as a very juicy set of controllable and customizable widgets. However, the footprint of most Java applets is pretty large, and Java applets depend on raster images in many situations. More important than any of this, Java, regardless of its promise, has many kinks that make cross-platform use very hard to manage from a quality assurance standpoint. The other problem with Java 2 is that it’s not a ubiquitous standard; even Flash is more widely used and accessed. The Sun Java Virtual Machine (JVM) is required as a separate install to run Swing-based applets, and to run the Sun JVM, the Microsoft JVM that other sites and applications may be using must be turned off. This complicates things tremendously, as it forces the user to download the Swing code every time the application is used, tacking a significant footprint onto “easily distributed” applications.

Honorable Mentions


Water

Water is more like an HTML replacement than a true thickening agent for web-based thin clients. It is also immature, with very few developers, and it requires Java 2 to be installed.


This solution shows some promise. It is a Java-based interpreter for a new XML language. There is a widget set that you define using its easy to learn XML standard. Because it is XML-based it is very flexible, but it still requires a special download and currently has a very small developer base. One of the things that I really like about it is that it can be initiated from a browser, but runs exclusively in its own OS window.

Norpath Elements

Norpath’s Elements product is very similar to Flash, except that the primary programming methodology is visual. It can connect to databases and drive logic from data or behavior. It also offers a timeline development environment like Flash, but this is secondary to its primary interface for adding visual and logical elements. There are pre-fabricated widgets, and you can also build your own. The resulting files are XML packaged with images and accompanying text. The problem with Norpath is market saturation – there isn’t any. Otherwise, there is a lot of promise here. I think Macromedia, Adobe, and Microsoft can learn a few things from Norpath, especially from its visual programming model.

General models to consider

What do we really need? Are there examples of distributed thin-client applications out there that have enough client-side functionality? The answer is yes and no. There are simple applications out there in the world, many with very specific purposes. The best example I can think of is Instant Messaging (IM) software. Some IM software runs on Java, so it is portable across platforms. More often, though, it is developed natively for each operating system. While this means that you can’t have a single code base, these applications succeed because they are practically self-updating. From a user’s perspective, they are easy to maintain.

The most complex thin-client I have seen to date has to be Groove’s client. Some might even say it isn’t a thin-client at all, but I would define it as such because its components get installed individually: a calendar or a meeting applet is requested and maintained separately. The connectors between the applets are not clear, but overall the experience of using Groove is very good and easily maintained.

The idea of self-updating software is not new. At Documentum, we have internal applications that do this already. An outdated version of our software will lock a system and force an upgrade. Upgrades in the LAN-space are relatively easy, but over a 56k modem the experience can be painful; even a single-megabyte applet is dreadful to download. This, however, has not stopped companies like Microsoft, Adobe, Macromedia, AOL, et al., from relying on the concept of self-updating software. Microsoft’s Windows Update is probably the most ambitious, as it updates the operating system in chunks instead of all at once. Apple has followed suit with similar updating functionality.
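The chunked-update idea can be sketched in a few lines of modern JavaScript. The component names and version numbers here are hypothetical; this illustrates the manifest-comparison pattern, not any vendor’s actual mechanism:

```javascript
// A minimal sketch of chunked self-updating: the client compares its
// local component manifest against the server's and fetches only the
// components that changed, rather than re-downloading everything.

const localManifest  = { calendar: "1.0", mail: "2.1", contacts: "1.3" };
const serverManifest = { calendar: "1.1", mail: "2.1", contacts: "1.4" };

// Return the names of components whose server version differs.
function componentsToUpdate(local, server) {
  return Object.keys(server).filter((name) => local[name] !== server[name]);
}

const updates = componentsToUpdate(localManifest, serverManifest);
console.log(updates); // only the changed components are downloaded
```

Over a 56k modem, fetching two small components instead of the whole client is the difference between a tolerable update and a painful one.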

Taking it to the enterprise

Taking all this to the enterprise requires special design considerations. The biggest is that “out of the box” applications are seldom used. Most applications by PeopleSoft, SAP, Siebel, and Documentum, for example, are extensively customized for the individual enterprise customer, and these customizations are time-consuming. This also means that maintaining multiple code bases, one per platform, can be cost- and resource-intensive; because of this, a single code base is practically a requirement. When a vendor updates its core offering, how are updates achieved? Can they occur without upsetting the previous customizations? This is a big question, as many vendors make revenue on these upgrades, and customers want them because they mean bug fixes and added functionality. So when updates are done, they need to be done with minimal effect on the customizations. Even with current HTML solutions, this is not easy to achieve.
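One common way to let core upgrades coexist with customizations is to keep them in separate layers and merge them at runtime, so a new core version never overwrites the customer’s changes. A minimal sketch in modern JavaScript, with hypothetical setting names:

```javascript
// Vendor core configuration and customer customizations live in
// separate layers; the effective configuration is computed by merging.

const coreDefaultsV1 = { theme: "classic", pageSize: 25, auditTrail: false };
const customerOverrides = { theme: "acme-blue", pageSize: 100 };

// Customer overrides are layered on top of the vendor core.
function effectiveConfig(core, overrides) {
  return { ...core, ...overrides };
}

// When the vendor ships v2, only the core layer changes; the same
// customer overrides still apply on top of the new defaults.
const coreDefaultsV2 = { theme: "modern", pageSize: 25, auditTrail: true };

console.log(effectiveConfig(coreDefaultsV1, customerOverrides));
console.log(effectiveConfig(coreDefaultsV2, customerOverrides));
```

The same layering idea applies to code components, templates, and schemas, not just settings; the point is that the upgrade path touches only the vendor’s layer.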


Ultimately, I don’t see a long-term future for HTML as an application development solution. It is a misapplied tool that was never meant to be used for anything other than distributed publishing.

The reality is that we are trying to do too much with a language that was never meant for such heavy-duty applications. We have used incredible ingenuity to make up for the faults of HTML by putting all of the real processing effort on the server side, but the time has come to create a new system that is low bandwidth, uses a single code base for all platforms, and is componentized enough to make updating and customization easy via internet-based distribution. Lastly, we need these applications to run in their own space, without a web browser. In the end, this may change the way we think of web browsers. It will also change the way platforms need to be developed, in order to support a wide array of thin clients that are accessed and addressed directly from the operating system rather than from a browser.

David Heller is currently a Sr. User Interface Designer at Documentum. His current projects include adding new web-based clients to Documentum’s currently powerful set of Enterprise Content Management solutions. Before arriving at Documentum, David was the director of Information Architecture for a small firm in NYC called Vizooal, where he worked as lead consultant on projects developing marketplace solutions in the transportation, media, and supply-chain management industries.

Why I’m Not Calling Myself an Information Architect Anymore


I could probably put it in one simple word—respect. But if I left it at that, it wouldn’t make for much of an article, nor would it provoke discussion, in the squares & directions community, toward an answer to the dilemma of industry labels.

To start, let me say that I learned so much about this issue due to my first real set of industry conferences (back-to-back). The AIGA Experience Design forum was an amazing get-together that took the seeds that had been planted and started turning them into saplings. I feel very energized by what I call a movement in our sphere. The CHI conference, for me, was a locus where practitioners and researchers came together not just to listen, but to talk. It was a tremendous experience, outside the papers, panels, and demos, that helped shape a lot of what I’m going to put forth.

At the reception on Tuesday night I had the opportunity to step into a guru’s conversation about her first attendance at the IA Summit. This person has been around since there was something to be around, and I very much appreciate her contribution to the field, especially around field research and usability. She was speaking with others, more old-school than myself, about the IA Summit, explaining her surprise at how the IAs want to “own the process.” Her surprise stemmed from her previous understanding of IAs as library scientists interested in facets, categories, vocabularies and maybe, at most, navigation. She never thought of IAs as those who make layouts, design behavior, or do usability or field research.

It was at this point that I interjected my feeling that IA is what she thinks it is, but that, because of a history in which IA is also what Richard Saul Wurman expressed it was, many informally (and formally) trained designers have come to feel at home under this title. It is a title that seems to generate understanding among clients where “user experience” and “usability” have left them confused, or seem too widely or narrowly focused. It’s just what worked, and it has built up, especially in the consultancy community, a big following that can’t be ignored.

Of course there was the usual cry from those of the old STC, UPA, HCI school, “but we were doing this for centuries” and more discussion ensued. They have been doing this for a long time. But only if you think “this” is user-centered design. But IA isn’t user-centered design. IA is IA and it was with this that I was convinced. Actually, convinced is not the right word. Turned—yes, I was “turned.” I don’t think there is anything I can say about her argument that actually convinced me of her position, but it was more a feeling I got about the state of IA and what people need it to be, if we are going to move our field forward, that changed my thinking. That feeling is clarity. Clarity was missing from the Experience Design group, and that wasn’t good, but the ED group is not trying to define a title or a discipline but a philosophy (in my humble opinion), so the lack of clarity isn’t really an issue.

But this is not just about clarity. As I said, the single word is respect, and clarity is just one way of expressing that respect. I know I am not an Information Architect because I know what Information Architecture is, and I respect those that can do it. I also want to make sure that those who can do it, aren’t obscured by those that can’t.

Information Architecture is not the same as interaction design or user experience design. The line is very clear, and the only reason we allow it to be blurred is that early adopters from different disciplines within the field coopted the term and applied it to a broad swath of responsibilities.

Does this mean we are all clear and cozy? No, it doesn’t. There is still an early definition of IA out there that needs to be reckoned with. As noted above, Richard Saul Wurman coined the term in 1972. He used it in a way that was great and informative for its time (for all time, in fact), saying that the writer and the graphic designer need to be one. He felt that the visual display, layout, texture, surrounding iconography, etc., that set the mood and context for words directly affect how the reader infers their meaning. This set in motion (IMHO) the idea of user experience in the print world. But how do we reconcile this early use of the term with the IA that is taught in universities: the IA tied to the library science community, which discusses organization and classification (which RSW spoke about but did not focus on), and which lives within the digital and interactive domains?

What I suggest here is that Information Architecture is an arrow in an interaction designer’s quiver. Sometimes that arrow is a whole other human being, who works beside an interaction designer, and that person is known as an information architect. But it works both ways. An information architect should also have interaction design theory as an arrow in their quiver, and sometimes that arrow is a person called an interaction designer (or similar).

I would say the same for a usability engineer. Usability is both a set of theories and a person who specializes in those theories that add support to the creation of interactive digital experiences.

What is important to me is that Information Architecture doesn’t get lost. At CHI, I attended a paper presentation by an HCI researcher, and it was so obvious to me that most of the answers this person needed, to fill in her admitted holes, were already known by the Information Architecture community—the official one. Not the one that allowed me to take that title without any knowledge of thesauri, facets, and classification theory, but the one that the Polar Bear book was trying to teach people about, get them excited about, and entice them to join.

So respectfully, I remain a member of this community, but I revoke (retroactively) all titles I ever held that included Information Architecture in them.

I believe that those who hold this title have more to gain through its controlled use than through its being drowned out in the debates raging over what we should call this evolving genre of designing human-centered interactive computer experiences. That battle is a waste of time (as Alan Cooper said at the Forum), and I believe IA would win more by not joining in.

David Heller, is currently a Sr. User Interface Designer at Documentum. His current projects include new web-based clients to Documentum’s currently powerful set of Enterprise Content Management solutions.