A Truly Ambitious Product Idea: Making Stuff for People

Written by: Dave Feldman

When I was eleven, my parents bought a Mac Plus. It had a tiny monochrome screen, a floppy drive, and 1MB of memory. And it came with something called HyperCard. HyperCard let you make stuff. It had documents called stacks, each a series of cards – similar to PowerPoint today. In addition to graphics and text, you could create buttons and tell them what to do – flip to another card, show or hide an object, and so forth.

Down at the bottom of the screen was a little window where you could type simple English-like commands – things like go to card 2 or beep. Once you’d mastered those, you could add them to your buttons or trigger them at certain times, creating real interactivity. Pretty soon I was making little games and utilities. It was the coolest thing ever.

HyperCard’s Home Stack: Pure Nostalgia

HyperCard came with something called the home stack that opened when you first launched it. I looked at it and thought, This isn’t very useful. It shows up all the time but it doesn’t do much. So I made a better one. It included various utilities, and of course a rock-paper-scissors game. I made packaging and convinced the local Mac store to sell it for $7.

It sold two copies.

Since then I’ve worked on products with more than twice as many users, but the story remains the same. This isn’t very useful. This doesn’t serve people’s needs. Let’s make a better one.

In college I discovered a career for what I did: user interface design. And though the title has changed over the years – user experience designer, interaction designer, product manager, product designer, founder – the motivation hasn’t. Technology is confusing and doesn’t meet people’s needs. I want to fix that.

Eat Your Vegetables

These days, it’s fashionable to talk about audacious ideas. Paradoxically, it’s also popular to focus on ideas that can be built in a month.

In a post last year, Paul Graham listed Frighteningly Ambitious Startup Ideas and spawned a bumper crop of companies (though my favorites, Bring Back Moore’s Law and The Next Steve Jobs, don’t seem to have much traction). Wired’s cover story for February was 7 Massive Ideas That Can Change the World.

But I can’t help thinking we’ve skipped our vegetables and gone straight to dessert. We are insinuating ourselves into more and more of people’s lives, yet we haven’t managed to meet their needs in predictable, understandable, let alone enjoyable ways.

I watch people using their devices and I cringe. They get their single-click and double-click mixed up. They open an email attachment, update it, and then can’t understand why their changes aren’t in Documents. They try to set up iCloud and end up creating three Apple IDs. They miss out on all the useful things technology can do for them, lost in a sea of complexity, confusion, and techie-centric functionality. These things were supposed to be labor-saving devices, right?

Make no mistake: This is our fault. To begin with, we’ve created ever-more-inconsistent expectations over time. Consider single- vs. double-click. Easy, right? You single-click to select, double-click to open. Unless it’s a webpage. Or Apple’s column view, where selecting and opening are the same thing so it doesn’t matter. Well, for folders; for documents, it matters.

Anyway, it’s really easy to tell if you’re in a webpage or not so you know which convention to use. Just look at the top of the screen, on the left. It should say Firefox, or Safari, or Chrome. Oh wait, you’re on Windows. Look at the top of the window. No, the frontmost window. See, it has a bigger shadow than the others. Oh wait, you’re on Windows 8? Well, are you in Metro or not? Oh wait, they don’t call it Metro anymore. I forget what they call it. Do you see a lot of colorful flat boxes? What were you trying to do again? Hey, where are you going?

You may think I’m overcomplicating things for effect. I’m not. It seems simple to you because all that stuff is already in your head. When you switch from GMail in a browser, to Outlook on Windows, to Mail.app on Mac, you know which conventions change. You have what designers call a mental model, rooted in years of experience and history, that allows you to make the right call. Most people don’t – nor should we have to.

And these interaction details are the tip of the iceberg. We do a disappointing job of understanding what people outside our bubble are trying to accomplish. Let’s be honest: We mostly make products for ourselves. Later, when they’re successful, we start wondering how people use them. We do user studies and surveys and ethnographies and then ignore the results because it’d be expensive to fix and besides, they’ll figure it out, right? I mean, we did. We lack the comprehensive understanding we’d need to make real, substantive change, to make products that are both usable and useful.

Downward Arrow

Therapists sometimes use the downward arrow technique with their clients. It starts with the apparent problem and proceeds through a series of “why” questions to the underlying issue:

Client: “I get nervous speaking in class.”

Therapist: “Why do you get so nervous?”

Client: “I’m worried that I might say something stupid.”

Therapist: “And if you did?”

Client: “I would be so embarrassed!”

Therapist: “Why? What would be so bad about it?”

Client: “It would mean I’m not good enough.”

And so forth.

Product design requires a similar process: start with a design or feature question and dig down until you find the assumptions that underlie it:

Me: Why do you ask for a user’s password every time he downloads a free app?

Imaginary Apple Guy: For security.

Me: What do you mean by security?

IAG: Well, if someone gets hold of your phone, they’d be able to install apps without your permission.

Me: And what would be so bad about that?

IAG: The apps could do malicious things with your phone.

Me: But doesn’t Apple sandbox apps and review them for malicious behavior?

IAG: Sure, but a maliciously-installed app could connect to your Facebook account.

Me: And is the risk of that happening when your phone is stolen worth requiring a password for every install?

Note that the point isn’t to make me look smart, or simply to reveal flaws. By the end of that (fictitious) exchange, we’ve gone from an ill-defined concept (“security”) to a specific question that deals in user needs.

The Product Mantra

To answer such questions we need the fundamental, defining goals of our product. Who is it for? What purpose does it serve? It’s impossible to evaluate trade-offs otherwise.

When I was at AOL our illustrious head of Consumer Experience, Matte Scheinker, introduced the notion of a product mantra: a clear, concise description of your product. Critically, it must be specific enough to disagree with.

Using my own to-do app, Stky, as an example:

  • Mantra A: Stky is a to-do app for naturally disorganized people. It keeps overload in check by having you reprioritize each day’s tasks anew.

  • Mantra B: Stky is a productivity app anyone can use. Unlike its competitors it keeps you in control of your tasks and on top of your life.

Both mantras are accurate. But only Mantra A is specific enough to disagree with. Do disorganized people need a to-do app? Is daily reprioritization too much work, especially for such people?

Mantra B could describe nearly anything.

Now, suppose I’m deciding whether to add a new feature to Stky: multiple sticky notes. You could have your Work sticky, your Home sticky, maybe a Stuff to Read sticky, and the like. Seems useful, and certainly I’ve had users request it. Let’s hold it up to our mantras:

  • Using Mantra A: Do we want to add additional management overhead to an app for disorganized people? Probably not. And if the sticky represents our daily list of priorities, doesn’t adding multiple stickies break the whole paradigm? Probably. So maybe it’s not a good idea.
  • Using Mantra B: Well, multiple stickies means more control, right? And lots of people want it, and we want a product anyone will use. So I guess it’s a good idea…along with nearly any other idea.

Even better, this exercise almost forces us back into downward arrow. Why do users want multiple stickies? What are they trying to accomplish? Is that deeper goal consistent with our mantra? If so, is there another feature that would meet their need in a way that fits the product better?

Asking why and writing a mantra won’t magically give us insight into our users. But it will force us to form hypotheses, which can be tested against evidence in the world around us.

And the constraints we create via those hypotheses allow us to make choices. Because the great products, the ones we revere, are invariably the work of product teams brave enough to make choices. We marvel at Apple’s clean, usable design. We call it simplicity but it’s not that: It’s knowing what to keep and what to leave out and having the guts to disappoint some of the users all of the time and all of the stakeholders some of the time. Many of us already know that, but we can’t bring ourselves to choose when push comes to shove.

None of this is a substitute for user research. We still need usability tests, ethnographies, brainstorming sessions, click data, bucket tests, discovery, and all the rest. But in the absence of clear hypotheses and specific questions, user research is a little like the proverbial tree falling in the forest. Research tests our assumptions and tells us where we’re right or wrong; it doesn’t tell us what to build.

This isn’t the kind of audacious problem we solve all at once…nor do we have to. Every product that actually makes someone’s life better is a piece of the solution – not just for the life it improves, but for the designer who’s inspired by it, the team that decides to one-up it.

Make no mistake: This is hard stuff. It requires tenacity, and bravery, and empathy. It requires observing how people live their lives, and then handing them products that aren’t at all what they asked for. It needs more user-centered ways of doing bug triage and structuring development workflow. But as technology becomes everyone’s ever-more-constant companion I can think of no greater or more worthy challenge.

When I renamed my blog last year, I created a tagline: “We make stuff, for people.” It was meant to be funny, sure, but also to encapsulate everything I’ve said here. Technology is meaningless without people; yet, as technologists, we’re prone to forgetting that. We end up debating strange, empty questions. Does the world really need another photo-sharing service? Is skeuomorphic design good or bad? Is Ruby better than Python? None of it matters on its own.

It’s important to make stuff. But it only matters if we make stuff, for people.

Let Them Pee: Avoiding the Sign-Up/Sign-In Mobile Antipattern

Written by: Greg Nudelman

This is an excerpt from the upcoming Android Design Patterns: Interaction Design Solutions for Developers (Wiley, 2013) by Greg Nudelman

Anything that slows down customers or gets in their way after they download your app is a bad thing. That includes sign-up/sign-in forms that show up even before potential customers can figure out if the app is actually worth using.

It’s a simple UX equation

This antipattern is slowly disappearing as companies begin to figure out the following simple UX equation:

Long sign-up form before you can use the app = Delete app

However, a fair number of apps still force customers to sign up, sign in, or perform some other useless action before they can use the app.

The application SitOrSquat is a brilliant little piece of social software that enables people to find bathrooms on the go, when they gotta go. Obviously, the basic use case implies a, shall we say, certain sense of urgency. This urgency appears to be entirely lost on the company that acquired the app, Procter and Gamble (P&G), evidently for the express purpose of marketing the Charmin brand of toilet paper. (It’s truly a match made in heaven—but I digress.)

Not content with the business of simply “Squeezing the Charmin” (that is, simple advertising), P&G executives decided for some unfathomable reason to force people to sign up for the app in multiple ways. First, as you can see in Figure 1, the app forces the customer (who is urgently looking for a place to relieve himself, let’s not forget) to use the awkward picker control to select his birthday to allegedly find out if he has been “potty trained.” This requirement would be torture on a normal day, but—I think you’ll agree—it’s excruciating when you really gotta go.

Figure 1: Registration Torture: Sign Up/Sign In antipattern in SitOrSquat app.

But the fun does not stop there—if (and only if) the customer manages to use the picker to select the month and year of his birth correctly (how exactly does the app know it’s correct?), he then sees the EULA (Figure 2), which, as discussed in the previous article, End User License Agreement (EULA) Presentation (Boxes and Arrows January 2nd, 2013), is an antipattern all to itself.

Figure 2: Reading the EULA while wanting to pee should be an Olympic sport.

SitOrSquat’s EULA is long, complex, and written in such tiny font that reading it while waiting to go to the bathroom should be considered an Olympic sport, to be performed only once every four years. Assuming the customer gets through the EULA, P&G presents yet another sign-up screen, offering the user the option to sign in with Facebook, as shown in Figure 3.

Figure 3: Finally! Sharing my bathroom habits on Facebook has never been easier!

I guess no one told the P&G execs that the Twitter message “pooping” is actually a prank. They must have legitimately thought that people would want to broadcast their bathroom habits to “achieve and maintain synergistic Facebook connectivity.” I would be hard-pressed to find a social networking experiment of equal, monumental absurdity. I can’t imagine anyone thinking, “Finally! Sharing my bathroom habits on Facebook has never been easier!”

Assuming that the user is a legitimate customer looking to use the bathroom for its intended purpose, and not a coprophiliac Facebook exhibitionist, we may hope that he will naturally dismiss the Facebook sign-in screen and come to the next jewel: the Tutorial, shown in Figure 4.

Figure 4: Tutorial is a sub-par Welcome experience pattern. Here it is another impediment to progress.

SitOrSquat’s tutorial is an extra screen that provides very little value, other than impeding the use of the app for its intended purpose. (If you need a tutorial, I recommend the much more effective contextual Watermark pattern, which I discuss in Chapter 5 of the Android Design Patterns book.)

50 Taps and 7 Screens of Antipatterns

Note that the entire app outside of registration consists of basically four screens (if you count the functionality to add bathrooms!). However, if you include all the sign-up antipattern screens (including my initial failure to prove that my potty-training certificate is up to date, as referred to in Figure 1), it takes seven screens of “preliminary” garbage before the content you are looking for finally shows up (refer to Figure 5). Counting the taps needed to enter a birthday, that comes to almost 50 taps!

Figure 5: The Glory of 50 taps needed to get through the Sign Up/Sign In antipattern in SitOrSquat app.

One of my favorite UX people, Tamara Adlin (who coauthored The Persona Lifecycle: Keeping People in Mind During Product Design with John Pruitt), wrote brilliantly: “For Heaven’s Sakes, Let Them Pee.” I believe that never before has this line been so appropriate. In the absurd pursuit of social media “exposure,” coupled with endless sign-up screens and heavy-handed “lawyering up,” P&G executives completely lost sight of the primary use case: letting their customer SitOrSquat.

Long sign-up screens detract from the key mobile use case: quick, simple information access on the go. Overly invasive sign-up/sign-in screens presented up front and without due cause will cause your customers to delete the app.

There is no reason to force anyone to register for anything

When deciding whether to force the customer to perform an action, consider this: If this were a web app, would you force the customer to do this? If you have an Internet connection, you can save everything the customer does and connect it back to his device using a simple session token and a guest account. And even if you don’t (for example, while riding the subway or using airplane mode), today’s smartphones have plenty of on-board storage you can use to hold data for later syncing with your servers, when the mobile network becomes available again.

This means there is simply no reason to force anyone to register for anything, other than if they want to share the data from their phone with other devices. As a general rule, rather than forcing registration upon download or at the first opportunity, it is much better to allow the customer to save a piece of information locally on the phone without requiring that he log in. Wait until the customer asks for something that requires registration, such as sharing the information with another device or accessing information already saved in his account; at that point completing the registration makes perfect sense.
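The guest-account approach described above can be sketched in code. The following is a minimal, platform-neutral Python sketch (the class, file names, and `share_with_another_device` function are all hypothetical illustrations, not any real app's API; a production Android app would use its platform's storage and account APIs instead):

```python
import json
import uuid
from pathlib import Path


class GuestSession:
    """Stores the customer's data locally under an anonymous session
    token, so the app is fully usable before any registration."""

    def __init__(self, storage_dir: Path):
        self.storage_dir = storage_dir
        self.storage_dir.mkdir(parents=True, exist_ok=True)
        self.token_file = storage_dir / "session_token"
        if self.token_file.exists():
            # Same anonymous identity across launches.
            self.token = self.token_file.read_text()
        else:
            self.token = uuid.uuid4().hex
            self.token_file.write_text(self.token)
        self.data_file = storage_dir / "data.json"

    def save(self, record: dict) -> None:
        """Persist locally; no login required."""
        records = self.load_all()
        records.append(record)
        self.data_file.write_text(json.dumps(records))

    def load_all(self) -> list:
        if self.data_file.exists():
            return json.loads(self.data_file.read_text())
        return []


def share_with_another_device(session: GuestSession, user_account):
    """Registration is requested only here -- at the first action that
    genuinely needs an account -- and the guest data migrates with it."""
    if user_account is None:
        raise PermissionError("Now is the time to ask the customer to register.")
    # Upload session.load_all() under user_account's identity...
```

The design point is simply that the registration prompt lives inside the one operation that needs it, rather than on the welcome mat.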

For example, imagine how absurd the Amazon.com shopping experience would be if the app asked you for your home address, billing address, and credit card upfront—before allowing you to see a single item for sale! Yet entering the home address (where would you like to have the items shipped?) and credit card (how would you like to pay for this?) makes perfect sense during the checkout, after the customer selects a few items and indicates she would like to complete the purchase.

Finally, remember that “Forms suck,” as brilliantly quipped by Luke Wroblewski in his book Web Form Design (Rosenfeld Media, 2008). Only ask for what you strictly need to proceed to the next step and omit extraneous information. (Effective mobile data-entry controls and forms are a huge topic, to which I devote chapters 10–12 of my upcoming Android Design Patterns book (Wiley, March 11, 2013), now available on Amazon.com.)

Conceptual Models in a Nutshell

Written by: Jeff Johnson

This article explains what conceptual models are and describes the value of developing a conceptual model of a software application before designing its user interface.

Conceptual Model: a Model for Users’ Mental Model

A conceptual model of an application is the model of the application that the designers want users to understand.  By using the application, talking with other users, and reading the documentation, users build a model in their minds of how to use the application. Hopefully, the model that users build in their minds is close to the one the designers intended. This hope has a better chance of being realized if the designers have explicitly designed a clear conceptual model as a key part of their development process.

A conceptual model describes abstractly — in terms of tasks, not keystrokes, mouse-actions, or screen graphics — what users can do with the system and what concepts they need to be aware of. The idea is that by carefully crafting a conceptual model, then designing a user interface from that, the resulting application will be cleaner, simpler, and easier to understand. The goal is to keep the conceptual model: 1) as simple as possible, with as few concepts as are needed to provide the required functionality, and 2) as focused on the task-domain as possible, with few or no concepts for users to master that are not found in the application’s target task domain.

Object/Operation Analysis

An important component of a conceptual model is an Object/Operation analysis: an enumeration of the user-visible object-types in the application, the attributes of each object-type, and the operations that users can perform on each object-type. Purely presentational and purely implementation object-types have no place in an application’s conceptual model because users will not have to be aware of them.

Objects in the conceptual model of an application can usually be organized in a type-hierarchy, with sub-types inheriting operations from their parent types. Depending on the application, objects may also be organized into a containment hierarchy, i.e., one in which some objects contain other objects. Laying out these two hierarchies in a conceptual model greatly facilitates the design and development of a coherent, clear user interface.

This analysis can help guide implementation, because it indicates the most natural hierarchy of implementation objects and the methods each must have. It also simplifies the application’s command structure by allowing designers to see what operations are common to multiple objects and therefore can be designed as generic operations. This, in turn, makes the command structure easier for users to master: they must only learn a few generic commands that apply to many object-types, rather than a larger number of more narrowly applicable object-specific commands.

For example, in a well-thought-out application that allows users to create and manipulate both Thingamajigs and Doohickeys, when users know how to create a Thingamajig and want to create a Doohickey, they already know how, because creation works the same way for both. Ditto copying, moving, deleting, editing, printing, etc.
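The shared-operations idea maps directly onto a type hierarchy in code. A minimal Python sketch (Thingamajig and Doohickey are the article's hypothetical object-types; the specific operations shown are illustrative assumptions):

```python
class DocumentObject:
    """Parent type: generic operations that every user-visible
    object-type inherits, so one command works everywhere."""

    def __init__(self, name: str):
        self.name = name

    def copy(self) -> "DocumentObject":
        # type(self) preserves the subtype: copying a Doohickey
        # yields a Doohickey, with no per-type copy command needed.
        return type(self)(self.name + " copy")

    def rename(self, new_name: str) -> None:
        self.name = new_name


class Thingamajig(DocumentObject):
    pass


class Doohickey(DocumentObject):
    pass
```

Because `copy` and `rename` live on the parent type, a user (and a developer) who learns them once knows them for every object-type in the application.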

Example: Object/Operation Analysis for a Simple Office Calendar App

For example, let’s examine an objects/operations analysis for a simple office calendar application. The objects, attributes, operations, and relationships might be as follows:

Objects: The model would include objects like calendar, event, to-do item, and person (see Table 1). It would exclude non-task-related objects like buffer, dialog box, database, and text-string.

Attributes: A calendar would have an owner and a default focus (day, week, month). An event would have a name, description, date, time, duration, and location. A to-do item would have a name, description, deadline, and priority. A person would have a name, a job-description, an office location, and a phone number. However, an event should not have byte-size as an attribute, because byte-size is implementation-focused, not task-focused.

Operations: Calendars would have operations like examine, print, create, change view, add event, and delete event. Events would have operations like examine, print, and edit. To-do items would have more-or-less the same operations as events. Implementation-related operations like loading databases, editing table rows, flushing buffers, and switching modes would not be part of the conceptual model.


Objects     | Attributes                                                                      | Operations
------------|---------------------------------------------------------------------------------|------------------------------------------------
Calendar    | owner, current focus                                                            | examine, print, create, add event, delete event
Event       | name, description, date, time, duration, location, repeat, type (e.g., meeting) | examine, print, edit (attributes)
To-Do item  | name, description, deadline, priority, status                                   | view, print, edit (attributes)
Person      | name, job-description, office, phone                                            | send email, view details

Table 1. Object/operation analysis for a simple office calendar application.
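As a rough illustration of how such an analysis carries over to implementation, Table 1 maps naturally onto an object model. The following Python sketch is mine, not the author's; the attribute types and defaults are assumptions, since the analysis deliberately stays abstract:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Event:
    # Task-domain attributes only -- no byte-size, buffers,
    # or other implementation concepts.
    name: str
    description: str = ""
    date: str = ""
    time: str = ""
    duration_minutes: int = 60
    location: str = ""


@dataclass
class ToDoItem:
    name: str
    description: str = ""
    deadline: str = ""
    priority: int = 3
    status: str = "open"


@dataclass
class Calendar:
    owner: str
    focus: str = "week"  # current focus: day / week / month
    # The containment hierarchy: a calendar contains events.
    events: List[Event] = field(default_factory=list)

    def add_event(self, event: Event) -> None:
        self.events.append(event)

    def delete_event(self, event: Event) -> None:
        self.events.remove(event)
```

Note how the conceptual model's operations (add event, delete event) become the most natural methods, just as the article suggests.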

Keep it Simple

Sometimes it is tempting to add concepts to provide more functionality. But it is important to realize that each additional concept comes at a high cost, for two reasons: 1) it adds a concept that users who know the task domain will not recognize and therefore must learn, and 2) it increases the complexity of the application disproportionately, because each added concept interacts with many of the other concepts in the application. Therefore, extra concepts should be resisted where possible. The operative design mantra for conceptual models is: “Less is more.”

A Conceptual Model Provides a Foundation for the App and the Project

The user interface design translates the abstract concepts of the conceptual model into concrete presentations and user-actions. For best results, the user interface is designed after the conceptual model has been designed.  Scenarios can then be rewritten at the level of the user interface design. Designing the UI from the conceptual model may expose problems in the conceptual model, in which case the conceptual model may be improved.

A conceptual model provides a foundation not only for the UI design, but also for the application’s implementation and documentation. It therefore plays a central role in the design and development of the overall product.

Summary: Six Benefits of Conceptual Models

Starting a design by devising a conceptual model has several benefits:

  1. By laying out the objects and operations of the task-domain, it allows designers to notice operations that are shared by many objects. Common operations across objects make the UI simpler for users to learn and remember.
  2. Even ignoring the simplification that can result from noticing shared operations, devising a user-model forces designers to consider the relative importance of concepts, the relevance of concepts to the task domain (as opposed to the computer domain), the type hierarchy of objects, and the containment hierarchy of objects. Having thought about these things greatly facilitates designing a user-interface.
  3. A conceptual model provides a starting point for the development of a product vocabulary, i.e., a dictionary of terms that will be used to identify each of the objects and operations embodied in the software. This helps ensure that terms are used consistently throughout the app and its documentation.
  4. Once designers have a conceptual model for an app, they can write scenarios depicting people using the app to perform tasks, using only concepts from the conceptual model and terms from the vocabulary. For example, a conceptual-level scenario for the calendar application might be: “John checks his appointments for the week. He schedules a team meeting, inviting team members, and adds a dental appointment.” Such scenarios (which can be separated into use-cases) help validate the design in functional reviews. They can also be included in product documentation and training. Conceptual scenarios describe tasks and goals without revealing the UI-level user interactions required to achieve those goals, so they can be used as task descriptions in usability tests.
  5. A conceptual model provides a first cut at the app’s object-model (at least for the objects that users will be aware of), so developers can use it to begin implementing the app.
  6. An actively-maintained conceptual model supports a better development process. It can ensure that all user-visible aspects of an application (functionality, terminology, UI, documentation, support, …) are consistent. By making the conceptual model the joint responsibility of all team members, the application can be made coherent. Both of these also reduce development resources by reducing rework.

Further Reading

•   Johnson, J. & Henderson, D.A., “Conceptual Models: Begin by Designing What to Design”, Interactions, Jan-Feb 2002.

•   Johnson, J. & Henderson, D.A., Conceptual Models: Core to Good Design, Morgan & Claypool, 2011.

End User License Agreement (EULA) Presentation

Written by: Greg Nudelman

This is an excerpt from the upcoming “Android Design Patterns: Interaction Design Solutions for Developers” (Wiley, 2013) by Greg Nudelman

The first thing your customers see when they download and open your app is the welcome mat you roll out for them. Unfortunately, this welcome mat commonly contains unfriendly impediments to progress and engagement: End User License Agreements (EULAs). Like the overzealous zombie cross-breed between a lawyer and a customs agent, this antipattern requires multiple forms to be filled out in triplicate, while keeping the customers from enjoying the app they have so laboriously invested time and flash memory space to download. This article exposes the culprit and suggests a friendlier welcome strategy for your mobile apps.

Antipattern: End User License Agreements (EULAs)

When customers open a mobile website, they can often engage immediately. Ironically, the same information accessed through apps frequently requires agreeing to various EULAs, often accompanied by ingenious strategies that force customers to slow down. EULA is an antipattern.

When and Where It Shows Up

EULAs are typically shown to the customer when the application is first launched and before the person can use the app. Unfortunately, when they do show up, EULAs are also frequently accompanied by various interface devices designed to slow people down. Some EULAs require people to scroll or paginate to the end of a 20-page document of incomprehensible lawyer-speak before they allow access. Others purposefully slow people down with confirmation screens that require extra taps. Truly, things in the torture department have evolved nicely since the days of the Spanish Inquisition!

Financial giant Chase provides a good example of a EULA. As shown in figure 1, when customers first download the Chase app, they are faced with having to accept a EULA even before they can log in.

Figure 1: EULA antipattern in Chase app.

What makes this example interesting is that the same information is accessible on the mobile phone without needing to accept the EULA first: through the mobile web browser, as shown in Figure 2.

Figure 2: There is no EULA on the Chase mobile website.

Why Avoid It

The remarkable thing is not that the EULA is required. Lawyers want to eat, too, so EULAs are an important component of today’s litigious society. Dealing with a first-world bank in the “New Normal” pretty much guarantees that you’ll be faced with signing some sort of legal agreement at some point in the relationship. The issue is not the EULA itself—it is the thoughtlessness of the timing of the EULA’s appearance.

The app has no idea whether you have turned mobile access on or have your password set up properly. (Most people have at least a few issues with this.) Therefore, the app has no idea whether the bank can serve you on this device. Yet already the bank has managed to warn you that doing business on a mobile device is dangerous and foolhardy and that, should you be reckless enough to continue, the bank has no reasonable choice but to relinquish any and all responsibility for the future of your money. This is hardly an excellent way to start a mature brand relationship.

What should happen instead? Well, the mobile website provides a clue. First, it shows what a customer can do without logging in, such as finding a local branch or an ATM. Next, the mobile site enables the customer to log in. Then the system determines the state of the EULA that’s on file. If (to paraphrase Eric Clapton in “Tales of Brave Ulysses”) the customer’s “naked ears were tortured by the EULA’s sweetly singing” at some point in the past, great—no need to repeat the sheer awesomeness of the experience. If not, well, it’s Lawyer Time. Consequently, if customers do not have Bill Pay turned on, for example, they don’t need to sign a Bill Pay EULA at all, now do they? The point is that the first page customers see when they first launch your app is your welcome mat. Make sure yours actually says “Welcome.”

Additional Considerations

Has anyone bothered asking, “How many relationships (that end well) begin with a EULA anyway?” How would the Internet feel if every website you navigated to first asked you to agree to a EULA, even before you could see what the site was about? That just does not happen. You navigate to a website and see awesome welcome content immediately. (Otherwise, you’d be out of there before you could spell E-U-L-A.) When you use a site to purchase something, you get a simple Agree and Proceed button with a nearby link to the EULA (not that anyone ever bothers to read those things anyway, especially on mobile), and you merely proceed on your way.

If you can surf the web happily, taking for granted the awesomeness of the smorgasbord of information on mobile and desktop, without ever giving a second thought to EULAs, why should you have to tolerate a welcome mat of thoughtless, invasive agreements on a mobile app platform?

Additional Information

You can find 70 essential mobile and tablet design ideas and antipatterns in my new book, Android Design Patterns: Interaction Design Solutions for Developers (Wiley, 2013), now available for pre-order at http://AndroidDesignBook.com, where you can also sign up for the next free monthly Android Design Question and Answer session.

User Experience Go Away

Written by: Dave Malouf

There is no UX for us

That’s right! I said it. For us (designers, information architects, interaction designers, usability professionals, HCI researchers, visual designers, architects, content strategists, writers, industrial designers, interactive designers, etc.), the term user experience design (UX) is useless. It is such an overgeneralized term that you can never tell whether someone is using it to mean something specific, as in UX = IxD/IA/UI, or something overarching all design efforts. In current usage, unfortunately, it’s used both ways. Which means that when we think we’re communicating, we aren’t.

Of course there is UX for us

If I were going to define my expertise, I couldn’t give a short answer. Even when UX is narrowly defined, it includes interaction design (my area of deep expertise), information architecture (a past-life occupation), and some interface design. To do it well, one needs to know about research, programming, business, and traditional design such as graphic design as well. Once, to do web design you had to be a T-shaped person: someone who knows a little bit about many things and a lot about one thing. Imagine a programmer who also understands a bit about business models and some interface design. But as product complexity grows, we need P- and M-shaped people: people with multiple deep specialties. To design great user experiences, you need to specialize in some combination of brand management, interaction design, human factors, and business model design. Or you could be part of a team. The term UX was welcomed because we finally had an umbrella for our related practices.

Of course, we don’t all belong to the same version of that umbrella. We all bring different focuses under the umbrella, different experiences, mindsets, and practices. While we can all learn from each other, we can’t always be each other.

But trouble started when our clients didn’t realize it was an umbrella and thought it was a person. And they tried to hire that person.

It isn’t about us

If there is any group for whom UX exists now more than ever, it is non-UXers. Until 2007, the concept of UX had been hard to explain. We didn’t have a poster child we could point to and say, “Here! That’s what I mean when I say UX.” But in June 2007, Steve Jobs gave us that poster child in the form of the first-generation iPhone. And the conversation was forever changed. No matter whether you loved, hated, or couldn’t care less about Apple, if you were a designer interested in designing solutions that meet the needs of human beings, you couldn’t help but be delighted when the client held up his iPhone and said, “Make my X like an iPhone.”

It was an example of “getting user experience right.” We as designers were then able to demonstrate to our clients why the iPhone was great and, if we were good, apply those principles in a way that let our clients understand what it took to make such a product and its services happen. You had to admit that the iPhone was one of the first complete packages of UX we had ever had. And it was everywhere.

Now, five years later, our customers aren’t saying they want an iPhone anymore. They are saying that they want a great “experience” or “user experience.” They don’t know how to describe it, or who they need to achieve it. They have no clue what it takes to get a great one, but they want it. And they’ll know it when they see it, feel it, touch it, smell it.

And they think there must be a person called a “user experience designer” who does what other designers “who we’ve tried before and who failed” can’t do. The title “user experience designer” is the target they are sniffing for when they hire. They follow the trail of user experience sprinkled in our past titles and previous degrees. They sniff us out, and “user experience” is the primary scent that flares their metaphorical nostrils.

It is only when they enter our world that the scent goes from beautiful to rank. They see and smell our dirty laundry: the DTDT (Defining The Damn Thing) debates, the lack of continuity of positions across job contexts, the various job titles, the simultaneously nonexistent and pervasive education credentials, etc. There is actually no credential out there that says “UX.” None! Nada! Anywhere. There are courses for IxD, IA, LIS, HCI, etc., but in my research of design programs in the US and abroad, not one stands behind the term UX. It is amorphous, phase-changing, and too intangible to put a credential around. There are too many different job descriptions, all with the same title but each with different requirements (visual design, coding, and research being added or removed at will). Arguably, it is also a phrase that academics can’t get behind: there aren’t any academic associations for user experience, so it’s not possible to be published under that title.

Without a shared definition and without credentialed benchmarks, user experience is snake oil. What’s made things even worse is the creation of credentialed, accredited programs in “service design,” which take all the same micro-disciplines of user experience and add the academically well-established “service management,” giving the whole package academic legitimacy. This well-defined term is the final nail in the coffin; it shows UX to be an embattled, tarnished, shifty, and confusing term that serves no master in its attempt to serve all.

“User experience design” has to go

Given the experience our collaborators, managers, clients, and other stakeholders have had with UX, how can we not empathize with their confused feelings about us and the phrase we use to describe our work?

And for this reason, UX has to go. It just can’t capture the complexity of what we are designing for, or of who is doing the designing. Perhaps the term “good user experience” can remain to describe our outcomes, but “user experience designer” can’t exist to describe the people who do the job of achieving it.

Abby Covert said recently that the term UX is muddy and confusing. Well, I don’t think the term “user experience” is confusing so much as it’s a very broad term used as if it were very narrow. It is the classic design mistake of oversimplifying something complex instead of expressing the complexity clearly. UX was our linguistic oversimplification. We tried to make what we do easy to understand, and we made it seem too simple. And now our clients don’t want to put up with the complexity required to achieve it.

Now that the term has been ruined (for a few generations, anyway), we need to hone our vocabulary. It means we can’t be afraid of acknowledging the tremendous complexity in what we do, how we do it, and how we organize ourselves. It means that we focus on skill sets instead of focusing on people. It means understanding our complex interrelationships with all the disciplines formerly under the term UX. And we must understand that those disciplines are as entwined with traditional design, engineering, and business disciplines, communities, and practices as they are with each other.

So I would offer that instead of holding up that iPhone and declaring it great UX, you can still use it as an example of great design, but take the simple but longer path of patiently deconstructing why it is great.

When I used to give tours of the Industrial Design department at the Savannah College of Art and Design (SCAD), I would take out my iPhone and use it to explain why it was important that we taught industrial design, interaction design, and service design (among other things). I’d point to it and explain how the lines, materials, and colors all combined to create a form designed to fit in my hand, look beautiful on my restaurant table, and be recognizable anywhere: that was industrial design. Then I would show the various ways to “turn it on,” and how the placement of the buttons and the gesture of the swipe-to-unlock were just the beginning of how it was designed to map the customer’s perception and cognition, social behaviors, and personal narrative against how the device signalled its state, what it was processing, and what was possible with it. And I explained that this was interaction design. Finally, I’d explain how all of this presentation and interaction were wonderful, but the phone also needed services attached to it: the ability to make calls, stores where you can buy music and applications, and the relationships between content creators, license owners, and customers that make those stores work. And that was service design.

At no time do I use the term “user experience.” By the time I’m done, I have taught a class on user experience design without ever uttering the term. And the people listening have genuine respect for all three disciplines explored in the example, seeing them as unique, collaborative practices that have to work intimately together. There is no hope left in them for a false unicorn who can singularly make it all happen.