When I was eleven, my parents bought a Mac Plus. It had a tiny monochrome screen, a floppy drive, and 1MB of memory. And it came with something called HyperCard. HyperCard let you make stuff. It had documents called stacks, each a series of cards – similar to PowerPoint today. In addition to graphics and text, you could create buttons and tell them what to do – flip to another card, show or hide an object, and so forth.
Down at the bottom of the screen was a little window where you could type simple English-like commands – things like go to card 2 or beep. Once you’d mastered those, you could add them to your buttons or trigger them at certain times, creating real interactivity. Pretty soon I was making little games and utilities. It was the coolest thing ever.
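For the curious: a button script was short enough to type in a minute. As a rough sketch from memory (the card number is just an example), flipping to another card on a click looked something like this in HyperTalk:

```
on mouseUp
  beep
  go to card 2
end mouseUp
```

The handler ran when you clicked the button, and beep and go were the very same commands you could type in the message box at the bottom of the screen.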
HyperCard came with something called the home stack that opened when you first launched it. I looked at it and thought, This isn’t very useful. It shows up all the time but it doesn’t do much. So I made a better one. It included various utilities, and of course a rock-paper-scissors game. I made packaging and convinced the local Mac store to sell it for $7.
It sold two copies.
Since then I’ve worked on products with more than twice as many users, but the story remains the same. This isn’t very useful. This doesn’t serve people’s needs. Let’s make a better one.
In college I discovered a career for what I did: user interface design. And though the title has changed over the years – user experience designer, interaction designer, product manager, product designer, founder – the motivation hasn’t. Technology is confusing and doesn’t meet people’s needs. I want to fix that.
Eat Your Vegetables
These days, it’s fashionable to talk about audacious ideas. Paradoxically, it’s also popular to focus on ideas that can be built in a month.
In a post last year, Paul Graham listed Frighteningly Ambitious Startup Ideas and spawned a bumper crop of companies (though my favorites, Bring Back Moore’s Law and The Next Steve Jobs, don’t seem to have much traction). Wired’s cover story for February was 7 Massive Ideas That Can Change the World.
But I can’t help thinking we’ve skipped our vegetables and gone straight to dessert. We are insinuating ourselves into more and more of people’s lives, yet we haven’t managed to meet their needs in predictable, understandable, let alone enjoyable ways.
I watch people using their devices and I cringe. They get their single-click and double-click mixed up. They open an email attachment, update it, and then can’t understand why their changes aren’t in Documents. They try to set up iCloud and end up creating three Apple IDs. They miss out on all the useful things technology can do for them, lost in a sea of complexity, confusion, and techie-centric functionality. These things were supposed to be labor-saving devices, right?
Make no mistake: This is our fault. To begin with, we’ve created ever-more-inconsistent expectations over time. Consider single- vs. double-click. Easy, right? You single-click to select, double-click to open. Unless it’s a webpage. Or Apple’s column view, where selecting and opening are the same thing so it doesn’t matter. Well, for folders; for documents, it matters.
Anyway, it’s really easy to tell if you’re in a webpage or not so you know which convention to use. Just look at the top of the screen, on the left. It should say Firefox, or Safari, or Chrome. Oh wait, you’re on Windows. Look at the top of the window. No, the frontmost window. See, it has a bigger shadow than the others. Oh wait, you’re on Windows 8? Well, are you in Metro or not? Oh wait, they don’t call it Metro anymore. I forget what they call it. Do you see a lot of colorful flat boxes? What were you trying to do again? Hey, where are you going?
You may think I’m overcomplicating things for effect. I’m not. It seems simple to you because all that stuff is already in your head. When you switch from Gmail in a browser, to Outlook on Windows, to Mail.app on Mac, you know which conventions change. You have what designers call a mental model, rooted in years of experience and history, that allows you to make the right call. Most people don’t – nor should they have to.
And these interaction details are the tip of the iceberg. We do a disappointing job of understanding what people outside our bubble are trying to accomplish. Let’s be honest: We mostly make products for ourselves. Later, when they’re successful, we start wondering how people use them. We do user studies and surveys and ethnographies and then ignore the results because it’d be expensive to fix and besides, they’ll figure it out, right? I mean, we did. We lack the comprehensive understanding we’d need to make real, substantive change, to make products that are both usable and useful.
Therapists sometimes use the downward arrow technique with their clients. It starts with the apparent problem and proceeds through a series of “why” questions to the underlying issue:
Client: “I get nervous speaking in class.”
Therapist: “Why do you get so nervous?”
Client: “I’m worried that I might say something stupid.”
Therapist: “And if you did?”
Client: “I would be so embarrassed!”
Therapist: “Why? What would be so bad about it?”
Client: “It would mean I’m not good enough.”
And so forth.
Product design requires a similar process: start with a design or feature question and dig down until you find the assumptions that underlie it:
Me: Why do you ask for a user’s password every time he downloads a free app?
Imaginary Apple Guy: For security.
Me: What do you mean by security?
IAG: Well, if someone gets hold of your phone, they’d be able to install apps without your permission.
Me: And what would be so bad about that?
IAG: The apps could do malicious things with your phone.
Me: But doesn’t Apple sandbox apps and review them for malicious behavior?
IAG: Sure, but a maliciously installed app could connect to your Facebook account.
Me: And is the risk of that happening when your phone is stolen worth requiring a password for every install?
Note that the point isn’t to make me look smart, or simply to reveal flaws. By the end of that (fictitious) exchange, we’ve gone from an ill-defined concept (“security”) to a specific question that deals in user needs.
The Product Mantra
To answer such questions we need the fundamental, defining goals of our product. Who is it for? What purpose does it serve? It’s impossible to evaluate trade-offs otherwise.
When I was at AOL our illustrious head of Consumer Experience, Matte Scheinker, introduced the notion of a product mantra: a clear, concise description of your product. Critically, it must be specific enough to disagree with.
Using my own to-do app, Stky, as an example:
- Mantra A: Stky is a to-do app for naturally disorganized people. It keeps overload in check by having you reprioritize each day’s tasks anew.
- Mantra B: Stky is a productivity app anyone can use. Unlike its competitors, it keeps you in control of your tasks and on top of your life.
Both mantras are accurate. But only Mantra A is specific enough to disagree with. Do disorganized people need a to-do app? Is daily reprioritization too much work, especially for such people?
Mantra B could describe nearly anything.
Now, suppose I’m deciding whether to add a new feature to Stky: multiple sticky notes. You could have your Work sticky, your Home sticky, maybe a Stuff to Read sticky, and the like. Seems useful, and certainly I’ve had users request it. Let’s hold it up to our mantras:
- Using Mantra A: Do we want to add additional management overhead to an app for disorganized people? Probably not. And if the sticky represents our daily list of priorities, doesn’t adding multiple stickies break the whole paradigm? Probably. So maybe it’s not a good idea.
- Using Mantra B: Well, multiple stickies means more control, right? And lots of people want it, and we want a product anyone will use. So I guess it’s a good idea…along with nearly any other idea.
Even better, this exercise almost forces us back into downward arrow. Why do users want multiple stickies? What are they trying to accomplish? Is that deeper goal consistent with our mantra? If so, is there another feature that would meet their need in a way that fits the product better?
Asking why and writing a mantra won’t magically give us insight into our users. But it will force us to form hypotheses, which can be tested against evidence in the world around us.
And the constraints we create via those hypotheses allow us to make choices. Because the great products, the ones we revere, are invariably the work of product teams brave enough to make choices. We marvel at Apple’s clean, usable design. We call it simplicity but it’s not that: It’s knowing what to keep and what to leave out and having the guts to disappoint some of the users all of the time and all of the stakeholders some of the time. Many of us already know that, but we can’t bring ourselves to choose when push comes to shove.
None of this is a substitute for user research. We still need usability tests, ethnographies, brainstorming sessions, click data, bucket tests, discovery, and all the rest. But in the absence of clear hypotheses and specific questions, user research is a little like the proverbial tree falling in the forest. Research tests our assumptions and tells us where we’re right or wrong; it doesn’t tell us what to build.
This isn’t the kind of audacious problem we solve all at once…nor do we have to. Every product that actually makes someone’s life better is a piece of the solution – not just for the life it improves, but for the designer who’s inspired by it, the team that decides to one-up it.
Make no mistake: This is hard stuff. It requires tenacity, and bravery, and empathy. It requires observing how people live their lives, and then handing them products that aren’t at all what they asked for. It needs more user-centered ways of doing bug triage and structuring development workflow. But as technology becomes everyone’s ever-more-constant companion I can think of no greater or more worthy challenge.
When I renamed my blog last year, I created a tagline: “We make stuff, for people.” It was meant to be funny, sure, but also to encapsulate everything I’ve said here. Technology is meaningless without people; yet, as technologists, we’re prone to forgetting that. We end up debating strange, empty questions. Does the world really need another photo sharing service? Is skeuomorphic design good or bad? Is Ruby better than Python? None of it matters on its own.
It’s important to make stuff. But it only matters if we make stuff, for people.