Ending the UX Designer Drought

Written by: Fred Beecher

The user experience design field is booming. We’re making an impact, our community is vibrant, and everyone has a job. And that’s the problem. A quick search for “user experience” on indeed.com reveals over 5,000 jobs posted in the last 15 days (as of March 15, 2014) in the United States alone! Simple math turns that into the staggering statistic of 10,000 new UX-related jobs being created every month.
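For the skeptical, the "simple math" above is just a back-of-the-envelope extrapolation; a minimal sketch (assuming the 15-day sample rate holds steady and a round 30-day month) looks like this:

```python
# Back-of-the-envelope extrapolation of the indeed.com numbers above.
jobs_in_sample = 5_000   # postings observed in the 15-day search window
window_days = 15         # length of the sample window
days_per_month = 30      # assumption: a round 30-day month

jobs_per_month = jobs_in_sample * days_per_month / window_days
print(jobs_per_month)    # 10000.0
```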

This amount of work going undone is going to prevent us from delivering the value that UX promises. It’s going to force businesses to look toward something more achievable to provide that value. For user experience design to remain the vibrant, innovation-driving field it is today, we need to make enough designers to fill these positions.

Fortunately, there are a tremendous number of people interested in becoming UX designers. Unfortunately, it is nearly impossible for these people to land one of these jobs. That’s because of the experience gap: these UX jobs are nearly all for people with two to three years of experience, or more.

UX design is a strategic discipline in which practitioners make recommendations that can have a big impact on an organization’s revenue. Frankly, a designer isn’t qualified to make these kinds of recommendations without putting in some time doing fundamental, in-the-trenches research and design work. While this might seem like an intractable problem, the skills required to do this fundamental work can be learned!

Someone just has to teach them.

Solving the problem

There are many ways to teach fundamental UX design skills. Design schools have been doing it for years (and the new, practically-focused Unicorn Institute will start doing it soon). However, to access the full breadth of people interested in UX design, education in UX design needs to be accessible to people at any stage of their lives. To do that, you need to make learning a job.

This is not as crazy as it sounds. Other professions have been doing this for hundreds of years in the form of apprenticeship. This model has a lot to offer the UX design field and can be adapted to meet our particular needs.

What is apprenticeship?

In the traditional model of apprenticeship, an unskilled laborer offers their labor to a master craftsman in exchange for room, board, and instruction in the master’s craft. At the end of a certain period of time, the laborer becomes a journeyman and is qualified to be employed in other workshops. To be considered a master and have their own workshop and apprentices, however, a journeyman must refine their craft until the guild determines that their skill warrants it.

While this sounds medieval–because it is–there are a few key points that are still relevant today.

First, apprenticeship is learning by observation and practice. Designing a user experience requires skills that take practice to acquire. Apprentices are also compensated with more than just the training they receive. Even “unskilled,” they can still provide value. A baker’s apprentice can haul sacks of flour; a UX apprentice can tame the detritus of a design workshop.

Apprenticeship is also limited to a specific duration, after which the apprentice is capable of the basics of the craft. In modern terms, apprenticeship is capable of producing junior designers who can bring fundamental, tactical value to their teams. After a few years of practicing and refining these skills, those designers will be qualified to provide the strategic UX guidance that is so sought after in the marketplace.

A new architecture for UX apprenticeship

The apprenticeship model sounds good in theory, but does it work in practice? Yes. In 2013, The Nerdery, an interactive design and development shop in Minneapolis, ran two twelve-week cohorts of four apprentices each. There are now eight more UX designers in the world. Eight designers might seem like a drop in the 10,000-jobs-per-month bucket, but if more design teams build apprenticeship programs, that bucket will fill up very quickly.

Building an apprenticeship program might sound difficult. However, The Nerdery’s program was designed so that it could be adapted to fit companies of different sizes. We call this our UX Apprenticeship Architecture, and I encourage you to use it as the basis of your own apprenticeship program.

There are five components to this architecture. Addressing each of these components in a way that is appropriate for your particular organization will lead to the success of your program. This article only introduces each of these components. Further articles will discuss them in detail.

Define business value

The very first step in building any UX apprenticeship program is to define how the program will benefit your organization. Apprenticeship requires an investment of money, time, and resources, and you need to be able to articulate what value your organization can expect in return for that investment.

Exactly what this value is depends on your organization. For The Nerdery, the value is financial. We train our apprentices to become full members of our design team, which allows us to achieve our growth goals (and the revenue increase that accompanies growth for a client services organization). For other organizations, the value might be less tangible and direct.

Hire for traits, not talent

Once you’ve demonstrated the value of apprenticeship to your organization and you’ve got their support, the next thing to focus on is hiring.

It can take a while at first to narrow down what you’re looking for. Hiring apprentices is very different from hiring mid- to senior-level UX designers. You’re not looking for people who are already fantastic designers; you’re looking for people who have the potential to become fantastic designers. Identifying this potential is a matter of identifying specific traits in your applicants.

There are two general sets of traits to look for: traits common to good UX designers and traits that indicate someone will be a good apprentice. For example, someone who is defensive and standoffish in the face of critical feedback will not make a good apprentice. In addition to these two sets, there will very likely be a third that is particular to your organization. At The Nerdery, we cultivate our culture very carefully, so it’s critical for us that the apprentices we hire fit our culture well.

Pedagogy

“Pedagogy” means a system of teaching. Developing the tactics for teaching UX design can take time as well, so it’s best to begin focusing on that once recruiting is underway. At The Nerdery, we found that there are four pedagogical components to learning UX design: orientation, observation, practice, and play.

Orientation refers to exposing apprentices to design methods and teaching them the very basics. In observation, apprentices watch experienced designers apply these methods and have the opportunity to ask them about what they did. Once an apprentice learns a method and observes it in use, they are ready to practice it by doing the method themselves on a real project. The final component of our pedagogy is play. Although practice allows apprentices to get a handle on the basics of a method, playing with that method in a safe environment allows them to make the method their own.

Mentorship

Observation and practice comprise the bulk of an apprentice’s experience. Both of these activities rely on close mentorship to be successful. Mentorship is the engine that makes apprenticeship go.

Although mentorship is the most critical component of apprenticeship, it’s also the most time-intensive. This is the biggest barrier an organization must overcome to implement an apprenticeship program. At The Nerdery, we’ve accomplished this by spreading the burden of mentorship across the entire 40-person design team rather than placing it full-time on the shoulders of four designers. Other teams can do this too, though the structure would be different for both smaller and larger teams.

Tracking

The final component of our apprenticeship architecture is tracking. Tracking apprentice progress is largely what gives apprenticeship the rigor that differentiates it from other forms of on-the-job training. We track not only the hours an apprentice spends on a given method but also qualitative feedback from their mentors on their performance. Critical feedback is key to apprentice progress.

We track other things as well, such as feedback about mentors, feedback about the program, and the apprentice’s thoughts and feelings about the program. Tracking allows the program to be flexible, nimble, and responsive to the evolving needs of the apprentices.

Business value, traits, pedagogy, mentorship, and tracking: think about these five things in relation to your organization to build your own custom apprenticeship program. Although this article has only scratched the surface of each, subsequent articles will go into detail.

Part two of this series will cover laying the foundation for apprenticeship: defining its business value and identifying who to hire.

Part three will focus on the instructional design of apprenticeship, pedagogy, mentorship, and tracking.

If you’ve got a design team and you need to grow it, apprenticeship can help you make that happen!

Siri, Chess, and Prostheses

Written by: Sorin Pintilie

Intelligent machines.

There was a time when the mere mention of artificial intelligence sparked constant debate and triggered images of Hollywood creations like HAL 9000. The concept itself is quite controversial; it challenges human thought as Darwin once challenged human origins. But we moved on, and now we carry these intelligent machines in our pockets.

There’s a 38.9% chance you have one, too. Siri, the out-of-sight personal assistant from Apple, delivers an amazing experience. It listens to you, understands you, does what you say, and even talks back to you.

Sounds simple enough for us humans, but these are remarkable achievements for a machine. It has to process language, interpret context, understand intent, and orchestrate multiple services and information sources. To do it, Siri brings together technologies that rely on dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, and service delegation.

Spin back the clock 50 years and none of this was even remotely possible. But just two years after Turing published the first documented idea of intelligent machines, three people were already working on Audrey, the first system capable of speech recognition.

It could only process digits. Spoken by a single voice. With pauses in between. And it occupied a six-foot high relay rack.

Not exactly a marvel of technology by today’s standards. But back then, when computers had only 1 KB of RAM, it was an impressive achievement. More impressive still when you think about how such a system came to be.

It all started with an illusion act

Many elements from very different spheres come together in the story of Siri, and it all starts with a man doing some magic.

Tracing Siri’s ancestry takes us back roughly 250 years, to Austria, when Vienna still had an empress. The story begins with a man known mostly for what was perhaps the most famous illusion in history: the Mechanical Turk, a machine that could supposedly play chess on its own and beat any opponent.

In reality, it was just a wooden cabinet with a life-size, mustache-wearing doll on top and a man inside, playing chess. It tricked people into thinking the machine was intelligent, but the idea itself was enough to intrigue the likes of Napoleon. (He played the Turk. He lost.)

And while the Turk made its creator—Wolfgang von Kempelen—popular, it is another of von Kempelen’s inventions that marks the beginning for Siri’s story.

The first speaking machine was a pretty straightforward concept that tried to simulate the human vocal tract; it had lungs and everything. Nevertheless, it was the first machine that could replicate whole words and sentences, and it was this machine that would set the stage for Audrey.

Chess, the game that made it all possible

Von Kempelen’s speaking machine was the first machine that could replicate human speech. Audrey was the first that could recognize human speech. But Siri is the first that can understand human speech.

Understanding is the unique ability that swings the story back to the Turk. The machine’s connection with chess isn’t random. Chess is more than a game; it’s an entirely mental activity. And it’s a perfect metaphor that would allow for the birth of a new scientific discipline, artificial intelligence.

A machine capable of defeating a human opponent at a mind game is an intelligent machine, by any logical standards—or, at least, that was the premise.

The Turk was the first real image in history of a machine that could be better than us at something, even if it was just an illusion with a man operating it. But ever since, the idea of an intelligent machine has been slowly morphing into physical technologies.

The next obvious stage would seem to be a self-operated machine that could play chess. In 1912, the real thing arrived. It was called El Ajedrecista, and it was the first computer game. Only, without an actual, you know, computer.

Making this happen required a deep understanding of how we think when we play chess.

Every move weaves together an amazing chain of mental processes: Perception transforms the pieces on the board into a series of symbols, and long-term memory overlaps perceptions with previous knowledge. Logical thought then searches for variations, and decision-making is needed for the actual move. (Intrigued like Napoleon? I found Chess Metaphors: Artificial Intelligence and the Human Mind quite useful.)

Move after move, the chess game becomes a sequence of decision-making events governed by strict logical rules. Chess stimulates this logic module in our brain so heavily that the module can be simulated. And it doesn’t take a big imaginative leap to see that thought itself could be simulated, too.

This realization gave way to wonderful theoretical breakthroughs. Concepts like algorithms, recursion, and programming were born. Having to analyze how we think about chess quickly led to computer thinking.

AI: A new, old way of designing experiences

A special group of people made a great imaginative leap. They realized that a game holds the secret into human thought. For people like Edward Feigenbaum, Marvin Minsky, Allen Newell, Herbert Simon, Alan Turing, John von Neumann, and Norbert Wiener—the founders of AI as a scientific discipline—pinpointing all the mental processes that are necessary to generate high-level cognitive activities played a very important role in the development of simulated thought processes through computer programming.

Logic and process alone weren’t enough, though. We expanded our concepts to expert systems, knowledge engineering, neural networks, and so on. The subsequent knowledge-based models of thought are nothing short of amazing. But the real breakthrough came from the opposite kind of approach, one that the father of expert systems, Edward Feigenbaum, called representation. This approach supported the idea that modeling the knowledge of the real world was much too difficult; instead, systems should adapt and respond effectively to real interactions with the world.

This is important because it has finally allowed for the development of a truly human-centered approach to designing systems, an approach initially articulated by Bill Moggridge and one which inspired a major shift in design thinking that we see maturing today.

AI and HCI have been described as having opposite views on how humans and computers should interact. Human-centered computing brings the two together by combining intelligent systems, human-computer interaction, and contextual design. Instead of trying to imitate (or substitute for) the human, the goal is to amplify and extend human capabilities, much as a prosthesis does: not in the sense that it compensates for the specific disabilities of a given individual, but in the sense that it enables us to overcome the biological limitations shared by all of us.

Above all else, a prosthesis needs to fit; otherwise it will be rejected. In the same manner, systems designed to assist, rather than replace, need to be personal and contextual. They need to be intelligent in order to fit.

In terms of actual capabilities, Siri wouldn’t pass a Turing test. But it doesn’t set out to. It doesn’t try to replace our abilities, but rather to extend them.

For example, say you want to go to the best restaurant around. You know you can do that. With the help of technology, you can combine information from different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and your own favorites).

But why would you want to? You want to use technology as a tool, not get immersed in the experience of interacting with it.

Siri delegates everything you don’t want to do. It lets you use technology as it’s supposed to be used: as a tool. By doing so, it becomes a digital prosthesis. As a result, the experience is truly human-centered, built for humans based on real human needs.

Final lessons

The story of Siri is full of great achievements of the human mind. It shows how the power of thought can fuel great technological breakthroughs. And it ends with the same man who started it all: von Kempelen, whose kind of thinking gave birth to the first speaking machine, a truly amazing technological achievement. More importantly, it is that kind of thinking that creates genuine human experiences.

The Turk’s biggest achievement was to challenge how we think about machines. This is the type of thinking that I like to call design thinking.

Yes, Siri still has its shortcomings, starting with the fact that it’s voice-controlled. But the mechanisms behind it are nothing short of amazing. Properly pairing machine intelligence with true contextual awareness is what created the first conversational interface that actually works.

And simply because it works, it marks an important milestone: it becomes a template for all future voice-controlled interactions. Even Google has updated its products to include conversational and contextual interfaces. What Siri did was show the world a bright idea and make it stick.

More importantly, for professionals, the story behind Siri offers valuable lessons in true experience design, vital lessons in times clearly dominated by form instead of content, where an excessive preoccupation with formalism can impede further developments.

Experience design is more than numbers, boxes, and diagrams. It’s emotional, invisible at the time of inception, innovative, developed intelligently, and deeply contextual. It’s a complex multiplex, feeding on a variety of disciplines: neuroscience, psychology, linguistics, logic, biology, social sciences, computer science, software engineering, mathematics, and philosophy.

Much in the same way that Siri forges new tools from old technologies, good design feeds on AI for the raw materials to conquer human experience. To add function to experience. To add personality.

“Avoid fields. Jump fences.

“Disciplinary boundaries and regulatory regimes are attempts to control the wilding of creative life. They are often understandable efforts to order what are manifold, complex, evolutionary processes. Our job is to jump the fences and cross the fields.”

—Bruce Mau

We Don’t Research. We Build.

Written by: Dan Turner

The following is a composite of experiences I’ve had in the last year when talking with startups. Some dialog is paraphrased, some is verbatim, but I’ve tried to keep it as true as possible and not skew it towards anyone’s advantage or disadvantage.

As professionals in the user-centered design world, we are trained and inclined to think of product design as relying on a solid knowledge, frequently tested, of our potential users, their real-life needs and habits.

We’ve seen the return on investment in taking the time to observe users in their daily lives, in taking our ideas as hypotheses to be tested. But the founders and business people we often interview with have been trained in a different worldview, one in which their ideas are sprung fully formed like Athena from the brow of Zeus. This produces a tension when we come to demonstrate our value to their companies, their products, and their vision. We want to test; they want to build. Is there a way we can better talk and work together?

Most of my interactions with these startups were job interviews or consulting with an eye toward a more permanent position; the companies I spoke with ranged from “I’m a serial entrepreneur who wants to do something” to recent B-school grads in accelerator programs such as SkyDeck, to people I’ve met through networking events such as Hackers & Founders.

In these conversations, I tried to bring the good news of the value of user experience and user research but ran into a build-first mentality that not only depreciates the field but also sets the startup on a road to failure. Our questions of “What are the user needs?” are answered with “I know what I want.” We’re told to forget our processes and expertise and just build.

Can we? Should we? Or how can we make room for good UXD practices in this culture?

“I did the hard work of the idea; you just need to build it”

Over the past two years, I’ve been lucky to find enough academic research and contract work that I can afford to be picky about full-time employment (hinging on the mission and public-good component of potential employers). But self-education, the freelance “UX Team of One,” and Twitter conversations can’t really match the learning and practice potential of working with others, so I keep looking for full-time UX opportunities.

This has lately, by happenstance, meant startups in the San Francisco Bay area. So I’ve been talking to a lot of founders/want-to-be-founders/entrepreneurs (as they describe themselves).

But I keep running into the build-first mentality, and it’s often a brick wall. I’m not saying I always know best, but the disconnect in worldviews is a huge impediment to doing the work that I know can help a startup be better at reaching its goals, so that it has a fighting chance of being in the 10-20% that doesn’t end up on the dust heap of history.

“Build first” plays out with brutal regularity. The founders have an idea, which they see as the hard part; I’ve actually had people say, “You just need to implement my idea.” They have heard about something called “UX” but see user experience design as but a simple implementation of their idea.

As a result, the meaning of both the U and the X get glossed over.

The started-up startup

We’ll start with the amalgam of a startup that had already made it into an accelerator program. A round of funding, a web site, an iOS app, an origin story on (as you’d expect) TechCrunch.

It began with a proof of concept: A giant wall, Photoshopped onto a baseball stadium, of comments posted by the app’s users. The idea was basically to turn commercial spaces into the comments thread below any HuffPo story (granted, a way to place more advertising in front of people). The company was composed of the founder, fresh from B-school; a technical lead also just out of school; a few engineers; and sales/marketing, which was already pitching to companies.

The company was juggling both the mobile and web apps and shooting for feature-complete from the word go. There were obvious issues, though, such as neither app actually working and the lack of any existing comment walls or even any users; they were trying to build a house of cards with cards yet to be drawn.

In talking with the tech lead, I saw that they were aware of some issues (crashes, “it’s not elegant enough”) but didn’t see others (the web and mobile app having no consistent visual metaphors and interaction flows, typos, dead ends, and the like). To their credit, they wanted something better than what they had. Hence, hiring someone to do this “UX thing.” But what did they think UX was?

I had questions about the users. How did they differ from the customers–the locations that would host walls, which would generate revenue by serving ads to the users who posted comments?

I had questions about the company. What was their business process? What had they done so far?

This was, I thought, part of what being interviewed for a UX position would entail–showing how I’d go about thinking about the process.

I was more than ready to listen and learn; if I were to be a good fit, I’d be invested in making the product successful as well as developing a good experience for users. I was also prepared with some basic content strategy advice; suggestions about building a content strategy process seemed nicer than pointing out all the poor grammar and typos.

Soon, I was meeting with the founder. He talked about how a B-school professor had liked his idea and helped him get funding. I asked about the users. He responded by talking about selling to customers.

When he asked if I had questions, I asked, “What problem does this solve, for whom, and how do you know this?” It’s my standard question of any new project, and, I was learning, also a good gauge of where companies were in their process. He said he didn’t understand. He said that he had financial backing, so that was proof that there was a market for the app. What they wanted in a UX hire, he said, was someone to make what they had prettier, squash bugs, and help sell.

I got a bad feeling at that point; the founder dismissed the very idea of user research as distracting and taking time away from building his vision. Then I started talking about getting what they had in front of users, testing the hypotheses of the product, iterating the design based on this: all basic UX and Lean (and Lean UX!) to boot, at least to someone versed in the language and processes of both.

This, too, the founder saw as worse than worthless. He said it took resources away from selling and coding, and he thought that testing with users could leak the idea to competitors. So, no user research, no usability testing, no iteration of the design and product.

(A note on one of the startups that’s part of this amalgam: As of this writing, there has been neither news nor updates to the company site since mid-2012, and though the app is still on the iTunes Store, it has too few reviews to have a public rating. This after receiving $1.2 million in seed funding in early 2012.)

The pre-start startup

I’ve also spoken with founders at earlier stages of starting up. One had been in marketing at large tech companies and wanted to combine publishing with social media. Another wrote me that they wanted to build an API for buying things online. I chatted with a B-school student who thought he’d invented the concept of jitneys (long story) and an economist who wanted to do something, though he wasn’t sure what, in the edu tech space. What they all had in common was a build-first mission. When I unpacked this, it became obvious that what they all meant was, “we don’t do research here.”

Like the company amalgam mentioned above, they all pushed back against suggestions to get out of the building (™ Steve Blank) to test their ideas against real users. Anything other than coding, or even starting on the visual design of their products, was seen as taking time away from delivering their ideas, which they were sure of (I heard a lot of “I took the class” and “we know the market” here).

And their ideas might end up being good ones–I can’t say. They seem largely well-intentioned, nice people. But when talking with them about how to make their product or service vital for users and therefore more likely to be a success, it soon becomes clear that what UX professionals see as vital tools and processes in helping create great experiences are seen quite differently by potential employers, to the point that even mentioning user research gets you shown the door. Politely, but still.

I’d like to bring up here the idea that we, as UX people, have perhaps contributed to the problem. The field is young and protean, so the message of “what is UX?” can get garbled even if there were a good, concise answer. Also, in the past, user research has indeed been long and expensive, resulting in huge requirements documents and so on, which the Lean UX movement is a reaction to. So nobody’s totally innocent, to be sure. But that’s another article in the making (send positive votes to the editors).

One (anonymized) quote:

“Yep, blind building is a real disaster and time waste… I’ve seen huge brands go down that path… I have identified a great proof-of-concept market and have buy-in from some key players. My most immediate need, however, is a set of great product comps to help investors understand how the experience would work and what it might look like. I’ve actually done a really rough set of comps on my own, but while I’m a serious design snob, I am also terrible designer…”

So: Blind building is a real disaster, but she’s sketched out comps and just wants someone to make them look better designed. Perhaps she saw “buy-in from some key players” as user research?

We had an extended exchange in which I proposed lightweight, minimum-viable-product prototypes to test her hypotheses with potential users. She objected: she was afraid her idea would get out; testing small parts of the idea was meaningless; she didn’t have time; it only mattered what the “players” thought; and she had never seen this done at the companies she had worked at (in marketing).

Besides, her funding process was to show comps of how her idea would work to these key players, and testing would only appear to reduce confidence in her idea. (Later that week, I heard someone say how “demonstrating confidence” was the key ingredient in a successful Y Combinator application.)

With her permission, I sent her a reading list including Steve Blank, Erika Hall, Bill Buxton, Eric Ries, and Jeff Gothelf. I still haven’t heard back.

Another (anonymized) quote:

“We’re looking for somebody who’s passionate about UI/UX to work with us on delivering this interface.

“Our industry specifics make us a game of throwing ideas around with stakeholders, seeing what sticks and building it as fast as possible. Speed unfortunately trumps excellence but all products consolidate while moving in the right direction.

“We certainly have the right direction business-wise and currently need to upgrade our interface. We require UX consulting on eliminating user difficulty in the process of buying, as well as an actual design for this.”

So: To him, it’s all about implementing an interface. Which, to him, is just smoothing user flows and, you know, coming up with a design. Frankly, I’m not sure how one could do this well, or with a user-centered ethic, without researching and interacting with potential users. I’m also not sure how to read his “upgrade our interface”; is that just picking better colors and shapes, in the absence of actual research and testing on whether it works well for users? That doesn’t strike me as useful, user-centric design. (During the interview process at Mozilla, I was asked the excellent question of how I’d distinguish art and design; I’m not sure I nailed the answer, but I suspect there’s more to design than picking colors and shapes.)

And I wasn’t even sure he was receptive to the idea of users qua users in the first place. Before this exchange, when he described his business model, I pointed out that his users and his customers were two different sets of people, and that this can mean certain things from a design perspective. Given that his response was that they had been “throwing ideas around with stakeholders,” I gathered that his concept of testing with users was seeing what his funders liked. That did not bode well for actual user-centered design processes.

When I asked how they’d arrived at the current user flows and how they knew they were or weren’t good, he said that they internally step through them and show them to the investors (neither population is, again, the actual user). He was adamant both that talking to users would slow them down from building, and that because they were smart business people, they know they’re going in the right direction. It was at this point I thought that he and I were not speaking the same language.

I referred him to a visual designer I know who could do an excellent job.

I do not have the answers on how to bridge this fundamental gap between worldviews and processes. A good UX professional knows the value of user research and wants to bring that value to any company he or she joins. But though we can quote Blank, though we can show case studies, though we can show how a Gothelfian Lean UX process could be integrated into a hectic build schedule–when all this experience runs into a “build first” mentality, the experience and knowledge lose. At least in my experience. What is to be done?

An Open Letter to Project Managers

Written by: Michael Lai

Dear Project Managers,

It has been a very enjoyable experience working with everyone over the last couple of months and sharing our ideas on UX design. The various discussions about user interface, product usability, and user engagement have been an enlightening experience for me as well, and it is very positive to see that everyone involved in the product thinks so highly about improving the user experience.

In an ideal world with unlimited time and resources, I think the best way to address UX issues is to perform the same tasks as the user under the same environment and pressure–even if we’ve built something that’s never been done before–because then we would understand the exact problems they have to solve and hopefully come up with the best solution.

User-centric design principles, however, do not replace the fact-finding mission we all need to take as UX designers; they merely serve as a starting point for making design decisions. We are not here to critique or provide expert opinions, but we are here to help ask the right questions and get the right answers from the users.

So, let’s talk for a minute about this thing we just launched.

What went wrong?

When you ask me what the users think without giving me time or money for research, you are in fact asking me what I think the users think.

When you ask me to apply standard guidelines and industry best practices, you are asking me to ignore what our users have to say and to treat them like everyone else.

If our users are feeling a little bit neglected, it is because we’ve allowed ourselves to think we know better than they do.

Standards and guidelines abound, but not all of them apply. You have to know the rules first to know when to break them. These rules then need to be combined with as much knowledge about our users as possible, so that the design decisions we make are actually in their best interest.

Finally, we need to test and validate these assumptions so we can correct any misconceptions and continue to improve the product.

Somehow, Scrum masters have convinced senior managers that standing in a circle in front of a board full of sticky notes constitutes a meeting and that playing planning poker sets the work schedule and priorities, yet any suggestion of UX designers talking with end users seems a waste of time and effort, not worth considering. If we aren’t given the right tools and resources to do our work, how can we be expected to deliver the best outcomes?

UX practitioners are not mind readers, and even if we do manage to guess right once, you can be assured that users won’t stay the same forever.

What could have gone right?

The more time you can spend thinking about UX and talking about it, the less time you will spend on fixing your products later.

If improving the user experience is something that the organization as a whole thinks is important, then everyone should be involved in UX design, just as the UX designer interacts with various people within the organization to come up with solutions.

Critical to improving an organization’s UX competency is removing the ‘black box’ view of UX design. There are definitely technical skills and knowledge involved, but I believe the most important skill for a UX practitioner is empathy, not Photoshop or CSS or how to read heatmap reports–as handy as those skills are to have and despite what many of the recruitment agencies would have you believe.

Certain aspects of UX design are familiar to all of us, in the visible and tangible part of the user experience. The user interface has a very visual and often subjective element to its design, but as a graphic designer can tell you, there are definite components (color, typography, layout, and the like) that are used in its creation. User interaction has a more technical and logical focus to its design because the nature of programming is modular and systematic.

Where I think people struggle is with the less accessible aspects of UX design, like user engagement with the product or the connection between the user experience of the product and the corporate brand and image. An organization may have many channels of communication with its end users, but the messages spoken by the business unit can be very different from those of the product development team or customer support team.

Within the general scope of UX design there are different ways to involve the users: generating new ideas for product features, getting feedback on new releases/betas, running conferences or webinars, conducting research workshops, and so on, and it’s not as if organizations aren’t doing some of this already.

However worthwhile these activities are in themselves, if we make our decisions based on just one or two of them–or worse, carry any of them out but don’t act on the results–we’ve missed the opportunity to improve the user experience.

People who make complaints may just want attention–or perhaps they have been suffering for so long they can no longer deal with this unusable product. How do we know if all the complaints are filtering through customer support, and do fewer support tickets necessarily mean greater customer satisfaction?

Where to from here?

If we don’t like a particular color, we know how to change it. If a particular technology is incompatible, we can modify it or find an alternative.

But if we want to influence the behavior of our users, where do we start? Like any complex problem, the best way is to break the problem down into smaller and more manageable pieces.

If we want to make an impact on our product design, how do we go about it in the right manner? I think reversing some of the current attitudes toward UX design is a good starting point, because clearly the status quo is not creating the appropriate environment and culture for a UX-focused organization.

Don’t make the only UX designer in your company the UX team, don’t restrict the scope of UX design to the user interface alone, and don’t hide the users from the UX designers.

Do spend the time and resources to implement company-wide UX strategies, do try and understand UX design a little bit better, and do it as soon as possible.

But if we haven’t done anything yet, is it too late? Like everything else worth doing, it is never too late. That said, doing UX poorly is probably not much better than not doing it at all. Acting on good assumptions with caution beats acting on bad assumptions with confidence. A good UX designer knows that nothing about the user should be assumed or taken for granted, and we always need to be on our toes, because just like the product, the user may see the need for change–even more readily than we do.

Having said that, if you don’t start taking small steps now, the challenge will become even greater. Make everything you do in UX design a learning experience that helps to reduce the problem.

If I haven’t lost you yet, then I think we are ready to talk some details.

Remember, there are a lot of standards and guidelines already, so we don’t need to reinvent the wheel–we just need to work out what works for us and what we can disregard.

As with any problem-solving process, we have to go through an iterative cycle of observing, hypothesizing, and testing until we arrive at the optimal solution. I emphasize the word optimal, because there isn’t necessarily a right or wrong answer, but there may be a best solution given the circumstances (time, resources, assumptions…).

For those of you who have gone through the pain (and joy) of implementing Agile methodologies, I think you will agree that there is no out-of-the-box solution guaranteed to work for any organization. You can certainly embrace the philosophy and principles, but how you adapt them to work for your team will differ considerably depending on how you define the goals and objectives you want to achieve, not to mention the type of teams that you work with.

Remember, I am not here to critique or provide expert opinions, but to help you ask the right questions and get the right answers from the users. What UX means for the organization is up to you to decide, but if I have managed to spur you into some action, then I will have considered my job complete.

Thank you for your time.

Clicking Fast and Slow

Written by: Paul Matthews

Through social psychology and cognitive science, we now know a great deal about our own frailties in the way that we seek, use, and understand information and data. On the web, user interface design may work to either exacerbate or counteract these biases. This article will give a brief overview of the science, then look at ways that design and implementation can be employed to support better judgements.

Fast and slow cognitive systems: How we think

If you are even remotely interested in psychology, you should read (if you haven’t already) Daniel Kahneman’s masterwork “Thinking, Fast and Slow.”1 In it, he brings together a mass of findings from his own and others’ research into human psychology.

The central thesis is that there are two distinct cognitive systems: a fast, heuristic-based and parallel system, good at pattern recognition and “gut reaction” judgements, and a slower, serial, and deliberative system which engages more of the processing power of the brain.

We can sometimes be too reliant on the “fast” system, leading us to make errors in distinguishing signal from noise. We may incorrectly accept hypotheses on a topic, and we can be quite bad at judging probabilities. In some cases we overestimate our own ability to exert control over events.

The way of the web: What we’re confronted with

We are increasingly accustomed to using socially-oriented web applications, and many social features are high on the requirements lists of new web projects. Because of this, we need to be more aware of the way people use social interface cues and how or when these can support good decision-making. What we do know is that overreliance on some cues may lead to suboptimal outcomes.

Social and informational biases

Work with ecommerce ratings and reviews has noted the “bandwagon” effect, where an item with a large number of reviews tends to be preferred, often when there is little knowledge of where the positive reviews come from.2 A similar phenomenon is the “Matthew” effect (“whoever has, shall be given more”), where items or users with a large number of up-votes tend to attract more up-votes, regardless of the quality of the item itself.3
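The Matthew effect is essentially preferential attachment: the chance an item receives the next up-vote is proportional to the votes it already has. The toy simulation below is an illustrative sketch of that dynamic (my own, not taken from the cited studies); it shows how a random early lead snowballs even though every item is identical in quality.

```python
import random

def simulate_upvotes(n_items=20, n_voters=5000, seed=42):
    """Toy model of the 'Matthew' effect: each voter up-votes an item
    with probability proportional to its current vote count, ignoring
    intrinsic quality entirely. A small early lead tends to snowball."""
    random.seed(seed)
    votes = [1] * n_items  # every item starts with one seed vote
    for _ in range(n_voters):
        total = sum(votes)
        r = random.uniform(0, total)
        cumulative = 0
        for i, v in enumerate(votes):
            cumulative += v
            if r <= cumulative:
                votes[i] += 1
                break
    return sorted(votes, reverse=True)

counts = simulate_upvotes()
# The top item typically ends with several times the median item's
# votes, despite all items being indistinguishable at the start.
```

Running this repeatedly with different seeds gives wildly different winners, which is exactly the point: which item “has” early on is largely luck.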

Coupled with this is an “authority” effect, whereby any apparent cue to authenticity or expertise on the part of the publisher is quickly accepted as a signal of credibility. But users may be poor at distinguishing genuine from phony authority cues, and both kinds may be overridden by the stronger bandwagon effect.

A further informational bias known as the “filter bubble” phenomenon has been much publicized and can be examined through user behavior or simple link patterns. Studies of linking between partisan political blogs, for instance, may show few links between the blogs of different political parties. The same patterns are true in a host of topic areas. Our very portals into information, such as the first page of a Google search, may only present the most prevalent media view on a topic and lack the balance of alternative but widely-held views.4

Extending credibility and capability through the UI (Correcting for “fast” cognitive bias)

Some interesting projects have started to look at interface “nudges” which may encourage good information practice on the part of the user. One example is the use of real-time usage data (“x other users have been viewing this for xx seconds”), which may–by harnessing social identity–extend the time users spend with an item of content, since there is clear evidence of others’ behavior.

Another finding from interface research is that the way the user’s progress is presented can influence their willingness to entertain different hypotheses or reject currently held ones.5

Screen grab from ConsiderIt showing empty arguments

The mechanism at work here may be similar to that found in a study of the deliberative online application ConsiderIt. Here, there was a suggestion that users will seek balance when their progress is clearly indicated to have neglected a particular side of a debate–human nature abhors an empty box!6

In online reviews, much work is going on to detect and remove spammers and gamers and provide better quality heuristic cues. Amazon now shows verified reviews; any way that the qualification of a reviewer can be validated helps prevent the review count from misleading.

Screen grab showing an Amazon review.

To improve quality in collaborative filtering systems, it is important to understand that early postings have a temporal advantage. Later postings may be more considered, better argued, and evidence-based, but fail to make the big time because they never gain the collective attention and early upvotes.
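One common algorithmic countermeasure to this temporal advantage is time-decay ranking, of the kind popularized by Hacker News: an item’s score is its vote count divided by a power of its age, so old items steadily sink. The function below is a simplified sketch of that widely described formula, not the exact algorithm of any particular site; the `gravity` exponent is a tunable assumption.

```python
def time_decay_score(votes, age_hours, gravity=1.8):
    """Hacker News-style ranking sketch: (votes - 1) / (age + 2)^gravity.
    Dividing by a power of age lets a newer item with fewer votes
    outrank an older, higher-voted one; gravity controls how fast
    old items fall down the list."""
    return (votes - 1) / ((age_hours + 2) ** gravity)

# An hour-old post with 20 votes outranks a day-old post with 100:
new_post = time_decay_score(votes=20, age_hours=1)
old_post = time_decay_score(votes=100, age_hours=24)
# new_post > old_post
```

Tuning `gravity` is a design decision about how much weight to give recency over accumulated social proof: higher values surface “rapid risers” faster at the cost of stability in the rankings.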

In any sort of collaborative resource, ways to highlight good quality new entries and rapid risers are important, whether this is done algorithmically or through interface cues. It may also be important to encourage users to contribute to seemingly “old” items, thereby keeping them fresh or taking account of new developments and alternatives. On Stack Overflow, for instance, badges exist to encourage users to contribute to old threads:

Screen grab from Stack Overflow showing a call to action.

Designing smarter rather than simpler

We know that well-presented content and organized design makes information appear more credible. Unfortunately, this can also be true when the content itself is of low quality.

Interaction time and engagement may increase when information is slightly harder to decipher or digest. This suggests that simplifying content is not always desirable if we are designing for understanding over and above mere speedy consumption.

Sometimes, perhaps out of fear of high bounce rates, we ignore the fact that we can afford to lose a percentage of users if those who stick around are motivated to really engage with our content. In that case, the level of detail needed to support this deeper interaction has to be there.

Familiarity breeds understanding

Transparency about the social and technical mechanics of an interface is very important. “Black boxing” user reputation or content scoring, for instance, makes it hard to judge how useful either should be to decision-making. Hints and help text can be used to educate users about the mechanics behind the interface. In the Amazon example above, for instance, a verified purchase is defined separately, but not linked from the label in the review itself.

Where a system is abused, users should be able to understand why and how it is happening and undo anything they may have inadvertently done to invite it. In the case of the “like farming” dark pattern on Facebook, it took a third party to explain how to undo rogue likes–information that should have been available to all users.

There is already evidence that expert users become more savvy in their judgement through experience. Studies of Twitter profiles have, for instance, noted a “Goldilocks” effect, where excessively high or low follower/following numbers are treated with suspicion, but numbers more in the middle are seen as more convincing.7 Users have come to associate such profiles with more meaningful and valued content.

In conclusion: Do make me think, sometimes

In dealing with information overload, we have evolved a set of useful social and algorithmic interface design patterns. We now need to understand how these can be tweaked or applied more selectively to improve both the quality of the user experience and the quality of the interaction outcomes themselves. Where possible, the power of heuristics may be harnessed to guide the user rapidly from A to B. But in some cases this is undesirable, and we should look instead at how to engage more of the mind’s deliberative power.

Do you have examples of interface innovations that are designed either to encourage “slow” engagement and deeper consideration of content, or to improve on the quality of any “fast” heuristic cues? Let me know through the comments.

References

1 Kahneman D. Thinking, fast and slow. 1st ed. New York: Farrar, Straus and Giroux; 2011.

2 Sundar SS, Xu Q, Oeldorf-Hirsch A. Authority vs. peer: how interface cues influence users. Proceedings of CHI 2009. New York, NY, USA: ACM; 2009.

3 Paul SA, Hong L, Chi EH. Who is Authoritative? Understanding Reputation Mechanisms in Quora. 2012. http://arxiv.org/abs/1204.3724.

4 Simpson TW. Evaluating Google as an Epistemic Tool. Metaphilosophy 2012;43(4):426-445.

5 Jianu R, Laidlaw D. An evaluation of how small user interface changes can improve scientists’ analytic strategies. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems New York, NY, USA: ACM; 2012.

6 Kriplean T, Morgan J, Freelon D, Borning A, Bennett L. Supporting Reflective Public Thought with ConsiderIt. Proceedings of CSCW 2012. New York, NY, USA: ACM; 2012.

7 Westerman D, Spence PR, Van Der Heide B. A social network as information: The effect of system generated reports of connectedness on credibility on Twitter. Computers in Human Behavior 2012;28(1):199-206.