Global pandemics are a challenge for everyone. Customers look to institutions and businesses they already trust for answers. Meanwhile, companies must scramble to figure out the best way to maintain excellent Customer Experience (CX) during unprecedented times.
No matter what the economy does, you can take some proactive steps to ensure your customers remain loyal to your brand. Creating an excellent CX takes dedication and focus, especially during a global pandemic.
This article investigates content recommender systems. Because Netflix is probably the best known recommendation system and numerous articles have been published about their system, I will concentrate on their content recommendation mechanism as representative of the type.
I will show that the Netflix mechanism contains characteristics of updated theories of emotion—mainly constructed emotions theory—but it still lacks several essential components.
The lack of these components can explain some inaccuracies in Netflix recommendations and can suggest broader implications.
An important part of any user experience department should be a consistent outreach effort to users both familiar and unfamiliar with your product. Yet it is hard to establish and sustain a continued voice amid the busyness of our schedules.
Recruiting, screening, and scheduling daily or weekly one-on-one walkthroughs can be daunting for someone in a small department who has more than just user research responsibilities, and the investment of time eventually outweighs the returns as both the number of participants and the size of the company grow.
This article is targeted at user experience practitioners at small- to mid-size companies who want to incorporate a component of user research into their workflow.
It first outlines a point of advocacy around why it is important to build user research into a company's ethos from the very start and states why relying upon standard analytics packages is not enough. The article then addresses some of the challenges around automating, scaling, documenting, and sharing these efforts as your user base (hopefully) increases.
Finally, the article proposes a methodology that allows for an adjustable balance between a department's user research and product design, and highlights the evolution of trends, best practices, and common pitfalls within the user research industry, especially as they relate to SaaS-based products.
Why conduct usability sessions?
User research is imperative to the success and prioritization of any software application, or any product, for that matter. Research should be established as an ongoing cycle, one that is woven into the fabric of the company; it should never drop off, nor be simply 'tacked on' as acceptance testing after launch. By establishing a constant stream of unbiased opinions and open lines of communication immune to politics and ever-shifting strategies, research keeps design and development efforts grounded in what should already be the application's first priority: the user.
A primary benefit of working with SaaS products is that you're able to gain feedback in real time when any feature is changed. You don't have to worry about obsolete versions or download packages; web-based software enables you to change direction quickly. Combining an ongoing research effort with popular software development methods such as agile or waterfall allows for an immediate response when issues with an application's usability are found.
Different from analytics
SaaS products are unique in that they don't require the same type of in-product tracking. Metrics such as page views or bounce rates are largely irrelevant, because the user could be spending their entire session configuring the functions of a single feature on a single page.
For example, for our application here at Loggly, the user views an average of ~2 pages (predominantly login and then search) and spends, on average, 8x as long on search as on any other page. Progression is made within the page-level functions, not among multiple pages within the application's structure.
JavaScript-heavy applications don't have the same URL and tree structure that content-heavy sites are built around, but instead make calls to different states of the application from within the same page.
Say your analytics package indicates that something is wrong with the setup flow or configuration screen, but you don't yet have a good sense of where in the process users are getting stuck.
Perhaps a button is getting click after click because it is confusing and unresponsive, not because it's useful. Trying to solve this exclusively with an analytics package will pale in comparison to the feedback you'll get from a single, candid user who hits the wall. As discussed later in this article, with screen sharing you're able to see the context in which the user is trying to achieve a specific task; the 'why' behind their confusion becomes more apparent than just the 'what' they are clicking on.
Determining a testing audience
The first component of defining any research effort should be deciding who you want to talk to. Ideally, you'll have a mix of new users and veterans who can provide a well-rounded feedback loop: initial impressions of your application as well as historical perspective on its evolution and the shortcomings found after repeated use. But not all companies have this luxury.
Once in the door
Focus first on the initial steps the user has to take when interacting with your application. It seems obvious, but if these steps are not completed with maximum efficiency, the user will never progress into more advanced features.
Increasing the effectiveness of the flow through set-up and configuration, and properly defining a measure of activation, will pay dividends to all areas of the application. Activation should be a metric that is tested, measured, and monitored closely, as it functions as a type of internal bounce rate. Ensuring that the top of the funnel is sound for the majority of application users will improve usage further down the road, in the deeper, buried interactions.
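To make the idea concrete, here is a minimal sketch of activation treated as an internal bounce rate. The event names and the activation criterion are hypothetical stand-ins for whatever your own analytics pipeline records:

```typescript
// Minimal sketch: activation rate as an "internal bounce rate."
// The fields and the activation criterion are hypothetical; substitute
// whatever events your own analytics pipeline actually records.
interface UserEvents {
  userId: string;
  completedSetup: boolean;
  configuredFirstFeature: boolean;
}

function activationRate(users: UserEvents[]): number {
  if (users.length === 0) return 0;
  const activated = users.filter(
    (u) => u.completedSetup && u.configuredFirstFeature
  ).length;
  return activated / users.length;
}

// 1 - activationRate(users) then behaves like a bounce rate for the
// top of the funnel: the share of users who never activate.
```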
These advanced features should also be tracked and measured, with correlations that start to paint a profile of conversion. Some companies define conversion as free-to-paid; others do so in a more viral sense, defining conversion as someone who has shared on social media or similar.
As you start to itemize these important features, you'll get a better sense of the usage profile you're trying to point the user toward. For example, adding a listing record or customizing a page might match the profile of someone who is primed for repeat visitation, someone who has created utility and a lasting connection and is ultimately ready to convert.
Avoiding overlap
If there is a focus on recruiting participants who are newly signed-up users, you'll likely overlap with outbound sales efforts. Because your company's sales and marketing funnel tries as hard as possible to convert trial users to paid, or paid users to upgrade, the company's priority will likely be on conversion, not research.
Further, if a researcher reaches out for usability sessions at this point, from the user's perspective (especially for those deemed potential high-value customers) it would mean different prompts for different conversations with different people from various groups within your company, all competing for spots on their calendar. This gives a frenetic impression of your company and should be avoided.
In the case of a SaaS product, the sales team has sometimes already made contact with potential customers, and many of these sales discussions involve demonstrations built around populated, best-case scenarios of your product (which showcase the full feature set).
As a result, you may find the participant has been able to 'peek behind the curtain' by watching the sales team give these demonstrations, giving them an unfair head start before finally trying the product themselves. For the inexperienced user, your goal is to capture the genuine instinct of the uninitiated, not of those who have seen the 'happy path' and are trying to retrace the steps to that fully-populated view.
To make sure you’re not bumping heads with the sales and conversion team, ask if you can take their castoffs–the customers they don’t think will convert. You can pull these from their CRM application and automate personalized emails asking for their time. I’ll outline this method in further detail in the section following, because it pertains to the veteran users as well.
Conferences are a great way to survey new and existing users.
As described in a previous post, guerrilla testing at conferences is a great way of discovering what gets seen and what parts of the interface or concept get ignored. These participants are great providers of honest, unbiased feedback and haven't been exposed to the product beyond some initial impressions of the concept.
Desiring the messy room
But what about the users that have been using your product for months now, those who have skin in the game and have already put their sweat and dollars behind customizing their experience? Surveying these participants allows us to see both where they've found utility and what areas need to be expanded upon. Surveying only the uninitiated won't surface the nagging functional roadblocks that are found only after repeated use. These are the participants who will provide the most useful feedback, in sessions where you can observe the environment they've created for themselves: the 'messy room.'
To make an observational research analogy, a messy room is more telling of the occupant's personality than an empty one. Given your product's limitations, how has the participant been forced to find workarounds? Despite these workarounds, they've continued to use the product, regardless of how we expected them to use it, and the two can be strikingly different.
Example of a feedback form, initiated via email. User is able to schedule a 1:1 screensharing session on the confirmation page.
Automated recruitment
Find your friendly marketing representative or sales engineer at your company (or just roll your own) and discuss with them the best way to integrate a user experience outreach email into the company's post-funnel strategy. Post-funnel here means after the trial period has long since expired and the user is either comfortable in their freemium state or fully paid up.
As mentioned earlier, you can also harvest leads from the top of the funnel in the discarded CRM leads. However, you'll likely have a greater percentage of sessions with users who are misfires: those who are indifferent or only just poking around the app, without yet a full understanding of what it might do. Thankfully, the opt-in approach to participation filters this out for the most part.
Focusing again on the recruitment of veteran, experienced users, another, more complex scenario would be to trigger this UX outreach email once a specific set of features has been used, giving off the desired signature of an advanced, informed user.
From a purely tenure-based perspective, six months of paid, active use should be enough time for users to establish a relationship with a piece of software, whether they love it or hate it. If there is enough insight into the analytics side of the sales process, it would behoove you to also make sure that the user has had a minimum number of logins across those six months (or however long you allow your users to mature).
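As a rough sketch of that filter, assuming your CRM export can be reduced to one record per user (the field names and thresholds here are invented for illustration):

```typescript
// Sketch of the "mature user" filter described above. Field names are
// hypothetical; map them onto whatever your CRM or analytics export provides.
interface CrmRecord {
  email: string;
  planStatus: "paid" | "freemium" | "trial";
  firstPaidDate: Date;
  loginCount: number;
}

const SIX_MONTHS_MS = 1000 * 60 * 60 * 24 * 182;
const MIN_LOGINS = 20; // arbitrary threshold; tune to your product

function matureUsers(records: CrmRecord[], now = new Date()): CrmRecord[] {
  return records.filter(
    (r) =>
      r.planStatus === "paid" &&
      now.getTime() - r.firstPaidDate.getTime() >= SIX_MONTHS_MS &&
      r.loginCount >= MIN_LOGINS
  );
}
```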
Outreach emails triggered through the CRM should empower the recipient to make the experience of the product better, both for themselves and their fellow customers. Netflix does a great job of this by continually asking about the streaming quality or any delays around arrival times of their product.
I also recommend asking the users a couple of quantitative and qualitative questions, as these are metrics you should be tracking for your greater UX efforts already. These questions follow the guidelines of general SUS (System Usability Scale) practices that have been around for decades. Make the questions general enough that they can be re-used and compared going forward, without fear of needing to move the goalposts when features or company priorities change.
A peek into an active user’s work environment.
When engineering this survey, be sure to track which tier of customer is filling it out, because both their experience and their expectations could be wildly different. Remember also to capture the user's email address as a hidden field so you can cross-reference it against any CRM or analytics packages that are already identifying existing customers.
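One lightweight way to populate that hidden field is to prefill it from the outreach link itself. A sketch, assuming you control the survey URL and append the address as a query parameter (the field name is hypothetical):

```typescript
// Sketch: prefill a hidden email field from the outreach link so survey
// rows can be joined back to CRM records. Assumes the outreach email links
// to the survey as ...?email=user@example.com, a URL scheme you control.
const params = new URLSearchParams(window.location.search);
const email = params.get("email") ?? "";

const hidden = document.createElement("input");
hidden.type = "hidden";
hidden.name = "respondent_email"; // hypothetical field name
hidden.value = email;
document.querySelector("form")?.appendChild(hidden);
```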
Setting boundaries
It depends on the complexity of your product, but typically 20-30 minutes is enough time to cover at least the main areas of function. Any longer, and you might encounter people not wanting to fit an entire hour block into their schedule. If these recorded sessions are kept to just a half-hour, I find that $25 is sufficient compensation for the duration, but your results may certainly vary.
In any type of session, do reiterate that this is neither a sales call nor a support call; you're researching how to make the product better. However, you should be comfortable knowing when to avoid (or sometimes suggest) workarounds to optimize the participant's experience, giving them greater value of use.
Tools of the trade
To implement the questionnaire, I hacked the HTML/CSS from a Google Form to exist as a self-hosted page while still pushing results, via the matching form and input IDs, to the backing Google Spreadsheet.
There are a few tutorials that explain how to retain your branding while using Google's services. I went through the trouble so that I could share the URL of either the form or the raw results with anyone, without the need to create an account or log in. As we discuss the sharing component of these user research efforts, this will become more important. Although closed systems like SurveyMonkey or Wufoo are easy to get up and running, the extensibility of a raw, hosted result set does not compare.
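In outline, the trick looks something like the sketch below: the self-hosted page posts to the form's public formResponse endpoint using the same entry.* input names Google generated. FORM_ID and the entry IDs are placeholders; copy the real values from your own form's markup.

```typescript
// Sketch: submit answers from a self-hosted page to a Google Form backend
// so responses still land in the linked spreadsheet. FORM_ID and the
// entry.* keys are placeholders lifted from your own form's HTML.
async function submitToGoogleForm(answers: Record<string, string>): Promise<void> {
  const FORM_ID = "YOUR_FORM_ID";
  await fetch(`https://docs.google.com/forms/d/e/${FORM_ID}/formResponse`, {
    method: "POST",
    mode: "no-cors", // Google doesn't send CORS headers; fire and forget
    body: new URLSearchParams(answers),
  });
}

// Usage: keys must match the generated input names on the original form.
// submitToGoogleForm({ "entry.1234567": "Very easy to use" });
```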
Insert a prompt at the end of the questionnaire for the user to participate in a compensated user research session, linking to a scheduling application such as Calend.ly. This application has been indispensable for opt-in mass scheduling like this. The features of gCal syncing, timezone conversion, daily session capping, email reminders, and custom messaging are all imperative to a public-facing scheduling board. Anyone can grab a 30-minute time slot from your calendar with just your custom URL, embeddable at the end of your questionnaire.
To really scale this user research effort to the point where it can be automated, you cannot spend time negotiating mutually available times, converting time zones, and following up with confirmations. Calend.ly allows you to cap the number of participants who can grab blocks of your time, so you can set a maximum number of sessions per day, preventing a complete overload of bookings in your schedule.
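Embedding the booking widget on the questionnaire's confirmation page can be as simple as injecting the inline snippet. A sketch; the widget markup and script URL follow Calend.ly's published embed code and may have changed since this writing, so verify against their current documentation:

```typescript
// Sketch: inject the Calend.ly inline widget on the confirmation page.
// schedulingUrl is your own public scheduling link; the class name and
// script URL mirror Calend.ly's embed snippet at the time of writing.
function embedScheduler(container: HTMLElement, schedulingUrl: string): void {
  const widget = document.createElement("div");
  widget.className = "calendly-inline-widget";
  widget.setAttribute("data-url", schedulingUrl);
  widget.style.minWidth = "320px";
  widget.style.height = "630px";
  container.appendChild(widget);

  const script = document.createElement("script");
  script.src = "https://assets.calendly.com/assets/external/widget.js";
  script.async = true;
  container.appendChild(script);
}
```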
As a part of the scheduling flow within Calend.ly, a customizable input field asks the participant for their Skype handle so you can screen share together. I'd advise the practitioner to create a separate Skype account for this usability effort: with every session, you'll add more and more seemingly random contacts, and any semblance of organization in your personal contact list will be gone.
Calend.ly booking utility – a publicly-accessible reservation system.
Once the user is on the Skype call, ask for permission to record the call, and make sure you give a disclaimer that their information will be kept private and shared with no one outside the company. You might also mention ahead of time that you'll be happy to direct any support questions that come up to the proper technicians.
Permission granted, be sure to reiterate the purpose and goal of the call, and give the participant license to say whatever they want, good or bad: you want to hear it, and your feelings won't be hurt if they have frustrations or complaints about certain approaches or features of your product.
For recording the call there are plenty of options out there, but I find that SnagIt is a good tool to capture video, especially given that the resolution and dimensions of the screen share tend to change with the participant's monitor size. When compressing the output, a slow frame rate of 5-10 fps should suffice, saving you considerable file size when managing these large recordings.
Tagging annotations
When you’re walking the participant through the paces of the survey, be sure to annotate the time started and any high/lowlights you see along the way. While in front of your desktop, a basic note-taking utility application (or even pad and paper) should suffice. This will allow you to go back after the survey is finished and pull quotes for use elsewhere, such as powerpoint presentations or similar.
I always try to write a running diary of the transcript and a summary at the end just to cover what areas of the application we explored, as well as a quick summary of what feedback we gathered. Summarizing the typed transcript and posting the relative recorded video files should take no more than 10 minutes, which will still keep your total per-participant (including processing) time to under an hour each, certainly manageable as a part of your greater schedule.
Share the love (or hate)
I want to make sure that these sessions can be referred to by the executive and product management teams for use in their prioritization strategy. Setting up an instance of MAMP/WordPress on a local box (we're using one of the Mac Minis that power a dashboard display) allows me to pass the link around internally, avoid some of the issues around uploading large video files, and alleviate any permissions concerns about these sessions being out in the wild.
Our UX session archive, with hundreds of recorded and tagged sessions.
It's also important to tag the posts attached to these files when you upload them. This allows faster indexing when trying to find evidence around a certain feature or function. Insert your written summary into the post content, and you'll be able to search on the memorable quotes you wrote down.
These resources can be very good for motivation internally, especially among the engineers who don’t often get to see people using the product they continually pour themselves into. They’ll also resonate with the product team, who will see first-hand what’s needed to re-prioritize for the next sprint.
After a while, you'll build a great library of clips that you can draw knowledge from. There's also a certain satisfaction in seeing the evolution of the product's interface through these screengrabs; something shown to be confusing at one time may now be fixed!
Follow-up
Participant compensation can be fulfilled through Amazon or other online retailers: you can send a gift card to an email address, which you'll be able to pull from the hidden field in the spreadsheet of user inputs. Keep a running list of those you've reached out to and those who have responded.
You might also incorporate contacts met during sessions described in the Guerrilla Usability Testing at Conferences article, so you’ll be able to follow up when attending the next year’s conference to recruit again. After enough participants and feedback, think about establishing a customer experience council that you can follow up on with specific requests and outreach, even for quick vetting of opinions.
Conclusion
This article first outlined the strategies and motivation behind the research, advocating creating an automated workflow of continually-scheduled screenshares with customers, rather than trying to recruit participants individually. This methodology was then broken down to distinct steps of recruitment via email, gathering quantitative and qualitative feedback, and automating an opt-in booking of the sessions themselves. Finally, this article went on to discuss how to best leverage and organize this content internally, so that all might benefit from your process.
User research is imperative to the success and prioritization of any software application (or any product, for that matter). Yet too often we forget to consume our own product. Whether it be server log management, as I've chosen, or apartment listings or ecommerce purchases, shake off complacency and try to spend 30 minutes a week accomplishing typical user tasks from start to finish.
Also make it a point to conduct some of these sessions with those you work alongside; you'll be surprised what you can find through simple repetition with a fresh set of eyes and ears. The research process and its dependencies do not have to be as intricate as the one outlined above.
When your company starts to incorporate user opinion into its design and development workflow, it will begin to pay dividends, both in the perceived usability of your application and in the gathered metrics of user satisfaction.
The following is a composite of experiences I’ve had in the last year when talking with startups. Some dialog is paraphrased, some is verbatim, but I’ve tried to keep it as true as possible and not skew it towards anyone’s advantage or disadvantage.
As professionals in the user-centered design world, we are trained and inclined to think of product design as relying on solid, frequently tested knowledge of our potential users, their real-life needs, and their habits.
We’ve seen the return on investment in taking the time to observe users in their daily lives, in taking our ideas as hypotheses to be tested. But the founders and business people we often interview with have been trained in a different worldview, one in which their ideas are sprung fully formed like Athena from the brow of Zeus. This produces a tension when we come to demonstrate our value to their companies, their products, and their vision. We want to test; they want to build. Is there a way we can better talk and work together?
Most of my interactions with these startups were job interviews or consulting with an eye toward a more permanent position; the companies I spoke with ranged from “I’m a serial entrepreneur who wants to do something” to recent B-school grads in accelerator programs such as SkyDeck, to people I’ve met through networking events such as Hackers & Founders.
In these conversations, I tried to bring the good news of the value of user experience and user research, but ran into a build-first mentality that not only devalues the field but also sets the startup on a road to failure. Our questions of "What are the user needs?" are answered with "I know what I want." We're told to forget our processes and expertise and just build.
Can we? Should we? Or how can we make room for good UXD practices in this culture?
“I did the hard work of the idea; you just need to build it”
Over the past two years, I’ve been lucky to find enough academic research and contract work that I can afford to be picky about full-time employment (hinging on the mission and public-good component of potential employers). But self-education, the freelance “UX Team of One,” and Twitter conversations can’t really match the learning and practice potential of working with others, so I keep looking for full-time UX opportunities.
This has lately, by happenstance, meant startups in the San Francisco Bay area. So I’ve been talking to a lot of founders/want-to-be-founders/entrepreneurs (as they describe themselves).
But I keep running into the build-first mentality. And this is often a brick wall. I’m not saying I totally know best, but the disconnect in worldviews is a huge impediment to doing what I can, all of which I know can help a startup be better at its goals, so that it can have a fighting chance to be in that 10-20% that doesn’t end up on the dust heap of history.
“Build first” plays out with brutal regularity. The founders have an idea, which they see as the hard part; I’ve actually had people say, “You just need to implement my idea.” They have heard about something called “UX” but see user experience design as but a simple implementation of their idea.
As a result, the meaning of both the U and the X get glossed over.
The started-up startup
We’ll start with the amalgam of a startup that had already made it into an accelerator program. A round of funding, a web site, an iOS app, an origin story on (as you’d expect) TechCrunch.
It began with a proof of concept: A giant wall, Photoshopped onto a baseball stadium, of comments posted by the app’s users. The idea was basically to turn commercial spaces into the comments thread below any HuffPo story (granted, a way to place more advertising in front of people). The company was composed of the founder, fresh from B-school; a technical lead also just out of school; a few engineers; and sales/marketing, which was already pitching to companies.
The company was juggling both the mobile and web apps and shooting for feature-complete from the word go. There were obvious issues, such as neither app actually working and the lack of any existing comment walls or even any users; they were trying to build a house of cards with cards yet to be drawn.
In talking with the tech lead, I saw that they were aware of some issues (crashes, “it’s not elegant enough”) but didn’t see others (the web and mobile app having no consistent visual metaphors and interaction flows, typos, dead ends, and the like). To their credit, they wanted something better than what they had. Hence, hiring someone to do this “UX thing.” But what did they think UX was?
I had questions about the users. How did they differ from the customers–the locations that would host walls, which would generate revenue by serving ads to the users who posted comments?
I had questions about the company. What was their business process? What had they done so far?
This was, I thought, part of what being interviewed for a UX position would entail–showing how I’d go about thinking about the process.
I was more than ready to listen and learn; if I were to be a good fit, I’d be invested in making the product successful as well as developing a good experience for users. I was also prepared with some basic content strategy advice; suggestions about building a content strategy process seemed nicer than pointing out all the poor grammar and typos.
Soon, I was meeting with the founder. He talked about how a B-school professor had liked his idea and helped him get funding. I asked about the users. He responded by talking about selling to customers.
When he asked if I had questions, I asked, “What problem does this solve, for whom, and how do you know this?” It’s my standard question of any new project, and, I was learning, also a good gauge of where companies were in their process. He said he didn’t understand. He said that he had financial backing, so that was proof that there was a market for the app. What they wanted in a UX hire, he said, was someone to make what they had prettier, squash bugs, and help sell.
I got a bad feeling at that point; the founder dismissed the very idea of user research as distracting and taking time away from building his vision. Then I started talking about getting what they had in front of users, testing the hypotheses of the product, iterating the design based on this: all basic UX and Lean (and Lean UX!) to boot, at least to someone versed in the language and processes of both.
This, too, the founder saw as worse than worthless. He said it took resources away from selling and coding, and he thought that testing with users could leak the idea to competitors. So, no user research, no usability testing, no iteration of the design and product.
(A note on one of the startups that's part of this amalgam: as of this writing, there has been neither news nor updates to the company site since mid-2012, and though the app is still on the iTunes Store, it has too few reviews to have a public rating. This after receiving $1.2 million in seed funding in early 2012.)
The pre-start startup
I’ve also spoken with founders at earlier stages of starting up. One had been in marketing at large tech companies and wanted to combine publishing with social media. Another wrote me that they wanted to build an API for buying things online. I chatted with a B-school student who thought he’d invented the concept of jitneys (long story) and an economist who wanted to do something, though he wasn’t sure what, in the edu tech space. What they all had in common was a build-first mission. When I unpacked this, it became obvious that what they all meant was, “we don’t do research here.”
Like the company amalgam mentioned above, they all pushed back against suggestions to get out of the building (tm Steve Blank) to test their ideas against real users. Anything other than coding or even starting on the visual design of their products was seen as taking time away from delivering their ideas, which they were sure of (I heard a lot of “I took the class” and “we know the market” here).
And their ideas might end up being good ones–I can’t say. They seem largely well-intentioned, nice people. But when talking with them about how to make their product or service vital for users and therefore more likely to be a success, it soon becomes clear that what UX professionals see as vital tools and processes in helping create great experiences are seen quite differently by potential employers, to the point that even mentioning user research gets you shown the door. Politely, but still.
I'd like to bring up the idea that perhaps we, as UX people, have contributed to the problem. The field is young and protean, so the message of "what is UX?" can be garbled even if there were a good, concise answer. Also, in the past, user research has indeed been long and expensive and resulted in huge requirements documents and so on, which the Lean UX movement is reacting to. So nobody's totally innocent, to be sure. But that's another article in the making (send positive votes to the editors).
One (anonymized) quote:
“Yep, blind building is a real disaster and time waste… I’ve seen huge brands go down that path… I have identified a great proof-of-concept market and have buy-in from some key players. My most immediate need, however, is a set of great product comps to help investors understand how the experience would work and what it might look like. I’ve actually done a really rough set of comps on my own, but while I’m a serious design snob, I am also terrible designer…”
So: blind building is a real disaster, but she's sketched out comps and just wants someone to make them look better designed. Perhaps she saw "buy-in from some key players" as user research?
We had an extended exchange where I proposed lightweight, minimum-viable-product prototypes to test her hypotheses with potential users. She objected, afraid her idea would get out, that testing small parts of the idea was meaningless, that she didn’t have time, that it only mattered what the “players” thought, that she never saw this at the companies she worked at (in marketing).
Besides, her funding process was to show comps of how her idea would work to these key players, and testing would only appear to reduce confidence in her idea. (Later that week, I heard someone say how “demonstrating confidence” was the key ingredient in a successful Y Combinator application.)
“We’re looking for somebody who’s passionate about UI/UX to work with us on delivering this interface.
“Our industry specifics make us a game of throwing ideas around with stakeholders, seeing what sticks and building it as fast as possible. Speed unfortunately trumps excellence but all products consolidate while moving in the right direction.
“We certainly have the right direction business-wise and currently need to upgrade our interface. We require UX consulting on eliminating user difficulty in the process of buying, as well as an actual design for this.”
So: To him, it’s all about implementing an interface. Which, to him, is just smoothing user flows and, you know, coming up with a design. Frankly, I’m not sure how one could do this well, or with a user-centered ethic, without researching and interacting with potential users. I’m also not sure how to read his “upgrade our interface”; is that just picking better colors and shapes, in the absence of actual research and testing on whether it works well for users? That doesn’t strike me as useful, user-centric design. (During the interview process at Mozilla, I was asked the excellent question of how I’d distinguish art and design; I’m not sure I nailed the answer, but I suspect there’s more to design than picking colors and shapes.)
And I wasn’t sure even if he was receptive to the idea of users qua users in the first place. Before this exchange, when he described his business model, I pointed out that his users and his customers were two different sets of people and this can mean certain things from a design perspective. Given that his response was that they have been “throwing ideas around with stakeholders,” I gathered that his concept of testing with users was seeing what his funders liked. That did not bode well for actual user-centered design processes.
When I asked how they’d arrived at the current user flows and how they knew they were or weren’t good, he said that they internally step through them and show them to the investors (neither population is, again, the actual user). He was adamant both that talking to users would slow them down from building, and that because they were smart business people, they know they’re going in the right direction. It was at this point I thought that he and I were not speaking the same language.
I referred him to a visual designer I know who could do an excellent job.
I do not have the answers on how to bridge this fundamental gap between worldviews and processes. A good UX professional knows the value of user research and wants to bring that value to any company he or she joins. But though we can quote Blank, though we can show case studies, though we can show how a Gothelfian Lean UX process could be integrated into a hectic build schedule, when all this experience runs into a "build first" mentality, the experience and knowledge lose. At least in my experience. What is to be done?
It is an honest question: how smart are your users? The answer may surprise you: it doesn’t matter. They can be geniuses or morons, but if you don’t engage their intelligence, you can’t depend on their brain power.
Far more important than their IQ (which is a questionable measure in any case) is their Effective Intelligence: the fraction of their intelligence they can (or are motivated to) apply to a task.
Take, for example, a good driver. They are a worse driver when texting or when drunk. (We don't want to think about the drunk driver who is texting.) An extreme example, you say? Perhaps, but only by degree. A person who wins a game of Scrabble one evening may be late for work because they forgot to set their alarm clock. How could the same person make such a dumb mistake? Call it concentration or focus: we use more of our brain when engaged and need support when we are distracted.
So, what does a S.T.U.P.I.D. user look like?
Stressed
"Fear is the mind-killer," Frank Herbert wrote. Our minds are malleable and easily affected by their context. The effect of stress on the brain is well known, if not well understood. People under stress take less time to consider a decision thoroughly, and they choose from the options presented to them rather than considering alternatives. Stress is often due to social pressures: car salespeople know not to let a customer consider an offer overnight, but to pressure them to buy right away.
Tired
Tiredness is one of the largest causes of industrial and motor vehicle accidents. Interfaces used by tired people should take into account their lowered self-awareness and the number of details the user is likely to miss. A classic example of an interface used by sleepy people is the iPhone alarm clock, typically set right before bed. Unfortunately, it doesn't ring if the phone is set to vibrate, the default state for many people. When a user sets the alarm, it would be useful to override the vibrate setting, or at least remind them that it won't ring.
Untrained
Training for enterprise applications is more often discussed than enacted. Users are thrown into an application with a manual and a quick reference card. Applications that are not designed around the user's workflow have to explain their conceptual model while they are being used: "where" things are stored, how to make changes, who to send things to.
Complex systems that are used infrequently are a particular problem. The design of the automated external defibrillator assumes the user may have no knowledge of the science or training on the device and will be using it in a chaotic, stressful environment. Frequency of use should drive design. Yearly processes, like doing your taxes, should assume that the user has never done it before. In rarely used interfaces, customization is likely to be less useful, but a comparison to the previous year's entries is very helpful, as it reminds the user what they did before.
Passive
Nothing reduces effective intelligence faster than doing a boring task against one’s will.
More important than the user's mental model of an application is their mental attitude toward the task. Someone sitting in the front passenger seat of a car may have the same field of view as the driver, but unless they are focused on it, they will not remember the path driven. Nothing reduces effective intelligence faster than doing a boring task against one's will. When a user is passive, complexity becomes insurmountable. Designers of games aimed at casual gamers know to keep the interaction model simple, using flat navigation and avoiding "modes" (e.g., edit vs. view).
Independent
User-centered design is a powerful approach because it recognizes that there are many reasons people use a system. Airline booking sites are used to buy tickets, but also to see if the family can afford to go on vacation. The designer should recognize that they cannot solve every problem, but should give users the tools to help themselves, to work independently of the application's intended method. In internal enterprise systems, the top user request is often "export to Excel." This often reflects that the system does not meet the user's needs. Excel empowers the user to do "out of the box" actions. It is the API to the real world.
…The top user request is often ‘export to excel’…. Excel empowers the user to do ‘out of the box’ actions. It is the API to the real world.
Distracted
People are multitasking more than ever, whether simply listening to music while driving or playing Farmville while watching TV. Effective multitasking has been shown to be a myth, but it is a popular one. Paying "partial attention" to multiple activities has a significant impact on your perception of an interface. Users are often said to be on "autopilot," clicking on things by shape rather than reading the text. An interface cannot rely on the user having a clear and consistent working memory across multiple screens. The task and its details must be restated at each step to remind the user which step they are on and what they need to do. Frequent, automatic saving of user-entered data is essential, especially as connections can time out.
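A debounced autosave is one simple way to honor that last point. A minimal sketch, assuming a hypothetical /autosave endpoint on your own backend:

```typescript
// Sketch: debounced, automatic saving of form fields so a distracted user
// never loses work to a timeout. The /autosave endpoint is hypothetical;
// field values are serialized as strings for illustration.
function autosave(form: HTMLFormElement, delayMs = 2000): void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  form.addEventListener("input", () => {
    clearTimeout(timer);
    timer = setTimeout(() => {
      const data = Object.fromEntries(new FormData(form).entries());
      void fetch("/autosave", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(data),
      });
    }, delayMs);
  });
}
```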
Help S.T.U.P.I.D. users by designing S.M.A.R.T.
Start-ups often experience a shock when they emerge from the hothouse of heads-down development. Their intended customers barely have time to listen to their idea, let alone devote time to exploring its features. The contrast between a small group of friends working intensely together on a single project and the varied needs and limited free time of their customers can be disheartening.
Projects often fail not because the idea is bad, but because the value their service will provide is not easily understood. The question I ask my team is “What problem, from the user’s point of view, are you solving?” It has to be a problem the user knows they have. If the problem is not obvious to the user, in terms they understand, the solution doesn’t matter. Focusing on the problem keeps a project from drifting into fantasy requirements: solutions looking for a problem.
Design teams often use themselves as model users, but…. The user knows nothing about the product, doesn’t understand the concept, and doesn’t care.
Design teams often use themselves as model users, but they are almost the perfect storm of differences between themselves and the users.
They know the product exists and what it is supposed to do.
They understand the internal concept, including its past and future ideas.
They care, personally, about the product. Their success depends on it.
The user has none of these things. The user knows nothing about the product, doesn’t understand the concept, and doesn’t care.
What can be done to make S.T.U.P.I.D. users S.M.A.R.T?
Simplify
Why are simple apps popular these days? It is not that people don't like features; it's that instant comprehensibility trumps powerful features. In the old search engine wars, Google may have had a better search algorithm, but it became known for having a simpler design. Yahoo and others tried to become portals, losing sight of the user's primary goal. I advise people to "design the mobile version first" to help them focus on the key user benefits.
The downside is that any successful project expands and adds features to address additional user needs. What starts out as "Writer for iPad" can end up as Microsoft Word. Simple is not always better, but keeping the new user in mind helps find the right balance.
Memorable
An app is only as good as the user's understanding of it. That starts with the name: is it cute, or does it explain what the app does? Is it "pidg.in" or "Automatic Mailbox"? The iPhone and iPad television ads were effective sales tools, but they also trained a generation by simply showing the apps in use. Each step of a workflow is subject to delays and distractions. Ecommerce sites know to reduce links during the final checkout process. With complex transactions, the risk is greater that the user will have lost their focus; remind the user what they are doing in big title text. Focus on delivering clear and consistent messaging and instructions, for example by adding side notes like Ally.com's password guidance.
Accept Autopilot
Standard design patterns are good, but they also throw the user into autopilot. It makes sense to break them for critical decisions; the hard part is determining what a critical decision point is. Observing user behavior, reviewing customer service records, and identifying risks to the user's data are good clues. If something is simple enough that users are mostly on autopilot, for example installing software, make the default action a single click.
Recovery
The dark side of users on "autopilot" is that they will regularly make mistakes by not paying attention. Mistakes are generally not obvious to a system, but it is good practice to highlight destructive actions and enable recovery. Capture data in little steps: saving form fields instead of form pages prevents large data loss. It's a good idea to highlight, and ask for confirmation on, big destructive changes, like deleting a database. "Undo," common on the desktop but slow to come to the web, enables the user to recover from errors.
Gmail lets users undo moving a message to the trash.
Gmail also lets you restore your contacts if you accidentally make a large, destructive change.
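The underlying pattern is a soft delete with a short undo window; nothing is purged until the window closes. A minimal sketch with invented names:

```typescript
// Sketch of the undo pattern: destructive actions only mark items deleted,
// and a short-lived "Undo" affordance restores them. Names are illustrative.
interface Deletable {
  id: string;
  deletedAt?: number;
}

const UNDO_WINDOW_MS = 10_000;

function softDelete(item: Deletable, onExpire: (item: Deletable) => void): void {
  item.deletedAt = Date.now();
  // Only purge for real after the undo window has passed.
  setTimeout(() => {
    if (item.deletedAt !== undefined) onExpire(item);
  }, UNDO_WINDOW_MS);
}

function undoDelete(item: Deletable): void {
  delete item.deletedAt;
}
```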
Test in realistic situations
There is an essential flaw in the two-way-mirror usability test method. In the interest of copying the form of the lab-coated scientist, these rooms create an artificial aura of "science." But as ethnographic research can tell you, real-world usage is so different as to make the test questionable. It selects for a test population that is free in the middle of the day and motivated by $50 and M&Ms, then puts them in an unfamiliar environment with a personal guide, to focus on a specific task with no distractions. This is about as unrealistic as it gets.
There is an essential flaw in the two-way mirror usability test method…. It selects for a test population that is free in the middle of the day, motivated by $50, and M&Ms.
In reality, the same person may have a child on their lap and only 10 minutes to look up a flight. The fact that an ecommerce session may expire after a few hours is trivial for some, but significant for people who only have a few hours a day to use the computer. “Universal Design” is a great approach, because methods to help specific disabilities tend to be useful to the general public.
Testing should go beyond the user interface and cover the basic business model. The Apple iTunes video download "rental" lasts 24 hours. Unfortunately, people tend to watch movies at the same time each day, for example, after the kids go to bed. If the kids wake up mid-movie, you have to finish it earlier the next day. Would it have killed them to make the rental 27 hours, so parents could actually use it?
Design for the right level of Effective Intelligence
Effective intelligence obviously varies across situations. People are ingenious at figuring out things they really want, but the simplest task is insurmountable to the unmotivated. Both scenarios are solvable, but an application that makes the wrong assumptions about its users will fail. (Interestingly, this study suggests that easier-to-use design can affect the user’s perception of difficulty, and encourage them to complete the task.)
One should adapt their strategy to the user’s desire and the problem’s complexity. Here’s an unscientific matrix for effective intelligence with software interfaces.
This matrix compares the amount a user desires to complete the task versus the complexity of the task to that user type. Different user types will have different measures of complexity, so one might create several matrices.
Low Desire, Low Complexity – The goal here is to finish these tasks as fast as possible. Follow standard design conventions, seek to eliminate steps.
Low Desire, High Complexity – Complex tasks that the user doesn't want to do are a danger zone. Can the problem be reconsidered or eliminated?
High Desire, Low Complexity – The easiest quadrant.
High Desire, High Complexity – This is the most interesting quadrant. A self-training interface (integrated help, training modules) can get the user started; they will often take it the rest of the way. Video games often have a "training" level to teach the user basic skills like moving around.
Get Smart
Effective Intelligence is a helpful concept in the design toolbox. User research and testing are the best ways to know your users, but knowing what may limit a user in reality helps design ways to make them smarter.
Like this article? Want to keep Stephen’s wisdom close at hand? Download the handy, cubicle-friendly, 61kb PDF to hang on a nearby wall and you’ll always remember to design SMART.
Would you rather take a photo using your phone, a point-and-shoot camera, or a digital SLR? How you answer this question is probably a good indicator of your photographic expertise. If you snap casual shots, your phone or a point-and-shoot camera will probably suffice. If you’re a professional photographer, on the other hand, you probably prefer using an SLR that gives you control over the focus, aperture, and exposure.
Expertise significantly impacts how we seek information online. Just as novice and expert photographers prefer different tools, so novices and experts behave differently when searching for information. Understanding these differences will help us design better search interfaces for both groups of users.
There are experts, and then there are experts
User expertise exists on two levels. If you’re an avid photographer, your domain expertise in photography will be quite high: that is, you’ll be familiar with the terms and techniques of the trade. Each of us is likely a domain expert in a few areas, and a complete novice in others. A second aspect is technical expertise. Familiarity with how computers, the internet, and search engines work significantly impacts how users seek information. Consider these personifications of each quadrant of expertise:
* *Angela Baer*, since completing her MFA at Pratt 5 years ago, is quickly building a reputation as one of New York's up-and-coming fashion photographers. In the office connected to her studio, Angela edits her photographs on two large monitors and a top-end computer. She delivers the edited shoots electronically to her clients, and regularly updates her online portfolio and blog. Angela is highly proficient using her computer, and when it comes to photography, she's a domain expert.
* Though he officially retired over 10 years ago after a successful career in banking, *William Hayes* still sits on the board of a number of financial institutions. From his Elizabethan cottage on the Kent coast, he uses a 5-year-old computer to exchange emails and access financial reports, though he prefers doing business on the phone and keeping up with the world through The Financial Times. While William is a domain expert when it comes to finance, his technical expertise is lacking.
* 18-year-old *Fane Tomescu* helps run an internet cafe in Braşov, Romania. Having saved for over a year, Fane recently came across a car that he’s considering purchasing. But when the time came to arrange car insurance, Fane had no clue how things worked. He asked his parents and friends for advice, and then spent several hours comparing providers online. Fane is a technical expert, but when it comes to insurance, he’s a domain novice.
* *Claire Jones* is a 9-year-old from Colorado Springs. Her school is holding a science fair, and Claire has decided to build a model of the solar system using styrofoam balls suspended with string. Having left her science textbook in her locker over the weekend she was meant to start building the model, Claire used the internet to look up information on the order, size, and appearance of each planet. Though she did eventually find what she was looking for (with her parents' help), Claire would be considered both a technical and a domain novice.
While either dimension of expertise is valuable, users are most likely to succeed when both are present. There are, however, a number of design guidelines which can help both novices and experts succeed in their pursuit of knowledge.
Novices Orienteer
Image 2: An orienteer at the 2010 World Orienteering Championships in Trondheim, Norway. Photo by Torben Utzon.
Wayfinding is a challenge as old as humankind, but the discipline of orienteering originated in the Swedish military in the 1800s and is now a sport practiced throughout Scandinavia. Equipped with a map and compass, participants navigate between control points spread across many miles, making tradeoffs between distance and difficult terrain as they strive to complete the course in the shortest amount of time.
The strategies employed by novice users seeking information resemble the sport of orienteering. [1] Users with low levels of domain and technical expertise, typified by Claire Jones, share three main characteristics.
Short queries
Novices tend to enter queries that use about half as many words as experts.[2] Domain novices (like both Claire and Fane Tomescu) feel particularly unsure of which terms to use.
Many queries
Novices perform more queries than experts, but look at fewer documents. Although they frequently reformulate their query, technical novices often suffer from an anchoring bias [3] and make only small, inconsequential changes.
Going back
Novices are much more likely than experts to hit dead ends and seek to get back to a previous state.
These behaviours result in an orienteering-like strategy where novices “test the waters” with a short, general query, quickly skim the top results returned, and immediately reformulate the query based on their improved knowledge of the subject. [4]
Design considerations for Novices
There are a number of design considerations which can help novice users succeed at orienteering. In particular, novices need help formulating their query, refining their query, and backing out of trouble.
Autosuggest
As-you-type suggestions can help users get off on the right foot when they’re uncertain what to search for. Research has shown [3] that users are more capable of choosing a viable option from a list than they are of composing a question out of thin air. Autosuggest provides an opportunity to help users express specific terms (such as airports or stocks), and to suggest queries that other users have performed in the past.
Image 3: Autosuggest on Etsy.com
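A minimal sketch of the pattern: debounce keystrokes, fetch suggestions, and render them as a list. The /suggest endpoint here is a stand-in for whatever suggestion service backs your search.

```typescript
// Sketch: as-you-type suggestions with a debounce so every keystroke
// doesn't trigger a request. The /suggest endpoint is hypothetical.
function attachAutosuggest(input: HTMLInputElement, list: HTMLUListElement): void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  input.addEventListener("input", () => {
    clearTimeout(timer);
    timer = setTimeout(async () => {
      const q = input.value.trim();
      if (!q) return;
      const res = await fetch(`/suggest?q=${encodeURIComponent(q)}`);
      const suggestions: string[] = await res.json();
      list.innerHTML = "";
      for (const s of suggestions) {
        const li = document.createElement("li");
        li.textContent = s;
        list.appendChild(li);
      }
    }, 150); // wait for a brief pause in typing
  });
}
```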
Related searches
After users have performed an initial search, they may still need help refining the query. A list of related searches can help the user break out of their anchoring bias and help them arrive at the optimal set of results.
Image 4: Foodily.com places related searches on the same line as breadcrumbs
Avoid zero results
If the user is presented with no search results, he may be disheartened enough to give up his quest. Avoid zero-result screens if possible. Tools such as automatic spelling corrections and query expansion (using synonyms and lemmatisation,[5] for instance) can help.
Image 5: Amazon.com’s handling of zero results
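One way to implement this fallback, sketched with an invented synonym table and a generic search() callback standing in for your real engine:

```typescript
// Sketch: fall back to an expanded query before showing a zero-results page.
// The synonym table and search() callback are stand-ins for your own backend.
const SYNONYMS: Record<string, string[]> = {
  sofa: ["couch", "settee"],
  tv: ["television"],
};

async function searchWithFallback(
  query: string,
  search: (q: string) => Promise<string[]>
): Promise<string[]> {
  const direct = await search(query);
  if (direct.length > 0) return direct;

  // Expand each term with known synonyms and retry once.
  const expanded = query
    .toLowerCase()
    .split(/\s+/)
    .map((t) => [t, ...(SYNONYMS[t] ?? [])].join(" OR "))
    .join(" ");
  return search(expanded);
}
```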
Breadcrumbs
Because novices tend to take wrong turns, they often need help navigating back to a previous state. Breadcrumbs are an ideal solution because they communicate both the user’s current location, as well as how to go back.
Image 6: Breadcrumbs on Zappos.com
Experts Teleport
Image 7: In Star Trek, crew members of the USS Enterprise stand on transporter platforms to be beamed down to a nearby planet.
While novices orienteer, experts teleport. Akin to being teleported to a precise but distant location, users with high domain and technical expertise like Angela Baer tend to jump directly to their final destination.
Longer queries
Experts enter longer, more specific queries than novices. Domain experts like William Hayes often rely on their vocabulary of specific terminology, while technical experts such as Fane Tomescu are more likely than novices to use formatting techniques such as quotation marks in their queries (87% of experts compared with 47% of novices, according to a 2000 study [2]).
Fewer queries
Experts usually amend their queries less often than novices and move forward with a higher degree of confidence.
More Documents Examined
Experts tend to review more documents and follow a greater number of links within those documents. Domain experts are especially adept at quickly determining whether or not a given document is useful.
In essence, experts often construct queries using numerous highly specific words which act to teleport [6] them directly to a destination, cutting out the query reformulation often practiced by novices. After having arrived at a destination, experts are then likely to explore the surrounding territory.
Design considerations for Experts
Designing for experts involves facilitating their teleporting behaviour, helping them get to their destination as quickly as possible.
Advanced syntax
Technical experts like Fane are often willing to learn special commands in exchange for having greater control. Commonly supported operators include AND, OR, and quotes for searching for exact phrases.
Image 8: Wolfram Alpha is designed to understand domain-specific terminology and return computed answers.
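Supporting even the most common expert syntax, quoted phrases, takes only a small amount of parsing. A minimal sketch (real engines support far richer grammars):

```typescript
// Sketch: minimal parsing of quoted phrases vs. bare terms, the kind of
// syntax technical experts expect. Real query grammars go much further.
function parseQuery(q: string): { phrases: string[]; terms: string[] } {
  const phrases: string[] = [];
  const rest = q.replace(/"([^"]+)"/g, (_match, phrase: string) => {
    phrases.push(phrase);
    return " ";
  });
  const terms = rest.split(/\s+/).filter((t) => t.length > 0);
  return { phrases, terms };
}

// parseQuery('"fashion photography" portfolio NYC')
//   => { phrases: ["fashion photography"], terms: ["portfolio", "NYC"] }
```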
Keyboard shortcuts
Keyboard shortcuts can also increase the speed of interaction. Google, for instance, allows users to press the up/down arrow keys on the keyboard to traverse results, and press return to go to the URL of the selected result.
Image 9: Google places a caret beside the currently-selected result.
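A sketch of the same affordance for your own results page, assuming results are rendered as list items inside a #results list (the markup is hypothetical):

```typescript
// Sketch: arrow-key traversal of search results, in the spirit of the
// Google example above. Assumes <li> elements inside a #results list,
// each containing a link; the "selected" class styles the caret/highlight.
let selected = -1;

document.addEventListener("keydown", (e) => {
  const items = Array.from(
    document.querySelectorAll<HTMLLIElement>("#results li")
  );
  if (items.length === 0) return;

  if (e.key === "ArrowDown") {
    selected = Math.min(selected + 1, items.length - 1);
  } else if (e.key === "ArrowUp") {
    selected = Math.max(selected - 1, 0);
  } else if (e.key === "Enter" && selected >= 0) {
    items[selected].querySelector("a")?.click(); // follow the chosen result
    return;
  } else {
    return;
  }

  items.forEach((li, i) => li.classList.toggle("selected", i === selected));
  e.preventDefault(); // keep arrows from scrolling the page
});
```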
Filtering & sorting
Experts are more likely to engage with advanced sort and filtering controls than novices, including operations such as selecting ranges, filtering by format, or excluding certain terms (e.g. everything that includes “apples” but does not mention “oranges”).
Image 10: Getty Images’ Moodstream lets users search for stock photos using sliders.
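Whatever the control (sliders, checkboxes, or a query language), the underlying predicate logic is simple. A sketch with an invented item shape:

```typescript
// Sketch of expert-style filtering: require one term, exclude another,
// and constrain a numeric range. The item shape and field names are invented.
interface Item {
  title: string;
  description: string;
  price: number;
}

interface Filter {
  mustContain?: string;    // e.g. "apples"
  mustNotContain?: string; // e.g. "oranges"
  priceRange?: [min: number, max: number];
}

function applyFilter(items: Item[], f: Filter): Item[] {
  return items.filter((item) => {
    const text = `${item.title} ${item.description}`.toLowerCase();
    if (f.mustContain && !text.includes(f.mustContain.toLowerCase())) {
      return false;
    }
    if (f.mustNotContain && text.includes(f.mustNotContain.toLowerCase())) {
      return false;
    }
    if (
      f.priceRange &&
      (item.price < f.priceRange[0] || item.price > f.priceRange[1])
    ) {
      return false;
    }
    return true;
  });
}

// e.g. applyFilter(catalogue, { mustContain: "apples", mustNotContain: "oranges" })
```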
As-you-type results
As-you-type completion interfaces most often display query suggestions to users. However, another use case is to present actual results in the autocompletion interface, enabling users to skip the search results screen altogether and go directly to a specific document.
Image 11: Rather than suggesting terms to search for, Nutshell returns search results directly without needing to go to a separate page.
Result table of contents
Providing links to the top destinations within a result can reduce the number of steps required for experts to reach their destination.
Image 12: Google sometimes provides links to the top-level pages within a given site.
Yin and Yang
While novices and experts practice two very different approaches to information seeking, it’s important not to overemphasise one at the expense of the other. As illustrated by the ancient Chinese symbol, understanding the behaviour of both novices and experts can help us design more informed, balanced search experiences.
The author would like to thank Cennydd Bowles for organising the UK writer’s retreat during which this article was written, as well as for the editorial guidance that he provided.
References
[1] Vicki L. O’Day and Robin Jeffries, “Orienteering in an Information Landscape”, http://www.hpl.hp.com/techreports/92/HPL-92-127.pdf
[2] Christoph Hölscher and Gerhard Strube, “Web Search Behavior of Internet Experts and Newbies”, http://www9.org/w9cdrom/81/81.html
[3] Marti A. Hearst, “Search User Interfaces”, http://searchuserinterfaces.com/book/sui_ch3_models_of_information_seeking.html#section_3.5
[4] Morten Hertzum and Erik Frokjaer, “Browsing and Querying in Online Documentation”, http://www.cparity.com/projects/AcmClassification/samples/230570.pdf
[5] Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, “Introduction to Information Retrieval”, Cambridge University Press, 2008. http://www.cambridge.org/us/knowledge/location/?site_locale=en_US
[6] Jaime Teevan, Christine Alvarado, Mark S. Ackerman and David R. Karger, “The Perfect Search Engine is Not Enough”, http://people.csail.mit.edu/teevan/work/publications/papers/chi04.pdf
– The customer is a stranger. On the Web, the customer isn’t king—they’re dictator. When they come to your website, they have a small set of tasks (long neck) that really matter to them. If they can’t complete these top tasks quickly, they leave.
– There is an existential challenge going on right now between organization-centric and customer-centric thinking. Customer-centric thinking is winning.
From Long Tail to Dead Zone
– The Long Tail theory says that the Web allows you to sell more of less, that we are seeing the decline of the blockbuster and the rise of the niche.
– The Long Tail is often a Dead Zone of extremely low demand and hard-to-find, poor-quality products.
The rise of the Long Neck
– The Web is exploding with quantity but quality is still relatively finite. Quality is the ‘long neck’; the small set of stuff that really matters to the customer.
– Understanding and managing the long neck has never been more important.
– Remember that the customer’s long neck—what really matters to the customer—is rarely the organization’s long neck—what really matters to the organization.
A secret method for understanding your customers
– A unique voting method that identifies your customers’ long neck.
– Developed over 10 years, with over 50,000 customers voting in multiple languages and countries.
– Used by the BBC, Tetra Pak, IKEA, Schlumberger, Wells Fargo, Microsoft, Cisco, OECD, Vanguard, Rolls-Royce, US Internal Revenue Service, etc.
Organization thinking versus customer thinking
– Case study showing that how car company managers think customers buy cars differs from how customers themselves think.
– Explanation of how to frame the task identification question.
Deliver what customers want—not what you want
– Case study of Microsoft Pinpoint, a website to help businesses find approved Microsoft IT vendors and consultants.
– What’s the top task of US small and medium businesses when it comes to IT? Security.
Measuring success: Back to basics
– Why traditional web metrics such as page views, number of visitors, etc., are often misleading
– Observation-based technique to measure online behaviour.
– The key metrics of task measurement: completion rate, disaster rate, completion time
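These three metrics are straightforward to compute once sessions are recorded. A minimal sketch, assuming that a “disaster” means a participant who believed they completed the task but actually failed (the field names are invented):

```typescript
// Sketch of the three key task metrics computed from observed sessions.
interface Session {
  completed: boolean;         // the participant actually finished the task
  believedCompleted: boolean; // the participant thought they had finished
  seconds: number;            // time spent on the task
}

function taskMetrics(sessions: Session[]) {
  const n = sessions.length;
  if (n === 0) throw new Error("no sessions recorded");
  const completed = sessions.filter((s) => s.completed);
  const disasters = sessions.filter(
    (s) => !s.completed && s.believedCompleted // confidently wrong
  );
  return {
    completionRate: completed.length / n,
    disasterRate: disasters.length / n,
    meanCompletionSeconds:
      completed.reduce((sum, s) => sum + s.seconds, 0) /
      Math.max(completed.length, 1),
  };
}
```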
Carrying out a task measurement
– The benefits of remote measurement
– How to run an actual measurement session
Large-scale websites require groups of specialists to design and develop a product that will be a commercial success. Developing a completely new site requires several teams to collaborate, and this can be difficult, particularly as different teams may be working with different methods.
This case study shows how the ComputerWeekly user experience team integrated with an agile development group. It’s important to note that the methods we used do not guarantee getting the job done. People make or break any project, and finding and retaining good people is the most important ingredient for success.
When the exciting opportunity to work in a post-bubble dot-com startup arose, I jumped to take it. I had the luxury of doing things exactly as I thought right, and for a while it was truly fantastic. I built a team with a dedicated user researcher, an information architect, and interaction and visual designers; we even made a guerrilla usability lab and held regular test sessions.
Unfortunately, the enthusiasm I had for my new job waned after six months, when an executive was appointed Head of Product Development who insisted he knew SCRUM [1] better than anybody. As the Creative Director, I deferred authority to him to develop the product as he saw fit. I had worked with SCRUM before, done training with Ken Schwaber (author [1] and co-founder of the Agile Alliance), and knew a few things from experience about how to achieve some success integrating a design team within SCRUM. This required the design team to work a “Sprint” (a month-long iteration) ahead of the development team. But the new executive insisted that SCRUM had to be done by the book, which meant all activities had to be included within the same sprint, including design.
Requirements came from the imagination of the Head of Product Development; design was rushed and ill-conceived as a result of time pressure; development was equally rushed and hacked together, or worse, unfinished. The end-of-Sprint debriefing meetings reliably consisted of a dressing-down of the entire team by the executives (since nobody had delivered what they’d committed to, i.e., they had tried to do too much, or had not done enough). Each Sprint consisted of trying to fix the mess from the Sprint before, or brushing it under the carpet and developing something unstable atop the code-garbage. Morale languished, the product stank, good staff began to leave… it was horrible.
This is an extreme example of SCRUM gone bad. I am not anti-Agile, although I’ve been bitten a few times and feel trepidation when I hear someone singing its praises without having much experience with it. Over the last eight years, I’ve seen Agile badly implemented far more often than well (and yes, it can be done well, too). The result is a mediocre product released in as much time as it would have taken a good team to release a great product using a waterfall approach. In this article, I will describe Agile and attempt to illuminate a potential minefield for those who are swept up in the fervor of this development trend and want to jump in headlong. Then I will present how practices within User Centred Design (UCD) can mitigate the inherent risks of Agile and how they may be integrated within Agile development approaches.
Where did Agile come from?
Envisioned by a group of developers, Agile is an iterative development approach that takes small steps toward defining a product or service. At the end of each step, we have something built that we could release to the market if we chose to, so it can assure some speed to market where waterfall methods usually fail. Agile prefers to work out how to build something as we go, rather than doing a waterfall-style deep dive into specification and then finding out we can’t build parts of the spec for some reason, e.g. a misjudgment of feasibility, a misjudgment of time to build, or changing requirements.
A group of developers including Kent Beck, Martin Fowler and Ken Schwaber got together to synthesize what they had discovered were the most effective ways to develop software, and the Agile Alliance was born. It released a manifesto [2] to describe its tenets and how it differs from waterfall methods.
Agile can be thought of as a risk-management strategy. Often developers are approached directly by a client who does not know what a user experience designer, information architect or user interface designer is. People in these roles usually interpret what clients want and translate it into some kind of specification for developers. Without them, it’s down to the developer to work out and build what the customer wants. Because Agile requires a lot of engagement with the client (i.e. at the end of every iteration, which can be as little as a week), it mitigates the risk of going too far toward creating something the client doesn’t want. As such, it is a coping mechanism for a client’s shifting requirements during development as they begin to articulate what they want. To quote the Agile Manifesto’s principles: “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”
Why do people rave about it?
At the heart of what makes Agile attractive is the possibility of quicker return on investment for development effort, because we can release software earlier than we would have otherwise. In the short term, this is typically borne out. In the long term it can be too, though only when the team hasn’t fallen victim to temptation (more on that later). Agile is also good at generating momentum because the iterations act as a drumbeat to which the team marches toward manageable deadlines. The regular "push" to finish a sprint ensures that things move along swiftly. Agile is also good at avoiding feature bloat by encouraging developers to do only what is necessary to meet requirements.
Because it emphasizes face-to-face contact for a multidisciplinary team, Agile tends to encourage contribution from different perspectives. This is generally a positive influence on pragmatism, innovation and speed of issue resolution. The team is empowered to make decisions as to how requirements should best be met.
The Minefield
In and of itself, Agile does a good job of flexing to the winds of change. But one has to ask whether it was devised to treat a symptom rather than the larger cause: the business doesn’t know what it wants. While Agile enables the development team to better cope with this, it doesn’t solve the problem, and in most cases it creates new problems.
Mine 1: An unclear role for design
When a business approaches developers directly to build software, some of those developers may, in the best cases, have design skills. But that’s not a particularly common scenario. Many developers have also had bad experiences with designers who don’t know what they’re doing. It took a long time for the design profession to come to grips with designing for complex systems, and there is still a deficit of expertise in this field. “Business people and developers must work together daily throughout the project” is another principle of Agile. Where does the designer fit into the frame?
Mine 2: The requirements gathering process is not defined
Agile accommodates design activities from the perspective of a developer. It tends to shoehorn these activities into a view of the world where requirements fall from the sky (from the business or customer, who is assumed to be all-knowing) and takes for granted that they are appropriate.
According to Ken Schwaber, SCRUM intends to be a holistic management methodology and leaves space for activities other than programming to occur within the framework of iterative cycles. But when organizations adopt SCRUM, too often the good parts of a waterfall process, like research and forming a high-level blueprint for the overall design, become the proverbial baby thrown out with the documentation bathwater. As the Agile Manifesto says, “Working software over comprehensive documentation.” [2] Many latch onto this and don’t want to do any type of documentation that might outline a vision, even in a rudimentary sense.
Mine 3: Pressure to cut corners
Implementations of Agile that put design activities within the same iteration as development do ensure that designs are achievable in code. But they also put tremendous pressure on the experience design team to ‘feed the development machine’ in enough time for developers to implement the vision. This can and does lead to impulsive design. So, what’s wrong with that? Well, nothing, if you’re not adhering to user-centric principles, which suggest you should test ideas with end users before committing them to code.
Some assert that there are plenty of examples of best-practice interfaces to copy out there. So, why reinvent the wheel? Surely we can save time that way? Sometimes they’re right, but how will we know which best-practice interface works best in context with the user’s goals, with no time to test with the user? How can we innovate by copying what already exists? Before Google reinvented internet search, other search engines assumed a status quo which required the user to learn how to form proper search queries. It was institutional knowledge among the other search engines that this was how searching was done, and customers simply had to learn to use it. Most people’s search results were poor at best. Then Google came along and realized what is now obvious: people just want to find what they’re looking for, not learn how to drive a search engine first. I’m not suggesting the other search engines could not have done what Google did sooner, but I am pointing the finger at a mentality which meant they missed the opportunity. Interestingly, Google is not known for its designers. It’s mainly a development house, but lots of those developers can clearly put a design hat on too.
There is absolutely nothing wrong with using Agile to produce results quickly, provided you don’t release them on your poor, unsuspecting users without some usability testing. Just don’t be fooled that this is going to save you a lot of time if you want your new product to be right, because you will have to iterate to arrive at an appropriate solution. Alan Cooper has argued that this creates a kind of ‘scar tissue’, where code that has to be changed or modified leaves a ‘scar’ that makes the foundations of the program unsound. [4]
Mine 4: The temptation to call it “good enough”
Invariably, when we have release-ready working code at the end of each cycle, even if it’s sub-optimal, there’s a strong temptation to release it because we can. Agile condones releasing whatever we have so long as it works. Sometimes, that means doing what we can get away with, not what is ultimately best for the user. Equally, if we do decide that a feature isn’t right yet, its amendments get fed back into the requirements backlog, where temptation strikes again. Should we spend time in our next iteration on a feature that we’ve already got a version of? Or shall we develop something new instead? Too often, the rework gets left in favor of exciting new stuff. And so on we go, building a product full of features that don’t quite meet the bar.
Mine 5: Insufficient risk-free conceptual exploration time
Iteration “zero” (i.e. a planning and design iteration prior to the first development iteration) can be used for this and other planning activities. However, depending on how long this iteration is, the level of rigor applied to exploration may be insufficient. An argument used by some Agile practitioners asserts that a working example of a solution is the best way to validate whether it is the right one, through exposure to the market. This ‘suck it and see’ approach bypasses an activity called “concepting.” Concept activities dedicate time to sketching different solutions at a high level and validating them in the rough with users before digging into detailed design or code. “Suck it and see” would have us just build it, launch it and see if it flies. This way, we’ve wasted time building something we will probably have to take apart or rebuild. The counter-argument is: if it took as long to build as it would have taken to research and design before laying a line of code, then we break even. This is a stretch in practice, because development itself usually takes longer than well-managed design research and conceptual exploration. Also, there has to be some level of design regardless of which methodology is used, and this adds days to the timeline.
Mine 6: Brand Damage
Let’s just say, for argument’s sake, that design and research take the same amount of time as development. In the worst case, we completely miss the mark with the unresearched, undesigned solution and have to start all over again. Then we’re back to the same total duration after developing it a second time, but there’s no guarantee we’ll get the solution right the second time either. All the while, we’ve repeatedly foisted a botched product design on our users and adversely affected our brand. Many companies succeed on the back of their reputation for producing consistently appropriate products and services. When a company releases a flawed product or service, its image in the customer’s mind (i.e. brand) is tarnished. Brand damage takes far longer to mend than it does to make. Software creators that fall victim to the temptation of “good enough” and fail to innovate through conceptual exploration put their company’s revenues at risk. In a competitive market, repeated failure to meet user needs leads to serious brand and subsequently financial repercussions, as other companies who do get it right take the business.
Agile is good for refining, not defining.
If you have an existing product that you want to develop to the next level, then Agile in its truest sense works, because you have a base upon which to improve. If you know what your requirements are, and these have been properly informed by user research, comparative analysis, business objectives, and analysis of what content you have and what you can technically achieve, then Agile alone can work well.
But spending money on software development without a plan of what to build is like asking a construction crew to erect a tower with no blueprint. Some level of plan is necessary to avoid a Frankenstein’s monster stitched together from each individual’s perspective on the best design solution.
User Centred Design
UCD requires iteration – design, test with users, refine, test with users again, refine… repeat till it’s right. This is where Agile and UCD can work brilliantly together. Agile really is about presuming you’ll need to change things, and that’s a good thing when it comes to refinement.
Uncovering requirements to form a strategy
User Centred Design is not about answering requirements alone; it also includes defining requirements. When we practice UCD end-to-end, we pretend we know little: little about what the solution to a problem should be, and little about what the problem actually is, because assumptions close us off to new possibilities. We prefer to allow some design research to create a viewpoint and then form a hypothesis as to what we might build. In this regard, we cross into the realm of product managers, producers, program managers, business analysts and the like, trampling toes with gay abandon and meeting resistance all around. These folks would prefer to confine us to the boring old business need (distinct from the user or customer need) and constrain our UCD work to usability testing of designs meeting the requirements they set out. They’d prefer we stick to just helping with development… and if we can do that quicker using Agile? Wahey!
Is it always appropriate to do extensive research before starting design? That’s a good question, and one that Jared Spool’s Market Maturity Framework [5] helps answer. Sometimes, just getting something off the ground, regardless of how precisely we meet users’ needs, is all we can afford to do. Once we graduate out of this “Raw Iron” stage into “Checklist Battles”, focused on getting the right features, and then beyond, research is a core ingredient to putting our feet in the right place.
After researching what the user and the business require, we can make the “Strategy” tier of Jesse James Garrett’s Elements of User Experience [3], which underpins everything we do during the project. Do this well, and you really shouldn’t come up with something that’s fundamentally wrong. Agile doesn’t account for this beyond a planning phase (i.e. iteration zero), which may well define a strategy of sorts. But does it really define the correct strategy? Surely, that’s created through careful consideration of three things:
Empathetic qualitative research that uncovers the user’s context, needs, goals and attitudes, i.e. user requirements. Cooper suggests that the customer doesn’t know what they want and advocates a role of interaction designer as requirements planner. [4] This would avert building to the wrong requirements in the first place, but the time to do this must come into the development lifecycle somewhere. It involves talking to users, preferably visiting them in their environments, to create experience models and user personas.
A thorough appreciation of what else in the big wide world exists in terms of products, features and technology that can be emulated somehow (not necessarily addressing a similar situation to ours).
A clear articulation of the business problem, objectives, success measures and constraints. Business people sitting in a room discussing what they think should be done must be informed by all of these things if the right strategy is to emerge. Agile doesn’t preclude that kind of consideration, but it doesn’t mandate it either.
Concept Development
If we manage to build something usable and reasonably intuitive without research or strategy, did we succeed? Most MP3 players fit this bill, but none took off like the Apple iPod. Leaving interface usability aside, the iPod had a service concept behind it, which included digitizing, replenishing and managing your entire music library with iTunes. This was part of the iPod concept from the outset, and in combination with good marketing and design, it continues to eclipse the competition over seven years later. But that concept needed to be sketched and iterated at some point. If we don’t explicitly build this thinking time into our Agile methodology, we can miss it.
The best of both worlds
UCD can be too documentation-heavy, isolated and risky, while Agile needs help with defining requirements and concept development. How can Agile and user-centric principles work together? First, let’s understand what works well in Agile and not so well in user centred design. The work that UCD calls the ‘design’ phase can produce buckets of documentation which isn’t read, describing interfaces specified in isolation which may not be feasibly coded in the time allotted to them. So detailed design is best done in conjunction with the development team, and in a way where the resulting interfaces can be tweaked as you go.
A shared vision of the interaction fundamentals
In good software development, a conceptual interaction model that has been thought through beforehand outlines how the user navigates the system, performs tasks and uses tools in generic terms: not each and every navigation label, task or tool, but rather the interface and interaction patterns that will persist. This produces something rudimentary to test with users to see if we got the big picture right. Following this roadmap, sketched on the back of research and concepting prior to development activity, ensures consistency and cohesiveness when each component is coded separately later. In many cases, the concept will need iterating to accommodate lessons from the journey, but we’ll at least have some indication of direction at a macro scale. Then, when in the midst of Agile iterations working out the details alongside our developer brethren, a level of expertise and experience is required of the designer, because what we design will be built before we’ve had a chance to second-guess ourselves. Domain knowledge and an understanding of interface paradigms that work are also a big help. But building new projects from scratch without a shared vision is a mistake.
Risky interfaces that are new or significant improvements on what has been seen before are best tackled as design-only activities in a sprint prior to the one in which they will be developed (i.e. do involve developers, but don’t try to produce code). This circumvents the pressure to deliver something before proper thought, reflection and user testing, and ensures you’re not wasting time and effort. Sometimes most of the product will be done this way, and that’s fine so long as developers and designers are still working together and talking every day. The first development iterations are an important time for the developers to lay the architectural foundations based on the vision. Designers should use this time to get a jump on any high-priority, tricky interfaces so the development team isn’t waiting for something meaty to start on when it comes time to build features.
Most important to success, the business needs to accept that some things won’t be right the first time around and commit to iterating them prior to release, i.e. not be led into the temptation to release something that isn’t right yet.
Conclusion
In summary, dogmatic attitudes about each of these approaches should be avoided if they are to be combined. Remember, Agile does not mandate how to define concepts or overall design direction, but it is a great way to execute on solid design research and well-laid plans. UCD needs to be flexible enough to respond to the reality on the ground when the implementation team encounters issues that mandate a different design solution. Document only what is needed to get the message across, and co-locate if at all possible, because cross-disciplinary collaboration and face-to-face communication are vital. Working a sprint ahead of the development team gives the design team enough time to test and iterate. If these rules of engagement are followed, the two approaches can work very well together.
No current software supports the full process of collaboration.
That’s a bold claim, and I hope that someone can prove me wrong.
This article is more of a “Working Towards …” position paper than the final word; written in the hope that the ensuing discussion will either bring to light some software of which I’m not aware, or motivate the right people to develop what’s needed.
There is plenty of hype about “Collaboration 2.0” at the moment, but the bugle is being blown too loudly, too soon. Take, for instance, the Enterprise Collaboration Panel at last year’s Office 2.0 Conference. Most of the discussion was really about communication rather than collaboration, with only a hint that beyond forming a social network (“putting the water cooler inside the computer”) there was still a lack of software that actually helped groups of people get the work done. What’s missing from the discussion is any formulation of what the process of collaboration entails; there’s no model from which collaborative applications could arise. If we can figure out a model then we in the UX community should be able to make a significant contribution to it.
I want to start this discussion by proposing a model for collaboration [1] that links the various elements of collaboration, comment on the so-called “collaboration software” currently available, and make some tentative suggestions about IA and UX requirements for a real collaboration platform.
A proposed model
Definition
Collaboration is a co-ordinated sequence of actions performed by members of a team in order to achieve a shared goal.
The main concepts in this definition are:
Collaboration is action-oriented. People must do something to collaborate. They may exchange ideas, arrange an event, write a report, lay bricks, or design some software. To collaborate is to act together and it is the combined set of actions that constitutes collaboration.
Collaboration is goal-oriented. The reason for working together is to achieve something. There is some purpose behind the actions: to create a web site, to build an office block, to support each other through grief, or some other human goal. The collaborators may have varying motivations, but the collaboration per se focuses on a goal that is shared.
Collaboration involves a team. No-one can collaborate alone. Collaboration requires a group of people working together. The team may be any size, may be geographically co-located or dispersed, membership may be voluntary or imposed, but there is at least some essence of being part of the team.
Collaboration is co-ordinated. That is, the team is working together in some sense. The co-ordination may follow some formal methodology, but can equally well be implicit and informal. There needs to be some sense at least that there are a number of things to be done, some sequences of actions, some allocation of tasks within the group, and some way to combine the contributions of different team members.
Components of collaboration
Any collaboration process involves interactions between six elements, as shown in the following diagram:
Figure 1. The basic components of collaboration
Artifacts
The Artifacts are the tangible objects relating to the collaboration. They include the outcomes of the process – the office block that progressively gets built, the web site that finally gets commissioned – as well as a variety of objects that were used along the way to promote, direct and record collaboration – such as design documents, project schedules, and meeting agendas.
Team
The Team element includes the collaborators and the interactions between them: Team membership and authorization, inter-personal dynamics, personal identity, decision making processes, and communication.
Tasks
The Tasks element includes the list of things to be done in order to reach the goal, along with all the processes necessary to manage that list. How do tasks get formulated? How is their status recorded and tracked over time? How is the list prioritized and scheduled? How are tasks assigned to team members and how are personal ‘To Do’ lists presented?
Calendar
Most collaboration is extended across time, and consequently requires some degree of time-management: setting deadlines, milestones and task completion dates; scheduling team meetings; and keeping an historical record of events.
Actions
Team members perform Actions based on the Tasks assigned to them. The Actions might just involve searching or viewing the Artifacts, but more typically mean modifying the Artifacts in some way. There might also be some meta-Actions such as maintaining the Artifact repository, keeping a log of Actions and commenting on the Artifacts.
Resources
Resources enable the Team members to perform the Actions. They include physical equipment, money, external advice, and all manner of software (project management, Wiki, spreadsheet, and content management systems, among others).
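One way to make the model concrete is to read the six components as data structures. The following TypeScript sketch is purely illustrative of the relationships in Figure 1; it is not a schema from the article, and all names are invented:

```typescript
// Reading Figure 1's components as data structures (illustrative only).
interface Artifact {
  id: string;
  name: string;
  kind: "outcome" | "working-document"; // the web site vs. its project schedule
}

interface Member {
  id: string;
  name: string;
}

interface Task {
  id: string;
  description: string;
  assignee?: Member["id"]; // allocation of tasks within the group
  status: "todo" | "in-progress" | "done";
}

interface CalendarEvent {
  when: Date;
  what: string; // deadline, milestone, or meeting
}

interface Action {
  actor: Member["id"];
  task?: Task["id"];
  touched: Artifact["id"][]; // Artifacts viewed or modified
  at: Date;
}

interface Resource {
  name: string;
  kind: "equipment" | "money" | "advice" | "software";
}

interface Collaboration {
  goal: string; // the shared goal the definition centres on
  team: Member[];
  tasks: Task[];
  calendar: CalendarEvent[];
  actions: Action[]; // the recorded log of what was actually done
  artifacts: Artifact[];
  resources: Resource[];
}
```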
The current state of collaborative software
There are three primary ways in which humans interact: conversations, transactions, and collaborations. There is plenty of software that enables conversation–email, VOIP, chat, IM, forums–and plenty of software for transactions–eBay, PayPal, internet banking, shopping carts. But what is available for collaboration?
There are many software applications that seek to enable collaboration [2]. But let’s see what happens when they are evaluated under these three categories:
The extent to which the software provides the required functional components (i.e. the boxes in Figure 1)
The extent to which the software supports the interaction between those components (i.e. the lines in Figure 1)
The usual criteria that apply to all software, such as ease of interaction, security, integration with other applications, and so on.
It is true that there are software packages for most of the individual components of collaboration:
Artifacts: we have software for maintaining and accessing a repository of digital Artifacts (e.g. any number of CMS applications: well-established ones like Documentum or Stellent, more recent ones like Joomla!, or any of the 925 others listed at The CMS Matrix), and we can easily construct databases for tracking the status of non-digital Artifacts.
Team: software for maintaining team membership, facilitating group-based decision support, and managing remote meetings (e.g. WebEx) and video conferencing. There is even some possibility that virtual worlds like Second Life may provide an effective environment for team interaction. In Growing Pains: Can Web 2.0 Evolve Into An Enterprise Technology?, Andy Dornan quotes a business manager as saying that “Second Life allows more user engagement than traditional video or phone conferencing.” I know of one company whose preliminary experiments with Second Life found that there was a more relaxed and open interaction via avatars than when a team interacted face-to-face.
Tasks: software for maintaining task lists (e.g. Jira, ScrumWorks); task dependencies and scheduling, Gantt Charts (Microsoft Project, @task); brainstorming; workflow and process modeling; and others.
Resources and Actions: Many software applications act as Resources for implementing diverse Actions. For instance, Wikis enable editing of shared documents, and there are any number of calculators, electronic dictionaries, encyclopedias, search engines, web design tools – software that team members might use as they do their work.
These ‘point’ solutions may address their targeted functions effectively and even showcase the core ideals of Web 2.0 – user-generated content and taxonomies, broad-based participation, software-as-a-service (SaaS), and rich user-interfaces within a web browser. But they can’t just be lumped together and called “Collaboration” (with or without the 2.0 suffix). If you buy into the definition and model described above, it should be clear that true collaboration software must go beyond a set of disconnected point solutions and reach for the broader goal of enabling the whole collaboration process.
A key shortcoming of current so-called “collaborative software” is that there is no compelling metaphor or unifying vocabulary. We have many of the necessary pieces, but they do not interact at either the backend or user interface levels.
CSCW and CSC both promised such systems, but where are the practical results? While these research areas from previous decades generated many novel and hopeful ideas, there seems to have been an overly academic orientation rather than much focus on software design. Although the theory made useful distinctions, such as the categorization of collaboration by time and space, the software that resulted from these efforts dealt more with communication and co-ordination than with real collaboration.
Google offers an assortment of products that promote collaboration: Google Calendar, Google Apps, and more are promised. I was hoping that their acquisition of JotSpot in 2006 might result in a broader Wiki-based collaboration platform that unified those other offerings. But to date JotSpot has been silent. At this stage, Google’s offering is still an “assortment” rather than a clearly-conceived package.
The Zoho suite encapsulates virtually all the point-solutions mentioned above. It includes the standard office tools (word processing, spreadsheet, presentations, email), remote conferencing, chat, meeting organizer, calendar, project management and a Wiki. All of that and more is delivered via a SaaS model through your web browser. Zoho is way ahead of any competition because of its unified user interface. However, there are still important aspects lacking in Zoho: not primarily additional modules but some key IA and UX characteristics that I outline below.
Perhaps the closest we have today is from Microsoft. Combine SharePoint, Outlook and the Office suite, and you get remarkably effective functionality for team management, scheduling meetings, communication and shared workspaces. Our organization makes heavy use of this combination, and it pushes teamwork and information sharing a long way ahead of where we once were. On the downside, task management in that environment is quite simplistic, with little support for maintaining a complex task list, prioritization, or comprehensive status reports. The Wiki facility shipped with SharePoint is very primitive [3]. Microsoft has implemented a “Collaboration 1.0” approach rather than “Collaboration 2.0”, by which I mean it requires a large degree of centralized control rather than drawing on the power of social networking. Of course, the content of email, announcements, uploaded documents, and so on is completely open to freedom of expression, but the constrained environment and heavy IT infrastructure make the system as a whole feel complex and unwieldy.
Multi-user editing
Perhaps something specific needs to be said about one type of so-called collaborative software: the type that enables multi-user editing of electronic documents. Most of these applications are primarily interested in version control: they maintain a repository of documents and control access to that repository. Authorized people can view documents, and a subset of those can edit them. The software provides some process for giving each editor a copy of the document, and when the changes have been made, the software merges them back into the master copy while keeping some form of historical change log. Examples are Clearspace and the various text-based code-management tools such as Subversion.
While revision control has an important role, it is a meager offering in terms of the extent of collaboration that it enables. In most cases, such applications assume that individuals work independently of each other. One user edits this part of the document and, as a quite separate task, another user may edit another part of the same document. Two people editing the same part of the document is treated as a problem, and typically the last person to submit changes trumps any previous changes.
A more significant level of collaboration requires the assumption that multiple people will be working together to edit the document simultaneously. That requires a single shared document rather than separate copies of a master document for each editor. See the Wikipedia article for a list of such real-time collaborative editors.
XMPP (the Extensible Messaging and Presence Protocol) has extensions for both multi-user text editing and multi-user whiteboarding, so there are at least discussions about how such interaction can be standardized. But tools that use the protocol are few and far between.
The Challenge for IA and UX
There are many human and business activities mediated by computer systems where IA and UX practitioners have provided design guidance to make the interaction more effective. Given that collaboration is fundamentally about interacting effectively to jointly achieve some goal, IA and UX can play an even more substantial role than usual.
So, what principles would you apply to collaboration software? Here are my suggestions:
1. Build the user interface around a consistent, unifying metaphor.
The metaphor should be goal-oriented. That is, a stated goal should take center-stage, with the Team, Tasks, Calendar, Resources, and Artifacts being other players in the drama.
The user interface needs to enable and encourage interactions between collaborators. Perhaps the metaphor of a sports team would be effective.
A “portal”/dashboard pattern allows simple movement between team management, task list, calendar, documentation management and the like. That approach can collate the answers to core concerns like: What collaboration projects am I part of? What’s the current status of each? What’s on my To Do list?
2. Build an open, extensible, modular framework: a collaboration platform rather than a single application.
The scope of collaboration is too extensive to expect that a single vendor will be able to provide all the pieces. It is important to allow modules to be gathered from multiple sources and plugged into a shared framework.
For instance, Jira might be the first choice for maintaining the Task list, but the framework should allow it to be substituted with alternatives. Similarly, a basic system may have only a limited reporting feature (e.g. to view the change history of an Artifact), but it should be possible to plug in a more substantial reporting application later on (see the sketch after point 6 below).
Most importantly, the framework should provide a standard API to the Artifact repository, so that any number of applications can view, add and modify Artifacts.
3. Include at least the following functions “out of the box”:
Team management: functions to define and authorize team members, and for individuals to update their personal profiles
Task management: functions to add and prioritize tasks, allocate responsibilities to team members, and maintain current status
Calendar management: all team members can add events to a single shared calendar
Communication: integration with email, IM, and other technologies
Meetings: ability to schedule a meeting and invite specific team members, publish an agenda, record notes and decisions from the meeting.
4. The platform itself should maintain a collaboration history rather than leave that function to the plug-in components. All meetings, decisions, changes to Artifacts, Task status changes and other events are recorded in that history. The history should be displayed as a journal along a time-line as well as being exposed as a life-stream via RSS/Atom.
5. Connect to other enterprise applications and data stores. A collaboration application will gain significant value if it can interact with existing databases, content management systems, security mechanisms, and if it can exchange data with other applications via some standard like Web Services.
6. Implement all this as a Rich Internet Application. The complexity of interactions between team members who are potentially geographically scattered indicates the platform needs to be web-based. The complexity of interactions between users and the system indicates that the user interface needs to be very dynamic, with near-real-time synchronization between all concurrent users and a shared Artifact repository.
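To make points 2 and 4 more tangible, here is a small sketch of what a pluggable platform core might look like: modules conform to narrow interfaces, and the platform, not the plug-ins, owns the event history. All interfaces and names are invented for illustration.

```typescript
// Sketch of a pluggable collaboration platform core: task management is a
// swappable module; the history journal belongs to the platform itself.
interface HistoryEvent {
  at: Date;
  actor: string;
  description: string; // e.g. "created task 12: draft agenda"
}

interface TaskModule {
  addTask(description: string): string; // returns a task id
  setStatus(id: string, status: string): void;
}

class CollaborationPlatform {
  private history: HistoryEvent[] = []; // owned by the platform, not plug-ins

  // Swap in a Jira-backed module, an in-memory one, or anything else.
  constructor(private tasks: TaskModule) {}

  record(actor: string, description: string): void {
    this.history.push({ at: new Date(), actor, description });
  }

  addTask(actor: string, description: string): string {
    const id = this.tasks.addTask(description);
    this.record(actor, `created task ${id}: ${description}`);
    return id;
  }

  journal(): HistoryEvent[] {
    // Time-ordered journal; could equally be serialized as an RSS/Atom feed.
    return [...this.history].sort((a, b) => a.at.getTime() - b.at.getTime());
  }
}
```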
Conclusion
Maybe all I’ve done here is scratch an itch. But I hope that the itch is contagious.
Collaboration is an essential part of human endeavor, and information technology is at a stage where it should be able to add value to collaboration in more ways than just connecting people in a social network. We have many web-based applications that address parts of the process, but who’s going to create the framework to bring it all together?
Footnotes
[1] This model was first presented at BarCamp Sydney in August 2007.
[3] Lawrence Liu comments that the SharePoint Wiki is not intended to be best-of-breed, just something that “is sufficient for a very large percentage of our customer base”. Even that is wishful thinking, but fortunately, the guys at Atlassian have made a SharePoint Connector for Confluence that can easily replace the default SharePoint Wiki.