UX Design-Planning: Not a One-Man Show

Written by: Holger Maassen

A lot of confusion and misunderstanding surrounds the term "user experience." The multitude of activities that can be labeled with these two words spans a vast spectrum of people, skills and situations. If you ask for UX design (UXD), what exactly are you asking for? Similarly, if someone tells you they are going to provide you with UXD for an application, website, intranet or extranet, what exactly are you going to get?

Is just one person responsible, or is a team of people in charge of UXD? In this article I'll sketch my ideas of UXD based on my experience, and at the end I will give you my answer.

Let us start at the beginning – UXD starts with experience – experience of the users. And so I will talk about the users first.


UXD-P – every person is an individual

Every person is an individual, and every person holds different roles. Each individual carries many roles and adopts a different one depending on the circumstances.

[Figure: User Roles]

Sometimes a person holds just one role, but usually they hold quite a few: consumer, customer, user, client, investor, producer, creator, participant, partner, community member, and so on.


UXD-P – network of expectations, experiences and knowledge

Every user is multi-faceted – and considerably more complex than they themselves can imagine – so it's not very helpful just to talk to the user or ask them what they need. We have to watch what people do, listen to what people say, and recognize what decisions people make – and by observing we have to evaluate and understand why they do what they do. Which visual elements will the user like, prefer or understand, and why? Which mental models, navigation patterns or functions do they respond to, and why?

Jakob Nielsen said, "To design an easy-to-use interface, pay attention to what users do, not only what they say. Self-reported claims are unreliable, as are user speculations about future behaviour." (Jakob Nielsen – Alertbox) I agree – I think no such statement can be objective. Perhaps the circumstances are not realistic or not reasonable for the person. Or maybe the person is not really in the "situation," or is being influenced by other factors (trying to please the tester, for example). Or maybe they are trying to succeed with the test rather than trying and failing, which would tell us so much more.

When all three perspectives (do, say, make) are explored together, we can understand the spectrum of experience of the "normal" user/customer we are working for.

Jesse James Garrett said: "User experience is not about how a product works on the inside. User experience is about how it works on the outside, where a person comes into contact with it and has to work with it" (J. J. Garrett – The Elements of User Experience).

[Figure: Areas of experience – the different areas that affect the quality of communication]


UXD-P – personal and individual

When we talk about experiences, we take the individual into consideration, including the subjective impressions felt by the person who is "confronted" with what we want them to use. Experiences are momentary and brief – sometimes they are part of a multi-layered process, sometimes they stand on their own.

Normally such know-how is learned as part of something or on its own and is remembered in the same way – but that is not always the case, and then the person deals with the situation differently. If we look at experience as a continuum, the user brings their experiences of the past to the interaction in the present and adds their hopes for the future. That future could be, for example, to do their banking in a safe and secure way.

[Figure: Flow of experience – the individual user/customer always acts in the present, influenced by former experiences and current expectations]

UXD-P takes the users' views, behavior, and interactions and tries to figure out the emotional relationship between them and the thing we have built. For the most part these "people" and their experiences are unknown to us. It requires an appreciation of the customer: their journey, their personal history and their experiences.

It is the collective set of experiences – in the online world, in the offline world, and even tiny things (e.g. "my coffee was cold this morning") – that shapes their experience of products and of the companies behind them. It is about appreciating the individual user's unmet needs, wants, capabilities and desires in context. It is a box of experiences including the things the user saw, did and felt. (On 12th February 2008 BBC Two aired a programme on rational thought; highlights include loss complex, post-decision rationalization, priming and precognition: http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/tx/decisions/highlights/)

Experiences and expectations meet in the present. Both are inseparably combined, and every action we take has to take both parts into consideration. When a person uses an application, they try to understand what happens, and they will always try to relate it to their past experiences. The moment is also tightly coupled to their expectations and personal outlook.

At this point – the "present" – I think of Peter Morville's UX honeycomb [1] …

[Figure: Peter Morville's UX honeycomb (P. Morville – Facets of the User Experience)]

In the present we have to deliver, for the individual user and his specific task, the best answers to the following questions.

  • Is the application useful for the individual user and his specific task?
  • Is the application usable for the individual user and his specific task?
  • Is the application desirable for the individual user and his specific task?
  • Is the application valuable for the individual user and his specific task?
  • Is the application accessible? Available to every individual user, regardless of disability?
  • Is the target findable for the individual user and his specific task?
  • Is the application credible for the individual user and his specific task?

In the UXD-P the whole team has to take the users’ views of the GUI and the interactions to figure out the emotional relationship between the brand and potential customers. It requires a common appreciation of the customer journey and their personal history: not only with the product and similar products, but also with similar experiences.


UXD-P – teamwork and cooperation

The first stage in discovering – in inventing or designing for the experience – is to take a new viewpoint on the people who buy and use the products and services we are designing. This is a bird's-eye view; step by step we also have to take the "mouse view," that is, a detailed view from the user's perspective, and as we develop the application we have to switch between these views. Our main desire is to respect, value, and understand the continuum of experience and expectations our users have.

UXD-P can be a slippery term, given all the other terms that float around it: interaction design, information architecture, human-computer interaction, human factors engineering, usefulness, utility, usability and user interface design. People often end up asking, "What is the difference between all these fields, and which one do I need?" If UXD is meant to describe the user's satisfaction, joy or success with an application, product or website – however we specify it – there are a few key essentials that need to be tackled, and I have to point to Peter Morville's UX honeycomb [1] a second time. Each of those facets, as outlined above, makes up a considerable part of the user experience, and each is made effective by the design contribution of the following elements:

Usefulness is based upon utility and usability: the product provides exactly the right kind of service, the service the user expects from it. It is the joy of reaching my goals and the joy of doing so easily.

The information architecture is in charge of the clarity of the information and features, the lack of confusion, a short learning curve and the joy of finding.

The design of the interaction is essential for a successful and overall satisfying experience, so interaction design has to answer questions of workflow, logic, clarity and simplicity of the information.

Visual design is responsible for the clarity of the information and elements, the simplicity of tools and features, a pleasant or interesting appearance of the interface, the visual hierarchy, and the joy of the look and feel.

Accessibility is a common term for how easy it is for people to use applications or other objects. It should not be confused with usability, which describes how easily an application, tool or object can be used by any type of user; one meaning of accessibility specifically focuses on people with disabilities – physical, psychological, learning, among others. Even though accessibility is not an element of its own, it plays a role in the whole user experience and increases the likelihood of a wide-ranging user experience. People tend to gravitate toward something that is easier to use, regardless of who it was designed for.

The UXD innovation process is a nonlinear spiral of divergent and convergent activities that may repeat over time. Any design process begins with a vision, and this applies particularly to the UX process. A vision, however, is not enough to start designing. As I mentioned before, we always face different circumstances, users and roles. It is therefore critical to accurately understand the end users' needs and requirements – their whole experience and their expectations. The UX process relies on iterative user research to understand users and their needs.

The most common failure point in UX processes is transferring the perspective of users into UI design. The key is to define the interaction first, without designing it. First, all the research (on the user, the product and the environment) has to be organized and summarized in a user research composition. This leads to user profiles, work activities and requirements for the users. The user research composition feeds directly into use cases. The use cases show the steps needed to accomplish task goals and the content needed to perform the interactions. Completed use cases are validated with the intended user population – a checkpoint to see whether the vision is being achieved and whether the value is clear to users and the project team.

The next step is to design the user interface, derived directly from the interaction definition. A primary concern in design is not to get locked into a single solution too early. To keep the project on time, this step should be split into two parts: rough layout, and exact and detailed design. The rough layout allows experimentation and rapid evaluation. Detailed design provides exact designs and behaviour previews of the final application that specify what is to be coded. Iterative user evaluations at both stages – design feedback, rapid iterative evaluations and usability evaluations – are a fast and effective way of improving the GUI.

[Figure: UX workflow cycle – design workflow as work cycle / work spiral]

UXD-P – Gathering the elements

The diagram below presents the relationship of the elements above:

[Figure: Elements of UXD-P]


Lewin’s equation

Lewin’s equation, B = f(P, E) (B – behaviour; f – function; P – person; E – environment), …

… is a psychological equation of behaviour developed by Kurt Lewin. It states that behaviour is a function of the person and his or her environment [2]. There is a desired behaviour that we want to encourage, but we have no control over the person; so via interaction design, information architecture and interface design we shape the environment and thereby encourage the desired behaviour (see reference [2], books.google.com).
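Spelled out, the design reading of Lewin’s equation is roughly the following; the second expression is my own shorthand for the argument above, not part of Lewin’s formulation:

```latex
\[
  B = f(P, E)
  \quad\Longrightarrow\quad
  B_{\text{desired}} = f\bigl(P,\ E_{\text{designed}}\bigr)
  \qquad \text{(we cannot change } P \text{, so we shape } E \text{)}
\]
```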


UXD-P – many steps to go but every step is worth it

How do we involve our team, our customer and our users/consumers? We can start at different points, but I like to think about the circumstances first. Where do we come from? Where are we? Where will we go? And who is “we”? “We” means every person involved in the project. In the centre of every effort stands the user. To bring the user, with their personal experiences and expectations, into the project, the design team and the customer need a combining glue / tool / instrument. I believe these are the personas of the “target users/consumers” in the process of UXD-P. If there are no personas, the second or third choice is scenarios or workflows (based on a certain user/person).

The management point of view, which in most cases is also the view of our customer, focuses on the user’s/consumer’s age, income, gender and other demographics. The perspective of UXD-P is to look at behaviour, goals and attitude.

To get realistic personas we first have to identify the target users. In my experience, defining the users and consumers is not only the client’s task – we have to support them. During identification and characterization we have to go beyond demographics and try to understand what drives individual users and consumers to do what they do, in as much quantitative and qualitative detail as necessary and possible, as I mentioned above. The approach and the complexity of the characterization depend on the tasks, the project and the functionality. In parallel with this very personal description we need a “picture” of the environment. For each persona we must define their business and/or private concerns and aims. Do they want to research a product to purchase later? Are they primarily concerned about not wasting time? Do they just want to purchase something online as easily and quickly as possible?
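As an illustration only, such a persona could be captured in a structure like the following; the field names and the example values are hypothetical, not taken from any specific project:

```typescript
// A rough persona sketch along the lines described above: demographics alone are not
// enough; behaviour, goals, attitude and environment matter (all values illustrative).
interface Persona {
  name: string;
  demographics: { age: number; gender?: string; income?: string };
  behaviour: string[];   // what they actually do
  goals: string[];       // business and/or private concerns and aims
  attitude: string;      // e.g. security-conscious, time-pressed, exploratory
  environment: string;   // the "picture" of where and how they use the product
}

const onlineBankingUser: Persona = {
  name: "Careful saver (hypothetical)",
  demographics: { age: 52, income: "middle" },
  behaviour: ["checks the balance weekly", "prints statements"],
  goals: ["do their banking in a safe and secure way", "not waste time"],
  attitude: "security-conscious, low tolerance for surprises",
  environment: "home desktop, evenings, occasionally interrupted",
};
```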

Based on these personas we can formulate, discuss and test scenarios – from the very beginning of the project, during the project, and as a check or analysis at its end.


UXD-P – my blueprint of a schedule – "to-dos" and deliverables

We are always asking and being asked: what are the deliverables? Throughout my career as an IA, UX planner and designer, as well as during my studies in architecture and town planning, I have constantly asked myself the following questions:

  • What kind of project is it? What are the key points?
  • What should our steps and milestones be in the project?
  • What should our/my deliverables be?
  • How can we/I explain the main idea?

I have realized that if I do not answer these questions before creating a deliverable, I waste more time and deadlines slip.

The deliverables are not for us. The deliverables are a means of communication with several people: managers, decision makers, the client, designers, front-end developers, back-end developers, etc. I have the feeling we overlook this from time to time. Once I have thought about the project, I have to ask myself where my deliverables and other efforts will fit within the design process. The following diagram describes the different lines of work and the questions each line must answer. Based on these questions and topics I outline the foundations, basics and deliverables, along with the skills and abilities each of them requires.

[Figure: Schedule of UXD-P]

UXD-P – my conclusion

I studied architecture and town planning. Just as town planning and architecture aren’t only for architects and art lovers, the internet isn’t only for computer specialists and developers. And just as towns and cities are for their inhabitants and architecture is for the users of a building, products and applications are for the user, the customer, the member – not for the people who build them.

In every kind of process we should act as a team, but in the process of UXD-P it is absolutely essential that we think in parallel, with the same focus. We have to act as a team, even though every team member is a kind of advocate: for the budget, for the client, for utility, for usability, for look and feel, for the brand and, finally, for the user. Because at the end of the project, our user/customer is the final judge.

Good design is not only the interface, or the look and feel, or the technology, or the hardware, or how it works. It is every detail, like the structure, the labelling, the border of a button or a little icon. Ultimately, it is the sum of every element. I believe that the shared vision of a group of creators has more potential than individual creativity, and that is the point where creativity meets expectation. UXD-P will change how we look at IA and design, and the process of getting to a well-designed product.

The people who use the application or other object that we invent are the real “architects” of the “architecture” – the real “inventors” of the design. The more we know about our users, the more likely we are to meet their needs.

As the capabilities of interactive applications and the internet advance and grow, more and more consumers use them in new and different ways. We must think about all aspects of the user experience.

And I will ask you once again: is just one person responsible, or is the team in charge of UXD-P?
Personally, I believe it is a process of planning and designing for user experiences (and so I think the team is in charge), but an experienced planner has to keep the overview, as a kind of captain.


The most common cause of an ineffective website (one that doesn’t deliver value to both the business and its intended constituents) is poor design. The product has to support both the functions and the experiences. A lack of clear organization, navigation, brand values and mood means that people will have an unintended and possibly bad experience, rather than one that meets the business’s relationship objectives for each individual. User experience design and planning is a fundamental component of the creation of successful digital products, applications and services.

UXD-P is UX design and planning – in my estimation there are distinctions between design and planning.

Design is usually considered in the context of arts and other creative efforts. When I think of design in the UX process it focuses on the needs, wants, and limitations of the end user of the designed goods, but mainly on the visual parts and the mood. A designer has to consider the aesthetic-functional parts and many other aspects of an application or a process.

The planning part provides the framework. The term "planning" describes the formal procedures used in such an endeavor, such as the creation of documents, diagrams, etc., to discuss the important issues to be addressed, the objectives to be met, and the strategy to be followed. The planning part is responsible for organizing and structuring to support utility, findability and usability.

I strongly believe that both parts – design and planning – have to work closely together. Every team member should have the ability to think cross-functionally and to anticipate consequences of activities in the whole context.

I’ve often seen timelines like this …

[Figure: a commonly seen project timeline]

and this doesn’t work for UX design and planning …

I prefer a timeline that looks like this:

[Figure: preferred timeline for UX design and planning]

… in order to develop the UX design and UX planning.

And in the center of this team and of this process should stand the leading person – the user!

[Figure: Basis points of UXD-P]


[1] UX honeycomb of Peter Morville – semanticstudios.com publication

[2] The Sage Handbook of Methods in Social Psychology, by Carol Sansone, Carolyn C. Morf, A. T. Panter – available via Google Books and amazon.com

Extreme User Research

Written by: Daniel Lafreniere

What is the biggest problem I face almost every time a client hires me to do something about a web project going awry? They don’t know a thing about their users. They don’t have a clue, whatsoever. Unbelievable but true!

Good designers will certainly argue that THEY don’t need user data to do proper design. That if THEY like it, EVERYBODY will… sure! This probably explains why so many web projects fail on so many levels: usability, aesthetics, emotions, and profitability.

What’s the remedy to this world-wide infection? User research… but not in the typical version, meaning lengthy ethnographic studies that seem to take forever before obtaining some data. I’m talking about a simpler way, a faster way of doing it. I call it "extreme user research." What’s so extreme about it? Well, it can be done in 30 minutes per interviewee, and it generates loads of useful data that will have a real impact on design, thus making your website more profitable.

Getting information from surrogate users

Doing user research doesn’t have to be tedious and cost lots of money. In many cases, you should be able to do it in a few days, even a few hours, depending on the scope of your project. The main idea behind extreme user research is that instead of going for the real users, we go for surrogate users. Those are the ones within a company who talk directly to the customers. We want to talk to the people who talk to the people.

For instance, let’s say we do an e-commerce project for a cable company. The surrogate users would be those in the call centers—the first-line personnel who provide information about products and the second-line personnel who provide customer support for billing issues and technical problems. Talking to those people means having access to tens, hundreds, or even thousands of clients. Not bad at all!

Doing extreme user research is simple. We simply perform individual semi-structured interviews that last no longer than 30 minutes. This time limit is a profitable constraint. It adds a stress that forces the interviewee to focus on the core, on the essentials. During those 30 minutes, we want to know as much as possible about the customers:

  • What triggered the call? For example, was it a problem, advertisement, word of mouth, season, news in the media, life event like a birth, the first job, moving to another place?
  • What is the whole purpose of the call?
  • What are the callers’ main concerns? Are there any misconceptions or incomprehensions about the company’s products or services?
  • What words do the callers use to express their needs?

Do what people do during speed-dating sessions: Focus on the essence. Ask for the top ten questions from customers. Ask for the five things you should know if you had to replace a surrogate user for an afternoon. You’ll see, it works!

Go for the individuals. Don’t, I repeat don’t go for group interviews, the infamous focus groups. Otherwise, you’ll have to deal with strong-minded individuals whose influence biases the group and thus the whole process.

How many surrogate users should you interview? About five per job description. You want a certain degree of repetition among the interviewees to avoid anecdotes or personal perceptions. Because of the speed factor, you can interview up to 12-14 people in a single day, which means more than 60 interviews in a week! Yes, these will be long days. Yes, at a certain point, it will be tedious to hear the same thing over and over again. But that’s the whole point. We want to make sure we have solid data based on facts, not perceptions.

Okay. Now, you have done your 40 interviews. You’re swamped with data. What’s next? Well, all you have to do is:

  1. Extract all the facts that you’ve found.
  2. Write them on sticky notes.
  3. Tag each note appropriately using a word or a symbol. I usually use words like user, goal, trigger, concern, FAQ, love factor, hate factor, and incomprehension. These tags will really help you later on for documentation. You can also use different colored sticky notes for this purpose.
  4. Find patterns and create groups around types of users (a rough sketch of this tagging-and-grouping step, in code, follows this list).
  5. Create some first-version personas and then refine them.
  6. Show and tell everyone about your findings.
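To make the tagging-and-grouping step concrete, here is a minimal sketch of how the harvested facts might be modelled once they leave the sticky notes; the data model and the example facts are hypothetical, not taken from an actual project:

```typescript
// One sticky note = one fact harvested from an interview.
type Tag =
  | "user" | "goal" | "trigger" | "concern"
  | "faq" | "love factor" | "hate factor" | "incomprehension";

interface Note {
  fact: string;         // the observation, in the interviewee's own words where possible
  tag: Tag;             // the same tags used on the physical sticky notes
  interviewee: string;  // which surrogate user it came from
}

// Group notes by tag to surface patterns, e.g. the most frequent triggers or FAQs.
function groupByTag(notes: Note[]): Map<Tag, Note[]> {
  const groups = new Map<Tag, Note[]>();
  for (const note of notes) {
    const bucket = groups.get(note.tag) ?? [];
    bucket.push(note);
    groups.set(note.tag, bucket);
  }
  return groups;
}

// Hypothetical facts from a cable-company call-center interview.
const notes: Note[] = [
  { fact: "Calls spike right after the monthly bill goes out", tag: "trigger", interviewee: "first-line agent 2" },
  { fact: "Customers confuse 'modem' and 'router'", tag: "incomprehension", interviewee: "second-line tech 1" },
  { fact: "Top question: how do I return old equipment?", tag: "faq", interviewee: "first-line agent 4" },
];

for (const [tag, group] of groupByTag(notes)) {
  console.log(`${tag}: ${group.length} note(s)`);
}
```

Repeated facts across five or so interviewees per job description are what turn these groups into defensible proto-personas.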

Designing using facts, not opinions

During the Québec city website redesign (which is not yet online), we interviewed five call-center employees and discovered that citizens interact mostly with city hall for:

  • their home (garbage collection and recycling, permits, taxes)
  • their street (parking, lighting, pavement and road works, snow removal)
  • public services schedules (library, swimming pools, skating rinks, etc.)

Knowing these interactions really helped us focus on what citizens really need. Garbage collection by definition is not very sexy, but when 30% of the calls are about this topic (based on interviews and call log analysis), it becomes clear that the city website has to address this subject before anything else!

Call-center personnel also told us that citizens always ask the same four questions about a topic:

  1. How to get a service from the city? They want to know the procedure (for example, how to get rid of an old sofa).
  2. When is it going to be done? What is the schedule?
  3. How much does it cost?
  4. Who’s in charge?

Having this information helped us design page templates with placeholders answering those four questions. It’s that simple and straightforward.
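As an illustration, such a template might be modelled roughly as follows; the field names and example values are invented for this sketch and are not actual Québec City content:

```typescript
// A page template for a city-service topic, with a placeholder for each of the
// four questions citizens always ask.
interface ServiceTopicPage {
  topic: string;        // e.g. "Getting rid of an old sofa"
  howTo: string;        // the procedure: how to get the service
  schedule: string;     // when it will be done
  cost: string;         // how much it costs
  responsible: string;  // who's in charge
}

const oldSofaPickup: ServiceTopicPage = {
  topic: "Getting rid of an old sofa",
  howTo: "Put the item at the curb and register the pickup online or by phone.",
  schedule: "Bulky-item collection runs on a published neighbourhood schedule.",
  cost: "Check the current fee schedule; some pickups are free.",
  responsible: "The city's waste collection service.",
};
```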

Conclusion

Knowing, I mean really knowing your users has great benefits. Your design will be based on facts, not on suppositions or false perceptions.

Knowing your users means that you’ll spend money on what users really need, NOT on what you suppose they would need or like. It usually leads to simpler solutions. Having facts reduces those never-ending discussions where everybody has his own solution based on his own personal needs and preferences. It has been said before but I’ll say it again: We, the designer and the client, are NOT the users.

Get out of your cubicle. Get out of your meeting room. Go and get those surrogate users and know as much as you can about your users. You’ll see: Your users are not who you think they are.

Cues, The Golden Retriever

Written by: Jamie Owen

In every waking moment, our brains are processing the stimuli in our environment and responding, consciously and unconsciously, to what is going on around us. This may mean something simple like stopping automatically at a crosswalk based on the color of the traffic signal. Or it may mean something more deliberate, like deciding to turn left after orienting yourself by reading a street sign.

Both consciously and unconsciously, we also make decisions while interacting in an onscreen environment. We move automatically during routine tasks and through familiar interfaces. But what do we do when the interaction onscreen requires a very deliberate and thoughtful interaction—how do we determine the correct response to the stimulus? We need cues to help us draw from our experience and carry out an acceptable response. Cues are like little cognitive helper elves who prompt us toward a suitable interaction, reminding us of what goes where, when, and how. Cues can be singular reminders, like a string tied around your finger, or they can be contextual reminders, like remembering that you also need carrots when you are shopping for potatoes and onions in a supermarket.

When we’re arranging content and designing interactions for the onscreen environment, providing cues for users helps them interact more effectively and productively. Increased customer satisfaction, job performance, e-commerce, safety, and cognitive efficacy rely on deliberate interaction with the technology and thus easily benefit from the smart use of cues.

I’d like to frame a discussion of cues by touching on a mixture of topics including memory, a few theories from cognitive psychology, and multimedia research. It may get a little dry, but stick with me. The integration of these three areas not only affects how information is encoded and retrieved, it influences how and when cues might best be used.

Remembering Memory

Let’s refresh your memory on the topic of memory—stuff you probably already know. This is the foundation of how and why cuing is effective.

First, there’s the idea of encoding and retrieval (or recall). Encoding is converting information into a form usable in memory. And we tend to encode only as much information as we need to know. This is a safety valve for over-stimulation of the senses as well as a way of filtering out what we don’t need for later retrieval. Retrieval is bringing to mind for specific use a piece of information already encoded and stored in memory.

Memory is generally labeled long-term memory and short-term memory (or working memory, in cognitive psychology parlance). Our working memory holds a small amount of information for about 20 seconds for the purpose of manipulation—deciding what to do with sensory input from one’s environment or with an item of information recently retrieved from long-term memory. The familiar rule is that humans have the capacity to hold seven items (plus or minus two) in working memory. In contrast, long-term memory is considered limitless and information is stored there indefinitely. Information from working memory has the potential to become stored in long-term memory.

The Integration of Multimedia and Memory

Ingredient 1
By its nature, interaction in an onscreen environment can be considered multimedia. At the very least, visual elements (images, application windows, the cursor, etc.) are combined with verbal elements (semiotics, language, aural narration, etc). These are called modalities and they are processed differently in the human mind using different neurological channels: this process is called dual coding and it’s when images and words create separate representations for themselves in the brain[3]. This is important because cues unique to a given modality can be used to better retrieve information originally processed with that modality. For example, color coding the shapes of the states on a map as red or blue helps us store for later recall the political leanings of a given state—the shape of the state triggers our remembering the color.

In a “real world” environment, stimuli from the visual and verbal modalities (among others) guide the way we interact with that environment—influencing our working memory and long-term memory. These stimuli can get to be a lot of work for the little grey cells and it helps when the two modalities share the load—the cognitive load—of processing information. The same is true for the onscreen environment as well.

Ingredient 2
Cognitive load[1] describes the tasks imposed on working memory by information or stimuli from the environment, in our case the onscreen environment. How much information can be retained in working memory—how much can we encode before our working memory is full and new information has no place to go? And if it escapes working memory, chances are slim that the information will make it into long-term memory.

So what happens when a modality is limited by cognitive load? In short, the working memory gets full fast. Encoding, cuing, and retrieval are affected. The interaction onscreen impacts the encoding necessary for later recall, particularly when different modalities are vying for attention. A limited working memory makes it difficult to absorb multiple modes of information simultaneously[2].

But if the modalities complement one another, more information can be processed when they work in tandem than would be possible using a single modality. A large body of research exploring the use of multimedia and computers yields a couple of useful general guidelines:

  1. When presenting information onscreen, text and visuals are not as effective as seeing visuals and hearing narration.
  2. If text is the chosen way to convey verbal information, it should be in close proximity to the visual element it is related to (like labels on a map).

A big no-no is narration which is redundant to the text visible onscreen. This is a bad practice because the brain works too hard mediating continuity between the two cognitive channels; the reader is distracted from the content because of the mechanics of constant comparison of text and voice. It actually detracts from successful encoding. Naturally, if the encoding is faulty any use of cues used for later recall of that information is compromised.

Cuing

Okay, now let’s look at cuing a bit more closely. The idea of cues and cuing comes from a theory more formally known as encoding specificity, pioneered by Endel Tulving. Memories are retrieved from long-term memory by means of retrieval cues; a large number of memories are stored in the brain and are not currently active, but a key word or visual element might instantly call up a specific memory from storage. In addition, the most effective retrieval cues are those stimuli stored concurrently with the memory of the experience itself[5]. (This implies that most cues are external to the individual and we’ll accept this characteristic for the sake of this discussion.) Citing a popular example, the words “amusement park” might not serve to retrieve your memory of a trip to Disneyland because during your visit you didn’t specifically think of it as an amusement park. You simply thought of it as “Disneyland.” So the word “Disneyland” is the cue that retrieves the appropriate gleeful memory from all the other memories warehoused in your brain.

It’s important to note two chief categories of cues—discrete or contextual. In other words, it may be that a user is being asked to respond directly to an onscreen prompt, or she may be interacting with the technology in a certain way because of the elements present in her onscreen setting. Most of us are probably familiar with the Visio interface and can recognize it instantly. When we’re working in it, we automatically use its features without thinking about the act of using its features. When concentrating on a project, we grab an item from a stencil, move it onto the workspace, size it, label it, etc. We don’t use Visio to try to re-sample a photograph’s resolution or check a hospital patient’s vital signs—we “remember” that Visio is capable of certain functionality because of the cues surrounding us in the Visio environment. This is an example of contextual cuing.

Reminiscing about Disneyland is one thing, but some tasks and interactions require more cognitive load to complete and the cues should be employed appropriately. For example, onscreen controls for a large piece of machinery, one which is dangerous when used incorrectly, require an operator’s focused attention. Cues provided in such an onscreen environment need to be deliberate and explicit. For example, a large red stop sign icon appears onscreen to warn the operator that he has forgotten a safety procedure.

External cues such as work environment, physical position, or teaming around a table may also affect interaction onscreen. If we anticipate the physical environment in our designs, we can control the cues onscreen to accommodate the users in that environment. In our large machinery example, perhaps onscreen cues are related to observing its movement or the sounds it makes. Or if crucial interaction needs to take place in a busy or noisy environment, like punching your numbers into an ATM, discrete and/or contextual cues which accommodate that external environment appear onscreen.

Cues also need to be salient and germane—they need to have meaning and relevance appropriate for the situation, task, or environment. They need to fit into the schema[4] of the interaction. Schema can also be regarded as a semantic network[6], where information is held in networks of interlinking concepts; it’s linked to related things that have meaning to us. Tim Berners-Lee says “a piece of information is really defined only by what it’s related to, and how it’s related.” So naturally the cue that recalls such a piece of information will need to be related to it, too.

The use of meaningful cues is tied to how memory functions. Memory is bolstered when its meaning is more firmly established by linking it to related things. This is because it’s less work for the short-term memory to plug new information into an existing schema: if the new information is encoded relative to its context, the cue that retrieves the information should also be related to that context. A rather glib example might be memorizing several new varieties of wine using colored grape icons to represent different flavors. When recalling those wines, cues in the form of smiling farm animals would do no good in helping you select a wine that goes well with spaghetti.

Humans are fallible, though, and sometimes even the best thought-out cues may not be effective. For example, if the context or subject matter is unfamiliar, cues which rely on it will not be helpful. In fact, sometimes the context is so unfamiliar that cues are not recognized for what they are; if information is not recognized as relevant or meaningful, it will be disregarded. People are better at recalling information that fits into their own existing schemas. There’s a semantic network unique to each of us. Fortunately, Tulving (1974) assures us, “even when retrieval of a target event in the presence of the most ‘obvious’ cue fails, there may be other cues that will provide access to the stored information” (p. 75). One preventative measure against designing ineffective cues is a thorough usability study. Or we may provide cues that address more than one modality. Each situation is as unique as its context, so it’s not possible to make recommendations here; the issue of ineffective cues can arise and it is important for us to acknowledge the risk (and any potential fallout!).

One general prescription for the symptom of ineffective cues is to provide the cue immediately before the desired recall, either immediately preceding interaction or positioned near the recall artifact (e.g., password field or bank account number field). In other words, cues need to prime the information they are designed to help retrieve. Another strategic method of cuing is pattern completion—the ability to recall complete memory from partial cues. The simple act of grouping items may be a sufficient retrieval cue. It may even help establish a context or schema for the user, thus increasing the subsequent effectiveness of your cuing system.

Related form and function in the onscreen environment can also act as cues. Context dependent menus are a perfect example of this, like the grouping of drawing tools in Word. The four-sided icon represents the function for drawing boxes. The same icon indicates very different functions in other Word tool palettes (or in other applications)—the user doesn’t have to remember exactly what each of the four-sided icons does: their context is the cue for reminding the user of their function. An easy text-based example might be placing an arts festival event with an ambiguous title in the same column onscreen that lists similar events.

Jason Withrow’s B&A article Cognitive Psychology & IA: From Theory to Practice explores this idea in greater detail.

Another cuing strategy is one mentioned above in passing, the use of mixed-modality cues. This strategy draws on the advantages of splitting the cognitive load between two encoding systems[2][3]. Cues for one modality can be presented in another modality if the original encoding matches that set-up (i.e., an image-text mix is the cue for recall of the same image-text mix). A perfect example is discussed in Ross Howard’s article on what he terms ambient signifiers. Audio is piped over the PA of a large transportation network. Each train station in a large city has a unique audio melody associated with it. As Howard points out, not only is the destination station’s audio a cue to get off the train, the commuters memorize the melody for the station prior to their destination, priming them for their actual destination. This is an interesting example because it also takes into account the environment in which the stimulus-response cue is introduced. With preoccupied or bored daily commuters crowding onto a train stopping at homogenous-looking stations, what cues might help them successfully get home? The computer game Myst used a similar technique by using sound cues to help the intrepid player solve puzzles.

But what happens when elements of the onscreen environment are really similar (or ubiquitous)? Our brains err toward efficiency: events and elements that are similar are generally encoded in terms of their common features rather than their distinctive characteristics. This is great for helping us fold new information into existing schemas and contexts. But it interferes when the IAs and designers need the user to distinguish between the similar events or elements. This situation is described in the interference theory, which states that the greater the similarity between two events, the greater the risk of interference. So it becomes a balancing act: maintain continuity across the interactive environment while at the same time establish a distinction between elements you want the user to retain. Something as simple as color-coding might be a means of distinguishing information onscreen. Position may be another. Think of a process being taught or conveyed on a training website, a process whose stages have big bold numbers respectively highlighted across the top or side of an interface. Not only does this help with chunking (breaking the information into digestible bits to avoid an unreasonable cognitive load), but when enacting the process later, like on a factory floor, it’s easier to visualize the numbers and remember the correct procedure.

Two notable phenomena are related to using onscreen position as a cuing strategy. The primacy effect is the increased tendency to recall information committed to memory first, and the recency effect means that items memorized last are also easier to recall. This may influence how the information is organized on a web page and how the cues might be used. (By the way, recency items fade sooner than primacy items do.) One example might be a corporate intranet website with crucial information buried in a feature article. If you place that information in a single-sentence synopsis at the top of the home page, you may plant the important points more permanently than forcing readers to sift through the longer article. Any cues related to that information will likely be more effective.

Philosophy from 10,000 Feet Up

There’s a Chinese proverb that says “the palest ink is better than the sharpest memory.” I include this proverb because the palest ink serves as metaphor for how even the most understated of cues employed in an onscreen environment can be an effective recall or feedback strategy. And this strategy nurtures the perception that the computing technology is in concord with what is natural for the human user.

It’s been encouraging to watch the evolution of computing technology move away from forcing the human user to adapt to its form, function, architecture, and singularity. The continued momentum toward a more human-centered, ubiquitous interaction environment is encouraging. Humans are very dependent on the dynamics of stimulus-response cues in their natural environment; it’s important to establish a similar dynamic as we take part in designing interaction within their technological environment. The conscientious use of cues is not a panacea, of course. Because the use of cues onscreen mirrors the common stimulus-response paradigm which humans are used to in the natural world, however, it’s one of the more effective tools we can use when we design interactions.

References

fn1. Sweller, J., & Chandler, P. (1994). “Why some material is difficult to learn.” Cognition and Instruction 12(3): 185-233.

fn2. Mayer, R. E., & Moreno, R. (2003). “Nine Ways to Reduce Cognitive Load in Multimedia Learning.” Educational Psychologist 38(1): 43-52.

fn3. Paivio, A. (1986). “Dual coding theory.” Mental representations; a dual coding approach. New York, Oxford University Press: 53-83.

fn4. Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals and understanding; An inquiry into human knowledge structures. Hillsdale, NJ, Lawrence Erlbaum Associates.

fn5. Tulving, E. (1974). “Cue-dependent forgetting.” American Scientist 62(1): 74-82.

fn6. Collins, A. M., & Quillian, M. R. (2004). “The structure of semantic memory.” In Douglass Mook (ed.) Classic experiments in psychology. Westport, Conn.: Greenwood Press: 209-216.

Enhancing Dashboard Value and User Experience

Written by: Joe Lamantia

This article is the fifth in a series sharing a design framework for dashboards and portals.

Part 1 of this series, The Challenge of Dashboards and Portals, discussed the difficulties of creating effective information architectures for portals, dashboards and tile-based information environments using only flat portlets, and introduced the idea of a system of standardized building blocks that can effectively support growth in content, functionality, and users over time. In enterprise and other large scale social settings, using such standardized components allows for the creation of a library of tiles that can be shared across communities of users.

Part 2 of the series, Introduction to the Building Blocks, outlined the design principles underlying the building block system and the simple guidelines for combining blocks together to create any type of tile-based environment.

Part 3 of the series, Building Block Definitions (Containers), described the Container components of the Building Block system in detail.

Part 4 of the series, Connectors for Dashboards and Portals, described the Connector components of the Building Block system in detail.

In Part 5, we look at ways to enhance the long-term value and user experience quality of portals created with the building blocks by encouraging portability and natural patterns of dialog and interaction around aggregated content.

For the reader’s convenience, this article is divided into the following sections:

A Portal Design Vision: Two-Way Experiences

Recommendations

Metadata

Presentation Standards and Recommendations

Manage Functionality By Creating Groups

Enterprise 2.0 and the Social Portal

A Portal Design Vision: Two-Way Experiences

Portals gather and present content from a wide variety of sources, making the assembled items and streams more valuable for users by reducing the costs of content discovery and acquisition. By placing diverse content into close proximity, specialized forms of portals, such as the dashboard, support knowledge workers in creative and interpretive activities including synthesis, strategy formulation, decision making, collaboration, knowledge production, and multi-dimensional analysis.

At heart, however, aggregation is a one-way flow. In the aggregation model common to many portals, content is collected, organized, and perhaps distributed for use elsewhere, but nothing returns via the same channels. Savvy users quickly see that the greatest value of aggregative experiences and tools lies in their potential contributions to two-way flows. They understand that experiences capable of engaging direct and indirect audiences transform portal and dashboard content into a broadly useful resource for communities of much greater scope and impact. Further, business staff and IT users comfortable in the new world of Enterprise 2.0, DIY / mashups and shadow IT now often create their own information technology solutions, assembling services and tools from many sources in new ways that meet their individual needs.

Accordingly, portal designers should create experiences that support increased discussion, conversation, dialog and interaction, and allow for the potential value of remixing content in innovative ways. We might summarize a broad design vision for two-way portals that synthesizes these audiences, environmental factors and imperatives as follows:

  • Provide users with rich contextual information about the origin and nature of dashboard or portal content; context is crucial, especially in a fragmented and rapidly moving enterprise environment.
  • Improve the quality and consistency of the user experience of aggregated content.
  • Improve the portability of content, making it useful outside the boundaries of the dashboard.
  • Allow dashboard users to take advantage of other tools available outside the immediate boundaries of the portal.

In practical terms, this means providing two-way channels that make it easy to share content with others or even "take it with you" in some fashion. The building block framework is ideal as a robust foundation for the many kinds of tools and functionality – participatory, social, collaborative – that support the design vision of two-way flows within and outside portal boundaries.

Recommendations

Based on this vision and my experience with the long-term evolution and usage of many portals, I recommend five ways to enhance two-way capabilities and the overall quality of user experiences designed with the building blocks framework:

  1. Define standardized Convenience functionality that could apply to all blocks. This will provide a baseline set of common capabilities for individual blocks such as export of Container content and printing.
  2. Define Utility functionality offered at the Dashboard or Dashboard Suite level. This captures common productivity capabilities for knowledge workers, linking the dashboard to other enterprise resources such as calendars and document repositories.
  3. Define common metadata attributes for all Container blocks, to support administration and management needs.
  4. Define presentation standards that appropriately balance flexibility with consistency, both within Container blocks and across the user experience.
  5. Define user roles and types of blocks or content to allow quick management of items and functionality in groups.

As with the rest of the building blocks design framework, these recommendations are deliberately neutral in terms of business components and processes, technology platforms, development frameworks (Ruby, AIR, Silverlight, etc.), and design methods. They describe capabilities and / or functionality that design, business, and technology decision-makers can rely on as a common language when deciding together what a given portal or dashboard must accomplish, and how it should do so. (Besides allowing extension and reuse of designs, neutrality is consistent with the principles of Openness, Independence, Layering, and Portability that run throughout the building blocks system.)

Convenience Functionality

Convenience functions make it easier for users to work with the content of individual Container blocks. Good examples of Convenience functionality include printing the contents of Containers for use outside the Dashboard, or subscribing to an RSS feed that syndicates a snapshot of the contents of a block. Convenience functionality is associated with a single Container, but is not part of the content of the Container.

This collection is a suggested set of Convenience functionality meant to help establish a baseline that you can adapt to the particular needs of your users. Assign Convenience functions to individual blocks as appropriate for circumstances and as endorsed by users, business sponsors, and technologists. Some of these features make sense at all levels of the block hierarchy, and some do not (how would one print an entire Dashboard in a way that is useful or readable?).
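As an illustration only – the building block framework itself is deliberately technology-neutral – a Convenience function attached to a Tile might be modelled roughly like this in TypeScript; all names here are hypothetical:

```typescript
// A Convenience function: associated with a single Container block, but not part of its content.
interface ConvenienceFunction {
  id: string;                 // e.g. "print", "export-csv", "subscribe-rss"
  label: string;              // text shown in the Tile's utility menu
  group:                      // shorthand for the five groups listed below
    | "context" | "portability" | "display" | "awareness" | "social";
  execute(container: ContainerBlock): void;
}

// Minimal view of a Container block (Tile) for this sketch.
interface ContainerBlock {
  id: string;
  title: string;
  contentHtml: string;
  convenience: ConvenienceFunction[]; // assigned per block, as endorsed by users and sponsors
}

// Example: a print function assigned to an individual Tile.
const printTile: ConvenienceFunction = {
  id: "print",
  label: "Print this tile",
  group: "portability",
  execute(container) {
    // A real portal would open a print-friendly rendering of the Tile's content here.
    console.log(`Printing contents of tile "${container.title}"`);
  },
};
```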

The collection is broken into five groups:

  1. Understanding Content Sources and Context
  2. Making Dashboard Content Portable
  3. Controlling the User Experience
  4. Staying Aware of Changes / Subscriptions
  5. Social and Collaborative Tools

The illustration below shows Convenience functionality associated with a Tile.


Figure 1: Tile Convenience Functionality (By Group)

Group 1: Understanding Content Sources and Context

Preserving an accurate indication of the source of each block’s content is critical for the effective use of heterogeneous offerings. Dashboards that syndicate Tiles from a library of shared assets may contain conflicting information from different sources, so users must have an indication of the origin and context of each block. (Wine connoisseurs use the term "terroir.")

Show detailed source information for a block. For business intelligence and data content, the source information commonly includes the origin of the displayed data in terms of operating unit, internal or external system (from partners or licensed feeds), its status (draft, partial, production, audited, etc.), the time and date stamp of the data displayed, the update or refresh cycle, and the time and date of the next expected refresh.

For widgets, web-based applications, and content that takes the form of transactional functionality such as productivity or self-service applications delivered via an intranet or web-service, source information commonly includes the originating system or application, its operating status (up, down, relevant error messages), and identifying information about the group, operator, or vendor providing the functionality.
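A rough sketch of what such source information could look like if captured as block metadata; the field names are illustrative, not part of the framework:

```typescript
// Source / context metadata for a data-driven Tile, per the description above.
interface DataSourceInfo {
  origin: string;                    // operating unit, internal system, partner or licensed feed
  status: "draft" | "partial" | "production" | "audited";
  dataTimestamp: string;             // time and date stamp of the data on display (ISO 8601)
  refreshCycle: string;              // e.g. "hourly" or "daily at 06:00"
  nextRefresh?: string;              // time and date of the next expected refresh
  ownerContact?: string;             // email of the source-system or data owner
}

// Source / context metadata for widgets or transactional functionality.
interface FunctionalSourceInfo {
  originatingSystem: string;         // application or service providing the functionality
  operatingStatus: "up" | "down" | "degraded";
  statusMessage?: string;            // relevant error messages, if any
  provider: string;                  // group, operator, or vendor responsible
}
```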

Send email to source system owner / data owner. This allows portal users to directly contact the "owners" of a content source. In enterprises with large numbers of internal data and functionality sources that frequently contradict or qualify one another, the ability to ask clarifying questions and obtain additional or alternative content can be critical to making effective use of the content presented within the Dashboard.

Show performance data and metrics. If standard performance data and measurements such as key performance indicators (KPIs) or balanced scorecards (which have risen and fallen out of favor over the past five years) affect or determine the contents of a block, presenting them readily at hand is good practice.

Such performance indicators might take the form of KPIs or other formally endorsed metrics, and require:

  • Showing displayed KPIs
  • Showing supporting KPIs (rolled up or included in the summary KPI on display)
  • Showing related KPIs (parallels by process, geography, industry, customer, etc.)
  • Showing dependent KPIs (to illuminate any "downstream" impact)

For performance indicators defined by number and name—perhaps they are recognized and used across the enterprise or operating unit as a comparative baseline, or for several different measurement and assessment goals—provide this important contextual information as well.

Show related documents or assets. Whether automated via sophisticated information management solutions or collected by hand, related documents and assets increase the range and applicability of dashboard content. Bear in mind that less is often more in a world drowning in electronic assets and information.

Show source reports or assets. If the contents of a block are based on an existing report, then providing direct access to that item—bypassing document repositories, collaboration spaces, or file shares, which often have terrible user experiences and searching functions—can be very valuable for dashboard users.

Show related blocks. In large portals or Dashboards that aggregate Tiles from many different sources—perhaps several Tile Libraries—providing navigable links to related Pages or Sections of the Dashboard increases the density and quality of the connections between pieces of content. Whether mapped by hand or automated, these links can further enhance the value of the dashboard by exposing new types of relationships between informational and functional content not commonly placed in proximity in source environments.

Search for related items and assets. If individual Container blocks carry attached metadata, or metadata is available from the contents of the block, search integration could take the form of pre-generated queries using terms from local or enterprise vocabularies, directed against specifically identified data stores.

Group 2: Making Dashboard Content Portable

These capabilities enhance the portability of content, supporting the two-way communication and social flows that make content so useful outside the boundaries of the dashboard. The items below include several of the most useful and commonly requested portability measures (a sketch of how they might be exposed follows the list):

  • Print contents of block
  • Email contents of block (HTML / text)
  • Email a link to block
  • Create a .pdf of block contents
  • Create a screenshot / image of block contents
  • Download contents of block (choose format)
  • Save data used in block (choose format)
  • Download source report (choose format)
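
One way to keep these measures consistent across a Tile Library is to treat them as a small, uniform surface that every Container block exposes. The TypeScript sketch below is a hedged illustration, not a prescribed API; the method names and format options are assumptions.

    // Hypothetical sketch of a uniform portability surface for Container blocks.
    type ExportFormat = 'pdf' | 'csv' | 'xls' | 'html' | 'png';

    interface BlockPortability {
      print(): void;
      emailContents(recipient: string, as: 'html' | 'text'): Promise<void>;
      emailLink(recipient: string): Promise<void>;
      exportContents(format: ExportFormat): Promise<Blob>;   // .pdf, screenshot / image, etc.
      saveData(format: 'csv' | 'xls' | 'xml'): Promise<Blob>;
      downloadSourceReport(format: ExportFormat): Promise<Blob>;
    }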

Group 3: Controlling the User Experience

Individual blocks may offer users the ability to change their on-screen layout, placement, or stacking order, collapse them to smaller sizes, or possibly activate or deactivate them entirely. If designers have defined standard display states for Containers (see Presentation Standards and Recommendations below), blocks may also allow users to customize the display state:

  • Change layout or position of block on screen
  • Collapse / minimize or expand block to full size
  • Change display state of block
  • Deactivate / shut off or activate / turn on block for display

Group 4: Staying Aware of Changes / Subscriptions

Aggregation models lower information discovery and acquisition costs, but do not obviate the costs of re-finding items, and do little to help users manage flows and streams of content that change frequently. Many portals and dashboards aim to enhance users’ awareness and make monitoring the status of complex organizations and processes simpler and easier. This group includes functionality allowing users to subscribe to content through delivery channels such as RSS or to receive notices when dashboard content changes (a brief sketch of a subscription model follows the list):

  • Send email on block change (optionally including contents)
  • Subscribe to RSS feed of block changes (optionally including contents)
  • Subscribe to SMS message on block change
  • Send portal Page on block change
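
A subscription can be modeled as a standing rule that maps a block change to a delivery channel. The sketch below is a minimal TypeScript illustration; the channel names, fields, and notify routine are hypothetical.

    // Hypothetical sketch: a subscription maps a block to a delivery channel.
    type Channel = 'email' | 'rss' | 'sms' | 'portal-page';

    interface Subscription {
      blockId: string;
      channel: Channel;
      includeContents: boolean;   // optional for email and RSS notices
      address: string;            // email address, feed endpoint, or phone number
    }

    // Called whenever a block's content is refreshed.
    function notifySubscribers(blockId: string, subs: Subscription[]): void {
      for (const sub of subs.filter(s => s.blockId === blockId)) {
        // Dispatch to the appropriate channel; delivery details are out of scope here.
        console.log(`notify ${sub.address} via ${sub.channel}` +
                    (sub.includeContents ? ' (with contents)' : ''));
      }
    }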

Group 5: Social and Collaborative Tools

This group includes features and functions that engage colleagues and others through explicitly social mechanisms. Introducing such mechanisms and capabilities into traditionally one-way dashboard and portal experiences can dramatically enhance the value and impact of dashboard content.

When designed properly and supported by adoption and usage incentives, social mechanisms can encourage rapid but nuanced and sophisticated interpretation of complex events in large distributed organizations. Social functions help preserve the insight and perspective of a diverse community of users, an intangible appreciated by many global enterprises.

Annotate block. Annotation allows contributors to add an interpretation or story to the contents of a block. Annotation is typically preserved when blocks are syndicated or shared because annotations come from the same source as the block content.

Comment on block. Commentators can provide a locally useful interpretation for a block originating from elsewhere. Comments are not always portable, or packaged with a block, as they do not necessarily originate from the same context, and their relevance will vary.

Tag blocks. Tagging with either open or predetermined tags can be very useful for discovering unrecognized audiences or purposes for block content, and quickly identifying patterns in usage that span organizational boundaries, functional roles, or social hierarchies.

Share / recommend blocks to person. Combined with presence features, sharing can speed decision-making and the growth of consensus.

Publish analysis / interpretation of block content. Analysis is a more thorough version of annotation and commenting, which could include footnoting, citations, and other scholarly mechanisms.

Publish contents of block. Publishing the contents of a block to a team or enterprise wiki, blog, collaboration site, or common destination can serve as a communication vehicle, and lower the opportunity costs of contributing to social or collaborative tools.

Rate block. Rating blocks and the ability to designate favorites is a good way to obtain quick feedback on the design / content of blocks across diverse sets of users. In environments where users can design and contribute blocks directly to a Tile Library, ratings allow collective assessment of these contributions.

Send contents of block to person (with comment). Sending the contents of a block – with or without accompanying commentary – to colleagues can increase the speed with which groups or teams reach common points of view.  This can also provide a useful shortcut to formal processes for sharing and understanding content when time is important, or individual action is sufficient.

Send link to block to person (with comment). Sending a link to a block – with or without accompanying commentary – to colleagues can increase the speed with which groups or teams reach common points of view.  This can also provide a useful shortcut to formal processes for sharing and understanding content when time is important, or individual action is sufficient.

Commenting and annotation, coupled with sharing the content that inspired the dialog as a complete package, were the most requested social capabilities among users of many of the large enterprise dashboards I have worked on.

Stacking Blocks

Some combinations of Convenience functionality will make more sense than others, depending on the contents of blocks, their purpose within the larger user experience, and the size of the blocks in the stacking hierarchy (outlined in Part 2). Figure 2 illustrates a Page composed of several sizes of Containers, each offering a distinct combination of Convenience functionality.


Figure 2: Combinations of Functionality

Convenience…or Connector Component?

Several of the Connector components (described in Part 4 of this series) – especially the Control Bar and the Geography Selector – began life as examples of Convenience functionality. Over the course of many design projects, these pieces were used so frequently that their forms standardized, and they merited independent recognition as defined building blocks. (The change is a bit like receiving a promotion.)

With sustained use of the blocks framework, it’s likely that designers will identify similar forms of Convenience functionality that deserve identification as formal building blocks, which can then be put into the library of reusable design assets. This is wholly consistent with the extensible nature of the blocks system, and I encourage you to share these extensions!

Utility Functionality

Utility functionality enhances the value of content by offering enterprise capabilities such as calendars, intranet or enterprise searching, and colleague directories, within the portal or dashboard setting. In practice, Utility functionality offers direct access to a mixed set of enterprise resources and applications commonly available outside portal boundaries in a stand-alone fashion (e.g. in MS Outlook for calendaring).

Common Utility functions include:

  • Team or colleague directories
  • Dashboard, intranet or enterprise searching
  • Dashboard personalization and customization
  • Calendars (individual, group, enterprise)
  • Alerting
  • Instant messaging
  • Corporate blogs and wikis
  • Licensed news and information feeds
  • RSS aggregators
  • Attention streams
  • Collaboration spaces and team sites
  • Profile management
  • Document repositories
  • Mapping and geolocation tools
  • Business intelligence tools
  • Supply Chain Management (SCM), Enterprise Resource Planning (ERP), and Customer Relationship Management (CRM) solutions

My Experience or Yours?

One important question designers must answer is where and how portal users will work with Utility functionality: within the portal experience itself or within the user experience of the originating tool? Or as a hybrid of these approaches?

Enterprise productivity tools and large software packages such as CRM and ERP solutions often provide consumable services via Service-Oriented Architecture (SOA) or Application Programming Interfaces (APIs), as well as their own user experiences (though they may be terrible). The needs and goals of users for your portal may clearly indicate that the best presentation of Utility functionality syndicated from elsewhere is to decompose the original experiences and then integrate these capabilities into your local portal user experience. Enterprise tools often come with design and administration teams dedicated to supporting them, teams which represent significant investments in spending and credibility. Carefully consider the wider political ramifications of local design decisions that affect branding and ownership indicators for syndicated Utility functionality.
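
As a sketch of the "decompose and integrate" approach, the TypeScript below wraps a service consumed from an enterprise tool in a local Tile while keeping source attribution visible. The endpoint, field names, and markup are hypothetical placeholders, not the API of any particular CRM or ERP product.

    // Hedged sketch: wrap a service syndicated from an enterprise tool in a local Tile,
    // integrating it into the portal experience while preserving source attribution.
    interface SyndicatedUtilityTile {
      title: string;
      sourceSystem: string;               // branding / ownership indicator for the source
      render(data: unknown): string;      // local presentation of the decomposed service
    }

    async function loadUtilityTile(endpoint: string, tile: SyndicatedUtilityTile): Promise<string> {
      const response = await fetch(endpoint);   // consume the exposed service
      const data = await response.json();
      // Integrate into the local experience, but keep the source visible.
      return `<section>
        <h3>${tile.title} <small>(source: ${tile.sourceSystem})</small></h3>
        ${tile.render(data)}
      </section>`;
    }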


Figure 3: Local vs. Source Experiences

Metadata

In portals and dashboards, aggregation often obscures origins, and content may appear far outside the boundaries of its original context and audiences. The Convenience and Utility functionality suggested above is generally much easier to implement and manage with the assistance of metadata that addresses the dashboard or portal environment.

The attributes suggested here establish a starting set of metadata for Container blocks managed locally, or as part of a Tile Library syndicated across an enterprise. The goal of this initial collection is to meet common administrative and descriptive needs, and establish a baseline for future integration metadata. These attributes could be populated with carefully chosen values from a series of managed vocabularies or other metadata structures, or with socially applied metadata provided by users as tags, keywords, facets, etc. A sketch of these attributes gathered into a single structure follows the three lists below.

Administrative Attributes:

  • Security / access level needed for content
  • System / context of origin for content
  • System / context of origin contact
  • Data lifecycle / refresh cycle for content
  • Most recent refresh time-date
  • Effective date of data
  • Block version #
  • Block release date

Structural Attributes:

  • Container blocks stacked in this block
  • Crosswalk Connectors present within block
  • Contextual Crosswalk Connectors present within block

Descriptive Attributes:

  • Title
  • Subtitle
  • Subject
  • Audience
  • Format
  • Displayed KPIs (defined by number / name)
  • Supporting KPIs (defined by number / name)
  • Related KPIs (defined by number / name)
  • Related Documents / Assets
  • Source Report / Assets
  • Related Blocks
  • Location
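
The sketch below gathers these attributes into a single TypeScript structure for illustration. The property names are assumptions and would normally be aligned with whatever vocabularies, registries, or metadata repositories the enterprise already maintains.

    // Hypothetical sketch of the starting metadata set for a Container block.
    interface ContainerMetadata {
      // Administrative
      securityLevel: string;
      originSystem: string;
      originContact: string;          // e.g. email of the source system / data owner
      refreshCycle: string;           // data lifecycle / refresh cycle
      lastRefreshed: Date;
      effectiveDate: Date;
      blockVersion: string;
      releaseDate: Date;

      // Structural
      stackedBlocks: string[];        // Container blocks stacked in this block
      crosswalkConnectors: string[];
      contextualCrosswalks: string[];

      // Descriptive
      title: string;
      subtitle?: string;
      subject: string[];
      audience: string[];
      format: string;
      displayedKpis: string[];        // defined by number / name
      supportingKpis: string[];
      relatedKpis: string[];
      relatedAssets: string[];
      sourceReports: string[];
      relatedBlocks: string[];
      location?: string;
    }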

Metadata Standards

The unique needs and organizational context that drive the design of many portals often necessitate the creation of custom metadata for each Tile Library or pool of assets. However, publicly available metadata standards could serve as the basis for dashboard metadata. Dublin Core, with a firm grounding in the management of published assets, offers one useful starting point. Depending on the industry and domain of the dashboard's users, system-level integration with enterprise vocabularies or public dictionaries may be appropriate. Enterprise taxonomies and ontologies, as well as metadata repositories or registries, could supply many of the metadata attributes and values applied to building blocks.
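
As a small illustration of that starting point, several of the descriptive attributes above map naturally onto Dublin Core elements. The mapping below is a sketch under that assumption, not a formal Dublin Core application profile.

    // Sketch: mapping local descriptive attributes to Dublin Core elements.
    // Illustrative only; a real profile would be agreed with the metadata owners.
    const dublinCoreMapping: Record<string, string> = {
      title:         'dc:title',
      subtitle:      'dc:description',
      subject:       'dc:subject',
      audience:      'dcterms:audience',
      format:        'dc:format',
      originSystem:  'dc:source',
      lastRefreshed: 'dc:date',
      location:      'dc:coverage',
    };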

Presentation Standards and Recommendations

Visual Design and Style Guidelines, Page Layouts, Grid Systems

The neutrality of the building blocks framework allows architects and designers tremendous flexibility in defining the user experience of a dashboard or portal. The system does not specify any rules for laying out Pages, defining grid systems, or applying design styles or guidelines. Responsibility for these design questions should devolve to the local level and context; the architects and designers working on a given user experience must make these critical decisions.

Standards for Containers and Connectors. One of the paramount goals for the building blocks system is to minimize the presence of unneeded user experience elements (no excess chrome for designers to polish!), and maintain the primacy of the content over all secondary parts of the dashboard experience. Even so, aspects of the building blocks themselves will be a direct part of the user experience. Thus setting and maintaining standards for those aspects of Containers and Connectors that are part of the user experience is essential.

The many renderings and examples of Tiles and other components seen throughout this series of articles show a common set of standards that covers:

  • Location and relationship of Tile components (Tile Body, Tile Header, Tile Footer)
  • Placement of Convenience functionality
  • Placement of Utility functionality
  • Treatment of Connector components
  • Boundary indicators for Tiles and Containers
  • Boundary indicators for mixed content (block and free-form)

Figure 4 shows one set of standards created for the Container and Connector components of an enterprise dashboard.


Figure 4: Presentation Standards for Containers and Connectors

This is a starting set of elements that often require design standards. Architects and designers working with the building blocks will need to decide which block elements will be part of the user experience, and create appropriate standards. (If using lightweight and modular user experience development approaches, relying on standards and structured components, it’s possible to effect quick and easy design iteration and updates.)
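
To make the standards above concrete, a Tile can be expressed as a small structural template with named regions, so the Tile Header, Tile Body, Tile Footer, and the placement of Convenience functionality stay consistent wherever the Tile appears. The TypeScript sketch below is one possible rendering; the region names and CSS classes are assumptions.

    // Hedged sketch: a standardized Tile rendered as a structural template.
    // Region names and CSS classes are assumptions for illustration.
    interface Tile {
      header: string;                 // Tile Header: title, source, KPI identifiers
      body: string;                   // Tile Body: the content itself
      footer: string;                 // Tile Footer: refresh date, related links
      convenienceMenu: string[];      // Convenience functions attached to this block
    }

    function renderTile(tile: Tile): string {
      return `<article class="tile">
        <header class="tile-header">${tile.header}
          <nav class="tile-convenience">${tile.convenienceMenu.join(' | ')}</nav>
        </header>
        <div class="tile-body">${tile.body}</div>
        <footer class="tile-footer">${tile.footer}</footer>
      </article>`;
    }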

Standards For Content Within Containers. Setting standards and defining best practices for layout, grid systems, and visual and information design for the contents of Container blocks will increase the perceived value of the dashboard or portal. In the long term, offering users a consistent and easy-to-understand visual language throughout the user experience helps brand and identify Tile-based assets that might be syndicated or shared widely. A strong and recognized brand reflects well on its originators. Figure 5 shows example standards for chart content in Container blocks.


Figure 5: Presentation Standards for Chart Content in Container Blocks

Standards For Mixed Building Block and Freeform Content. Setting standards for layouts, grid systems, and information design for the freeform content that appears mixed with or between Containers makes sense when the context is known. When the eventual context of use is unknown, decisions on presentation standards should devolve to those designers responsible for managing the local user experiences.

Container States

The core principles of openness and portability that run throughout the building blocks framework mean the exact context of use and display setting for any given block is difficult for designers to predict. Defining a few (three or four at the most) different but standardized presentation states for Containers in a Tile Library can help address the expected range of situations and user experiences from the beginning, rather than on an ad-hoc basis. This approach is much cheaper over the long-term, when considered for the entire pool of managed Tiles or assets.

Since the on-screen size of any element of the user experience is often a direct proxy for its anticipated value and the amount of attention designers expect it to receive, each standard display state should offer a different combination of more or less content, tuned to an expected context. Using a combination of business rules, presentation logic, and user preferences, these different display states may be invoked manually (as with Convenience functionality) or automatically (based on the display agent or surrounding Containers), allowing adjustment to a wide range of user experience needs and settings. In practice, states are most commonly offered for Tiles and Tile Groups, but could apply to the larger Containers with greater stacking sizes, such as Views, Pages, and Sections.

One of the most commonly used approaches is to assume that a Container will appear most often in a baseline or normal state in any user experience, and that all other states cover a sliding scale of display choices ranging from the greatest possible amount of content to the least. The four states described below represent gradations along this continuum.

Normal state is the customary presentation / display for a Container, the one users encounter most often.

Comprehensive state is the most inclusive state of a Container, offering a complete set of the contents, as well as all available reference and related information or Containers, and any socially generated content such as comments, annotations, and collective analyses. Figure 6 shows a Tile in comprehensive display state.


Figure 6: Tile: Comprehensive Display

Summary state condenses the block’s contents to the most essential items, for example showing a single chart or measurement. The summary state hides any reference and related information, and places any socially generated content such as annotation or comments in the background of the information landscape. Figure 7 shows a Tile in summary display state.


Figure 7: Tile: Summary Display

Snapshot state is the most compact form of a Container block, offering a thumbnail that might include only the block’s title and a single highly compressed metric or sparkline. Snapshot states often represent the Container in discovery and administrative settings, such as in search experiences, in catalogs of assets in a Tile Library, or in dashboard management interfaces. Figure 8 shows a Tile in snapshot display state.


Figure 8: Tile: Snapshot Display
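
One way to make the four display states concrete is to model them as a small enumeration plus a rule for choosing among them. The TypeScript sketch below is a minimal illustration; real selection logic would combine business rules, presentation logic, and user preferences, and the width thresholds shown here are arbitrary.

    // Sketch: the four standard display states as a simple model.
    type DisplayState = 'normal' | 'comprehensive' | 'summary' | 'snapshot';

    interface StateContext {
      userPreference?: DisplayState;  // manual choice made via Convenience functionality
      availableWidth: number;         // proxy for the display agent / surrounding Containers
    }

    // Pick a display state: user preference wins, otherwise fall back on available space.
    function chooseDisplayState(ctx: StateContext): DisplayState {
      if (ctx.userPreference) return ctx.userPreference;
      if (ctx.availableWidth < 200) return 'snapshot';
      if (ctx.availableWidth < 400) return 'summary';
      return 'normal';
    }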

Convenience and Utility Functionality

New platforms such as Adobe Integrated Runtime (AIR) and Microsoft Silverlight, and the freedom afforded by Asynchronous JavaScript and XML (AJAX) and Rich Internet Application (RIA) based experiences in general, offer too many possible display and interaction behaviors to discuss in detail here.

Accordingly, I suggest designers keep the following principles in mind when defining the interactions and presentation of Convenience and Utility functionality:

  • Convenience functionality is meant to improve the value and experience of working with individual blocks.
  • Utility functionality addresses the value and experience of the portal as a whole.
  • Convenience functionality is less important than the content it enhances.
  • Convenience functionality is always available, but may be in the background.
  • Utility functionality is always available, and is generally in the background.
  • Convenience functionality does not replace Utility functionality, though some capabilities may overlap.
  • Usability and user experience best practices strongly recommend placing Convenience functionality in association with individual blocks.
  • Usability and user experience best practices strongly recommend presenting Utility functionality in a way that does not associate it with individual Container blocks.

Manage Functionality By Creating Groups

Most users will not need the full set of Convenience and Utility functionality at all times and across all Tiles and types of Container blocks. Usage contexts, security factors, or content formats often mean smaller subsets of functionality offer the greatest benefits to users. To keep the user experience free from the visual and cognitive clutter of unneeded functionality, and to make management easier, I recommend designers define groups of functionality, users, and content. Create groups during the design process, so these constructs are available for administrative use as soon as the portal is active and available to users. A brief configuration sketch of such bundles follows the recommendations below.

Other recommendations include:

  • Define bundles of Convenience and Utility functionality appropriate for different operating units, business roles and titles, or access levels of users.
  • Allow individual users to select from bundles of Convenience and Utility functionality. Customization commonly appears in a profile management area.
  • Create roles or personas for dashboard users based on patterns in content usage, and match roles with relevant and appropriate functionality bundles.
  • Define types of user accounts based on personas, or usage patterns and manage functionality at the level of account type.
  • Define types of Tiles or Containers based on content (informational, functional, transactional, collaborative, etc.). Apply bundles of Convenience functionality to all the Tiles or Containers of a given type.
  • Define standard levels of access for social features and functionality based on sliding scales of participation or contribution: read, rate, comment, annotate, write, edit, etc. Manage access to all social functions using these pre-defined standard levels.
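
A bundle can be as simple as a named list of functions mapped to a role or account type. The configuration sketch below is illustrative only; the role names and function identifiers are assumptions.

    // Hypothetical sketch: bundles of functionality mapped to roles or account types.
    type FunctionId =
      | 'print' | 'email-link' | 'export-pdf' | 'subscribe-rss'
      | 'comment' | 'annotate' | 'tag' | 'rate' | 'share';

    interface FunctionalityBundle {
      name: string;
      functions: FunctionId[];
    }

    const bundles: Record<string, FunctionalityBundle> = {
      analyst:   { name: 'Analyst',   functions: ['print', 'export-pdf', 'subscribe-rss', 'comment', 'annotate', 'tag'] },
      executive: { name: 'Executive', functions: ['email-link', 'subscribe-rss', 'rate', 'share'] },
      readOnly:  { name: 'Read only', functions: ['print', 'email-link'] },
    };

    // Resolve the functions visible to a user by account type.
    function functionsForAccountType(accountType: string): FunctionId[] {
      return bundles[accountType]?.functions ?? [];
    }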

Larger portals may warrant the creation of a dedicated administrative interface. The building blocks make it easy to define an administrative console accessible via a Page or Section apparent only to administrators.

Enterprise 2.0 and the Social Portal

Portals and dashboards that augment one-way aggregation of information with Convenience and Utility functionality can offer diverse and valuable content to savvy users – customers who expect Enterprise 2.0, Web 2.0, and social software capabilities from all their experiences and tools. As these recommendations demonstrate, the building blocks can serve as an effective design framework for portals that serve as two-way destinations.

Many of these recommended Convenience and Utility capabilities now come "out of the box" in portal or dashboard platforms, and the interactions that make them available to users follow standard behaviors in the resulting user experiences. When first identified as valuable for users (almost five years ago), these capabilities almost universally required teams to invest considerable amounts of time and money in custom design, development, and integration efforts. Thankfully, that is no longer the case.

Part Six of this series will explore how the Building Blocks framework solved recurring problems of growth and change for a series of business intelligence and enterprise application portals.  We will review the evolution of a suite of enterprise portals constructed for users in different countries, operating units, and managerial levels of a major global corporation.

Personas and the Role of Design Documentation

Written by: Andrew Hinton

In User Experience Design circles, personas have become part of our established orthodoxy. And, as with anything orthodox, some people disagree on what personas are and the value they bring to design, and some reject the doctrine entirely.

I have to admit, for a long time I wasn’t much of a believer. Of course I believed in understanding users as well as possible through rigorous observation and analysis; I just felt that going to the trouble of "creating a persona" was often wasted effort. Why? Because most of the personas I’d seen didn’t seem like real people as much as caricatured wishful thinking.

Even the personas that really tried to convey the richness of a real user were often assimilated into market-segment profiles — smiling, airbrushed customers that just happened to align with business goals. I’d see meeting-room walls and PowerPoint decks decorated with these fictive apparitions. I’m ashamed to say, even I often gave in to the illusion that these people — like the doe-eyed "live callers" on adult phone-chat commercials — just couldn’t wait for whatever we had to offer.

More often than not, though, I’d seen hard work on personas delivered in documentation to others downstream, where they were discussed for a little while during a kick-off meeting, and then hardly ever heard from again.

Whenever orthodoxy seems to be going awry, you can either reject it, or try to understand it in a new light. One way to do the latter is to look into its history and understand where it came from to begin with. As with so much dogma, there is often a great original idea that, over time, has been codified into ritual, losing much of its original context.

The Origin of Personas

When we say "persona", designers generally mean some methodological descendant of the work of Alan Cooper. I remember when I first encountered the idea on web-design mailing lists in 1999. People were arguing over what personas were about, and what was the right or wrong way to do them. All most people had to go on was a slim chapter in Cooper’s "The Inmates are Running the Asylum" and some rudimentary experience with the method. You could see the messy work of a community hammering out their consensus. It was as frustrating as it was interesting.

Eventually, practitioners started writing articles about the method. So, whenever I was asked to create personas for a project, I’d go back and read some of the excellent guides on the Cooper website and elsewhere that described examples and approaches. As a busy designer, I was essentially looking for a template, a how-to guide with an example that I could just fill in with my own content. And that’s natural, after all, since I was "creating a persona" to fulfill the request for a kind of deliverable.

It wasn’t until later that Alan Cooper himself finally posted a short essay on "The Origin of Personas." For me it was a revelation. A few paragraphs of it are so important that I think they require quoting in full:

I was writing a critical-path project management program that I called “PlanIt.” Early in the project, I interviewed about seven or eight colleagues and acquaintances who were likely candidates to use a project management program. In particular, I spoke at length with a woman named Kathy who worked at Carlick Advertising. Kathy’s job was called “traffic,” and it was her responsibility to assure that projects were staffed and staffers fully utilized. It seemed a classic project management task. Kathy was the basis for my first, primitive, persona.

In 1983, compared to what we use today, computers were very small, slow, and weak. It was normal for a large program the size of PlanIt to take an hour or more just to compile in its entirety. I usually performed a full compilation at least once a day around lunchtime. At the time I lived in Monterey California, near the classically beautiful Old Del Monte golf course. After eating, while my computer chugged away compiling the source code, I would walk the golf course. From my home near the ninth hole, I could traverse almost the entire course without attracting much attention from the clubhouse. During those walks I designed my program.

As I walked, I would engage myself in a dialogue, play-acting a project manager, loosely based on Kathy, requesting functions and behavior from my program. I often found myself deep in those dialogues, speaking aloud, and gesturing with my arms. Some of the golfers were taken aback by my unexpected presence and unusual behavior, but that didn’t bother me because I found that this play-acting technique was remarkably effective for cutting through complex design questions of functionality and interaction, allowing me to clearly see what was necessary and unnecessary and, more importantly, to differentiate between what was used frequently and what was needed only infrequently.

If we slow down enough to really listen to what Cooper is saying here, and unpack some of the implications, we’re left with a number of insights that help us reconsider how personas work in design.

1. Cooper based his persona on a real person he’d actually met, talked with, and observed.
This was essential. He didn’t read about "Kathy" from a market survey, or from a persona document that a previous designer (or a separate "researcher" on a team) had written. He worked from primary experience, rather than re-using some kind of user description from a different project.

2. Cooper didn’t start with a "method" — or especially not a "methodology"!
His approach was an intuitive act of design. It wasn’t a scientific gathering of requirements and coolly transposing them into a grid of capabilities. It came from the passionate need of a designer to really understand the user — putting on the skin of another person.

3. The persona wasn’t a document. Rather, it was the activity of empathetic role-play.
Cooper was telling himself a story, and embodying that story as he told it. The persona was in the designer, not on paper. If Cooper had created a document, it would’ve been a description of the persona, not the persona itself. Most of us, however, tend to think of the document — the paper or slide with the smiling picture and smattering of personal detail — as the persona, as if creating the document is the whole point.

4. Cooper was doing this in his "spare time," away from the system, away from the cubicle.
His slow computer was serendipitous — it unwittingly gave him the excuse to wander, breathe and ruminate. Hardly the model of corporate efficiency. Getting away from the office and the computer screen were essential to arriving at his design insights. Yet, how often do you see design methods that tell you to get away from the office, walk around outside and talk to yourself?

5. His persona gained clarity by focusing on a particular person — "Kathy".
I wonder how much more effective our personas would be if we started with a single, actual person as the model, and were rigorous about adding other characteristics — sticking only to things we’d really observed from our users. Starting with a composite, it’s too easy to cherry-pick bits and pieces to make a Frankenstein Persona that better fits our preconceptions.


The biggest insight I get from this story? Personas are not documents, and they are not the result of a step-by-step method that automagically pops out convenient facsimiles of your users. Personas are actually the designer’s focused act of empathetic imagination, grounded in first-hand user knowledge.

It’s not about the documents

Often when people talk about “personas” they’re really talking about deliverables: documents that describe particular individuals who act as stand-ins or ‘archetypes’ of users. But in his vignette, Cooper isn’t using personas for deliverables — he’s using them for design.

Modern business runs on deliverables. We know we have to make them. However, understanding the purposes our deliverables serve can help us better focus our efforts.

Documentation serves three major purposes when designing in the modern business:

1. Documentation as a container of knowledge, to pour into the brains of others.

By now, hopefully everyone reading this knows that passing stages of design work from one silo to the next simply doesn’t work. We all still try to do it, mainly because of the way our clients and employers are organized. As designers, though, we often have to route around the silo walls. Otherwise, we risk playing a very expensive version of "whisper down the lane," the game you play as kids where the first kid whispers something like "Bubble gum is delicious" into another’s ear, and by the end of the line it becomes "Double dump the malicious."

Of course there are some kinds of information you can share effectively this way, but it’s limited to explicit data — things like world capitals or the periodic table of elements. Yet there are vast reservoirs of tacit knowledge that can be conveyed only through shared experience.

If you’ve ever seen the Grand Canyon and tried to explain it to friends back home, you know what I mean. You’d never succeed with a few slides and bullet points. You’d have to sit down with them and — relying on voice, gesture and facial expression — somehow convey the canyon’s unreal scale and beauty. You’d have to essentially act out what the experience felt like to you.

And even if you did the most amazing job of describing it ever, and had your friends nearly mirroring your breathless wonderment, their experience still wouldn’t come close to seeing the real thing.

I’m not saying that a persona description can’t be a useful, even powerful, tool for explaining users to stakeholders. It can certainly be highly valuable in that role. I’m only saying that if you’re doing personas only for that benefit, you’re missing the point.

2. Documentation as a substitute for physical production.

Most businesses still run on an old industrial model based on production. In that model, there’s no way to know if value is being created unless there are physical widgets coming off of a conveyor belt — widgets you can track, count, analyze and hold in your hand.

In contrast, knowledge work – and especially design – has very little actual widget-production. There is lots of conversation, iteration, learning, trying and failing, and hopefully eventual success. Design is all about problem solving, and problems are stubbornly unmeasurable — a problem that seems trivial at the outset turns out to be a wicked tangle that takes months to unravel, and another that seemed insurmountable can collapse with a bit of innovative insight.

Design is messy, intuitive, and organic. So if an industrial-age management structure is to make any sense of it (especially if it’s juicing a super-hero efficiency approach like Six-Sigma), there has to be something it can track. Documents are trackable, stackable, and measurable. In fact, the old "grade by weight" approach is often the norm — hence the use of PowerPoint for delivering paper documents attenuated over two hundred bulleted slides, when the same content could’ve fit in a dozen pages using a word processor. The rule seems to be that if the research and analysis fill a binder that’s big enough to prop your monitor to eye level, then you must’ve done some excellent work.

In the pressure to create documents for the production machine, we sap energy and focus away from designing the user experience. Before you know it, everything you do — from the interviews and observations, to the way you take notes and record things, the way you meet and discuss them after, and the way you write your documentation — all ends up being shaped by the need to produce a document for the process. If your design work seems to revolve mainly around document deadlines, formatting, revision and delivery, stop a moment and make sure you haven’t started designing documents for stakeholders at the expense of designing experiences for users.

Of course, real-world design work means we have to meet the requirements of our clients’ processes. I would never suggest that we all stop delivering such documentation.

Part of the challenge of being a designer in such a context is keeping the industrial beast happy by feeding it just enough of what it expects, yet somehow keeping that activity separate from the real, dirty work of experiencing your users, getting them under your skin, and digging through design ideas until you get it right.

3. Documentation as an artifact of collaborative work and memory.

While the first two uses are often necessary, and even somewhat valuable, this third use of documentation is the most effective for design — essentially a sandbox for collaboration.

These days, because systems tend to be more interlinked, pervasive and complex, we use cross-disciplinary teams for design work. What happened in Cooper’s head on the golf course now has to somehow happen in the collective mind of a group of practitioners; and that requires a medium for communication. Hence, we use artifacts — anything from whiteboard sketches to clickable prototypes.

The artifacts become the shorthand language collaborators use to "speak design" with one another, and they become valuable intuitive reminders of the tacit understanding that emerges in collaborative design.


Because we have to collaborate, the documentation of personas can be helpful, but only as reminders. Personas, as documents, should work for designers the way scent works for memories of your childhood. Just a whiff of something that smells like your old school, or a dish your grandmother used to make, can bring a flood of memory. Such a tool can be much more efficient than having to re-read interview transcripts and analysis documents months down the road.

A persona document can be very useful for design — and for some teams even essential. But it’s only an explicit, surface record of a shared understanding based on primary experience. It’s not the persona itself, and doesn’t come close to taking the place of the original experience that spawned it.

Without that understanding, the deliverables are just documents, empty husks. Taken alone, they may fulfill a deadline, but they don’t feed the imagination.

Playing the role

About six months ago, my thoughts about this topic were prompted by a blog post from my colleague Antonella Pavese. In her post, she mentions the point Jason Fried of 37 Signals makes in +Getting Real+ that, at the end of the day, we can only design for ourselves. This seems to fly in the face of user-centered design orthodoxy – and yet, if we’re honest, we have to realize the simple scientific fact that we can’t be our users, we can only pretend to be. So what do we do, if we’re designing something that doesn’t have people just like us as its intended user?

Antonella mentions how another practitioner, Casey Malcolm, says to approach the problem:

To teach [designers] how to design usable products for an older population, for example, don’t tell designers to take in account seniors’ lower visual acuity and decreased motor control. Let young designers wear glasses that impair their visual acuity. Tie two of their fingers together, to mimic what it means to have arthritis or lower motor control."

Antonella goes on:

So, perhaps Jason Fried is completely on target. We can only design for ourselves. Being aware of it, making it explicit can make us find creative ways of designing for people who are different from us… perhaps we need to create experience labs, so that for a while we can live the life of the people we are designing for."

At UX Week in Washington, DC this summer, Adaptive Path unveiled a side project they’d been working on — the Charmr, a new design concept for insulin pumps and continuous monitors that diabetics have to constantly wear on their bodies. In order to understand what it was like to be in the user’s skin, they interviewed people who had to use these devices, observed their lives, and ruminated together over the experience. Some of the designers even did physical things to role-play, such as wearing objects of similar size and weight for days at a time. The result? They gained a much deeper feel for what it means to manage such an apparatus through the daily activities the rest of us take for granted — bathing, sleeping, playing sports, working out, dancing, everything.


One thing a couple of the presenters said really struck me — they said they found themselves having nightmares that they’d been diagnosed with diabetes, and had to manage these medical devices for the rest of their lives. Just think — immersing yourself in your user’s experience to the point that you start having their dreams.

The team’s persona descriptions weren’t the source of the designers’ empathy — that kind of immersion doesn’t happen from reading a document. Although the team used various documentation media throughout their work – whiteboards and stickies, diagrams and renderings – these media furthered the design only as ephemeral artifacts of deeper understanding.

And that statement is especially true of personas. They’re not the same as market segmentation, customer profiling or workflow analysis, which are tools for solving other kinds of problems. Neither do personas fit neat preconceptions, use-cases or demographic models, because reality is always thornier and more difficult. Personas aren’t ornaments that make us more comfortable about our design decisions. They should do just the opposite — they may even confound and bedevil us. But they can keep us honest. Imagine that.

References:

  • Alan Cooper, “The Origin of Personas”:http://www.cooper.com/insights/journal_of_design/articles/the_origin_of_personas_1.html
  • Jason Fried, “Ask 37 Signals: Personas?”:http://www.37signals.com/svn/posts/690-ask-37signals-personas
  • Antonella Pavese, “Get real: How to design for the life of others?”:http://www.antonellapavese.com/archive/2007/04/249/
  • Dan Saffer, “Charmr: How we got involved”:http://www.adaptivepath.com/blog/2007/08/14/charmr-how-we-got-involved/

_*Author’s Note:* In the months since the first draft of this article, spirited debate has flared among user-experience practitioners over the use of personas. We’ve added a few links to some of those posts below, along with links to the references mentioned in the piece. I’d also like to thank Alan Cooper for his editorial feedback on my interpretation of his Origins article._

  • Peter Merholz, “Personas 99% bad?”:http://www.peterme.com/?p=624
  • Joshua Porter, “Personas and the advantage of designing for yourself”:http://bokardo.com/archives/personas-and-the-advantage-of-designing-for-yourself/ and “Personas as tools”:http://bokardo.com/archives/personas-as-tools/
  • Jared Spool, “Crappy personas vs. robust personas”:http://www.uie.com/brainsparks/2007/11/14/crappy-personas-vs-robust-personas/ and “Personas are NOT a document”:http://www.uie.com/brainsparks/2008/01/24/personas-are-not-a-document/
  • Steve Portigal, “Persona Non Grata”:http://interactions.acm.org/content/?p=262, interactions, January/February 2008