Through social psychology and cognitive science, we now know a great deal about our own frailties in the way that we seek, use, and understand information and data. On the web, user interface design may work to either exacerbate or counteract these biases. This article will give a brief overview of the science then look at possible ways that design and implementation can be employed to support better judgements.
Fast and slow cognitive systems: How we think
If you are even remotely interested in psychology, you should read (if you haven’t already) Daniel Kahneman’s masterwork “Thinking, Fast and Slow.”1 In it, he brings together a mass of findings from his own and others’ research into human psychology.
The central thesis is that there are two distinct cognitive systems: a fast, heuristic-based and parallel system, good at pattern recognition and “gut reaction” judgements, and a slower, serial, and deliberative system which engages more of the processing power of the brain.
We can sometimes be too reliant on the “fast” system, leading us to make errors in distinguishing signal from noise. We may incorrectly accept hypotheses on a topic, and we can be quite bad at judging probabilities. In some cases we overestimate the extent of our own ability to exert control over events.
The way of the web: What we’re confronted with
We are increasingly accustomed to using socially-oriented web applications, and many social features are high on the requirements lists of new web projects. Because of this, we need to be more aware of the way people use social interface cues and how or when these can support good decision-making. What we do know is that overreliance on some cues may lead to suboptimal outcomes.
Social and informational biases
Work with ecommerce ratings and reviews has noted the “bandwagon” effect, where any item with a large number of reviews tends to be preferred, often when there is little knowledge of where the positive reviews come from.2 A similar phenomenon is the “Matthew” effect (“whoever has, shall be given more”), where items or users with a large number of up-votes will tend to attract more up-votes, regardless of the quality of the item itself.3
Coupled with this is an “authority” effect, whereby any apparent cue to authenticity or expertise on the part of the publisher is quickly accepted as a sign of credibility. But users may be poor at distinguishing genuine from phony authority cues, and both types may be overridden by the stronger bandwagon effect.
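The rich-get-richer dynamic behind the Matthew effect is easy to demonstrate with a small simulation (a hypothetical sketch, not drawn from the studies cited): each new upvote is allocated in proportion to existing upvotes, with item quality playing no role at all.

```python
import random

def simulate_matthew_effect(n_items=10, n_votes=1000, seed=42):
    """Polya-urn-style simulation: each new upvote goes to an item
    with probability proportional to its current upvote count."""
    random.seed(seed)
    votes = [1] * n_items  # every item starts with a single upvote
    for _ in range(n_votes):
        winner = random.choices(range(n_items), weights=votes)[0]
        votes[winner] += 1
    return votes

votes = simulate_matthew_effect()
total = sum(votes)
shares = sorted((v / total for v in votes), reverse=True)
print(shares)  # typically a few early leaders capture most of the votes
```

Running this repeatedly with different seeds shows how arbitrary the eventual “winners” are: identical items end up with wildly unequal vote counts purely through early luck.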
A further informational bias known as the “filter bubble” phenomenon has been much publicized and can be examined through user behavior or simple link patterns. Studies of linking between partisan political blogs, for instance, may show few links between the blogs of different political parties. The same patterns are true in a host of topic areas. Our very portals into information, such as the first page of a Google search, may only present the most prevalent media view on a topic and lack the balance of alternative but widely-held views.4
Extending credibility and capability through the UI (correcting for “fast” cognitive bias)
Some interesting projects have started to look at interface “nudges” which may encourage good information practice on the part of the user. One example is the use of real-time usage data (“x other users have been viewing this for xx seconds”), which may, by harnessing social identity, extend the time users spend interacting with an item of content, since there is clear evidence of others’ behavior.
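Behind such a cue sits nothing more exotic than an expiring counter of recent viewer “heartbeats.” The sketch below is illustrative only (the class and method names are my own, not from any cited system):

```python
import time

class ViewerCounter:
    """Tracks how many distinct users have pinged an item recently,
    so the UI can show a cue like '12 others are viewing this'."""

    def __init__(self, window_seconds=30):
        self.window = window_seconds
        self.last_seen = {}  # user_id -> timestamp of last heartbeat

    def heartbeat(self, user_id, now=None):
        """Record that a user is currently viewing the item."""
        self.last_seen[user_id] = now if now is not None else time.time()

    def active_count(self, now=None):
        """Count users whose last heartbeat falls inside the window."""
        now = now if now is not None else time.time()
        self.last_seen = {u: t for u, t in self.last_seen.items()
                          if now - t <= self.window}
        return len(self.last_seen)

counter = ViewerCounter(window_seconds=30)
counter.heartbeat("alice", now=100.0)
counter.heartbeat("bob", now=110.0)
print(counter.active_count(now=120.0))  # → 2 (both pinged within 30s)
print(counter.active_count(now=135.0))  # → 1 (alice's ping has expired)
```

The client would send a heartbeat every few seconds while the content is on screen; the window size trades freshness of the cue against tolerance for dropped pings.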
Another finding from interface research is that the way the user’s progress is presented can influence their willingness to entertain different hypotheses or reject currently held ones.5
The mechanism at work here may be similar to that found in a study of the deliberative online application ConsiderIt. Here, there was a suggestion that users will seek balance when their progress display clearly indicates they have neglected a particular side of a debate: human nature abhors an empty box!6
In online reviews, much work is going on to detect and remove spammers and gamers and to provide better quality heuristic cues. Amazon now shows verified reviews; any means of validating a reviewer’s qualifications helps prevent raw review counts from misleading users.
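One simple way to make validation count is to weight ratings by whether the purchase was verified. This is a sketch under assumed weights, not Amazon’s actual method:

```python
def weighted_rating(reviews, verified_weight=1.0, unverified_weight=0.3):
    """Average star rating in which verified-purchase reviews count
    more heavily than unverified ones. The weights are illustrative."""
    weighted_sum = 0.0
    weight_total = 0.0
    for stars, verified in reviews:
        w = verified_weight if verified else unverified_weight
        weighted_sum += w * stars
        weight_total += w
    return weighted_sum / weight_total if weight_total else 0.0

# Three glowing unverified reviews vs. one critical verified one
reviews = [(5, False), (5, False), (5, False), (2, True)]
print(round(weighted_rating(reviews), 2))  # → 3.42, vs. a naive average of 4.25
```

The point is not the particular weights but that the heuristic cue shown to users (the star average) can be made to reflect review quality, not just review quantity.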
To improve quality in collaborative filtering systems, it is important to understand that early postings have a temporal advantage. Later postings may be more considered, better argued, and evidence-based, yet never make the big time because they miss out on the collective attention and early upvotes.
In any sort of collaborative resource, ways to highlight good quality new entries and rapid risers are important, whether this is done algorithmically or through interface cues. It may also be important to encourage users to contribute to seemingly “old” items, thereby keeping them fresh or taking account of new developments and alternatives. On Stack Overflow, for instance, badges such as “Necromancer” exist to encourage users to contribute to old threads.
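One widely used algorithmic answer to both problems — the head start of early postings and the need to surface rapid risers — is to decay scores by age. This is a sketch of the general gravity-style technique, not any particular site’s formula; the constants are illustrative:

```python
def decayed_score(upvotes, age_hours, gravity=1.8):
    """Gravity-style ranking: votes divided by a power of age, so a
    recent item with few votes can outrank an old item with many."""
    return upvotes / (age_hours + 2) ** gravity

# An old item with ten times the votes still ranks below a fresh riser.
old_hit = decayed_score(upvotes=200, age_hours=48)
new_riser = decayed_score(upvotes=25, age_hours=2)
print(old_hit < new_riser)  # → True
```

Raising the gravity exponent favors freshness more aggressively; lowering it lets accumulated votes dominate for longer.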
Designing smarter rather than simpler
We know that well-presented content and organized design makes information appear more credible. Unfortunately, this can also be true when the content itself is of low quality.
Interaction time and engagement may actually increase when information is slightly harder to decipher or digest. This suggests that simplifying content is not always desirable if we are designing for understanding over and above mere speedy consumption.
Sometimes, perhaps out of fear of high bounce rates, we ignore the fact that we can afford to lose a percentage of users if those who stay are motivated to really engage with our content. In that case, the level of detail needed to support this deeper interaction must be there.
Familiarity breeds understanding
Transparency about the social and technical mechanics of an interface is very important. “Black boxing” user reputation or content scoring, for instance, makes it hard for us to judge how useful it is to decision-making. Hints and help text can be used to educate users about the mechanics behind the interface. In the Amazon example above, for instance, a verified purchase is defined separately, but not linked to the label in the review itself.
Where there is abuse of a system, users should be able to understand why and how it is happening and undo anything that they may have inadvertently done to invite it. In the case of the “like farming” dark pattern on Facebook, it took a third party to explain how to undo rogue likes, information that should have been available to all users.
There is already evidence that expert users become more savvy in their judgement through experience. Studies of Twitter profiles have, for instance, noted a “Goldilocks” effect, where excessively high or low follower/following numbers are treated with suspicion, but numbers more in the middle are seen as more convincing.7 Users have come to associate such profiles with more meaningful and valued content.
In conclusion: Do make me think, sometimes
In dealing with information overload, we have evolved a set of useful social and algorithmic interface design patterns. We now need to understand how these can be tweaked or applied more selectively to improve both the quality of the user experience and the quality of the interaction outcomes themselves. Where possible, the power of heuristics may be harnessed to guide the user rapidly from A to B. But in some cases this is undesirable, and we should look instead at how to engage more of the mind’s deliberative power.
Do you have examples of interface innovations that are designed either to encourage “slow” engagement and deeper consideration of content, or to improve on the quality of any “fast” heuristic cues? Let me know through the comments.
1 Kahneman D. Thinking, fast and slow. 1st ed. New York: Farrar, Straus and Giroux; 2011.
2 Sundar SS, Xu Q, Oeldorf-Hirsch A. Authority vs. peer: how interface cues influence users. Proceedings of CHI 2009. New York, NY: ACM; 2009.
3 Paul SA, Hong L, Chi EH. Who is Authoritative? Understanding Reputation Mechanisms in Quora. 2012. http://arxiv.org/abs/1204.3724.
4 Simpson TW. Evaluating Google as an Epistemic Tool. Metaphilosophy 2012;43(4):426-445.
5 Jianu R, Laidlaw D. An evaluation of how small user interface changes can improve scientists’ analytic strategies. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY: ACM; 2012.
6 Kriplean T, Morgan J, Freelon D, Borning A, Bennett L. Supporting Reflective Public Thought with ConsiderIt. Proceedings of CSCW 2012. 2012.
7 Westerman D, Spence PR, Van Der Heide B. A social network as information: The effect of system generated reports of connectedness on credibility on Twitter. Computers in Human Behavior 2012;28(1):199-206.
Dear Paul: Great article and a clever title. Your point about nudges to “encourage good information practice on the part of the user” is well taken.
1. A colleague and I designed a complex UI for a manager-level view. To improve the quality of fast, heuristic cues, we added numbers to each tab in a series so Tab A (4), Tab B (8), etc… Innovative? No. But, in the context of this internal-facing app, one of a series of dramatic improvements to an app that was, originally, cumbersome at best. This feature and the manager view tested well.
2. As you know, the literature around bias refers to cognitive tripwires as one way to encourage people to slow down when conducting analysis and making decisions. I’ve often wondered about the UI equivalent of a tripwire. Something I’d like to try, where appropriate, would be a humorous message or illustration that would show users what they need before delving into a complex web app (error prevention). Or, perhaps a humorous error message or some other creative way to draw attention to errors (error correction).
Thanks for this article. Useful and enjoyable.