Three Ways to Improve Your Design Research with Wordle

Written by: Jeff Tang

“Above all else show the data.”
–Edward Tufte

Survey responses. Product reviews. Keyword searches. Forums. As UX practitioners, we commonly scour troves of qualitative data for customer insight. But can we go faster than line-by-line analysis? Moreover, how can we provide semantic analysis to project stakeholders?

Enter Wordle. If you haven’t played with it yet, Wordle is a free Java application that generates visual word clouds. It can provide a compelling snapshot of user feedback for analysis or presentation.

Using Wordle for content strategy

Wordle excels at comparing company and customer language. Here’s an example featuring one of Apple’s crown jewels, the iPad. The text below comes from the official iPad Air web page, with common words removed and the remaining words stemmed:

iPad Air Wordle

Apple paints a portrait of exceptional “design” with great “performance” for running “apps.” Emotive adjectives like “incredible,” “new,” and “Smart [Cover]” are thrown in for good measure. Now compare this to customer reviews on Amazon.com:

iPad Air Amazon customer reviews Wordle

To paraphrase Jakob Nielsen, systems should speak the user’s language. And in this case, customers speak more about the iPad’s “screen” and “fast[er]” processor than anything else. Apps don’t even enter the conversation.

A split test on the Apple website might be warranted. Apple could consider talking less about apps, because users may consider them a commodity by now. Also, customer lingo should replace engineering terms. People don’t view a “display,” they look at a “screen.” They also can’t appreciate “performance” in a vacuum. What they do appreciate is that the iPad Air is “faster” than other tablets.

What do your company or clients say on their “About Us,” “Products,” or “Services” web pages? How does it compare to any user discussions?
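
If you want to quantify that comparison rather than eyeball two clouds, a small script can surface the terms customers use far more often than your own copy does. This is a minimal sketch in Python; the function names and the simple frequency-gap approach are my own illustration, not anything Wordle provides:

```python
# Sketch: find the words customers use far more often than the company copy does.
# Supply two plain-text blocks yourself (e.g., product-page copy vs. review text).
from collections import Counter
import re

def word_frequencies(text: str) -> dict:
    """Relative frequency of each word in a block of text."""
    words = re.findall(r"[a-z]+", text.lower())
    total = len(words) or 1
    return {word: count / total for word, count in Counter(words).items()}

def customer_heavy_terms(company_text: str, customer_text: str, top: int = 10) -> list:
    """Words over-represented in customer language relative to company copy."""
    company = word_frequencies(company_text)
    customer = word_frequencies(customer_text)
    gap = {w: customer[w] - company.get(w, 0.0) for w in customer}
    return sorted(gap, key=gap.get, reverse=True)[:top]

# Hypothetical usage:
# print(customer_heavy_terms(ipad_page_copy, amazon_review_text))
```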

Using Wordle in comparative analysis

Wordle can also characterize competing products. For example, take Axure and Balsamiq, two popular wireframing applications. Here are visualizations of recent forum posts from each website. (Again, popular words removed or stemmed.)

Axure Wordle

Balsamiq Wordle

Each customer base employs a distinct dialect. In the first word cloud, Axure users speak programmatically about panels (Axure’s building blocks), widgets, and adaptive design. In the Balsamiq cloud, conversation revolves more simply around assets, text, and projects.

These word clouds also illustrate product features. Axure supports adaptive wireframes; Balsamiq does not. Balsamiq supports Google Drive; Axure does not. Consider using Wordle when you want a stronger and more immediate visual presentation than, say, a standard content inventory.

Beyond comparative analysis, Wordle also surfaces feature requests. The Balsamiq cloud contains the term “iPad” from users clamoring for a tablet version. When reviewing your own Wordle creations, scan for keywords outside your product’s existing features. You may find opportunities for new use cases this way.

Using Wordle in iterative design

Finally, Wordle can compare word clouds over time. This is helpful when you’re interested in trends between time intervals or product releases.

Here’s a word cloud generated from recent Google Play reviews. The application of interest is Temple Run, a game with over 100 million downloads:

Temple Run Wordle

As you can see, players gush about the game. It’s hard to imagine better feedback.

Now let’s look at Temple Run 2, the sequel:

Temple Run sequel Wordle

Still good, but the phrase “please fix” clearly suggests technical problems. A user researcher might examine the reviews to identify specific bugs. When comparing word clouds over time, it’s important to note new keywords (or phrases) like this. These changes represent new vectors of user sentiment.

Wordle comparisons can also be run at fixed time intervals, not just across software versions. Sometimes user tastes and preferences evolve without any prompting.

Summary

Wordle is a heuristic tool that visualizes plain text and RSS feeds, which makes it a convenient way for UX practitioners to evaluate customer feedback. When seen by clients and stakeholders, the immediacy of a word cloud is more compelling than a typical PowerPoint list. However, keep the following in mind when you use Wordle:

  • Case sensitivity. You must normalize your words to lower (or upper) case.
  • Stemming. You must stem any significant words in your text blocks. (A preprocessing sketch that handles both steps follows this list.)
  • Accuracy. You can’t get statistical confidence from Wordle. However, it essentially offers unlimited text input. Try copying as much text into Wordle as possible for best results.
  • Negative phrases. Wordle won’t distinguish positive and negative phrasing. “Good” and “not good” will count as two instances of the word “good.”
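
Because Wordle itself won’t do this cleanup for you, here is a minimal preprocessing sketch, assuming Python with the NLTK library installed; the function name and the choice of stop-word list and stemmer are mine, not part of Wordle. Run your text through it before pasting into Wordle:

```python
# Minimal sketch: lowercase, drop common (stop) words, and stem, so Wordle
# counts word variants together. Assumes the NLTK library is installed.
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)  # needed once

def prepare_for_wordle(raw_text: str) -> str:
    """Return a lowercased, stop-word-free, stemmed stream of words."""
    stop = set(stopwords.words("english"))
    stemmer = PorterStemmer()
    words = re.findall(r"[a-z]+", raw_text.lower())
    return " ".join(stemmer.stem(w) for w in words if w not in stop)

# Hypothetical usage:
# print(prepare_for_wordle(open("reviews.txt").read()))
```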

That’s it. I hope this has been helpful for imagining text visualizations in your work. Good luck and happy Wordling.

Guerrilla Usability at Conferences

Written by: Nick Cawthon

Does your company have display booths at trade shows and conferences? Typically, these are marketing-dominated efforts, but if you can make the case to travel, working the booth can double as user research. Here’s how I’ve done it.

Positioning and justification

At times it can be a hard internal sell to justify the costs and diversions of taking your one- or two-person show on the road, all the while piggybacking off of another department’s efforts. Yet standing on your feet for 12 hours a day doubles as a high-intensity ‘product booth-camp.’ Say what you will about sales folk, but they are well trained in answering any question that comes their way (or finding someone who can). As an in-house UX professional, the more I understand technically about our SaaS product, the more context I have about our users’ needs.

I’ve found that having prospective customers participate in a usability session is a great way to show that we’re taking the time to invest in them and their opinions of the product. As a result, specific features proposed as small sound bites of feedback during these sessions have been rolled into our application in the very next sprint. It shows we’re listening, and it makes a great justification for a follow-up phone call.

Recruiting and screening

To recruit, I scan Twitter to find people who tweet that they are excited about attending the upcoming conference. I cross-reference their Twitter handles with names on LinkedIn to see whether, based on job title and industry, they would be good participants.
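
For bigger conferences, the first pass of that Twitter scan can be scripted. Here is a rough sketch using the tweepy library against the Twitter search API; the hashtag and credentials are placeholders, and the exact API access you get depends on your account, so treat this as an outline rather than a drop-in tool:

```python
# Rough sketch: pull recent tweets about a conference hashtag and list the
# authors' names and bios as a starting point for manual LinkedIn screening.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

tweets = client.search_recent_tweets(
    query="#ExampleConf -is:retweet",  # hypothetical conference hashtag
    tweet_fields=["author_id"],
    max_results=100,
)

author_ids = list({t.author_id for t in (tweets.data or [])})
if author_ids:
    users = client.get_users(ids=author_ids, user_fields=["description"])
    for user in users.data or []:
        # Cross-reference promising names on LinkedIn by job title and industry.
        print(user.username, "-", user.name, "-", user.description)
```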

I reach out to them to see if they’d be willing to sign up for a slot, proposing times between presentation sessions or before/after lunch to not conflict with their conference attendance.

Because the expo halls are generally open the entire day, even if there is no one booked on the calendar in specific spots, I also grab people just milling about to keep the sessions going. If you do this, be sure to quickly do a visual scan of their badge, as you can get a good sense of what they do and what knowledge they might have by where they work.

Booking

For the time bookings, I find that Calendly.com is a flexible, free, user-friendly way to book time slots with random people, using just a URL with no account sign-ups needed. In addition to custom time buckets (18 minutes, anyone?), Calendly also provides the option of a buffer increment after every session, so I can take notes and regroup.

Screen shot of a calendar with appointments booked.
Pick a time, (most) anytime.

Calendly does a good job of reminding participants when to show up and how to find me (all the important things), and it integrates well with all the major calendaring applications.

Come conference time, I have a slate of appointments, contact information, and reminders of when participants are coming. Couldn’t be easier. If expo hall hours change, I can easily message participants to let them know of the reschedule.

Duration

In a normal, controlled setting, I would typically want to go a full hour with a participant to properly delve into the subject matter and go through a number of different tasks and scenarios. “Pick a few and grade on a curve,” as Nielsen once said.

However, with the participant’s attention scattered by the sensory overload of the conference floor, anything more than 20 minutes starts to feel too long. At conferences, you’re going for quantity over quality. An advantage of this staccato method is that when you find a vein of usability you want to explore in further depth and detail, there’s likely another participant right around the corner (either scheduled or random) to confirm or refute that notion.

Script and tone

The main challenge of this technique is that you’re not supposed to ‘sell’ in the role of testing moderator but rather to guide and respond. I wear many hats when working a booth; when not conducting these sessions, I sell the product alongside marketing.

As a result, 90% of the conversations in the booth are indeed sales, and switching roles so quickly is sometimes hard. I try to check myself when the testing script bleeds into ‘did you know that there are these features…’, because after 3+ days and what feels like a thousand conversations, I tend to put my conversations on a programmed sales loop, letting my brain rest a bit by running off a script.

A pre-written task list helps keep me on point as a moderator. However, given the variety of participants, I use the script much more as a guide than a mandate.

As with any usability session, I let participants veer into whatever areas of the app interest them the most and try to bring them back to the main road ever so subtly. With so many participants in such a short period of time, these unintended diversions sometimes become part of the next participant’s testing script, as it is easy to quickly validate or refute any prior assumptions.

Tools

Following the ‘guerrilla gorilla’ theme of this article, I use Silverback for my recording sessions. Silverback is a lightweight UX research tool that is low cost and works very well.

At one event, without my Bluetooth remote to use Silverback’s built-in marker/highlights, I paired an iPhone with an app called HippoRemote. Meant initially to provide playback control for DVR/TV setups, Hippo can also be scripted with custom macros, letting you build integrations with third-party applications.

In the case of integrating with Silverback, this meant Hippo could mark the start of new tasks, highlight sound bites, and start/stop recording–all the things that the Apple Remote should have done natively.

Despite some of the challenges in peripherals, Silverback is absolutely the right tool for the job. It’s lightweight, organized, and marks tasks and highlights efficiently.

Screen grab of the Silverback UI
Silverback UI

I recommend a clip-on microphone or directional mic given the background noise from the conference floor. Any kind of isolation that you can do for the participant’s voice will save you time in the long run, because you won’t have to try to scrub the audio in post-processing. Moving the sessions to somewhere quiet is a hard proposition, as the center of activity is where the impromptu recruitment tends to occur.

Wi-Fi

With a data-intensive SaaS product, the biggest challenge comes when trying to use the conference wi-fi. With attendees swamping the access points, there is no guarantee that I can pair the testing laptop and the iPhone used for marking, because they both need to be on the same network for integration with Silverback.

An ad-hoc network for the Mac won’t work, because I still need web access to use the application. Using my mobile phone as an access point has bandwidth constraints, and choppy downloads are not a good reflection on the speed of our application.

Unfortunately, then, every session begins with an apology for how slowly the application is performing due to the shared conference wi-fi. A high-speed, private access point or a hardline into your booth cures all of these issues and would be worth the temporary investment for sales demonstrations and usability sessions alike.

Summary

There are a few adaptations we, as usability professionals, have to make from a traditional sit-down, two-way-mirror setting. Conference booth testing is a much more informal process, with an emphasis on improvisation and repetition. Some of the tools and methods used in guerrilla testing are certainly not as proven or stable, but the potential recruitment numbers outweigh the inconveniences of a non-controlled setting.

From an educational standpoint, being inside the booth for days at a time will raise your knowledge-level considerably. You’ll hear again and again the type of questions and responsive dialog that prospective customers have around the product, and you’ll start to recognize the pain points coming from the industry.

After a half-dozen conferences, you’ll start to understand the differences in the average participant. Among technology-centric attendees, some conferences provide a recruitment base of high-level generalists, while others draw participants who are more detail-oriented and closer to the ground in execution. I tend to tailor my scripts accordingly, focusing on principles and concepts with the generalists, and on the accomplishment of specific tasks with the more programmatic participants.

One good thing about working for Loggly, over here in the startup world, is the ability to create paths and practices where there were none before. Pairing with the marketing team, using a portion of the presentation table to recruit participants off the expo hall floor, and sitting them down for a quick walkthrough of the product is a great way to become inspired about what you do and who you’re working for. As someone who still gets excited to travel, meet new people, and play off crowds, conducting guerrilla usability sessions in front of my customers, peers, and co-workers is always a highlight for me.

Context matters

Written by: Maciej Płonka

What makes a marketing e-mail or newsletter effective? One can judge, for instance, by the number of users who opened the message or clicked on an element representing the primary action, such as a product link or button.

Those indicators measure user engagement precisely; however, they are limited to the last phase of interaction with the e-mail or newsletter. The act of clicking a certain element in a marketing e-mail is the result of a longer process of identifying, assimilating, and analyzing its content. It is in those three steps that the decision is made to take action or not, and it is those three steps that are not analyzed or included in standard efficiency measurements such as CTR or open rate.

Therefore, click-through rate and open rate measure only completed processes, taking no account of interrupted ones. Moreover, those parameters do not tell us why a certain user decided to click or abandon the message.

Methodology

One way to understand what is happening in users’ minds is to observe what they really see, which cannot be done using the traditional methods of e-mail research. Instead, we used eye tracking on a desktop computer to record each person’s gaze while looking at the e-mail message, checking which objects they looked at, for how long, and which elements, within the whole field of vision, attracted their attention the most.

To check what kind of impact certain characteristics of e-mails have on users, our team transformed some of the stimuli. For instance, we modified the location of the logo and the calls-to-action, changed the size of prices, or flipped photos to change the direction the person in the photo is facing.

Each of the stimuli used in the study had two versions–an original and a modified one. Each version was seen by 27 participants. All of the heat maps in the report are derived from averaging the 10-second scan paths of the 27 subjects.
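
For readers wondering how such heat maps come about, the general recipe is to accumulate each participant’s fixations, weighted by duration, into a grid, average across participants, and smooth the result. This is a simplified sketch in Python, not the authors’ actual pipeline, and the screen size and smoothing value are illustrative assumptions:

```python
# Simplified sketch of building a gaze heat map from fixation data.
# Fixations are assumed to arrive as (x, y, duration_ms) tuples in
# screen-pixel coordinates, one list per subject.
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_map(fixations_per_subject, width=1280, height=800, sigma=40):
    """Average duration-weighted fixation maps across subjects, then smooth."""
    grid = np.zeros((height, width))
    for fixations in fixations_per_subject:
        for x, y, duration_ms in fixations:
            if 0 <= int(x) < width and 0 <= int(y) < height:
                grid[int(y), int(x)] += duration_ms  # weight by fixation duration
    grid /= max(len(fixations_per_subject), 1)       # average across subjects
    return gaussian_filter(grid, sigma=sigma)        # blur into a continuous map
```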

Observations: Testing known principles and their variations

Several of our observations confirm generally known design principles, such as users’ deep-rooted dislike of homogenous blocks of text.

At the same time, some of our hypotheses were disproved. For instance, reducing the length of introductory text did not result in an increased number of users reading it. In fact, introductory text was read so rarely that a general recommendation from our research is to remove it altogether in favor of items that really matter.

Text and reading

Learning how to read and gaining experience in this activity shapes our perception since early childhood. In our (Western) culture, we read from left to right and from top to bottom. This becomes a strong habit and this strategy of scanning a visual stimulus is executed automatically, even if the viewed stimulus does not contain text.1

What is more, readers on the web are very selective.2 They constantly search for valuable content, but when the required amount of effort increases, their motivation plummets. Below, we describe those phenomena further and illustrate them with examples from our study.

Blocks of text

It may sound like a truism, but it is always good to keep in mind that a homogenous block of text is not a good way to communicate with Internet users.2 Eyetracking studies often show that users skip this kind of content without making even the slightest attempt to read it.

Fortunately, there are some tips and tricks that can make the text more attractive to the user’s eye. First, formatting that includes clearly distinguishable headlines and leads often results in a phenomenon called the F-pattern.

Fig. 1: A heat map showing an F-pattern

Readers have a strong tendency to scan headlines briefly, and they usually start reading from the top of the page. Their motivation to focus on written content decreases gradually, so you may expect that the first few headlines (counting from the top) will be read, and that the lower a headline is located, the less attention it will get.

Introduction text in an e-mail message

Reading requires time and effort, and the recipients of a newsletter want to quickly get exactly the information they are interested in (which usually means the special offers). It did not surprise us that the introductory text in a newsletter was ignored most of the time.3

But what to include in the marketing message instead of introductory blah-blah text? The answer seems obvious–more valuable content, such as the products we want to present.

Our study confirmed that hypothesis: After we cut most of the introductory text, the amount of attention focused on what remained did not change much. On the other hand, the products presented in the message benefited greatly in terms of attracting users’ gaze.

Fig. 2: Scan paths. Left, without introductory text. Right, with introductory text.

Properties of numbers

The next thing we wanted to focus on was whether numbers catch the human eye. Nielsen4 suggested that numbers written as numerals are eye-catching, whereas numbers written out as words are not, because they are indistinguishable from an ordinary piece of text.

Fig. 3: Heat maps. Left, the original version with large numbers. Right, the modified version, with downsized prices.

We studied how long participants focused their gaze on numbers, depending on their size. The difference between small and large digits turned out to be statistically significant: The average difference in fixation time between the small and large versions was approximately 200 ms for one of the prices depicted in the stimulus and 400 ms for the other. From a psychophysiological perspective, this is a long time. The longer we fixate on an object, the deeper the processing and understanding of the visual information.5
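
As an illustration of what “statistically significant” means here, a comparison like this can be run as a simple between-groups test on per-participant fixation times. The authors do not say which test they used, and the numbers below are invented placeholders; this is only a sketch:

```python
# Sketch: compare total fixation time on the price region between the
# small-digit and large-digit versions. The values are invented placeholders.
from scipy import stats

small_digits_ms = [410, 520, 380, 450, 300, 490]   # hypothetical per-participant totals
large_digits_ms = [690, 720, 610, 800, 560, 750]

t_stat, p_value = stats.ttest_ind(small_digits_ms, large_digits_ms, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```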

Communication through images

Pictures: What’s worth it, and what’s not

One of the widely known phenomena that can be observed in eyetracking and usability studies is so-called banner blindness. In short, web users tend to act as if they were blind to advertisements or other types of redundant information, which can only distract them from completing the task. This adaptive mechanism applies as well to stock photos and to pictures that do not present real products or people. Pictures without informational value may even pull viewers’ attention away from the valuable content, because they are easily classified as advertisements, which are usually neither informative nor relevant.

Directing users’ attention by faces

Some types of pictorial stimuli are almost always classified as important. One of them is certainly the human face. We are social animals, so we are perfectly wired to automatically read subtle social cues, for example those connected with decoding where another person’s attention is directed at the moment.

Fig. 4: Scan path

An example of how this reflexive mechanism works can be seen in the picture above. The participant automatically followed the gaze of the model right after noticing her face.

In the original version of this newsletter, the model looked straight ahead. We created a modified version in which the model is looking at the logo. We tested both versions with our participants and then examined whether there was a significant difference in the amount of time participants fixated on the logo. In the modified version, the average time of focused gaze on the logo was significantly longer.

Fig. 5: Heat maps. Left, the original version. Right, the modified version, with gaze direction diverted

Conclusion

Our observations and recommendations are rooted in a number of studies focused on what recipients really see while looking at advertisements in e-mail campaigns. Some of the effects repeated across our 2011 and 2013 studies; some were also confirmed in studies on the perception of e-mails and newsletters carried out by other teams.

But we should not forget that these are general laws which may not hold for a particular creative due to various mitigating factors, such as the content of the e-mail, its size, and the level of audience engagement.

References

1 Liu, Z. (2005). Reading behavior in the digital environment: Changes in reading behavior over the past ten years. Journal of Documentation, 61(6), 700–712.

2 Nielsen, J. (1997). How Users Read on the Web. Retrieved 15 June 2013, from http://www.nngroup.com/articles/how-users-read-on-the-web/

3 Nielsen, J. (2007). Blah-Blah Text: Keep, Cut or Kill? Retrieved 15 June 2013, from http://www.nngroup.com/articles/blah-blah-text-keep-cut-or-kill/; Hodgekiss, R. (2011). Email usability: The science of keeping it short and sweet. Retrieved 15 June 2013, from http://www.campaignmonitor.com/blog/post/3383/email-usability-keeping-your-email-newsletters-short-and-sweet/

4 Nielsen, J. (2007). Show Numbers as Numerals When Writing for Online Readers. Retrieved 15 June 2013, from http://www.nngroup.com/articles/web-writing-show-numbers-as-numerals/

5 Poole, A., & Ball, L. J. (2005). Eye tracking in human-computer interaction and usability research. In Encyclopedia of Human Computer Interaction (pp. 211–219). Idea Group, Pennsylvania.

Clicking Fast and Slow

Written by: Paul Matthews

Through social psychology and cognitive science, we now know a great deal about our own frailties in the way we seek, use, and understand information and data. On the web, user interface design may work to either exacerbate or counteract these biases. This article gives a brief overview of the science, then looks at possible ways that design and implementation can be employed to support better judgements.

Fast and slow cognitive systems: How we think

If you are even remotely interested in psychology, you should read (if you haven’t already) Daniel Kahneman’s master work “Thinking Fast and Slow.”1 In it, he brings together a mass of findings from his own and others’ research into human psychology.

The central thesis is that there are two distinct cognitive systems: a fast, heuristic-based and parallel system, good at pattern recognition and “gut reaction” judgements, and a slower, serial, and deliberative system which engages more of the processing power of the brain.

We can sometimes be too reliant on the “fast” system, leading us to make errors in distinguishing signal from noise. We may incorrectly accept hypotheses on a topic, and we can be quite bad at judging probabilities. In some cases we overestimate the extent of our own ability to exert control over events.

The way of the web: What we’re confronted with

We are increasingly accustomed to using socially-oriented web applications, and many social features are high on the requirements lists of new web projects. Because of this, we need to be more aware of the way people use social interface cues and how or when these can support good decision-making. What we do know is that overreliance on some cues may lead to suboptimal outcomes.

Social and informational biases

Work with ecommerce ratings and reviews has noted the “bandwagon” effect, where any item with a large number of reviews tends to be preferred, often when there is little knowledge of where the positive reviews come from.2 A similar phenomenon is the “Matthew” effect (“whoever has, shall be given more”), where items or users with a large number of up-votes tend to attract more up-votes, regardless of the quality of the item itself.3

Coupled with this is an “authority” effect, where any apparent cue to authenticity or expertise on the part of the publisher is quickly accepted as a sign of credibility. But users may be poor at distinguishing genuine from phony authority cues, and both types may be overridden by the stronger bandwagon effect.

A further informational bias known as the “filter bubble” phenomenon has been much publicized and can be examined through user behavior or simple link patterns. Studies of linking between partisan political blogs, for instance, may show few links between the blogs of different political parties. The same patterns are true in a host of topic areas. Our very portals into information, such as the first page of a Google search, may only present the most prevalent media view on a topic and lack the balance of alternative but widely-held views.4

Extending credibility and capability through the UI (Correcting for “fast” cognitive bias)

Some interesting projects have started to look at interface “nudges” which may encourage good information practice on the part of the user. One example is the use of real-time usage data (“x other users have been viewing this for xx seconds”), which may–by harnessing social identity–extend the period for which users interact with an item of content, since there is clear evidence of others’ behavior.

Another finding from interface research is that the way a user’s progress is presented can influence their willingness to entertain different hypotheses or reject currently held ones.5

Screen grab from ConsiderIt showing empty arguments

The mechanism at work here may be similar to that found in a study of the deliberative online application ConsiderIt. Here, there was a suggestion that users will seek balance when their progress is clearly indicated to have neglected a particular side of a debate–human nature abhors an empty box!6

In online reviews, much work is going on to detect and remove spammers and gamers and to provide better quality heuristic cues. Amazon now flags verified purchases; any way that the qualification of a reviewer can be validated helps keep the raw review count from misleading.

Screen grab showing an Amazon review.

To improve quality in collaborative filtering systems, it is important to understand that early postings have a temporal advantage. Later postings may be more considered, better argued, and more evidence-based, but fail to make the big time because they never gain collective attention and the early upvotes.
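
One common corrective, offered here as my own example rather than something drawn from the cited studies, is to rank by a confidence-adjusted score instead of raw vote counts, so that a newer item with a handful of strong votes can compete with older items that simply had more exposure. A sketch using the lower bound of the Wilson score interval:

```python
# Sketch: rank items by the lower bound of the Wilson score interval, so a new
# item with a handful of up-votes is not buried by older items that merely had
# more time to accumulate votes.
import math

def wilson_lower_bound(upvotes: int, total_votes: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval for the up-vote proportion."""
    if total_votes == 0:
        return 0.0
    p = upvotes / total_votes
    denom = 1 + z * z / total_votes
    centre = p + z * z / (2 * total_votes)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * total_votes)) / total_votes)
    return (centre - spread) / denom

# A newer item with 8 of 9 up-votes outranks an older one with 60 of 100.
print(wilson_lower_bound(8, 9) > wilson_lower_bound(60, 100))  # True
```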

In any sort of collaborative resource, ways to highlight good quality new entries and rapid risers are important, whether this is done algorithmically or through interface cues. It may also be important to encourage users to contribute to seemingly “old” items, thereby keeping them fresh or taking account of new developments/alternatives. On Stack Overflow, for instance, badges exist to encourage users to contribute to old threads:

Screen grab from Stack Overflow showing a call to action.

Designing smarter rather than simpler

We know that well-presented content and organized design make information appear more credible. Unfortunately, this holds even when the content itself is of low quality.

Interaction time and engagement may actually increase when information is slightly harder to decipher or digest. This suggests that simplification of content is not always desirable if we are designing for understanding over and above mere speedy consumption.

Sometimes, perhaps out of fear of high bounce rates, we ignore the fact that we can afford to lose a percentage of users if those who stick around are motivated to really engage with our content. In that case, the level of detail needed to support this deeper interaction has to be there.

Familiarity breeds understanding

Transparency about the social and technical mechanics of an interface is very important. “Black boxing” user reputation or content scoring, for instance, makes it hard for us to judge how much weight it should carry in decision making. Hinting and help can be used to educate users about the mechanics behind the interface. In the Amazon example above, for instance, a verified purchase is defined separately but not linked to the label in the review itself.

Where a system is abused, users should be able to understand why and how it is happening and undo anything they may have inadvertently done to invite it. In the case of the “like farming” dark pattern on Facebook, it took a third party to explain how to undo rogue likes, information that should have been available to all users.

There is already evidence that expert users become more savvy in their judgement through experience. Studies of Twitter profiles have, for instance, noted a “Goldilocks” effect, where excessively high or low follower/following numbers are treated with suspicion, but numbers more in the middle are seen as more convincing.7 Users have come to associate such profiles with more meaningful and valued content.

In conclusion: Do make me think, sometimes

In dealing with information overload, we have evolved a set of useful social and algorithmic interface design patterns. We now need to understand how these can be tweaked or applied more selectively to improve both the quality of the user experience and the quality of the interaction outcomes themselves. Where possible, the power of heuristics may be harnessed to guide the user rapidly from A to B. But in some cases this is undesirable, and we should look instead at how to engage more of the greater deliberative power of the mind.

Do you have examples of interface innovations that are designed either to encourage “slow” engagement and deeper consideration of content, or to improve on the quality of any “fast” heuristic cues? Let me know through the comments.

References

1 Kahneman D. Thinking, fast and slow. 1st ed. New York: Farrar, Straus and Giroux; 2011.

2 Sundar SS, Xu Q, Oeldorf-Hirsch A. Authority vs. peer: how interface cues influence users. CHI New York, NY, USA: ACM; 2009.

3 Paul SA, Hong L, Chi EH. Who is Authoritative? Understanding Reputation Mechanisms in Quora. 2012 http://arxiv.org/abs/1204.3724.

4 Simpson TW. Evaluating Google as an Epistemic Tool. Metaphilosophy 2012;43(4):426-445.

5 Jianu R, Laidlaw D. An evaluation of how small user interface changes can improve scientists’ analytic strategies. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems New York, NY, USA: ACM; 2012.

6 Kriplean T, Morgan J, Freelon D, Borning A, Bennett L. Supporting Reflective Public Thought with ConsiderIt. CSCW 2012; 2012.

7 Westerman D, Spence PR, Van Der Heide B. A social network as information: The effect of system generated reports of connectedness on credibility on Twitter. Computers in Human Behavior 2012;28(1):199-206.