By looking at the data on what users do on the site, however, you can enhance your effectiveness as a user specialist. You already have information and knowledge gained through observation and direct questioning of individual users. Now you can add to that the insights gleaned from the broad swath of data generated by users' actions on the site. These numbers represent the real-world behavior and interests of your users.
By looking at this information you get a fuller picture of user behavior, not in a lab, but in the true user environment. Examining this information in conjunction with user experience research can yield tangible benefits. It can transform your role as the "research guy/gal" at your organization into a strategic position. It can provide quantitative data that backs up your qualitative findings, since there are some people out there who still don't "believe in" qualitative research. Finally, web analytics can help you learn whether your recommendations really work, further cementing your value as a user experience expert.
"But where does the data come from?"
This real behavior data is generated by software programs that read and interpret the many browser "requests" that pour into the site's server every second of every day. Broadly categorized, Web Traffic Analysis (WTA) can be considered a form of data mining: the process of delving into large databases and looking for patterns and relationships in order to better understand customer behavior. Other types of data that can be processed to glean customer insights include loyalty program data (such as frequent flyer or club rewards), purchase data, customer service contacts, and response to promotional activity (i.e., web registration, email response, mail-in response).
Web traffic analytics refers specifically to looking at the data that comes from user activity on a given website. It may be used in combination with other data but, for the sake of simplicity, we're just going to look at user data on a website in order to focus on the most popular area of user experience research.
In most cases, web data comes from log files, which were originally used by webmasters to monitor the level of traffic on their servers. Today, log file software serves more of a business and marketing purpose: it catches all (or most of) the requests that users make to the website's servers and dumps those requests into a database. The software then translates the coded requests into tables, charts, and graphs that give a picture of what users are doing on the site. Other types of web measurement tools exist that embed a "tag" into the code of each page and track activity on each page through the server. This data is then pulled into a software program, or accessed through an Application Service Provider (also known as an ASP), that allows the analyst to interpret the data.
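To make this concrete, here is a minimal sketch of how log file software gets from raw requests to countable data, assuming Apache-style "combined" log lines (the filename and regex are illustrative; real formats vary by server configuration):

```python
import re
from collections import Counter

# A rough parser for Apache-style "combined" log lines. The regex and
# filename are illustrative; real log formats vary by server configuration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_line(line):
    """Return a dict of request fields, or None if the line does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

# Count successful requests per path -- the raw material for page-level metrics.
page_views = Counter()
with open("access.log") as log:
    for line in log:
        request = parse_line(line)
        if request and request["status"] == "200":
            page_views[request["path"]] += 1

for path, views in page_views.most_common(10):
    print(f"{views:8d}  {path}")
```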
Where to begin? At the beginning, with objectives
Don't worry, I won't end this article without providing some suggestions on what metrics user experience people should measure. However, it's always critical to keep in mind the main purpose of the website you're working on. Is it a marketing site, mainly existing to provide basic information to customers about the company? Is it a content site, deep with information and articles? Is it a commerce site? Are there any key calls to action that you want site visitors to answer? Business to business? Customer service? Websites can and usually do have multiple objectives. Your future analysis will be tied directly back to these objectives, so it is critical to gain agreement on the website's specific objectives prior to any measurement effort.
Write down the key objective(s) of your site. For user experience folks, that should be a breeze. Of course, sometimes website management gets so diffuse that it can be difficult to get everyone to agree on what the site needs to achieve. Setting up a measurement plan provides a good opportunity to revisit and confirm those objectives. Without objectives, you will be looking at piles and piles of reports, graphs, and numbers, and you will have a tough time culling the data down into something manageable.
Once you have the objectives in hand, it is much easier to identify what you need to measure. For example, if a “call-to-action” objective is getting visitors to sign up for email updates, then you want to be sure to measure all the site activity that leads up to and includes that sign-up.
A goal common to most web measurement programs is to keep track of the data on an ongoing basis. This is typically called "trending," and it is usually done month by month, though depending on the issues you are working with you may want to trend data daily or weekly. Trending lets you track baseline activity and note when there are ups and downs. These swings can often be linked to other online efforts (such as advertising or promotions) as well as to outside or "offline" marketing or business efforts that might affect website traffic.
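As a minimal sketch of trending, assuming the parsed log data has been saved as one row per page view with hypothetical "date" and "path" columns, a monthly trend for a key call-to-action page might be computed like this:

```python
import pandas as pd

# A minimal trending sketch. Assumes parsed log data saved as one row per
# page view, with hypothetical "date" and "path" columns.
views = pd.read_csv("page_views.csv", parse_dates=["date"])

# Monthly visit trend for an assumed email-signup page.
signups = views[views["path"] == "/email-signup"]
monthly = signups.groupby(signups["date"].dt.to_period("M")).size()
print(monthly)
```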
Some very useful metrics
There are some basic metrics that I always like to look at, no matter what site I'm analyzing. Typically, you will receive data in the form of monthly summary reports. This is standard practice, so unless you have some reason to look at a shorter or longer period of time, start out with monthly reports.
I’ve sorted these general metrics into two types:
Overall site metrics: This is data that aggregates all the site activity over the time period. It is looked at on an ongoing basis and is the core overview of data trending. It also helps in deriving metrics.
Page-level metrics: This is data that is examined for specific pages. Usually you would look at page-level metrics for all the important pages of the site, such as the homepage, key sub-navigation pages, and all the pages identified in your objectives as important to achieving the site’s goals.
The table below describes some of the metrics typically available for website analysis.
Derived metrics
Once I’ve looked at these general metrics, I start doing some very simple number crunching. I call this analysis derived metrics, since it involves using the data that already exists and doing some calculations and comparisons.
Deriving metrics involves a bit of spreadsheet finesse. At this point, I copy the page-level and site-level metrics from the web-based report and paste them into a spreadsheet. This provides all the raw data about the number of site visits, page visits, page views, and so on. I can then calculate percentages in order to get an idea of the relative importance of each page. This weighting helps because the raw numbers are often hard to interpret, while percentages are easy to compare to each other and to rank.
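The same calculation works outside a spreadsheet. Here is a minimal sketch; the page names and counts echo the fictional June example discussed below, and are illustrative only:

```python
import pandas as pd

# A minimal derived-metrics sketch: turn raw page-level visit counts into
# percentages of total site visits so pages can be weighted and ranked.
# The page names and numbers echo the fictional June example below.
pages = pd.DataFrame({
    "page": ["Giveaway Promo", "Product Page", "Featured Product", "Registration Form"],
    "visits": [5000, 2200, 600, 500],
})

total_site_visits = 10000  # all visits to the site for the month
pages["pct_of_visits"] = (pages["visits"] / total_site_visits * 100).round(1)

print(pages.sort_values("pct_of_visits", ascending=False))
```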
Derived metrics can provide some amazing insights into what’s really going on with your site. Typically, this is the methodology that is most proprietary and protected in the world of web traffic analytics. But, as a way to illustrate this technique, I have created data for a fictional site that shows a point-in-time analysis, as well as how data could be trended over time.
In June, however, the Giveaway Promo Page received 5,000 visits, which means that half of all visits to the site went to this page. This is good news, as it was the key promotional event. But if we go down the page, we see that the percentage of visits to the Registration Form went from only 1 percent in May to 5 percent in June. We would of course want to look at the actual number of registrations, but even without access to that data, we could reasonably infer that the promotion was encouraging visitors to register.
There also seems to be a lift from the promotion to the product areas of the site: in May the product page received only 16 percent of visits, and the two featured products each drew only 2 percent of visits. But in June, the product page reached 22 percent of visits, and the featured product reached 6 percent. This shows that the promotion not only brought in more registrations, but also generated higher interest in the products featured on the site.
Graph 1 makes the point visually. This type of trend graph is worth maintaining as part of an ongoing effort. In fact, for any registration page, it is wise to keep track of the visits and page views for each time period, and to know what might contribute to their rise and fall. Graph 2 shows an example of this for the first half of 2003.
OK, so now that you've put all this data together, it's time to do the analysis. The first time you look at website data, it will be hard to know exactly what to look at. I usually look for patterns, and then look for breaks or exceptions in those patterns. I focus on data from the pages that were established in the objective-setting stage as critical to the website's success.
I also like to understand whether there is a big jump or drop-off at the entry or exit page. This helps me understand how the site retains visitors, and where visitors leave, whether they've met their objective or lost interest. For example, if out of 1,000 people visiting your site, you find only 20 people (2 percent) get beyond the homepage, this suggests that the interior content of the site is not drawing people in.
I look to see if the percentage of people leaving the site from a particular page is higher than the average across all pages. So, if about 30 percent of visitors typically leave the site from any given page, and 60 percent leave from a particular page, I want to know more about what's happening on that page. I may then do some clickstream analysis to see how people arrived at that page. It may be that people completed a task there and then left the site. Or it may be that the page was too complex, and users were frustrated.
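A minimal sketch of that comparison, with invented per-page numbers and assumed "visits" and "exits" column names:

```python
import pandas as pd

# A minimal exit-rate comparison with invented per-page numbers and
# assumed "visits" and "exits" columns.
pages = pd.DataFrame({
    "page": ["/home", "/products", "/promo", "/register", "/thanks"],
    "visits": [8000, 2200, 5000, 500, 350],
    "exits": [2400, 700, 1500, 150, 330],
})

pages["exit_rate"] = pages["exits"] / pages["visits"]
site_average = pages["exit_rate"].mean()

# Flag pages whose exit rate is well above the site average; these are the
# pages worth a closer look (clickstream analysis, content review). Note
# that a high exit rate is not always bad: a "thank you" page is exactly
# where a visitor should leave after completing a task.
suspects = pages[pages["exit_rate"] > 1.5 * site_average]
print(f"Site average exit rate: {site_average:.0%}")
print(suspects[["page", "exit_rate"]])
```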
Even though this is working with numbers, there are no hard and fast answers. I do a lot of comparisons. I sort and resort. I reexamine the site content and strategy. I check the site objectives. And usually some interesting things emerge.
A good example of putting web logs to use prior to a redesign was a project I worked on for a nonprofit organization. The organization's mission is to help parents with early childhood development; it provides online content, and produces and sells books and videos directly to parents. The site's objectives could be articulated as: 1) to provide compelling online content for the target audience; and 2) to sell the organization's books and videos. The organization already knew they weren't selling a lot of books and wanted to improve in that area.
The website itself had a high average visit time: just over nine minutes per visit. In looking at the data, we saw that people spent a lot of time on three key pages: one on parenting information for dads, one on advice about getting children to sleep, and one containing a parenting quiz. We also found that a large percentage of people were arriving and exiting through these pages.
This data suggested at least two critical insights. First, the site had compelling content about child-rearing that people enjoyed interacting with, so we could almost check off objective one. Second, the data showed that people would read a single article and then leave the site. This pattern indicated the site was not doing the best job it could of either suggesting additional interesting content or connecting the content to the products the organization offered.
There was a strong opportunity to feature more links and navigational elements that would draw the site visitor to similar content, and point them to related books or videos on the topic. Without the web traffic analysis, we might have addressed some of these issues anyway, but the WTA gave us specific direction as well as backup for our design recommendations. The results were impressive: an increase in traffic to three times the level prior to the redesign, and a 300 percent increase in sales.
When is WTA most useful?
I’ve found that WTA helps the most in the following situations:
- Site redesigns
- Zeroing in on challenging issues
- Confirming the value of user experience
Data-driven redesign
Redesign is probably the bread and butter of the IA community. We are constantly challenged with how to make existing sites better serve current or new customers. Before undertaking traditional user experience activities for redesign (e.g., heuristic analysis, user testing, wireframe development), it's worth checking existing log files to see what the user activity has been. The WTA can help establish specific objectives for the redesign, and determine what the site is already doing well and where it is weak. WTA often provides good evidence for which areas of a site need to be fixed; while the conclusions of a heuristic analysis can be seen as subjective, web traffic logs are more objective.
Zeroing in
WTA also helps when you are struggling to understand a particular problem or issue on your site, or when there may be disagreement about how to address it.
I faced this problem some time ago when I worked on a measurement project for a car manufacturer. The page in question featured two large and awkwardly designed buttons that linked to a brochure download and a test drive sign-up, respectively. We felt strongly that the goal of the site should be to drive visits to the dealers, and that the large brochure link would siphon users away from the more critical link to sign up for a test drive. This thought did occur to us before looking at the data, and in fact was discussed while the page was in development, but a tight deadline kept the team from pushing the client on the issue. Once the page launched, however, everyone wanted to make sure it was as successful as possible, and we came back to the initial question.
The WTA confirmed our suspicions. It showed that only a handful of users were signing up for the test drive, but the brochure download was getting a lot of activity. We also learned that after the brochure was downloaded, users typically left the site, rather than coming back to sign up for a test drive.
Our recommendation was to make the test drive button much more prominent, and to feature the brochure download deeper within the site. We were convinced that the number of click-throughs to the test drive page would increase.
The nice thing about WTA when combined with user experience smarts? We could measure it! And indeed, after the redesign, we did see an increase in the percentage of users who registered. Which brings us to the last area where WTA comes in handy…
Showing that a focus on user experience works
Nobody said that UX and IA folks are perfect. Fortunately, we have the ability to check whether our recommendations have been successful or not. The critical factor in knowing whether what you’ve done has worked is to design a simple pre-design and post-design measurement.
Be sure to hold onto those log files that you looked at prior to your redesign. Look at the recommendations you made and what aspect of the design was changed. Then, after the site is redone, wait a month or so and check the log files again. Did you solve that problem where users dropped off before registering for the site? The data should tell you.
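A minimal before-and-after sketch of that check (the numbers are invented for illustration):

```python
# A minimal before-and-after sketch; the numbers are invented.
pre = {"visits": 9500, "registrations": 95}     # month before the redesign
post = {"visits": 10200, "registrations": 306}  # month after the redesign

pre_rate = pre["registrations"] / pre["visits"]
post_rate = post["registrations"] / post["visits"]

print(f"Pre-redesign conversion:  {pre_rate:.1%}")
print(f"Post-redesign conversion: {post_rate:.1%}")
print(f"Relative change: {(post_rate - pre_rate) / pre_rate:+.0%}")
```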
This can be useful when either internal team members or clients start to question the value of user experience. If you can pull up a little bit of information that shows how registration levels improved following user testing and redesign, then you have a powerful argument that is hard to ignore.
You can also maintain a small web traffic database that notes how site activity changes when designs change. You may want to check the website traffic each time the home page is tweaked or the navigation is adjusted. After any design decision is made, it is useful to keep some data at hand to note corresponding changes in user activity. With this data, you can deepen your understanding of how design affects user behavior and response.
Make it so
Web Traffic Analytics not only keeps us honest, it can, in the best possible sense, help justify our existence and our value. When clients can see that user experience recommendations lead to specific, positive changes in the behavior on their site, that’s powerful stuff. And the user experience expert becomes someone to consult not just when testing is needed, but as an integral part of the design and development process.
Comments

Good article! There are definitely benefits to doing web analytics. We recently did a study of Wikipedia users using a freely available database dump. Data logs gave us the ability to look at the behavior of thousands of users, something not easily done with (time-consuming) user testing. I think that for sites with an existing user pool, web analytics is a very useful method. Google also uses data logs to evaluate changes to its site and finds the method quite effective (Google UX video: http://video.google.com/videoplay?docid=-6459171443654125383)
Very good article. I've successfully used WTA in conjunction with multivariate platform testing to demonstrate how planned user experience changes affected site conversion (two different page designs, current vs. new, delivered randomly). The "why" came from a priori hypotheses about design and its effects on user behavior. WTA and multivariate testing were used to prove the hypotheses correct. The only time I didn't know "why" was when the hypotheses didn't pan out in testing. WTA is not going to provide that for you. That's what you bring to the table.
WTA can tell us WHAT, but not WHY. WTA is important, and your article nicely describes its importance as well as some practical approaches. Yet these statistics don't tell us why users did something, only that they did it. Observing real users in real settings helps us understand why.
I totally agree that WTA is about the what; I am a huge advocate for the WHY as well. When I had a job that was 90% "what" and 10% "why" in terms of user analysis, I was desperate for more "why."
Understanding the actual user behavior on websites provides a good balance to user testing — and can help frame what questions to ask, and understand what areas need work.
Fran
Thanks for this article. In fact both of this week’s features in B&A eLetter are exactly what I’ve been looking for just lately.
Of course, I'm always reminded of Mark Twain's famous quip that "there are lies, there are damned lies, and then there are statistics."
I would far rather ask users about their experience than try to figure them out at arm’s length by reading the entrails of a chicken 😉
Nonetheless, the ability to define a baseline and get a sense of movement from that point forward is obviously useful.
Still I wonder if slicing and dicing all this log data isn’t vastly overrated. It’s really looking in a rear-view mirror, when well-written user surveys and testing will get right to the heart of the matter.
With regards
Peter Fraterdeus
semiotx.com
Really interesting article. Thank you, Fran 🙂
To Peter's comment, I think slicing and dicing is one tool of many. It is vastly overrated if it is your only tool, but as a starting point it can be very cost effective, and it can yield surprising insights when carefully reviewed. I think the problem is that so few people actually analyze their data. They look at hits or sessions or whatever, and then it's "have a nice day."
It's a classic problem of intelligence (of the CIA variety): when you have an overabundance of data, it becomes too overwhelming to delve into it and actually learn anything. This is especially true for a non-technical person or a marketing department.
And I completely agree with Peter’s views on the subject of chicken entrails. I can think of far better uses for a chicken. Especially one breaded in Panko bread crumbs and oven-fried. Yum.
😉
Eric
I really enjoyed reading this article. Web-tracking can really be a valuable tool in the toolset of UX professionals.
Just one big caveat: If you're dealing with HTTP server logs, it is easy to overlook the influence of search engine spiders crawling the respective site.
You really need to keep your user agent exclusion list up to speed, otherwise you’ll inflate the overall quantity of accesses. What’s even more problematic: spidering patterns can be irregular, going only to the nth level of the site and excluding pages a spider cannot reach.
Bottom line: Before computing anything from log files, clean up the raw data.
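As a minimal sketch of that cleanup (the bot signatures below are a tiny illustrative sample; real exclusion lists are far longer and need regular updating):

```python
# A minimal sketch of scrubbing spider traffic from a log file. The
# signatures are a tiny illustrative sample, not a real exclusion list.
BOT_SIGNATURES = ("googlebot", "msnbot", "slurp", "crawler", "spider")

def is_bot(user_agent):
    """Crude user-agent check; misses bots that fake browser agents."""
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

with open("access.log") as raw, open("access_clean.log", "w") as clean:
    for line in raw:
        # Assumes the user agent is the final quoted field of the line,
        # as in the Apache "combined" format.
        parts = line.rsplit('"', 2)
        user_agent = parts[-2] if len(parts) == 3 else ""
        if not is_bot(user_agent):
            clean.write(line)
```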
This article was great because it drives home the point that analysis goes hand in hand with usability/IA.
In my opinion, there is no point in putting any effort into making things more usable or better organized if you're not also going to put effort into tracking to make sure your changes are actually working. Too many times I see people making "enhancements" to their site without ever really finding out if the changes they made are making things better or worse for the user and/or business.
I think that people assume right away that if they change something it’s always for the better. Tracking is really the only way to know for sure.
Nice article. One point I would make is that time spent on the site or on a page is not a useful metric and should be put in the discard pile along with hits.
There is nothing useful you can tell from this metric, as there are too many unknowns surrounding it (e.g., are your pages interesting to read, or is the right content just hard to find; did the visitor go off and make a cup of coffee while on your site, etc.).
Furthermore, to put another nail in the coffin of ‘stickiness’, I would suggest that it is most likely actually a better thing if visitors spend *less* time on your site, indicating that they have completed their task successfully and moved on to the next thing they want to do.
Gerry McGovern says some interesting things about this point; you could say that quality of user experience may be inversely proportional to time spent on site.
Sven makes an excellent point about approaching WTA using "clean" logs. It's somewhat beyond the purview of an introductory article, but there are all sorts of things that should be done to scrub server logs. Some are automated by the web log programs; others have to be done "manually," so to speak.
Christian, I don't agree that time spent should be completely thrown out as a metric. It does have limitations, which you accurately point out, but even given those, I've seen pretty wide variations in average time spent. That leads me to believe that there are differences in the way people interact with a site that are reflected in that metric.
Time spent can be a valuable data point when trending data, but it is worthless when examined in a vacuum. I think it's less valuable as an overall site metric than when it gets down to specific pages or site sections. If average time spent jumps in a particular week, I might want to investigate why. Maybe it's some indeterminate problem, or maybe it can be pinpointed to new content or some other change.
This metric, like all others, should be analyzed keeping the objective of the Web site in mind; I wholeheartedly agree that a high time spent metric could indicate a problem. In the example I used, the client wanted people to read through and digest the content, thus it was a good thing to see a higher average time spent on those pages.
Keep the comments coming — and thanks for reading!
-Fran
I find it very interesting to read about the problems of finding the *why* that lurks inside the *what* of web analytics. This was exactly my experience and was the driving force behind my decision to build an analytics tool that permitted easy exploration of the data, in an interactive and ‘right brained’ way.
Disclaimer : we’re a vendor of web analytics tools. http://www.clicktracks.com
Claim : our tool does a respectable job of helping people get to the why, within limits.
I really would like to hear from any IA folks about how close our tool comes to solving your problems. We try hard to make it do the right thing, and IAs have very demanding needs. Sometimes we need more feedback than we get.
Greetings,
One of my IA pals forwarded me this article because I have been providing expertise around website analytics for close to 10 years now.
The most important point, which most of you have picked up on, is that qualitative and quantitative analytics are both important pieces of a full picture of your web users.
I very much advocate that those focused on web customers and web analytics leverage both market research and statistical methods for understanding their end users.
Additionally, it is absolutely possible to gain a very good understanding of why people do what they do on your site. Although I understand the analogy to chicken entrails above, making sense of web logs (and a number of other data sources), even for the world's largest web servers, is possible and sometimes a lot less challenging than you might think.
That said, if you are interested in reading about some e-metrics to help focus your efforts on understanding your web users, have a look at this free white paper for ideas on how to apply them to your site.
http://www.spss.com/downloads/papers.cfm?ProductID=00067&Name=NetGenesis&DLType=WP
If you have any problem getting it, just send me an e-mail and I’d be happy to send it over.
Cheers and happy designing.
Rich
Excellent article! Very helpful for this newbie in the WTA world. We're finding that our current log analysis doesn't give us good page-level metrics. I'll be talking with our developers about how to put code on a page that will capture that information for us. We use ColdFusion; any suggestions would be helpful.
Again – thanks for writing this informative article!
We just developed a Flash analytics and user tracking package that helps with Flash-based user tracking and analysis…
http://www.exemplum.com/content/statistician/statistician.aspx
Thought it was relevant to this conversation, but would like to add that it will always require human interpretation and action.