Remote Contextual Inquiry: A Technique to Improve Enterprise Software

Enterprise software usability is difficult to evaluate because the standard product shipped on a CD is almost always customized when it is implemented. How then can we learn about the design issues that actual users encounter with customized software?

At PeopleSoft, we have recently been working with a research technique that we call “Remote Contextual Inquiry” (RCI) to fill the research gap between data collected during remote usability testing (Gough & Phillips, 2003) and on-site contextual inquiries with end users (Holtzblatt & Beyer, 1993). This technique leverages the basic methodologies of remote testing to learn about how our software is customized and used after it is deployed within a company. Remote Contextual Inquiry also enables researchers to focus on finding design opportunities by learning about the tasks that actual end users attempt to accomplish, and the product training that was provided to them.

This article describes unique attributes of enterprise software that make typical usability testing a challenge, our use of Remote Contextual Inquiry, and some considerations when using this methodology.

What is so different about enterprise software?

Generally, enterprise software can be characterized by large-scale deployments to large numbers of end users. This type of software can provide complex functionality that supports how a company does business. By its nature, enterprise software is sold to companies of differing sizes across a wide range of industries. As a result, the delivered feature set may not align perfectly with how the purchasing customer operates its business.

When this happens, consultants are retained to customize the software to fit the specific needs of the company. Evaluating the usability of enterprise software is challenging because the following variables alter the experience that end users have:

  • Software customizations
    It is common practice for enterprise software customers to: 1) change the text, layout, and behavior of delivered features; 2) remove features from the UI; and 3) build their own functionality.

  • System configurations
    Customers configure the software to perform in different ways, depending on the options that they select during setup.

  • User roles
    Individual users access a subset of the overall functionality based on their system-defined role.

  • User preferences
    The software has built-in features that allow end users to personalize many attributes of the interface.

  • Default data
    Forms can be pre-populated with data that is set up for end users based on their roles.

  • Domain knowledge
    Many enterprise software products are used by people with deep knowledge and experience in their field.

  • Training
    Users of enterprise software may receive some combination of in-person product training, web-based instruction, email instructions, FAQs, and cheat sheets.

Because each software implementation creates unique relationships between users and the software, the resulting real-life usability issues can go undetected in usability tests of the standard software. That’s where Remote Contextual Inquiry can be used to uncover usability issues by researching the relationships between end users and the implemented software that they use.

Who should be tested?

There are two key types of people who provide unique and complementary perspectives on software usage and customization within a company: the business analyst and the end user. It is helpful to conduct this technique with at least one representative of each type.

Business analyst

The business analyst may have coordinated the initial software installation or recent upgrade. This person has intimate knowledge of the product in its originally delivered configuration, has been involved with customization decisions, and is connected with training and support issues. You can ask a business analyst for the following:

  • Customizations that have been made and the reasons for making the changes
  • Known issues with using the software (possibly from internal Customer Support records)
  • Rough statistical information about the number of users and descriptions of user types
  • Domain expertise of users and typical duration of employment
  • The business process that the software supports
  • The tasks performed with the software, frequency of task performance, and variables, such as calendar-driven tasks (e.g., quarterly or annual tasks) and event-driven tasks (e.g., approvals or sales orders)
  • Corporate standards, such as monitor resolution settings and supported browsers
  • Information about training methods and materials provided to users
  • Plans for software upgrades
  • Access to end users for further research

Software end users

We can collect a wealth of contextual information from software end users, including:

  • Information about their goals and the tasks performed to accomplish those goals
  • Measurement of task performance data, including task completion time and the number of mouse clicks to perform a task
  • Feedback on layout, content, and behavior in the user interface
  • Personalization or individual-level customizations made to the software
  • Background on the type and effectiveness of the training they received

Remote Contextual Inquiry description and benefits

Remote Contextual Inquiry captures the computer screen of a person working with their version of the software on their own computer. To get started, the usability professional contacts the end user via telephone and web conferencing. The end user is granted presenter-level control of the web conference so that she can share her desktop with the observers. At this point, the usability professional asks the end user’s permission to record the session, then starts the recording software (e.g., Camtasia). Once the formalities are out of the way, the end user completes typical tasks and talks aloud as she does so. This allows the usability professional to: 1) observe and record the actual software being used; 2) gather information about real-life tasks relevant to that end user; and 3) probe and discuss the end user’s interaction with her system.

RCI is particularly useful for examining end user behavior and software customizations. This technique provides an opportunity to view exactly what our end users see – their version of the software, customizations, feature access, personalizations, and the actions they actually perform to accomplish their goals. In addition, interactions with supporting software can also be captured, including email clients, file folder systems, and data storage systems. This information can help usability professionals identify problem areas and features to consider, and inform usability and design solutions.

RCI is cost-effective while yielding rich contextual information about users’ behaviors within their desktop environment. Because end users of enterprise software are located worldwide, this approach allows one to tap a wider geographical range of participants. This is particularly effective when:

  • A project requires feedback from regional or international users.
  • Project cost is an issue. User experience professionals will not have to travel to where the users are located. Many times, contextual inquiry requires traveling to the end user’s location, which potentially involves airplane travel, hotel, and food expenses. These expenses are eliminated with this technique, which still yields richer contextual information than a traditional usability test.
  • Preparation time is limited. It takes less time to set up and conduct the Remote Contextual Inquiry than it does to conduct a standard contextual inquiry at an end user’s location. The set-up time is also shorter than for a traditional usability test because there is less front-end work, such as software setup or configuration.
  • Participants have limited time. A typical session can take place in as little as 30 minutes.

Conducting the Remote Contextual Inquiry

The session is a great opportunity to invite developers, functional analysts, and strategists to view end users interacting with their software. It opens a direct dialog between those who create the software and those who use it, allowing developers and interested parties to observe user behavior and to ask and answer questions without having to leave the office. In addition, it bridges communication gaps between developers and user experience professionals by allowing everyone to observe end users in a similar context.

The introductory correspondence with the business analyst is a good opportunity to set expectations about your goals for the research, the time involved, and any deliverables or milestones key to the success of the project. During your discussion about ideal participants, their key tasks, and common usability issues, we also recommend asking about the training provided to end users and the availability of training materials. In practice, we ask for a copy of end user training materials so that we can analyze them for design opportunities because they often describe mainstream tasks, define software terminology, and describe how to avoid errors.

It is good practice to have a list of questions drafted prior to the session, especially if there are a number of stakeholders in the room, so that you can ask directed questions throughout or at the end of the session. Make sure to identify the session moderator to all observers and to ask them to follow the moderator’s directions as to when and how to participate. Depending on the research focus, this can be a roundtable discussion or a more formal interview.

The type of interaction between moderator and end user can also be flexible. One approach is to ask the end user to talk aloud as she performs typical tasks through the system. In this circumstance, the moderator might have the end user comment on specific details, such as why she has hidden certain fields or customized new labels or fields. Another variation is to minimize task discussions to understand how long it takes a user to do a typical task (i.e., a task that the end user performs often). This can be of interest if you are revising a design and would like to have baseline performance data.

Lastly, as a general reminder when conducting any type of research, it is best to write up your notes immediately after the session or soon thereafter. It is handy to take screen shots from the captured web conference and then annotate those images with user comments and your notes. This working document can be the basis for identifying areas that would benefit from design improvements.

What new things does RCI capture?

The exciting aspect of this approach is that it allows us to observe the user’s desktop context. In addition, a user’s actual behaviors can be observed and recorded. It allows us to identify the end user’s task flow and the features and functions that may already be available to the user, but are not being used.

Our use of RCI has allowed us to:

  • Recognize fields that have been hidden or disabled.
    For example, if a field had been removed from a page, we can ask why this was the case and find out whether it had been moved elsewhere or was unnecessary for their business process. One participant said, “Oh we don’t use that, so we hide it from our users.” This is a data point to compare with other customers – perhaps this field should be removed entirely from the product. Sometimes responses were business-related (e.g., other software was used for that function) or the field was simply not part of the user’s process.
  • Identify particular words and labels that are ambiguous or misleading.
    An end user responded, “This word here (holds cursor over it), don’t know what it refers to.” When asked how they solved that problem, the end user replied, “In our training guide, we had to write out and correlate certain words used in your system with internally used words here.”
  • Identify customized functionality and features.
    End users may need printing capabilities, for example, and consultants may have custom-created the functionality to do so. In addition, new fields are sometimes added to store data that is customer-specific. An end user stated, “Oh, one thing that was missing that we had to add was business unit. We classify everything by that.” This technique allows us to uncover such customizations.
  • Evaluate possible layout changes.
    Usability professionals can learn about which tasks are primary, which are secondary, and the UI elements that get in the way. The content and amount of default data that appears in a form can be captured, as well as the personalizations that make the user more efficient. “We always need to see the employee’s salary rate next to service type. So we moved it to this section,” responded an end user.
  • Obtain real-life baseline metrics.
    General time-on-task measures can be recorded, and through post-processing of the recording, the number of mouse clicks required to perform tasks can be noted (a simple post-processing sketch follows this list).
  • Identify learnability issues.
    Participants can describe learnability issues and can comment on the level of training that they received.
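
To make the post-processing mentioned above concrete, here is a minimal sketch of how one might tally time-on-task and click count from events logged by hand while reviewing a session recording. It is written in Python purely as an illustration; the event labels, timestamps, and log format are our own assumptions, not part of the RCI technique or of any particular recording tool.

  # Hypothetical post-processing sketch: derive time-on-task and click count
  # from events noted by hand while reviewing the recorded session.
  def to_seconds(ts):
      """Convert an h:mm:ss recording timestamp to seconds."""
      h, m, s = (int(part) for part in ts.split(":"))
      return h * 3600 + m * 60 + s

  # Each entry: (timestamp within the recording, event label) -- illustrative data only
  events = [
      ("00:01:05", "task_start"),   # user begins the task
      ("00:01:12", "click"),
      ("00:01:31", "click"),
      ("00:02:47", "click"),
      ("00:03:02", "task_end"),     # user completes the task
  ]

  start = next(to_seconds(t) for t, e in events if e == "task_start")
  end = next(to_seconds(t) for t, e in events if e == "task_end")
  clicks = sum(1 for _, e in events if e == "click")

  print("Time on task: %d seconds, mouse clicks: %d" % (end - start, clicks))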

Some things to keep in mind

Based on our use of Remote Contextual Inquiry, here are some considerations for conducting this type of study:

  • As the first step in any UCD effort, you need to understand the target markets for your product so that you can obtain representative data from appropriate market segments.
  • This technique applies to software that is already deployed to a customer base. In enterprise software markets, that may mean collecting data on software that is one or more versions old. The pros and cons of this technique must be weighed against a more traditional usability method.
  • Some customers may deal with highly sensitive information and may not want you to make video recordings of their interactions. As is typical in our field, you must always ask and receive permission to record audio or video with your participants.
  • It must be clear to the customer where the recorded information will go, how you plan to use it, and any next steps.
  • It is helpful for researchers to understand the functionality of the product that they are viewing so that users do not have to explain the basics.
  • This technique is not a full contextual inquiry. With this method, the UE professional will not see the person’s desk area or other external influences. However, supplemental documentation could be gathered to further understand the environment. For example, a researcher could request that the user take a few pictures of her work area, desk, and surroundings.

Conclusions

Remote Contextual Inquiry gives us a cost- and time-efficient way to view our end users’ desktops and observe how they are using their current products. It is a marriage between the remote usability lab test and contextual inquiry, allowing us to transcend geographical boundaries without actually having to travel to distant locations. We gain contextual insights, such as personalized settings, hidden fields, and added functionality, that are typically not obtained during a usability test. It is truly a flexible method that provides a wealth of knowledge about the use of customized enterprise software.

For more info:

Camtasia software
http://www.camtasia.com

Gough, D. & Phillips, H. (2003). “Remote Online Usability Testing: Why, How, and When to Use It.” Boxes and Arrows.
http://www.boxesandarrows.com/archives/remote_online_usability_testing_why_how_and_when_to_use_it.php

Holtzblatt, K. & Beyer, H. (1993). “Making Customer-Centered Design Work for Teams.” Communications of the ACM.
http://www.incent.com/pubs/customer_des_teams.html



Jeff English manages the User Experience Team at PeopleSoft in the Supply Chain Management product line. Prior to PeopleSoft, Jeff led research projects for clients including McDonald’s, Medtronic and Evenflo while at IDEO Product Development in Palo Alto, California. Jeff holds a Masters degree in Human Factors and Ergonomics from San Jose State University.

Lynn Rampoldi-Hnilo is a Usability Engineer/Researcher at PeopleSoft. She initiates and leads the user experience research efforts for the Supply Chain Management product line. Prior to PeopleSoft, she worked as a market researcher at Cheskin on media (e.g., Internet effects and transmedia trends), product development of communication technologies, and usability and interface design studies. Lynn completed her Ph.D. at Michigan State University with an emphasis on technology and cognition. She has taught communication theories and social science methodologies at Stanford, Michigan State University, and St. Mary’s College.

John Stickley is an Information Designer with over 10 years’ experience translating complex systems, concepts, and experiences into accessible visual solutions. His site, Visual Vocabulary, provides a complete overview of his work.

4 comments

  1. Hi,

    Contextual Inquiry is a method developed by Karen Holtzblatt and Hugh Beyer. Unfortunately, I see no relationship to their great stuff; no evidence that the authors have looked there for guidance. I wonder just how Karen – because she teaches workshops on the method – feels about the idea of remote contextual inquiry. I always thought the point of contextual inquiry was the *context* part. How do you get that if you’re remote?

    What you’re describing here is remote usability testing, which a lot of people are doing, and that Nate Bolt and Tony Tulathimutte have written an excellent book on.

    Good luck,
    Dana
    dana@usabilityworks.net

  2. The user research method described in this article is remote usability testing, not contextual inquiry by any means.

  3. Thank you so much for your article!

    As you stated, remote contextual inquiry is a really valuable tool. With usability budgets getting cut and a not-so-great economy, everybody should really think about using remote testing or conferencing software as part of their arsenal.

    In my current and past positions designing enterprise business software, we used remote usability testing/CI to supplement our UI design and user research.
    The key word here is supplement: it should be used in conjunction with traditional CI (Contextual Inquiry) methods – meaning, actually visiting customers at their work. This is especially important for new designers or researchers with limited real-world experience.
    Also, in the enterprise software world, especially in supply chain/CRM/Portal/etc., there are multiple types of users with different levels of experience. As your article stated, it’s important to really identify who your target users will be…. Business Analysts can help you identify the users – but not always. I often find myself helping my customers really identify their workflow – both in government and private industries. In the past, through a series of interviews and site visits, my team started to see some workflow patterns that even business analysts (customer side) or my company’s product managers couldn’t really figure out. Identifying product usage through workflow can be pretty valuable in enterprise software. So do this first – well, at least some of it. After a while, you can see some base patterns that can be categorized by the customer’s vertical industry, size, culture, deployment, etc.

    Remote usability testing/CI again is a really useful tool. But in the end, just my personal opinion, it will only help you identify some GUI-level problems – e.g., labeling, basic interaction, etc. Changing one area of the UI can lead to problems for other users (for programmers, a similar analogy is that fixing one bug sometimes leads to additional bugs…). Unlike consumer software, it’s very hard to figure out the correct workflow and design an intuitive UI for each customer in enterprise software. Then again, these customers are paying a lot of money for your enterprise software. My recommendation to decision makers: hire some good UI people in your professional services team.

    Some simple recommendations for conducting remote CI:
    1. This is basic, but often left out. Print out time zones for different parts of the world, or have access to a time zone converter application.
    2. If you are using WebEx or similar software/services, try the following setup: one computer that you will use to connect with your user remotely, and a second computer also connected to the conferencing session. Have the second computer record the session. This is a more reliable method – recording screen sessions can be CPU intensive (and might crash your computer).
    3. From my personal experience, having a session over an hour can be tiring for the users. Unless your user has a webcam, it’s hard to tell if your user is getting tired… you tend to get fuzzy responses when your users get tired 🙂
    4. I don’t work for Camtasia, but recording the session with Camtasia-like software rather than the web conferencing software is recommended. I don’t recommend using NetMeeting for various reasons – I just found it easier to get people to use WebEx or PlaceWare.
    5. Email some pre-interview questions in advance. I also usually add a link in the email so users can test whether their computer will work with the remote conferencing software.
    6. Expect about a 10-15 minute delay when you start the remote CI session – often it will take users 10-15 minutes to get their computers configured to use the web conferencing service.

    Of course, get permission to record the session, etc.

    Ji

  4. Hi David
    We are not using this technique for evaluative purposes – it is very exploratory in nature. This technique helps us to understand the software customizations made to suit business needs and to find out how people accomplish tasks with it. We are not using this technique for design validation. The use of the word “usability” in the earlier post was a separate point to describe what we might expect to get out of relationships with customers that have deep customizations, without regard to the techniques available. Sorry for the confusion!
