Uncovering Users In Your Own Organization

This article is the first of a two-part series that recommends internal resources that you can assess. In Part One, we examine customer databases. In Part Two, we will consider other internal resources, such as product managers, call support centers, field consultants, and corporate surveys.

Buying new clothes and looking at current fashions is usually much more interesting and exciting than digging through one’s closet or laundry hamper. However, there is a lot one can learn by stopping and taking a minute to examine one’s own clothes. Such a review can tell you history (past fashions), document what you’ve been doing (attending a ball game, as evidenced by that ketchup stain), and indicate what you need (oh, yeah, I need a shirt to go with those pants). As user experience professionals, it is crucial to your success that you learn who your users are, understand their environments and business needs, determine current user interface (UI) problems with the products, and review design solutions with users.

Like most of you, I can’t wait to get into the field to observe users working with my products. Yet, I’m going to pull on the reins, and do a 180-degree turn. Don’t worry – it’s still research, but from an internal perspective. There is a wealth of information at your fingertips in your own office, and surprisingly, some of it is usability-related. You can optimize your internal resources by understanding where and how you can find UI information about your users within your own company. To provide context and practical guidelines, this article presents examples of how to mine internal resources at a large enterprise software company.

Beginning with the end in mind
Successful research initiatives begin by first considering the research questions that need to be answered, then by selecting sources and methods to obtain those answers. Many user experience (UE) questions can be answered by examining internal resources. These questions may include:

  • Who are the product users? What types of industries do they represent?
    Although these might sound like marketing questions, they are important for UE professionals to know as well. Defining the user base is key for research purposes, as well as for understanding the usability patterns emerging in different markets. One would expect that a hospital, for example, might use purchasing software differently than a manufacturing company.

  • Are usability issues already being documented within your organization?
    Before running numerous usability studies and visiting customer sites, first find out what customers are already reporting. These reports can be a low-cost way to collect and summarize feedback and can direct your future research initiatives.

  • Where is the UI becoming an issue in your organization’s business? What are the usability hotspots?
    You can ask questions to determine whether the UI is affecting:

    • Revenue-generating sources: new sales, upgrades, and competitive pressures
    • Customer relations: implementation, training, documentation, and help desk calls

    Assessing internal documentation can help the UE team react to some of these problems and make the appropriate changes before problems escalate.

Internal resources
Most companies, whether small, medium, or large, maintain their product, sales, and customer knowledge within some sort of documentation system. This knowledge organization can range from Excel spreadsheets to multi-database knowledge management solutions such as Lotus Notes (Zorn & Taylor, 2004). Every company categorizes and prioritizes information differently. Likewise, not all companies have the same resources described in this article, and many have additional sources not addressed here. At a minimum, this diversity provides directions for exploration.

PeopleSoft’s UE team has found it valuable to look at the following sources:

  • Customer Database
    An application that supports all customer-facing operations. This application feeds into a database that contains all customer information, including customer contacts, business details, market details, products implemented, and licenses sold.

  • Support Call Center
    Most companies with a customer call center have a tracking system to log the issues reported by customers. Many log issues into categories, such as “usability.”

  • Consultancy and Implementation Services
    Consultants have day-to-day experience with product implementation and customization. They document customizations made to the UI based on customer needs, such as accommodating changes to infrastructure or business processes.

  • Product Managers
    These individuals have ongoing dialogues with customers that influence the future direction of the product. Product Managers can alert UE teams to design issues, such as a major redesign needed due to newly added functionality.

  • Company Survey Data
    Most mid- to large-scale companies evaluate their customers’ satisfaction with their products. Usability may be one of the dimensions surveyed.

Examining and synthesizing information from your own corporate resources will allow you to have a deeper understanding of customers and users prior to conducting field and usability research, incorporate usability findings into the product release cycle phases, and leverage internal relationships for future research and design reviews.

When should you assess your internal resources?
Assessing internal resources is a continuous task. Information is constantly being updated, in many cases daily. Reevaluation of data may occur quarterly, yearly, and/or at salient times during the product release schedule, such as the product planning phase.

Another key time to investigate internal resources is when you start a new job. This is a great time to understand the knowledge management tools and information collected within your organization. Identifying your key contacts and information sources as early as possible helps you be more productive, efficient, and better at decision-making. Questions you may ask yourself include: who does what, when, and how? Who has liaisons with the customers? Which interest groups have an impact on the product designs? How is customer feedback typically handled?

Findings first!

What types of usability results can you expect to find by doing internal research? Table 1 below describes each resource, notes its data format, and identifies key usability issues that can be uncovered.

  • Customer Database
    Data format: Database with customer contact information, company statistics, product and license details, etc.

  • Call Support Center
    Data format: Customers call product support staff to report specific application-related problems.
    Types of UI issues: Labeling issues; incorrect or ambiguous error messages; need to change displayed defaults; interaction difficulties; inability to complete a task due to process.

  • Product Managers
    Data format: Interpersonal communication. Customer-facing liaisons who provide guidance, feedback, and strategic direction into the product.
    Types of UI issues: Customer-specific feedback; customer hotspots; feature- and functionality-specific issues.

  • Field Consultants
    Data format: Interpersonal communication. Consultants determine business requirements, then set up and implement products at the customer location; customizations are made on the basis of needs.
    Types of UI issues: Inconsistency within products and across modules; mismatch of terminology with real-world terms; documentation that is not enough and needs more clarification; technical response times too slow or batch processes run at inappropriate times; interaction design not portrayed as the customer expects or needs.

  • Corporate Surveys
    Data format: Customers are asked to fill out surveys regarding products, satisfaction, and company loyalty.
    Types of UI issues: Overall sense of usability satisfaction with products; generally high-level information.

Table 1: Usability issues uncovered by internal resources

Mining customer data
The role of market research in the user-centered design (UCD) process (Norman & Draper, 1986; Norman, 1988) is not explicit. Usability professionals need to take responsibility for the data they collect and need to understand how their customer base is distributed, to ensure that key industry customer types will be properly represented in future research. Analyzing your customer databases, therefore, should be the first step, before running usability studies, before field research, and before design, if you are to truly understand who your current customers are. Do not rely solely on others for this information; most departments have their own competing priorities. Verify the information for your team.

“Once you understand the customer database, you can identify which markets to recruit users for research.”

Getting started
The process of accessing customer databases may take more time than you initially expect. The raw data might be classified as company-sensitive information and, therefore, may have usage restrictions. Usability teams are not always given access to customer databases, and you may have to work with the marketing or sales departments to gain data access. Furthermore, you may have to take classes to learn how to use the data application, how to run queries (commands for pulling the data you want), and how data is entered into the system (how columns and rows are defined).
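A query is simply a structured request against such a database. As a minimal sketch, here is the idea using Python’s built-in sqlite3 module against an in-memory stand-in; the table, columns, and all company names are invented for illustration:

```python
import sqlite3

# In-memory stand-in for a customer database; the schema and values are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (company TEXT, product TEXT, status TEXT)")
con.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("Acme Health", "inventory", "licensed"),
     ("Beta Mfg", "inventory", "prospect"),
     ("Gamma Univ", "purchasing", "licensed")],
)

# A query is a command for pulling out exactly the slice of data you want.
rows = con.execute(
    "SELECT company FROM customers WHERE product = ? AND status = ?",
    ("inventory", "licensed"),
).fetchall()
print(rows)  # → [('Acme Health',)]
```

Your real system will differ, but the pattern of filtering on well-defined columns is the same, which is why understanding how rows and columns are defined matters so much.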

As you gain access to the system, you will want to cultivate relationships with key people who work with the datasets. We recommend identifying:

  • The initial gatekeeper who can help you gain entry to the system. In our case, we needed buy-in from an expert user of the system to request that we be given entry-level access.
  • A database expert who understands the source and classification of the data. We found that differing database queries led to result sets that had to be reconciled. (Either the queries were not matching up or the polled datasets were different.) Experts can answer questions about the data structure, queries, and problems you may run into while using the system.
  • Key product experts who can verify the synthesized data that you produce. Once you identify the percentages of key industries per product, you need product managers to verify that the numbers actually reflect their products.

In the software domain, customer data can be classified by a number of dimensions. If our research is to inform our UCD initiatives, it is important to consider customers who currently use our software and to identify categories from which we can easily recruit.

To begin, create from your larger database a sample of customers about which you are interested in gathering more detailed facts. For example, our UE team initially ran a query that selected customers based on: 1) product type (e.g., inventory customers); 2) customer status (e.g., actual license holder vs. prospective customer); and 3) status of product usage (must have implemented the product and be a current user). How your data is classified will influence how you can filter it.
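Those three filters can be sketched against a small, invented customer extract; the column names, statuses, and companies below are assumptions, since the real ones depend on your own schema. Using pandas:

```python
import pandas as pd

# Hypothetical extract; real column names depend on your customer database schema.
customers = pd.DataFrame([
    {"company": "Acme Health", "product": "inventory",  "status": "licensed", "implemented": True},
    {"company": "Beta Mfg",    "product": "inventory",  "status": "prospect", "implemented": False},
    {"company": "Gamma Univ",  "product": "purchasing", "status": "licensed", "implemented": True},
    {"company": "Delta Power", "product": "inventory",  "status": "licensed", "implemented": True},
])

# The three filters from the example: product type, license status, live usage.
sample = customers[
    (customers["product"] == "inventory")
    & (customers["status"] == "licensed")
    & customers["implemented"]
]
print(sample["company"].tolist())  # → ['Acme Health', 'Delta Power']
```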

Caption: Key industries for Product XYZ.

Once the sample is defined, you can run crosstabs by key variables you would like to learn more about. In our UE group, we really wanted to know which industries were purchasing certain products. We created pie charts and lists of key customers to reflect the core industries for each product type (e.g., purchasing software, inventory software). This segmentation was important for three key reasons. First, we wanted to be able to identify various user roles within an industry. To do this, we needed to know the core industries for any given product. In our case, the core industries for Product XYZ are: healthcare (25 percent), education (15 percent), utilities (20 percent), and financial services (40 percent). Second, the industries can define the type of user for whom we are developing our software. For example, is the user base primarily expert or more self-service?
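The crosstab step is mechanical once the sample is in a data frame. This sketch uses invented rows; only the shape of the output matters, since the resulting percentages are exactly the numbers a pie chart would display:

```python
import pandas as pd

# Invented sample of licensed, implemented customers for one product.
sample = pd.DataFrame({
    "product":  ["XYZ"] * 5,
    "industry": ["financial services", "healthcare", "financial services",
                 "utilities", "education"],
})

# Industry share per product, as percentages: the numbers behind a pie chart.
shares = pd.crosstab(sample["product"], sample["industry"], normalize="index") * 100
print(shares.round(1))
```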

Having a picture of those for whom we are designing at an industry level helps us then select users who are more representative of our user group for additional research. This leads us to the third reason: we can use those top industries as the first places we recruit users to create user profiles and then abstracted personas. Our long-term UCD goals were to create personas for our design and development teams. Understanding the industries where our users work was the first step of this research agenda.

One side note: future sales projections or targeted markets should also be considered when interviewing individuals for user profiles. Projected information cannot be collected from the customer database but needs to be gathered from talking with product managers, sales, and marketing. This information will indicate new user groups to consider and possible changes to the distribution of potential and current users.

Advantages, disadvantages, and considerations of the data
Now what? This customer demographic and sales information alone will not increase the usability of your product. It is, however, a step toward understanding your users. This deeper understanding will inform both the recruiting for usability testing and the process of designing the product itself.

There are two challenges that you should be aware of during these database analyses:

  • Customer versus User dilemma
    From a usability standpoint, we design for the user (the person using the application on a regular basis), but the data collected is based on the customer (the management or business analysts who make application selections and purchases). Does this fact then make the information less useful? No, it is still useful to know the industries that we are selling to. However, we recommend ensuring that subsequent research focuses on the user and that you understand their use of and perspective on the software.
  • Data validity
    Salespeople often enter customer information. As with any human-managed process, there is bound to be some error involved, whether through mis-keyed data, key questions or fields left uncollected, or mis-categorization or misunderstanding of what the customer was saying. Beyond how data is entered, there is also the concern of how current it is. Realize that some percentage of the information in these databases is outdated. Always validate data with the product manager and the person who entered it.
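One cheap way to triage the currency concern is a staleness check that flags records overdue for validation. The field names and the one-year cutoff below are assumptions for illustration:

```python
from datetime import date

# Hypothetical records with a last-updated field; the one-year cutoff is an assumption.
records = [
    {"company": "Acme Health", "last_updated": date(2004, 1, 10)},
    {"company": "Beta Mfg",    "last_updated": date(2002, 6, 1)},
]

def is_stale(record, as_of, max_age_days=365):
    """Flag a record that has not been touched within max_age_days."""
    return (as_of - record["last_updated"]).days > max_age_days

needs_review = [r["company"] for r in records if is_stale(r, as_of=date(2004, 3, 1))]
print(needs_review)  # → ['Beta Mfg']
```

Flagged records are candidates for the validation step above: a quick confirmation with the product manager or the person who entered the data.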

After identifying the core industries your products serve, you will want to understand the types of user roles within each main industry, which most likely will require additional research, such as interviews with customers from those industries. From there, you’ll want to make sure that you recruit an appropriate sampling of users from the key industries and identify user roles to truly understand how each user group and/or industry is using your application.

The user-centered design process (Norman & Draper, 1986) is well accepted, endorsed, and integrated into many software corporations. Users provide context, feedback, and validation to designs through a spectrum of methodologies ranging from contextual inquiries (Holtzblatt & Beyer, 1993) to usability benchmark studies. This type of research and design can take considerable time, resources, and financial investment. One of the most overlooked sources of user interface insight is within our own companies. Corporate resources are seriously undervalued and underused, yet they are at our fingertips. As a first step, we should determine where usability issues are already being documented within our organizations and where they could inform our design decisions.

This article is the first of a two-part series that recommends five internal resources that should be assessed. In this article, I examined customer databases. In part 2 of this series, I will consider the following internal resources: product managers, call support centers, field consultants, and corporate surveys.

Mining customer databases is an essential first step to really identifying who user experience professionals are designing for. The information is at a high level, but extremely valuable when determining who to recruit and where to focus additional research efforts. Furthermore, this information is a level of data most other stakeholders in the company (such as marketing, sales, and strategy) can understand. It also helps in starting dialogues with these other working teams. UCD teams should take responsibility for performing, or at a minimum overseeing these analyses, to ensure accurate results. In sum, begin with the end in mind by optimizing your own internal resources.

For More Information

  • English, J., & Rampoldi-Hnilo, L. (2004). Remote contextual inquiry: A technique to improve enterprise software.
  • Holtzblatt, K., & Beyer, H. (1993). Making customer-centered design work for teams. Communications of the ACM, 36, 93–103. http://www.incent.com/pubs/customer_des_teams.html
  • Norman, D. A., & Draper, S. W. (Eds.). (1986). User centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Spradley, J. P. (1979). The ethnographic interview. New York, NY: Holt, Rinehart and Winston.
  • Wood, L. E. (1997). Semi-structured interviewing for user centered design. Interactions, 4(2), 48–61.
  • Zorn, T., & Taylor. (2004). Knowledge management and/as organizational communication. In D. Tourish & O. Hargie (Eds.), Key issues in organizational communication. London: Routledge.

Lynn Rampoldi-Hnilo is a Usability Engineer/Researcher at PeopleSoft. She initiates and leads the user experience research efforts for the Supply Chain Management product line. Prior to PeopleSoft, she worked as a market researcher at Cheskin on media (e.g. Internet effects and transmedia trends), product development of communication technologies, and usability and interface design studies. Lynn completed her Ph.D. at Michigan State University with an emphasis on technology and cognition. She has taught communication theories and social science methodologies at Stanford, Michigan State University, and St. Mary’s College.

John Stickley (Illustrator) is an Information Designer with over 10 years experience translating complex systems, concepts, and experiences into accessible visual solutions. His site Visual Vocabulary shows a complete overview of his work.

Remote Contextual Inquiry: A Technique to Improve Enterprise Software


Enterprise software usability is difficult to evaluate because the standard product shipped on a CD is almost always customized when it is implemented. How then can we learn about the design issues that actual users encounter with customized software?

At PeopleSoft, we have recently been working with a research technique that we call “Remote Contextual Inquiry” (RCI) to fill the research gap between data collected during remote usability testing (Gough & Phillips, 2003) and on-site contextual inquiries with end users (Holtzblatt & Beyer, 1993). This technique leverages the basic methodologies of remote testing to learn about how our software is customized and used after it is deployed within a company. Remote Contextual Inquiry also enables researchers to focus on finding design opportunities by learning about the tasks that actual end users attempt to accomplish, and the product training that was provided to them.

This article describes unique attributes of enterprise software that make typical usability testing a challenge, our use of Remote Contextual Inquiry, and some considerations when using this methodology.

What is so different about enterprise software?

Generally, enterprise software can be characterized by large-scale software deployments to a large number of end users. This type of software can provide complex functionality that supports how a company does business. By its nature, enterprise software is sold to companies of differing sizes representing a wide range of industries. As a result, the delivered feature set may not align perfectly with how the purchasing customer operates their business.

When this happens, consultants are retained to customize the software to fit the direct needs of the company. Evaluating the usability of enterprise software is challenging because the following variables alter the experience that end users have:

  • Software customizations
    It is common practice for enterprise software customers to: 1) change the text, layout, and behavior of delivered features; 2) remove features from the UI; and 3) build their own functionality.

  • System configurations
    Customers configure the software to perform in different ways, depending on the options that they select during setup.

  • User roles
    Individual users access a subset of the overall functionality based on their system-defined role.

  • User preferences
    The software has built-in features that allow end users to personalize many attributes of the interface.

  • Default data
    Forms can be pre-populated with data that is set up for end users by their roles.

  • Domain knowledge
    Many enterprise software products are used by people with deep knowledge and experience in their field.

  • Training
    Users of enterprise software may receive some combination of in-person product training, web-based instruction, email instructions, FAQs, and cheat sheets.

Because each software implementation creates unique relationships between users and the software, the resulting real-life usability issues can go undetected when performing usability tests on standard software. That’s where Remote Contextual Inquiry can be used to uncover usability issues by researching the relationships between end users and the implemented software that they use.

Who should be tested?

There are two key types of people who provide unique and complementary perspectives on software usage and customization within a company: the business analyst and the end user. It is helpful to conduct this technique with at least one representative of each type.

Business analyst

The business analyst may have coordinated the initial software installation or recent upgrade. This person has intimate knowledge of the product in its originally delivered configuration, has been involved with customization decisions, and is connected with training and support issues. You can ask a business analyst for the following:

  • Customizations that have been made and the reasons for making the changes
  • Known issues with using the software (possibly from internal Customer Support records)
  • Rough statistical information about the number of users and descriptions of user types
  • Domain expertise of users and typical duration of employment
  • The business process that the software supports
  • The tasks performed with the software, frequency of task performance, and variables, such as calendar-driven tasks (e.g., quarterly or annual tasks) and event-driven tasks (e.g., approvals or sales orders)
  • Corporate standards, such as monitor resolution settings and supported browsers
  • Information about training methods and materials provided to users
  • Plans for software upgrades
  • Access to end users for further research

Software end users

We can collect a wealth of contextual information from software end users, including:

  • Information about their goals and the tasks performed to accomplish those goals
  • Measurement of task performance data, including task completion time and the number of mouse clicks to perform a task
  • Feedback on layout, content, and behavior in the user interface
  • Personalization or individual-level customizations made to the software
  • Background on the type and effectiveness of the training they received

Remote Contextual Inquiry description and benefits

Remote Contextual Inquiry captures the computer screen of a person working with their version of the software on their own computer. To get started, the usability professional contacts the end user via telephone and web conferencing. The end user is granted presenter-level control of the web conference so that she can share her desktop with the observers. At this point, the usability professional asks the end user’s permission to record the session, then starts the recording software (e.g., Camtasia). Once the formalities are out of the way, the end user completes typical tasks and talks aloud while doing so. This allows the usability professional to: 1) observe and record the actual software being used; 2) gather information about the real-life tasks relevant to that end user; and 3) probe and discuss the end user’s interaction with her system.

RCI is particularly useful for examining end user behavior and software customizations. This technique provides an opportunity to view exactly what our end users see: their version of the software, customizations, feature access, personalizations, and the actions they actually perform to accomplish their goals. In addition, interactions with supporting software can also be captured, including email clients, file folder systems, and data storage systems. This information can help usability professionals identify problem areas and features to consider, and can inform usability and design solutions.

RCI is cost-effective while yielding rich contextual information about users’ behaviors within their desktop environment. Because end users of enterprise software are located worldwide, this approach allows one to tap a wider geographical range of participants. This is particularly effective when:

  • A project requires feedback from regional or international users.
  • Project cost is an issue. User experience professionals do not have to travel to where the users are located. Traditional contextual inquiry often requires traveling to the end user’s location, which can involve airfare, hotel, and food expenses. These expenses are eliminated with this technique, and it still yields richer contextual information than a traditional usability test.
  • Preparation time is limited. It takes less time to set up and conduct the Remote Contextual Inquiry than it does to conduct a standard contextual inquiry at an end user’s location. The set-up time is also shorter than for a traditional usability test because there is less front-end work, such as software setup or configuration.
  • Participants have limited time. A typical session can take place in as little as 30 minutes.

Conducting the Remote Contextual Inquiry

The session is a great opportunity to invite developers, functional analysts, and strategists to view end users interacting with their software. It opens a direct dialog between those who create the software and those who use it, allowing developers and interested parties to observe user behavior and to ask and answer questions without having to leave the office. In addition, it bridges communication gaps between developers and user experience professionals by allowing everyone to observe end users in a similar context.

The introductory correspondence with the business analyst is a good opportunity to set expectations about your goals for the research, the time involved, and any deliverables or milestones key to the success of the project. During your discussion about ideal participants, their key tasks, and common usability issues, we also recommend asking about the training provided to end users and the availability of training materials. In practice, we ask for a copy of end user training materials so that we can analyze them for design opportunities because they often describe mainstream tasks, define software terminology, and describe how to avoid errors.

It is good practice to have a list of questions drafted prior to the session, especially if there are a number of stakeholders in the room, so that you can ask directed questions throughout or at the end of the session. Make sure to identify the session moderator to all observers and to follow her directions as to when and how to participate. Depending on the research focus, this can be a roundtable discussion or more formal interview.

The type of interaction between moderator and end user can also be flexible. One approach is to ask the end user to talk aloud as she performs typical tasks through the system. In this circumstance, the moderator might have the end user comment on specific details, such as why she has hidden certain fields or customized new labels or fields. Another variation is to minimize task discussions to understand how long it takes a user to do a typical task (i.e., a task that the end user performs often). This can be of interest if you are revising a design and would like to have baseline performance data.

Lastly, a general reminder that applies to any type of research: write up your notes immediately after the session or soon thereafter. It is handy to take screen shots from the captured web conference and then annotate those images with user comments and your notes. This working document can be the basis for identifying areas that would benefit from design improvements.

What new things does RCI capture?

The exciting aspect of this approach is that it allows us to observe the user’s desktop context and to record their actual behaviors. It also allows us to identify the end user’s task flow and the features and functions that may already be available to the user but are not being used.

Based on our use of RCI, this technique has allowed us to:

  • Recognize fields that have been hidden or disabled.
    For example, if a field had been removed from a page, we can ask why and find out whether it had been moved elsewhere or was unnecessary for the customer’s business process. One participant said, “Oh we don’t use that, so we hide it from our users.” This is a data point to compare with other customers; perhaps this field should be removed entirely from the product. Sometimes responses were business-related (e.g., other software handled that function) or the field was simply not part of the user’s process.
  • Identify particular words and labels that are ambiguous or misleading.
    An end user responded, “This word here (holds cursor over it), don’t know what it refers to.” When asked how they solved that problem, the end user replied, “In our training guide, we had to write out and correlate certain words used in your system with internally used words here.”
  • Identify customized functionality and features.
    End users may need printing capabilities, for example, and consultants may have custom-created the functionality to do so. In addition, new fields are sometimes added to store data that is customer-specific. An end user stated, “Oh, one thing that was missing that we had to add was business unit. We classify everything by that.” This technique allows us to uncover such customizations.
  • Evaluate possible layout changes.
    Usability professionals can learn about which tasks are primary, which are secondary, and the UI elements that get in the way. The content and amount of default data that appears in a form can be captured, as well as the personalizations that make the user more efficient. “We always need to see the employee’s salary rate next to service type. So we moved it to this section,” responded an end user.
  • Obtain real-life baseline metrics.
    General time-on-task measures can be recorded, and through post-processing of the recording, the number of mouse clicks required to perform tasks can be noted.
  • Identify non-intuitive learnability issues.
    Participants can describe learnability issues and can comment on the level of training that they received.
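The baseline metrics mentioned above can be derived from a simple event log transcribed while reviewing the session recording. The event names and timings here are invented for illustration:

```python
# Hypothetical event log transcribed while reviewing a session recording.
# Each entry: (seconds into the recording, event type).
events = [
    (12.0, "task_start"),
    (15.2, "click"),
    (19.8, "click"),
    (24.1, "click"),
    (31.5, "task_end"),
]

clicks = sum(1 for _, kind in events if kind == "click")
start = next(t for t, kind in events if kind == "task_start")
end = next(t for t, kind in events if kind == "task_end")

print(f"time on task: {end - start:.1f}s, clicks: {clicks}")
# → time on task: 19.5s, clicks: 3
```

Metrics like these, collected per task, give the baseline performance data useful when revising a design.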

Some things to keep in mind

Based on our use of Remote Contextual Inquiry, here are some considerations for conducting this type of study:

  • As the first step in any UCD effort, you need to understand the target markets for your product so that you can obtain representative data from appropriate market segments.
  • This technique applies to software that is already deployed to a customer base. In enterprise software markets, that may mean collecting data on software that is one or more versions old. The pros and cons of this technique must be weighed against a more traditional usability method.
  • Some customers may deal with highly sensitive information and may not want you to make video recordings of their interactions. As is typical in our field, you must always ask and receive permission to record audio or video with your participants.
  • It must be clear to the customer where the recorded information will go, how you plan to use it, and any next steps.
  • It is helpful for researchers to understand the functionality of the product that they are viewing so that users do not have to explain the basics.
  • This technique is not a full contextual inquiry. The UE professional will not be seeing the person’s desk area or other external influences using this method. However, supplemental documentation could be gathered to further understand the environment. For example, a researcher could request that the user take a few pictures of her work area, desk, and surroundings.


Remote Contextual Inquiry gives us an opportunity to view our end users’ desktops and observe how they are using their current products in a cost- and time-efficient manner. It is a marriage of the remote usability lab test and contextual inquiry, allowing us to transcend geographical boundaries without having to travel to distant locations. We gain contextual insights, such as personalized settings, hidden fields, and added functionality, that are typically not obtained during a usability test. It is truly a flexible method that provides a wealth of knowledge about the use of customized enterprise software.

For more info:

  • Camtasia recording software
  • Gough, D., & Phillips, H. (2003). Remote online usability testing: Why, how, and when to use it.
  • Holtzblatt, K., & Beyer, H. (1993). Making customer-centered design work for teams. Communications of the ACM, 36, 93–103.

Jeff English manages the User Experience Team at PeopleSoft in the Supply Chain Management product line. Prior to PeopleSoft, Jeff led research projects for clients including McDonald’s, Medtronic and Evenflo while at IDEO Product Development in Palo Alto, California. Jeff holds a Masters degree in Human Factors and Ergonomics from San Jose State University.

