Interaction Modeling

“The relationship between actions and cognitive processes is important because it explains user behavior and translates to supportive arguments for good design solutions.”

Interaction modeling is a good way to identify and locate usability issues in the use of a tool. Several methods exist (see Olson & Olson 1990 for a review of techniques). Modeling techniques are prescriptive in that they aim to capture what users will likely do; they are not descriptive of what users actually did.

Most methods—descriptive or prescriptive—fail to incorporate the relationship between user actions and cognitive processes. Models of cognitive processing, for example, might attempt to explain how or why a particular task is mentally difficult, yet the difficulty does not directly relate to observable user actions. Conversely, descriptive techniques such as path metrics, click-stream analysis, and breadcrumb tracking take little or no account of the cognitive processes that lead to those user actions.

The relationship between actions and cognitive processes is important because it explains user behavior and translates to supportive arguments for good design solutions. Both prescriptive and descriptive techniques are necessary for characterizing the cognitive processing and user action (cog-action) relationship.

Collecting and reporting user data without presenting cog-action relationships results in poor problem definitions. In fact, practitioners often present no relationship between observed user behavior and the claim that there is a problem. Usability problems are presented as user verbalizations such as “I don’t know what I’m supposed to do here.” Although there is a cognitive reason why the user has fallen into this apparent uncertainty state, that reason is seldom presented. Further, the relationship between an identified problem and the solution to fix it is often not provided. If we don’t know why the behavior is a problem, we can’t design a good solution.

This article presents a three-part method of interaction modeling where:

  • A prescriptive, preferred interaction model (PIM) is created
  • A descriptive user-interaction model (UIM) derived from an actual user study session is created
  • A model of problem solving and decision making (PDM) is used to interpret disparities between the first two models

Preferred Interaction Model (PIM)

A usability study design establishes success criteria. These criteria should be expressed as assumptions about user processes and behaviors. Creating the PIM, or the process you would like the user to follow, specifies the success criteria. Interaction models are a great tool for this endeavor and have enjoyed many published successes (for a good review of case studies, see Fischer 2000). There are three important things to remember about PIMs:

  • The PIM is created by the designer
  • The PIM should be based on task requirements, not functional requirements
  • The PIM exists in the system, not in the head of the user

Interaction models are typically quantitative frameworks requiring statistical validation. I use the term ‘model’ in a more relaxed, qualitative way. The idea is to establish the PIM as a type of hypothesis or intended goal of development: “The system we designed supports X task/activity” (see, e.g., Soudack, 2003). The method presented here is a structured approach for handling that hypothesis based on observation and theory. It lends itself to quantitative methods but doesn’t require them.

Creating a PIM

A PIM is a type of flow diagram that represents the user’s probable decisions, cognitive processes, and actions. Here is a simple example for retrieving a phone number from a contact database (Fig. 1).

Figure 1: PIM for a phone number retrieval interface of a contact database application

PIM entities are the decision (diamond), cognitive process (rounded box), action (square box), and system signal (circle). The model ends with a special action entity represented by a rhombus. The PIM starts with a decision (get #?), ensuring that the model can fit into the context of a larger model. Notice the sequencing of “cognitive state” then “action.” This mirrors the ordering of thinking and then acting that we would observe while watching users perform tasks. It also cues the modeler to encode a cognitive state (either a decision or a mental process) on either side of an action.
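
To make the entity vocabulary concrete, here is a minimal sketch (the article itself contains no code) of how PIM entities might be encoded as a typed graph. The class names, and all node labels beyond “get #?”, are hypothetical stand-ins rather than the actual contents of Figure 1.

```python
# Hypothetical encoding of PIM entities as a typed graph.
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    DECISION = "diamond"       # yes/no branch point
    COGNITIVE = "rounded box"  # mental process
    ACTION = "square box"      # observable user action
    SIGNAL = "circle"          # system signal
    END = "rhombus"            # terminal action entity

@dataclass
class Node:
    name: str
    kind: Kind

@dataclass
class Model:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src, dst, label) triples

    def add(self, name, kind):
        self.nodes[name] = Node(name, kind)

    def link(self, src, dst, label=""):
        self.edges.append((src, dst, label))

# A rough outline of the Figure 1 PIM (intermediate steps are assumed):
pim = Model()
pim.add("get #?", Kind.DECISION)
pim.add("recall contact name", Kind.COGNITIVE)
pim.add("type name in search box", Kind.ACTION)
pim.add("results shown", Kind.SIGNAL)
pim.add("read number", Kind.END)
pim.link("get #?", "recall contact name", "yes")
pim.link("recall contact name", "type name in search box")
pim.link("type name in search box", "results shown")
pim.link("results shown", "read number")
```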

Decisions are modeled as yes/no or true/false statements. If multiple outcomes of a decision are necessary, consider using sub-decisions. For example, you might have a decision structure that looks like Figure 2.

Figure 2: Nesting decisions can allow a complex outcome (choice A, B, or C, rather than just choice A or B)

The granularity of the model detail can be determined by the needs and constraints of the system. A higher-level model of the contact number example (Fig. 3) might be more useful with different study criteria.

Figure 3: Models can be high level and need not articulate procedural, physical actions (e.g., click on red button then move cursor to front of text, etc.)

Frequently on projects, the PIM has been loosely established and exists in some unarticulated form. Parts of the PIM might be discovered in prototype interface mockups, development requirements, and/or the design plans of stakeholders. The PIM can be difficult to construct from such varied sources. However, completing it makes assumptions about preferred interaction explicit and testable. Clearly defined, testable assumptions are a necessity for this line of work.

User state-trace analysis: recording the user-interaction model (UIM)

State-trace analysis (Bamber, 1979) compares a given group’s performance under controlled conditions to performance under actual conditions. The method yields many interesting metrics and affordances. Collecting data for the UIM is somewhat similar to state-trace analysis yet differs in important ways. The UIM is collected under actual conditions (or as close as possible to actual conditions) and is then compared to the PIM.

Rather than attempting traditional state-trace analysis, user state-trace analysis focuses on the goal of the method. Here, we wish to capture qualitative behavioral data while observing users as they transition from cognitive states to action states. We then use these data to “trace” the user’s path through these states as they complete the provided task. The result is a model of the user’s performance that contains valuable information about decision making and problem solving based on the system interface in the context of the task. The UIM can be compared to the PIM because they are similar in form and represent a similar process architecture.

Creating the UIM

User state-trace analysis is a type of coding that allows a researcher to trace the path of behavioral and cognitive states the user exhibits while completing a task. Use the same PIM entity types to create UIM diagrams. Instruct the user to “think out loud” and then trace the user’s path from cognitive processes to action states while they perform the provided task with the system.

This type of analysis has some caveats. First, real-time coding (i.e., recognizing, categorizing, and recording) of states is doomed to fail. The user might transition into states that are not well defined in terms of the task (e.g., an uncertainty process, or a stall action). The best practice is to videotape the study session and review it directly afterwards.

Expect upward of 10-20 times the video session length to complete a full-blown, accurate state-trace. Although a trace can be completed in an hour or less, plenty of extra time is spent determining salient user actions, arguing interpretations, and refining the complete trace model. As in most endeavors, the more decision makers involved, the more time spent.

If the task seems daunting, however, try restricting your level of trace detail to high-level cognitive processes and actions, and use the trace as an exploratory tool. This approach will drastically speed up the process.

When you start a trace diagram, it is a good idea to use the PIM as a reference point for your coding. Have a printout of the PIM on a data collection sheet so you can take notes on top of it. Above all else, be honest while collecting data. You shouldn’t find yourself making rationalizations such as:

  • “Well they pretty much did that state … I’ll just mark it on the UIM.”
  • “They weren’t actually in a ‘confused’ state for very long … I don’t think that counts.”
  • “This user isn’t even sticking to the task anymore … none of this really counts anymore.”

Be aware that there is a tendency to establish an entire process-action-process relationship before writing anything down. Instead, try to first recognize and label a few states and actions on your data collection sheet. Leave these observations as labels anywhere on the sheet, but do not “link them up.” As you complete a significant phase of a task, start to organize and edit your labeled entities. Working this way allows the trace to develop while taking the mental burden off the analyst to “guess what will happen next.”
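
As a hedged sketch of this “label first, link later” workflow (the note-taking structure here is hypothetical, not the article’s): observations are jotted as free-standing labels with timestamps, and only once a task phase is complete are they ordered and linked into a trace.

```python
# Jot observations as unlinked labels; defer linking until a phase ends.
labels = []

def note(t_seconds, kind, text):
    """Record an observation without committing to its place in the trace."""
    labels.append({"t": t_seconds, "kind": kind, "text": text})

note(12, "COGNITIVE", "scans dialog, unsure where to start")
note(19, "ACTION", "clicks 'add new character'")
note(25, "SIGNAL", "entry dialog appears")

# After a significant task phase: sort by time, then link consecutive states.
trace = sorted(labels, key=lambda x: x["t"])
links = list(zip(trace, trace[1:]))
```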

Below is an example excerpt of a PIM and the corresponding UIM obtained from a user state-trace analysis session (Fig. 4). The representation in Figure 4 comes from a usability study of an interaction modeling tool, in which an analyst was asked to review a transcribed user study and assign models of decision making to various text passages.

Figure 4: Excerpt from PIM/UIM of modeling tool process study.

User state-traces provide several useful measurements (a sketch of computing the first two follows the list):

  • Departures: the number and magnitude of states that happen outside the preferred model
  • Logical errors: the number of errors and recoverability from errors that resulted directly from departures
  • Lags: the amount of unproductive time spent in departures
  • Return steps: the number of obstructive returns to previous states
  • External processes: the dynamics of recurring processes that exist outside of the preferred model
  • Bandwidth constraints: the ability of the user to carry out cognitive processes and the amount of necessary resources available for them to do so
  • Action constraints: a cognitive process results in X possible actions though only a subset of these is available
  • Modal tracking: the discrepancy between application mode shifts and user mode shifts
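
As a rough illustration of the first two measurements, the sketch below counts departures and sums lag time. It assumes a deliberately simplified representation (the PIM as a set of expected state names, the UIM as a timed sequence of visited states); real traces carry far more structure than this.

```python
# Count departures (runs of off-model states) and sum lag time spent in them.
def departures_and_lag(pim_states, uim_trace):
    """uim_trace: list of (state_name, seconds_in_state) tuples."""
    departures = 0
    lag = 0.0
    in_departure = False
    for state, seconds in uim_trace:
        if state not in pim_states:
            lag += seconds
            if not in_departure:        # count a run of off-model states once
                departures += 1
                in_departure = True
        else:
            in_departure = False
    return departures, lag

# Hypothetical data loosely based on the phone-number example:
pim_states = {"get #?", "recall name", "type name", "read number"}
uim_trace = [("get #?", 2), ("browse menus", 14), ("open help", 30),
             ("recall name", 3), ("type name", 5), ("read number", 2)]
print(departures_and_lag(pim_states, uim_trace))  # -> (1, 44.0)
```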

Obtaining measurements from a user state-trace can result in a valuable dataset that reveals interesting patterns and trends. User state-trace analysis, however, is not a means of drawing inferences from these patterns, nor is it a method of interpretation. A user state-trace reveals how the user performed the tasks, not why. The architecture of processes and actions exhibited by the user is generated by a cognitive mechanism the user engaged to deal with the given task. A better understanding of the underlying problem-solving and decision-making mechanisms will explain the observed actions.

Problem-solving and decision-making model (PDM)

Cognitive mechanisms assist in solving problems and/or making decisions in order to complete a task. These tend to fall into four basic classes (Lovett 2002):

  • Rule-based: The user decides that if X situation arises then they will take Y action. Rule-based models are often engaged when the interface adheres to a dialog-driven, procedural task design. Examples are grocery store checkout systems and operating system setup wizards.
  • Experience-based: The user has been in this situation before. Last time they did X and it resulted in a desirable outcome, so they will do X again. Experience-based models are often engaged while performing tasks on systems users are familiar with. In usability studies, however, participants are frequently recruited based on the criterion of limited or no experience with a system.
  • Pattern-based: The user has seen a situation that appears to have all the same elements as the current situation. In the former situation, X resulted in a positive outcome, so they will do the analogous version of X here. Pattern-based models take surrogate roles for missing experience-based models. The mechanism that handles the pattern-to-experience-based replacement can itself be expressed as a model. In fact, the mechanism is regularly referred to as the user’s “mental model.”
  • Intuition-based: The user has a hunch that X will result in a desirable outcome so they will “follow their gut.” Intuition-based models are not well understood. Think of them as the user’s ability to distinguish patterns in the problem space that are far more detailed than the problem statement or situation will allow. Expert decision making is often categorized as intuition based.

To employ a model of problem solving and/or decision making (PDM) as an explanatory tool, it helps to diagram the model. An example of a rule-based mechanism is the satisficing model of decision making (Lovett 2002) (Fig. 5). In this model, a user chooses the first option they feel will accomplish the task without considering other options/information. In the following example, the satisficing model is recruited to interpret departures observed between a PIM and the recorded UIM:

Figure 5: The satisficing model of decision making
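
As a minimal sketch of the rule Figure 5 depicts (the function and option names below are hypothetical): a satisficing chooser commits to the first option judged “good enough” and never evaluates the rest.

```python
# Satisficing: take the first acceptable option; remaining options go unseen.
def satisfice(options, looks_good_enough):
    for option in options:              # options in the order encountered
        if looks_good_enough(option):
            return option               # commit immediately
    return None

# In the Story Teller scenario, the text field is the first entry point
# encountered, so it wins even though a better option exists further on.
choice = satisfice(
    ["type name in text field", "select name from existing list"],
    lambda opt: "text field" in opt,    # stand-in for the user's judgment
)
```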

Example Scenario: The application “Story Teller” (Fig. 6) allows a user to add characters from a story to a database. Characters are added to the database using the “add new character” function. Once a character is added, the application allows the user to count the frequency of appearance for a listed character. The PIM for adding a character to the database is illustrated in Figure 7. A user is recruited for participation in our study and given the simple task, “add a character to the character database and determine the number of times the character appears in the short story. The story file has already been loaded into the application.” Figure 8 shows the state-trace of what the user actually did.

Figure 6: The Story Teller application interface with add character dialog
Figure 7: PIM of adding a character to the database
Figure 8: UIM obtained from a usability study session

The UIM shows large departures. It appears that the user tried to “add a new character” every time they saw the character in the story. We might be tempted to explain this as a problem with labeling in the interface or poor task clarification during the study. We could instead employ the satisficing model to explain the departures for a richer interpretation:

  • Claim: There is a problem with the current interface.
  • Evidence: Large disparities between the PIM and state trace data (i.e., UIM) were observed.
  • Explanation: The user may be adhering to a satisficing model of decision making: the user continually adds the same character as a new entry to the database because the text field that allows the user to enter a character is the first available entry point to the data input process. The text field also signals the user to recruit an experience-based model of problem solving: they copy and paste text to save time when the task is thought to require repetitive data entry. Additionally, the editable dropdown menu functions like a textbox, thereby invoking the “data-entry model” to prompt the copy-paste action.
  • Solution: Place a list of available (previously entered) characters below the “New Character” input box. If the list shows each character’s name with a clearly identifiable number of instances next to it, the user can select the character name in the list and press submit. This will fix the problem of re-adding the same character, and it also allows the user to include character aliases in the count (e.g., Matt and/or Matthew).

The satisficing model is an example of a rule-based model, yet it assumes that the user is affected by cognitive bias. Common examples of cognitive-bias models are:

  • Anchoring and adjustment: Initial information presented to the user affects decisions more than information communicated to the user later in the task.
  • Selective perception: Users disregard information that they feel is not salient to completing the task.
  • Underestimating uncertainty: Users tend to feel they can always recover from faulty decisions in an application.

A good resource for the above models and a more detailed list can be found at the Wikipedia entry for decision making and problem solving (Wikipedia 2005).

It is a good practice to diagram several candidate cognitive-bias models before attempting to use them for explaining a specific departure. The diagramming allows you to get specific about exactly how the model explains the observed departure between the PIM and UIM. The final step is to include the cognitive-bias model as an annotation to the PIM with superimposed UIM (Fig. 9).

Figure 9: Complete PIM, UIM, and PDM model integration

Conclusion

The interaction-modeling technique provided here is useful for establishing usability success criteria and for uncovering usability issues. The PIM acts as a testable hypothesis, and the UIM establishes coded behavioral data. Major disparities observed between the PIM and UIM work as evidence to support the claim that there is a viable usability issue. The use of cognitive decision-making and problem-solving models (PDMs) helps interpret and explain why the disparities exist. Together, these essential components (a viable usability claim, behavioral evidence, and theory-driven interpretation) inform the creation of, and rationale for, good user interface design solutions.

References

Bamber, D. (1979). State trace analysis: A method of testing simple theories of causation. Journal of Mathematical Psychology, 19, 137-181.

Decision making. (2005, November 17). Wikipedia, The Free Encyclopedia. Retrieved December 1, 2005 from http://en.wikipedia.org/w/index.php?title=Decision_making.

Fischer, G. (2000). User Modeling in Human-Computer Interaction. User Modeling and User-Adapted Interaction, 11, 65-86.

Lovett, M. C. (2002). Problem Solving. In H. Pashler & D. Medin (Eds.), Stevens’ Handbook of Experimental Psychology: Memory and Cognitive Processes. New York: John Wiley & Sons.

Olson, J. R., & Olson, G. M. (1990). The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS. Human-Computer Interaction, 5(2-3), 221-265.

Soudack, A. (2003). Don’t Test Users, Test Hypotheses. Boxes and Arrows, October 27.

9 comments

  1. Very informative article. I always referred to them as simply ‘flow diagrams.’ You’ve added a few more academic terms to my User Experience vocabulary 🙂

  2. Glad I could help with the lingo! There is another distinction though. Suppose you ‘diagram’ 1 user during a session. I would be inclined to call that a flow diagram too! However, suppose you diagram 10 users? Then, a user cognitive state like “edit list for X” becomes “edit” as more users exhibit the same behavior. Then, you get some predictability (with a level of abstraction): more users will do this edit stage! As soon as corroborated predictability sets in … then it’s a model. Intuitively, the phrase “model predictions” is commonly used with that same meaning. BTW, that speech always gets a standing ovation 🙂

  3. Excellent!!…a very concise but elegant explanation…although it seems somehow sad that after all this time we still have to explain “it” so often. I suppose that it comes from Customers being told and then actually believing they will get a “free design” and “save money” having the programmers sort of “design-while-coding”.

    Nothing shows up as quickly as a lack of design (or blueprint or plan) as evidenced by several big-name federal agencies being forced to abandon multi-million dollar development projects because they attempted to follow the Evolutionary Model instead of the Intelligent Design Model.

  4. Of this you speak the truth. And, your comment, “… believing they will get a ‘free design’ and ‘save money’ having the programmers sort of ‘design-while-coding’.” — sounds like the voice of experience 🙂

    That is a case study I’d like to read.

  5. I don’t know if you have noticed it… but there is a little mysterious “suggest” link. This allows you to “suggest” a comment be considered as an article. Matt, if you think Keith’s comment would be a case study you’d like to read, why not click it and try!

  6. This sounds a lot like The Bridge, a participatory analysis design and assessment (PANDA) technique developed at Bellcore by Tom Dayton, Joe Kramer and others. It’s focused on a participatory workshop in which a variety of stakeholders (business, users, experts) created three views, which they called something like Current, Blue Sky, and Real. It was intended for working on large object-oriented expert systems. The writeups are all pretty dry, but the actual sessions (at least the ones I saw) were much more lively and engaging.

  7. I haven’t heard of The Bridge though many similar methods exist in the expert systems lit. Practice mapping, for example, is a much more complex version of interaction modeling that often involves decision analysis techniques in lieu of the cognitive bias warrants I described above. Was there any mention of whether The Bridge enjoyed any success (case studies, anecdotes, etc.)?

  8. Excellent article – still holds up even three years later I’d say. Basically formalizes a technique that I’d been trying to express in a much more naive way through my own efforts.
