Video games are often overlooked in the scope of usability testing simply because, in a broad sense, their raison d’etre is so different from that of a typical functional interface: fun, engagement, and immersion, rather than usability and efficiency. Players are supposed to get a feeling of satisfaction and control from the interface itself, and in that sense, interaction is both a means and an end. The novelty and whimsy of the design occasionally comes at the expense of usability, which isn’t always a bad thing—that said, video games still have interfaces in their own right, and designing one that is easy to use and intuitive is critical for players to enjoy the game.
Consider how video games are currently researched: market research-based focus groups and surveys dominate the landscape, measuring opinion and taste in a controlled lab environment, and largely ignoring players’ actual in-game behaviors. Behavior is obviously the most direct and unbiased source of understanding how players interact with the game—where they make errors, where they become irritated, where they feel most engaged. When Electronic Arts engaged Bolt|Peters to lead the player research project for Spore, we set out to do one better than the usual focus group dreck by coming at it from a UX research perspective.
SIMULATED NATIVE ENVIRONMENT RESEARCH
One overarching principle guided the design of this study: we would let the users play the game in a natural environment, without the interference of other players, research moderators, or arbitrary tasks. This took a good bit of planning. Usually, we prefer to use remote research methods, which allow us to talk to our users in the comfort of their own homes. Spore, however, was a top-secret hush-hush project; we couldn’t very well send out disks for just anybody to get their hands on. Instead, CEO Nate Bolt came up with what we call a “Simulated Native Environment.” For each of the ten research sessions, we invited six participants to our loft office, where they were seated at a desk with a laptop, a microphone headset, and a webcam. We told them to play the game as if they were at home, with only one difference: they should think aloud, saying whatever was going through their minds as they played. When they reached certain milestones in the game, they would fill out a quick touchscreen survey at their side, answering a few questions about their impressions of the game.
Elsewhere, Nate, the clients from EA, and I were stationed in an observation room, where we set up projectors to display the players’ gameplay, the webcam video, and the survey feedback on the wall, which let us see the players’ facial expressions alongside their in-game behaviors. Using the microphone headset and the free game chat app TeamSpeak, we were able to speak with players one-on-one, occasionally asking them what they were trying to do or to go a little more in depth about something they’d said or done in the game.
Doesn’t that sound simple? Actually, the setup was a little brain-hurting: we had six stations, and each station needed its webcam, gameplay, survey, and TeamSpeak media feeds broadcast live to the observation room – that’s 18 video feeds and 6 audio feeds. Not only did the two (that’s right, two!) moderators have to be able to hear the participants’ comments, but so did the dozen or so EA team members. On top of that, everything was recorded for later analysis.
“The feedback we received from users wasn’t based on tasks we’d ordered them to do, but rather on self-directed gameplay tasks the users performed on their own initiative”
The important thing about this approach is that the feedback we received from players wasn’t based on tasks we’d ordered them to do, but rather on self-directed gameplay tasks the players performed on their own initiative. We didn’t tell players outright what to do or how to do things in the game, unless they were critically stuck (which was useful to know in itself). The observed behavior and comments were more stream-of-consciousness and less calculated in nature.
The prime benefits of our approach were the absence of moderators, which mitigated the Hawthorne effect, and the absence of other participants, which eliminated groupthink. Additionally, the players were more at ease: it’s hard to imagine these video outtakes (see below) being replicated in a focus group setting. Most importantly, they weren’t responding to focus questions–they were just voicing their thoughts aloud, unprompted, which gave us insight into the things they noticed most about the game, rather than what we just assumed were the most important elements.
OOPS, WE MESSED UP
Over the year-long course of the project, there was one incident that proved to us just how important it was to preserve the self-directed task structure of our research. Because of the multiphase progression of Spore, we believed it was important to carefully structure the sessions to give players a chance to play each phase for a predetermined amount of time, and in a set order, as if they were experiencing the game normally.
Partway through the second session, we started having doubts: even though we weren’t telling players what to do within each phase, what if our rigid timing and sequencing was affecting the players’ engagement and involvement with the game?
To address this, between sessions, we made a significant change to the study design: instead of telling players to stop at predetermined intervals and proceed to the next phase of the game, we threw out timing altogether and allowed players to play any part of the game they wanted, for as long as they wanted, in whatever order they wanted. The only stipulation was that they should try each phase at least once. Each session lasted six hours spread over two nights, so there was more than enough time to cover all five phases, even without prompting players to do so.
Sure enough, we saw major differences in player feedback. We are unable to provide specific findings for legal reasons, but we can say that the ratings for certain phases consistently improved (as compared with previous sessions). Additionally, a few of the lukewarm comments players had made about certain aspects of the game seemed to stem from the limiting research format, rather than the game itself.
It became clear that when conducting game research, it is vitally important to stick to the actual realities of natural gameplay as much as possible, even at the expense of a precisely structured research format. You have to loosen up the control a little bit; video games are, after all, interactive and fun. It makes no more sense to formalize a gameplay experience than it does to add flashy buttons and animated graphics to a spreadsheet application.
BRINGING GAME RESEARCH INTO THE HOME
There are a lot of ways to go with the native environment approach. Even with all efforts to keep the process as natural and unobtrusive as possible, there are still lots of opportunities to bring the experience even closer to players’ typical behaviors. The most obvious improvement is the promise of doing remote game research–allowing participants to play right at home, without even getting up.
Let’s consider what a hypothetical in-home game research session might look like: a player logs into Xbox Live and is greeted with a pop-up inviting him to participate in a one-hour user research study, to earn 8000 Xbox Live points. (The pop-up is configured to appear only to players whose accounts are listed as 18 or older, to avoid issues of consent with minors.) The player agrees and is automatically connected by voice chat to a research moderator, who is standing by. While the game is being securely delivered and installed on the player’s Xbox, the moderator introduces the player to the study and gets consent to record the session. Once the game is finished installing, the player tests the game for an hour, giving his think-aloud feedback the entire time, while the moderator takes notes and records the session. At the end of the session, the game is automatically and completely uninstalled from the player’s Xbox, and the Xbox Live points are instantly awarded to the player’s account.
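To make the ordering and consent constraints of this hypothetical flow concrete, here is a minimal sketch in Python. Everything in it—the function name, step names, and the age gate—is invented for illustration; none of it corresponds to a real Xbox Live API.

```python
# Hypothetical sketch of the in-home session flow described above.
# The step names and thresholds are illustrative assumptions only.

def run_remote_session(player_age, consented):
    """Return the ordered steps of a remote session and a final status."""
    steps = []
    if player_age < 18:  # invitation pop-up shown only to adult accounts
        return steps, "not invited: account under 18"
    steps.append("show invitation pop-up")
    steps.append("connect voice chat to moderator")
    if not consented:    # recording requires explicit consent up front
        return steps, "aborted: no consent to record"
    steps.append("securely deliver and install test build")
    steps.append("one hour of think-aloud gameplay, recorded")
    steps.append("completely uninstall test build")
    steps.append("award incentive points")
    return steps, "complete"

steps, status = run_remote_session(player_age=25, consented=True)
```

The point of the sketch is simply that consent and age screening gate everything else, and that delivery, play, and cleanup form a fixed sequence regardless of how the game itself is played.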
Naturally, there are lots of basic infrastructure advances and logistical challenges to overcome before this kind of research becomes viable:
- Broadband penetration
- Participant access to voice chat equipment
- Online recruiting for games, preferably integrated into an online gaming framework
- Secure digital delivery of prototype or test build content
- Gameplay screensharing or mirroring
For many PC users, these requirements are already met, and for games with built-in chat and/or replay functionality, the logistics should be easier still. Remote research on PCs is already viable (and, in fact, happens to be Bolt|Peters’s specialty). Console game research, on the other hand, would likely require a substantial investment by console developers; handheld consoles present even more challenges.
We expect that allowing players to give feedback at home, the most natural environment for gameplay, would yield the most natural feedback, bringing game evaluation and gameplay testing further into the domain of good UX research.