Quick and Easy Flash Prototypes

Written by: Alexa Andrzejewski

To tackle the classic “how to prototype rich interactions” problem, I developed a process for translating static screen designs (from wireframes to visual comps) into interactive experiences using Flash. Requiring only fairly basic ActionScript knowledge, this approach proved to be a quick yet powerful way to bring interaction designs to life.

When asked if I could find a quick and easy way to prototype a web application my project team had wireframed in Visio, I first turned to PDF prototyping. Using a PDF of the wireframes as the base, I overlaid clickable elements and some interactive data entry fields. Everything was wonderful—until the client asked us to add color to the wireframes. The Visio document was updated, a new PDF had to be made… and all that interactivity had to be reapplied. (It is tedious and time-consuming to replace page content in a PDF.)

Not wanting to repeat this tedious process again and again, I turned to Flash. I was excited to find that Flash not only felt more streamlined and intuitive when creating basic click-throughs, but it also offered almost limitless potential for prototyping rich interactions. With some basic ActionScript (a scripting language used in Flash to define interactions) knowledge and a bit of resourcefulness, I was able to create functional combo box navigations, type-ahead mechanisms, and eventually a complex, drag-and-drop scheduling application similar to Google Calendar. And whenever the screen designs changed, all I had to do was import the new background images, while the interactivity layers stayed happily in place, requiring only minimal tweaking.

Quick and easy yet powerful and flexible

The idea of working with Flash may seem daunting, but the effort required to create a basic click-through is comparable to that required by other applications, while the flexibility and potential for extending the prototype is far greater. (After all, Flash was designed for creating interactive applications, not presentations [like PowerPoint], diagrams [like Visio] or portable documents [like Acrobat].) Considering the additional benefits it offers, Flash prototyping is well worth adding to your interaction design toolkit:

  • Flash prototyping allows you to add interactivity to screen designs and wireframes you’ve already created. You don’t have to recreate the layouts in another application.
  • Flash allows screen images to be updated without losing your interactive layers, which is much more difficult in PDF prototyping, as I described above.
  • Flash includes a robust library of customizable user interface components that can be dropped into your prototype and used as they are to add realism (e.g., a text field you can actually type in) or programmed using ActionScript to serve as fully-functional interface controls.
  • You can export prototypes as stand-alone applications (suitable for running from a thumb-drive at an on-site usability test where the Flash plugin may be blocked) or as HTML pages with embedded Flash (in which the browser can be used to scroll up and down through tall prototypes).
  • You don’t have to hire a professional Flash developer to achieve all of this. (I’m certainly not a Flash expert.) You can create simple, yet believable, prototypes with very little knowledge of ActionScript. All it takes is resourcefulness, creativity, and experimentation.

A valuable tool and impressive deliverable

Flash prototypes can be both a valuable tool for project teams and an impressive deliverable for clients. At the consultancy where I first employed Flash prototyping, these prototypes quickly became an invaluable part of the design, testing and buy-in process for many projects, and clients loved them. Here’s why:
  • By experiencing the interactions we were proposing, we were able to quickly sense when something that seemed good on paper didn’t feel right in reality.
  • By testing realistic prototypes, which users perceived to be fully functional, we were able to collect accurate user feedback about how novel interactions (such as a type-ahead search) felt and functioned in a real context.
  • By providing clients with functional demos, we were able to see our concepts move towards reality as clients enthusiastically used them to rally organizational support for concepts.
Having learned a lot through trial and error in creating these prototypes, I’d now like to share the process I developed through a step-by-step tutorial.

How to use this tutorial

In this tutorial, you’ll be creating a practice prototype using static screen images from Facebook that I’ve annotated using Visio. You should be able to complete it in 1 hour or less. To see what you’ll be creating, you can try out the final prototype files:
You’ll also want to download the files needed to complete this tutorial:
If you don’t have Flash CS3 or higher, you can download a 30-day, fully-functional trial from:
The major steps of this tutorial are in bold. If you have some Flash experience, you may be able to complete the tutorial using the bold steps and the annotations on the screens. For those who are new to Flash, I’ve provided detailed sub-steps and definitions.

Table of Tutorial Steps

Preparing screen designs for prototyping

To prepare for Flash prototyping, you’ll need to have images showing each possible screen and state in the scenarios you want to demonstrate. Often, you’ll have already created many of these screen designs in the wireframing phase of your project. The fidelity of these images may range from wireframes to visual design comps, depending on your prototyping needs. It doesn’t matter what software you use to create your images (e.g., Visio, Illustrator, Photoshop, etc.), as long as you can export them as raster images (e.g., GIF, PNG, JPG).

While images have already been prepared for you in this tutorial, here are some helpful tips for creating and exporting screen designs for use in Flash:

Creating the screens

Create neat layouts using grids and guides to keep items aligned from page to page to prevent inappropriate jumpiness (if only part of a page is supposed to be changing, such as when a menu appears, and the entire page shifts a little, it will detract from the prototype’s realism). Test page-to-page alignment by viewing images full-screen.

Ensure that images are all the exact same size by using a common background image or canvas size for all screens. In Visio and other page layout applications, be sure to remove or hide anything that protrudes outside of this background before exporting. Placing annotations that don’t belong in the prototype and anything else that will exist outside of the content area (page titles, footers) on a separate layer allows them to be hidden before exporting the images.

Planning interactivity

Add a unique identifier to each screen or page that, unlike the page numbering (if you’re using a page layout program), will not change. You’ll be saving each screen as a file named after this identifier (e.g., “W05.png”). Using these identifiers, if your page ordering changes or you add or delete pages, you’ll still know which page corresponds with which exported image later.

Mark up the screens with callouts explaining everything that should be interactive and what should happen when elements are interacted with. For a basic click-through, most of these callouts will point to an element and read, “Go to X” (where X is the target page identifier). Make sure these callouts don’t extend outside of the image area. These callouts will make the Flash work much easier, and the callout version of the prototype makes a useful “practice prototype” that can help future users learn how to navigate the prototype. You’ll re-export the images and create a version of the prototype without callouts later.

Saving or exporting images

Save or export each screen as an image that will fit on a screen without scaling (preferably a lossless file such as GIF or PNG) using the page identifier as the filename. If you’re using vector-based software (like Visio), make sure the resolution of the exported files is set so that the resulting images will fit on the screen without scaling or scrolling. If your screen dimensions are 1024×768 points, for example, and you want the resulting images to be 1024×768 pixels, you would need to set the export resolution to 72×72 pixels/in.

Welcome to Flash

Note: If you don’t see some of these panels, use the Window menu to locate and turn them on.
You’ll be introduced to each of these elements and to key principles of ActionScript as you work through this tutorial. In the Appendices at the end of this document, you’ll find a handy reference guide and summary of everything you’ll have learned about Flash and ActionScript.

Setting up the Flash document

The images needed to complete this tutorial from scratch can be found in the SourceImagesAnnotated folder of this zip file:
While setting up the Flash document may feel tedious, what’s exciting about Flash prototyping is that after you’ve set up a document once, you can save your work as a template for future prototypes so that you don’t have to start from scratch. All you’d have to do to start a new prototype is import a new set of images with the same filenames and add or remove keyframes as needed depending on the number of screens.
If you’re already familiar with Flash basics and would like to skip to creating interactions, you can download an already-set-up template and begin working through the tutorial from the “Basic principles of interaction” section.

Creating a new prototype template

1. Create a new Flash document that uses ActionScript 2.0 and has a 1024px by 768px stage.
1.1. Open Flash and create a new Flash File (ActionScript 2.0).

ActionScript 2.0 vs. ActionScript 3.0

The latest version of ActionScript is ActionScript 3.0, a much more sophisticated but somewhat harder-to-learn version of the language. Because AS 2.0 is more direct and intuitive (you can add scripts directly to buttons and objects), I recommend starting there. In fact, if you’re only using Flash for simple applications (as most prototypes will be), AS 2.0 is all you really need. If you eventually want to catch up with the AS 3.0 trend, understanding AS 2.0 first can help you make the conceptual leap to AS 3.0.

When working in ActionScript 2.0, you’ll want to make sure that tutorials and scripts you find online and in books are compatible. Look for tutorials written for Flash MX 2004, Flash 8, or Flash CS3 using ActionScript 2.0.
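To make the difference concrete, here is a minimal sketch of the same “go to frame” interaction in both versions. The AS 2.0 script is attached directly to the button; the AS 3.0 equivalent (shown in a comment, since the two can’t be mixed in one file) must live on the timeline and assumes a hypothetical instance name, “editButton”:

```actionscript
// ActionScript 2.0: the script lives directly on the button instance.
on (release) {
    _root.gotoAndStop("W02");
}

/* The ActionScript 3.0 equivalent lives on the timeline instead, roughly:

   editButton.addEventListener(MouseEvent.CLICK, function(e:MouseEvent):void {
       gotoAndStop("W02");
   });
*/
```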

1.2. To define the size of the stage, in the Properties Panel, click the button next to “Size” that shows its current dimensions. Change the dimensions to the size, in pixels, of your exported screen designs—1024px by 768px in this tutorial. Press “OK.”
The Stage is your visual workspace or canvas. This is where you’ll specify where objects appear. Objects in the grey area outside the stage will not be visible when the exported flash movie is viewed at the correct size.
2. Import the images to keyframes by using Import to Stage and selecting the first file in the sequence. Flash will automatically prompt you to import the rest of the files and place each image on its own keyframe.
2.1. From the File menu, go to File > Import > Import to Stage. Browse to the folder where your images are located and select only the first image in the series—“W01.png.” Click "Import."
2.2. A dialog will appear asking if you want to import all of the images in the sequence. Click “Yes.” All of the images will be imported to your Library and automatically placed on separate keyframes in the timeline.

The Timeline is where you define when objects appear or disappear from the stage. The timeline contains frames and keyframes which can be on multiple, independent layers. The layers in Flash work in an identical fashion to layers in Photoshop.

Frames are displayed as blocks. They are grey when they contain content and white when they are empty.

Keyframes are frames marked with circles or dots. A keyframe is a frame where a change can occur without affecting the previous frames. Changes made on keyframes persist until the next keyframe unless “tweening” is used to fill in the blanks. You can also add commands (ActionScript) to keyframes, which will be executed when the player reaches that frame.

2.3. When you click to select one of the screen images, you’ll notice that its properties are shown in the Properties Panel, including its X and Y position on the stage (measured from the upper-left corner by default). The X and Y positions should both be “0” by default. If you ever move one of the images out of place accidentally, you can use the Properties Panel to place it back in the upper-left corner by re-entering “0,0” as its position.
The Properties Panel is context-sensitive—it shows/allows you to set properties for the last frame or instance that you selected. It is shown at the bottom of the Flash interface.
3. Name the current layer, “Wireframes.” Label each keyframe using the image’s identifier. Insert four frames between every keyframe to make these labels visible.
3.1. In the Layers area at the top of the workspace, double-click on the words, "Layer 1" and type "Wireframes." Press Enter to commit the change.
3.2. Click on the first keyframe to highlight it.
3.3. Press F5 four times to insert four frames after this keyframe.
3.4. With the first keyframe still selected, in the Properties Panel, enter the image’s identifier, “W01,” in the Frame Label field (remember, ActionScript is case sensitive).
Frame Labels can be assigned to keyframes so that they can be referred to using ActionScript. It is generally better to refer to frame labels in ActionScript than to frame numbers because frame numbers are subject to change as you add or remove frames.
3.5. Click on the second keyframe to highlight it. Insert four frames after it and label it “W02.” Repeat for the remaining keyframes, including the final one. This part may seem a little tedious, but you should not have to repeat it again unless you insert additional screens. You’ll be able to update these images without recreating the keyframes and labels in the future.
The Properties Panel is where you can assign frame labels to frames and instance names to instances.
4. Create a new layer called, “Frame Actions.” Make sure the Flash movie does not automatically play by adding a “stop();” action to the first frame.
4.1. Right-click or Ctrl+click (Mac) on the first layer’s name and choose “Insert Layer.”
4.2. Rename the new layer, “Frame Actions.”
4.3. Select the first keyframe in this new layer.
4.4. Go to Window > Actions to turn on the Actions Panel.
The Actions Panel is context-sensitive—it shows/allows you to add scripts to the last frame or instance that you selected. Look at the tab name near the bottom of the Actions Panel to verify what you are adding actions to.
4.5. With the first keyframe selected, type the following in the Actions Panel:
stop();
Notice that a small “a” appears in the middle of the keyframe to indicate that actions will be triggered when that frame plays.

ActionScript is case sensitive.

Semicolons should be used at the end of every statement (generally every line).

Frame Actions are attached to keyframes and are triggered when that frame is reached.

Functions are built-in actions that you can call on using keywords. These commands are keywords followed by parentheses in which specific details can be added if necessary.
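For example, both of the calls below are built-in functions—one needs no details, so its parentheses are empty, while the other takes a frame label:

```actionscript
stop();             // halt the playhead; no details needed, so the parentheses are empty
gotoAndStop("W02"); // jump to the frame labeled "W02" and stop there
```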

Creating a reusable button

5. Create a button symbol. Make the button invisible—an invisible button is clickable but does not display anything on the stage when the Flash file is compiled: it has a clickable area (a “Hit” state) but no visible states (Up, Over, and Down are all blank). This button should be a rectangle roughly the size of an average clickable area in your prototype. You’ll be able to resize instances of this button as needed. You will be using this button overlay to simulate most interactions.
5.1. Go to “Insert > New Symbol…”
Symbols are objects that can be used and reused. They are similar to Shapes in Visio and Stencils in Omnigraffle. Symbols include imported content, Movie Clips (objects that can have their own, independent timelines), Buttons (special movie clips with four frames in which four possible button states can be created), and Graphics (objects that may contain drawings and/or imported images).
5.2. In the dialog that appears, name it "invisibleButton" and choose the type "Button."
5.3. Notice in the Edit Bar that you are now editing “invisibleButton.” This button’s “timeline” consists of four labeled frames. Each frame represents a unique button state. Typically, you would draw what the button looks like normally in the “Up” frame, you’d draw the rollover state in the “Over” frame, and you’d draw the pressed state in the “Down” frame. “Hit” is an invisible state used to designate the clickable area of a button. Since you want your button to be invisible, you will only be creating the “Hit” state. Right-click or Ctrl+click (Mac) on the frame labeled, “Hit” and choose “Insert Keyframe.”
The Edit Bar (Navigation Area) indicates which timeline you are currently viewing. In this tutorial, you will generally be working on the main timeline, labeled “Scene 1” by default, but you will eventually be creating objects that have their own, independent timelines. These objects can be nested inside of each other. When you are inside an object’s timeline, a breadcrumb trail will indicate where you are in the grand scheme of things, e.g., Scene 1 > CarObject > WheelObject.
5.4. Using the Rectangle Tool and Fill Color picker if necessary, draw a solid-colored box on this keyframe. The color does not matter, as it will not be visible. It also doesn’t matter exactly what shape and size it is—you’ll be resizing each instance as needed. Just make it the size of a typical button in your prototype.
The Tools Bar contains tools you can use to create and manipulate graphics and objects on the stage.

5.5. In the Edit Bar, click the Back Arrow or "Scene 1" to go back to the main movie.
5.6. Your "invisibleButton" symbol should appear in the Library Panel.

Creating a layer for interactive elements

6. Add another layer to your movie, called “Interaction,” for buttons and controls. In this layer, insert one keyframe per screen.
6.1. Right-click or Ctrl+click (Mac) on the “Wireframes” layer’s name and choose “Insert Layer.”
6.2. Rename the new layer, “Interaction.”
6.3. Lock the “Wireframes” layer by clicking in the padlock icon column to prevent accidental changes.
6.4. Right-click or Ctrl+click (Mac) on the frame above the keyframe labeled, “W02” and choose “Insert Keyframe.” Repeat, inserting a keyframe in this layer above every labeled keyframe.

Basic principles of interactivity

I’ve provided a Flash file in which all the steps to this point have been completed so that you can experiment with the more interesting aspects of Flash prototyping. If you’d like to start the tutorial from this point, open the file called FlashPrototype_Template.fla from this zip file:

Creating a basic click-through

7. Place invisible buttons over all the “go to X” elements in the images (the ones specified using green callouts). Add gotoAndStop(“frameLabel”); actions to each button, telling Flash to go to the frame label specified in the annotations when that button is released.
7.1. Click on the first keyframe of the Interaction layer.
7.2. Drag an instance of the invisibleButton onto the stage and drop it over the “Edit” button in the screen design. It should appear to be a transparent, blue area. You can use the Free Transform Tool (which you can find in the Tools bar) to make it cover just the area that should be clickable.
Instances are unique occurrences of symbols in your Flash movie. You can drag as many instances of a symbol into your movie as you like. If you update the symbol, all of its instances will be updated as well; however, certain properties (such as scaling and positioning) can be customized on an instance-by-instance basis. Later you’ll learn how to name instances so you can refer to them specifically using ActionScript.
7.3. Next you are going to make the button interactive. When you add an action to an object (vs. to a keyframe, as you did earlier), it must be contained within an “Event Handler” so that Flash knows when to execute the action. With the button selected, type this Event Handler in the Actions Panel:
on(release) {
}
Event Handlers are used to specify behaviors that trigger actions. Actions contained within an event handler’s curly braces will be triggered only when the event preceding them occurs (e.g., when a button is pressed and then released).
7.4. All of the actions that should happen when the button is clicked and then released should go within the two curly braces. You’re going to use the gotoAndStop(“frameLabel”) function to tell the prototype to go to the frame labeled “W02” when the button is clicked and released.
on(release) { // When this event occurs…
gotoAndStop("W02"); // these actions should be triggered.
}

Curly braces can be used to group statements.

Comments that don’t affect the code can be added by preceding comment text with two forward slashes.

7.5. To ensure that the main timeline is controlled (vs. the timeline of the button the script is attached to), you can precede gotoAndStop with the name of the timeline you’re targeting. The main timeline is referred to as the “_root” in ActionScript. The final script should read:
on(release) {
_root.gotoAndStop("W02");
}

Actions are context-sensitive. They act on whichever movie or timeline they are attached to. If actions are placed in the main timeline, they’ll act on the main movie. If they’re placed on a timeline within an object, they’ll only act on that object unless otherwise specified.

To target specific timelines and objects, refer explicitly to the main movie as “_root” and other objects by their assigned “Instance Names.” If you’re unsure which object an action applies to, use explicit targets to ensure your actions will work as intended. You’ll learn more about Target Paths later.

7.6. If you go to Control > Test Movie (Ctrl+Enter on PC, Apple+Enter on Mac), you should see that the button symbol is invisible, but that you can still click on it to go to frame “W02.” It gives you the impression that the “Edit” link is working—your pointer should change to a hand cursor, and clicking the link should take you to frame “W02.”
7.7. On keyframes W02 through W04, wherever you see a green, “go to X” callout, add an invisible button with the appropriate script. The easiest way to do this is to copy and paste the button you’ve already created. This creates a new instance of the button while preserving the script you’ve added to it. All you have to do is change the destination frame label.
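For instance, a pasted copy of the button placed over a “go to W03” callout would keep the same event handler, with only the frame label changed:

```actionscript
on (release) {
    _root.gotoAndStop("W03"); // only this frame label differs from the original button
}
```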
7.8. Test your prototype (Ctrl+Enter on PC, Apple+Enter on Mac). You should be able to follow the trail of green callouts through the first five frames.

That’s all there is to creating a basic click-through using Flash. The prototype at this point should be comparable to what you could create in Visio, PowerPoint or Acrobat.

What makes Flash a worthwhile prototyping tool, however, is that it makes it easy to add realism and “smarts” to your prototypes. Flash’s “Components Library” offers a robust collection of customizable user interface components that can be dropped into your prototype and used as they are to add realism (e.g., a text field you can actually type in) or programmed using ActionScript to serve as fully-functional interface controls. Using one of these components and some simple ActionScript, you’ll next learn how to add basic logic to your prototype.

Adding a conditional button

There are many applications for conditional buttons in prototypes, including demonstrating error handling.
8. Add a CheckBox component named “termsBox” to frame “W05” of the “Interaction” layer.
8.1. Select the keyframe in the “Interaction” layer above the frame labeled “W05.”
8.2. Open the Components library (Window > Components).
The Components Panel contains a special library of user interface objects, such as radio buttons and combo boxes, that have customizable properties. These properties can be edited in the Parameters tab of the Properties Panel and/or changed using ActionScript.
8.3. In the “User Interface” folder, you’ll find the CheckBox component. Drag it onto the screen and place it over top of the check box in the image.
8.4. In the Properties tab, give the Instance Name, “termsBox” to this checkbox.
An Instance Name can be assigned by selecting an instance and entering a unique name in the Instance Name field in the Properties Panel. Since a symbol in your library can be used in your Flash movie multiple times, each occurrence needs to have a unique name if you want to address it using ActionScript.
8.5. With the check box selected, in the Properties Panel, click the Parameters Tab. Select the “Label” field in the Parameters Tab and delete the label text.
The Parameters tab of the Properties Panel is where you can edit the special properties of components. Each property is listed on a separate line.
9. Create an invisible button that determines whether the check box is selected or not and sends the prototype to the appropriate screen.
9.1. Drag an invisible button over the "Upload Picture" button on frame “W05.”
9.2. Select this button, and in the Actions Panel, add an event handler:
on(release) {
}
9.3. Within this event handler, you’re going to add a conditional statement, an “if Statement,” to check whether “termsBox” is selected. The condition contained within the “if statement” will be tested when the event (on release) occurs. It is constructed as follows:
on(release) {
if(CONDITION) { // If this condition is met…
// perform all actions contained within these curly braces.
}
else { // Otherwise…
// do these actions.
}
}
The If Statement describes a condition that must be met for certain action(s) to be performed. It can optionally be followed by an “else” statement that specifies what to do otherwise.
9.4. The condition that you’re checking for is whether the “selected” property of “termsBox” is “true.” To refer to properties of objects, use the object’s target path, followed by the name of the property (for example, _root.termsBox.selected).
Target Paths are like web URLs or file paths, only they use dots (.) instead of slashes to indicate hierarchy. Eventually you’ll be using ActionScript to address movies inside of other movies. An easy way to do so is using “absolute paths,” which indicate the location of an object relative to the main movie (the “root”) using objects’ instance names. (For example, to address an object with the instance name “spokes,” you might write: _root.car.wheels.spokes) You can also use “relative paths,” which you can learn more about in Flash’s documentation.
Properties of an object are special keywords that Flash recognizes. You can evaluate or change properties of an object using ActionScript. Components, movie clips and other types of objects all have unique properties. You can look up a particular object type’s properties and their possible values in Flash’s documentation.
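As a small sketch of these two ideas together, a frame action could both set and read the CheckBox’s “selected” property through its target path (“termsBox” is the instance name assigned in step 8.4):

```actionscript
// Frame action sketch using a target path to reach a named instance:
_root.termsBox.selected = true;               // pre-check the box from script
var agreed:Boolean = _root.termsBox.selected; // read the property back into a variable
```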
9.5. To test whether something is equal to something else, you’ll use the “equals” operator, which consists of two equals signs. Your condition will read:
on(release) {
if(_root.termsBox.selected == true) {
}
else {
}
}
Operators are used in If Statements to compare one value to another in the format, “if(x OPERATOR y).” The most common operators are:
== equals
> is greater than
< is less than
!= is not equal to
<= is less than or equal to
>= is greater than or equal to
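Any of these operators can drive a conditional. As one hedged illustration (assuming a TextInput component with the hypothetical instance name “nameField” placed over a text field in your screen design), you could require at least three typed characters before letting the user proceed:

```actionscript
on (release) {
    // "nameField" is an assumed TextInput instance, not part of this tutorial's files.
    if (_root.nameField.text.length >= 3) { // at least three characters typed
        _root.gotoAndStop("W07");
    }
    else {
        _root.gotoAndStop("W06"); // e.g., an error screen
    }
}
```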
9.6. The complete script that will be placed on the button, which includes the actions that should happen when the condition is met or not met, is:
on(release) {
if(_root.termsBox.selected == true) {
_root.gotoAndStop("W07");
}
else {
_root.gotoAndStop("W06");
}
}
9.7. Repeat steps 8 and 9 on the next keyframe, “W06,” only name the checkbox “termsBox2” and add this script to the button:
on(release) {
if(_root.termsBox2.selected == true) {
_root.gotoAndStop("W07");
}
}
10. Test your prototype (Ctrl+Enter on PC, Apple+Enter on Mac). Try selecting the checkbox and clicking the button. Test it again and see what happens when it is not selected. You should be able to complete the entire first part of the prototype.
10.1. If it doesn’t seem to work, check the following:
10.2. Make sure your scripts were attached to the button, NOT to the checkbox or to the keyframe.
10.3. Check that your syntax is correct (don’t forget your {, }, and ;).
10.4. Ensure that your buttons point to the correct frame labels.

Exporting your prototype

If you made it this far, good job! You’ve learned some of the fundamentals of Flash and ActionScript and have created a simple, yet smart prototype. All that’s left is exporting the prototype as a stand-alone file that can be shared with clients and usability testers. To do this:
11. Go to File > Publish Settings. Choose “Windows Projector” or “Macintosh Projector” (or both), enter a file name, and press “Publish” to create a standalone, self-executing file of your prototype. It will be created in whatever folder the Flash file is in.
While the annotations make the prototype easy to test and navigate, it wouldn’t be very challenging for test participants! To make a version of the prototype that is not annotated:
12. Remove or hide the annotations in the software you used to create the screens.
13. Export or save each of the screens to a new folder using the same file names that you used before. Flash doesn’t care what folder an image comes from; if it has the same name, Flash considers it to be the same image.
14. In Flash, use “Save As…” to create a new copy of the prototype.
15. Go to “File > Import > Import to Library” and import all of the final images at once (use Ctrl+A or Apple+A to select all). Since the names of these images match the names of the images you imported previously, Flash will ask you whether you want to replace the existing items. Since this is exactly what you want, select “Replace existing items” and press “OK.”
16. Click through each keyframe to make sure the prototype looks right (the buttons are aligned properly and the screens are updated).
17. You can now publish another copy of the prototype without the annotations by going to “File > Publish Settings” and giving the files new names.
Open Andrzejewski_Prototype_Mac.app or Andrzejewski_Prototype_Windows.exe to see the final prototype. You can also open FlashPrototype_Completed.fla from Andrzejewski_SourceFiles.zip to see what the final Flash file should look like.

Resourcefulness, creativity, and experimentation

Using only the principles learned in this tutorial and a little resourcefulness, creativity, and experimentation, you can create quite robust prototypes. In fact, using just what you’ve already learned, you could conceivably simulate rich interactions ranging from fly-out menus to type-ahead search.

Look for a follow-up article that will show you more examples of what’s possible.

The wonderful thing about Flash is that Flash prototypes can be as simple or as complex as they need to be. Start building your click-through prototypes in Flash. Once you’ve built a few of those, try a scenario that involves a little logic. When a more complex situation arises (“Could you make this area turn yellow when you drag that icon over it?”), do some research on sites like ActionScript.org to see if you can find an easy way to show it or at least fake it.
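As one hedged sketch of how you might fake that very request in AS 2.0 (assuming two movie clips you’d create yourself, with the hypothetical instance names “dragIcon” and “dropZone”), a frame action could make the icon draggable and tint the zone yellow when the icon is dropped on it:

```actionscript
// Frame action sketch: "dragIcon" and "dropZone" are assumed movie clip instances.
_root.dragIcon.onPress = function() {
    this.startDrag(); // follow the mouse while pressed
};
_root.dragIcon.onRelease = function() {
    this.stopDrag();
    if (this.hitTest(_root.dropZone)) { // dropped over the zone?
        // Tint the drop zone yellow using the AS 2.0 Color class.
        var zoneColor:Color = new Color(_root.dropZone);
        zoneColor.setRGB(0xFFFF00);
    }
};
```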

While you may not be able to take full advantage of Flash’s prototyping potential immediately, the benefits of using Flash, even when creating simple prototypes, make it a worthwhile addition to any interaction designer’s toolkit.

Appendix A: The Flash Interface

Note: If you don’t see some of these panels, use the Window menu to locate and turn them on.
The Stage is your visual workspace or canvas. This is where you’ll specify where objects appear. Objects in the grey area outside the stage will not be visible when the exported Flash movie is viewed at the correct size.
The Timeline is where you define when objects appear or disappear from the stage. The timeline contains frames and keyframes which can be on multiple, independent layers.
Frames are displayed as blocks. They are grey when they contain content and white when they are empty.
Keyframes are frames marked with circles or dots. A keyframe is a frame where a change can occur without affecting the previous frames. Changes made on keyframes persist until the next keyframe unless “tweening” is used to fill in the blanks. You can also add commands (ActionScript) to keyframes, which will be executed when the player reaches that frame.
Frame Labels can be assigned to keyframes so that they can be referred to using ActionScript. It is generally better to refer to frame labels in ActionScript than to frame numbers because frame numbers are subject to change as you add or remove frames.
The Edit Bar (Navigation Area) indicates which timeline you are currently viewing. In this tutorial, you will generally be working on the main timeline, labeled “Scene 1” by default, but you will eventually be creating objects that have their own, independent timelines. These objects can be nested inside of each other. When you are inside an object’s timeline, a breadcrumb trail will indicate where you are in the grand scheme of things, e.g., Scene 1 > CarObject > WheelObject.
The Library contains symbols—the “actors” that appear on the stage. Symbols are objects that can be used and reused. They are similar to Shapes in Visio and Stencils in OmniGraffle. Symbols include imported content, Movie Clips (objects that can have their own, independent timelines), Buttons (special movie clips with four frames in which four possible button states can be created), and Graphics (objects that may contain drawings and/or imported images).
Instances are unique occurrences of symbols in your Flash movie. You can drag as many instances of a symbol into your movie as you like. If you update the symbol, all of its instances will be updated as well; however, certain properties (such as scaling and positioning) can be customized on an instance-by-instance basis. Later you’ll learn how to name instances so you can refer to them specifically using ActionScript.
The Components Panel contains a special library of user interface objects, such as radio buttons and combo boxes, that have customizable properties. These properties can be edited in the Parameters tab of the Properties Panel and/or changed using ActionScript.
The Actions Panel is context-sensitive—it shows, and lets you add, scripts attached to the last frame or instance you selected. Look at the tab name near the bottom of the Actions Panel to verify what you are adding actions to.
The Properties Panel is also context-sensitive—it shows, and lets you set, properties of the last frame or instance you selected. This is where you can assign frame labels to frames and instance names to instances. The Parameters Tab of this panel is where you can edit the special properties of components. Each property is listed on a separate line.
The Tool Bar contains tools you can use to create and manipulate graphics and objects on the stage.

Appendix B: ActionScript Recap

You don’t need a deep understanding of ActionScript to create convincing prototypes. Simply understanding the following principles, all of which you’ve already put into practice in this tutorial, can take you a long way.

ActionScript 2.0 vs. ActionScript 3.0

The latest version of ActionScript is ActionScript 3.0, a more sophisticated but somewhat harder-to-learn rendition of the language. Because AS 2.0 is more direct and intuitive (you can add scripts directly to buttons and objects), I recommend starting there. In fact, if you only use Flash for simple applications (as most prototypes will be), AS 2.0 is all you really need. And if you eventually want to catch up with the AS 3.0 trend, understanding AS 2.0 first can help you make the conceptual leap.

When working in ActionScript 2.0, you’ll want to make sure that tutorials and scripts you find online and in books are compatible. Look for tutorials written for Flash MX 2004, Flash 8, or Flash CS3 using ActionScript 2.0.

ActionScript Grammar

ActionScript is case sensitive. Semicolons should be used at the end of every statement (generally every line). Curly braces can be used to group statements. Comments that don’t affect the code can be added by preceding comment text with two forward slashes.
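
To make these rules concrete, here is a minimal AS 2.0 frame script sketch (the variable name “visits” is purely illustrative):

```actionscript
// Comments begin with two forward slashes and are ignored when the movie runs.
stop();             // a statement ends with a semicolon; "stop" must be lowercase
var visits = 0;     // ActionScript is case sensitive: "visits" and "Visits" differ
if (visits == 0) {  // curly braces group the statements that belong together
    visits = visits + 1;
}
```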

Making something happen using ActionScript

First, when do you want it to happen?

  • When a particular frame is reached: Frame Actions are attached to keyframes and are triggered when that frame is reached. For example, you might select the last keyframe in a movie and add a “stop();” action so that the movie will not loop when it reaches the end.
  • When a specific event occurs: When you add actions to objects on the stage, you must use one or more event handlers to specify exactly when they should be executed. Event Handlers are used to specify behaviors that trigger actions. Actions contained within an event handler’s curly braces will be triggered only when the event preceding them occurs (e.g., when a button is pressed, or when it’s released). Useful event handlers include:
    on(press) { } on(release) { } on(rollOver) { } on(rollOut) { }
  • The above, but only when a particular condition is met: Within Frame Actions and Event Handlers, you can add additional conditions that must be met for an action to be performed. Note that the condition will be evaluated only when the frame is reached or the event occurs. The If Statement describes a condition that must be met for certain action(s) to be performed. Its form is: “if(CONDITION) {ACTIONS}”. It can optionally be followed by an “else” statement that specifies what to do otherwise. Operators are used in If Statements to create conditions by comparing one value to another in the format “if(x [operator] y).” The most common operators are:

    == equals
    > is greater than
    < is less than
    != is not equal to
    Multiple conditions can be combined using the operators:
    && and
    || or
    For example, if(x == false && y == false) means “if x is false AND y is false.”
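
Putting frame actions, event handlers, operators and if statements together, here is a sketch of a guarded button action (the variable “loggedIn”, the counter “attempts”, and the frame labels are hypothetical):

```actionscript
// Attached to a button instance on the stage (AS 2.0).
on (release) {
    if (loggedIn == true && attempts < 3) {
        gotoAndStop("dashboard");  // both conditions met: show the success screen
    } else {
        gotoAndStop("loginError"); // otherwise show the error screen
    }
}
```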

Next, which object or timeline are you commanding?
  • Actions are context-sensitive. They act on whichever movie or timeline they are attached to. If actions are placed in the main timeline, they’ll act on the main movie. If they’re placed on a timeline within an object, they’ll only act on that object unless otherwise specified.
  • To target specific timelines and objects, refer explicitly to the main movie as “_root” and other objects by their assigned “Instance Names.” An Instance Name can be assigned by selecting an instance and entering a unique name in the Instance Name field in the Properties Panel. Since a symbol in your library can be used in your Flash movie multiple times, each occurrence needs to have a unique name if you want to address it using ActionScript. If you’re unsure which object an action applies to, use explicit targets to ensure your actions will work as intended.
  • When objects are nested inside each other (every object is technically nested inside the main movie, or the _root), address them using their complete “target paths.” Target Paths are like web URLs or file paths, only they use dots (.) instead of slashes to indicate hierarchy. Eventually you’ll be using ActionScript to address movies inside of other movies. An easy way to do so is using “absolute paths,” which indicate the location of an object relative to the main movie (the “root”) using objects’ instance names. (For example, to address an object with the instance name “spokes,” you might write: _root.car.wheels.spokes) It doesn’t hurt to get in the habit of always referring to the main movie as “_root”—this becomes important when using embedded movie clips. You can also use “relative paths,” which you can learn more about in Flash’s documentation.
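
For example, a button script can address several timelines explicitly, reusing the car/wheels/spokes instance names from the example above (the frame label “summary” is hypothetical):

```actionscript
// Attached to a button instance (AS 2.0).
on (release) {
    _root.gotoAndStop("summary");    // command the main timeline explicitly
    _root.car.play();                // command the movie clip named "car"
    _root.car.wheels.spokes.stop();  // command a clip nested two levels deep
}
```
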
Finally, what do you want to happen?

  • You can use Flash’s many built-in functions to perform all kinds of actions. Functions are built-in actions that you can call on using keywords. These commands are keywords followed by parentheses in which specific details can be added if necessary. Some useful functions for prototyping include:

[Table 1: useful functions for prototyping]
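
To give a flavor of what calling such functions looks like, here are a few common AS 2.0 functions (the frame labels and URL are placeholders, and you would not normally call them all in sequence):

```actionscript
stop();                                  // halt the timeline this action is attached to
play();                                  // resume playing from the current frame
gotoAndStop("confirmation");             // jump to a labeled frame and halt there
gotoAndPlay("intro");                    // jump to a labeled frame and keep playing
nextFrame();                             // advance exactly one frame and stop
getURL("http://example.com", "_blank");  // open a web page in a new browser window
```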

  • You can also change properties of an object using ActionScript. Properties of an object are special keywords that Flash recognizes. You can evaluate or change properties of an object using ActionScript. Components, movie clips and other types of objects all have unique properties. You can look up a particular object type’s properties and their possible values in Flash’s documentation. You can use the “equals” sign to change a property of an object. For example, to change the text of a text box called “welcomeMessage” you might write:

welcomeMessage.text = "Hello!";


Some useful properties include:

[Table 2: useful properties]
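
As an illustration, here are a few built-in movie clip properties being set (the instance name “panel” is hypothetical):

```actionscript
panel._x = 250;          // horizontal position, in pixels from the left edge
panel._y = 100;          // vertical position, in pixels from the top edge
panel._alpha = 50;       // opacity, from 0 (invisible) to 100 (fully opaque)
panel._rotation = 45;    // rotation, in degrees clockwise
panel._visible = false;  // remove the instance from view entirely
```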


Flash workshop at UX Week

Alexa will be leading a workshop on Flash Prototyping (http://www.uxweek.com/workshops/prototyping-with-flash) at UX Week (August 12-15, 2008) in San Francisco. This workshop offers the impetus, skills and inspiration you need to get started with Flash prototyping. Come and learn how to bring your wireframes to life!
Use discount code FOAA for 15% off.

UX Design-Planning: Not a One-Man Show

Written by: Holger Maassen

A lot of confusion and misunderstanding surrounds the term "user experience." The multitude of activities that can be labeled with these two words spans a vast spectrum of people, skills and situations. If you ask for UX design (UXD), what exactly are you asking for? Similarly, if someone tells you they are going to provide you with UXD for an application, website, intranet or extranet, what exactly are you going to get?

Is it just one person who is responsible, or is it a team of people who are in charge of UXD? In this article I'll sketch my ideas of UXD based on my experience, and at the end I will give you my answer.

Let us start at the beginning – UXD starts with experience – experience of the users. And so I will talk about the users first.


UXD-P – every person is an individual

Every person is an individual, and each individual holds many different roles, adopting a different one depending on the circumstances.

roles of experiences

User Roles

Sometimes the individual person holds one role, but mainly he will hold quite a few roles like consumer, customer, user, client, investor, producer, creator, participant, partner, part of a community, member, and so on.


UXD-P – network of expectations, experiences and knowledge

Every user is multi-faceted – and considerably more complex than they themselves can imagine – so it's not very helpful just to talk to the user or ask him what he needs. We have to watch what people do; we have to listen to what people say; and we have to recognize what decisions people make – and by observing, we have to evaluate and understand why they do what they do. What kinds of visual elements will the user like, prefer or understand, and why? What kind of mental model, navigation or function do they respond to, and why?

Jakob Nielsen said “To design an easy-to-use interface, pay attention to what users do, not only what they say. Self-reported claims are unreliable, as are user speculations about future behaviour.” (Jakob Nielsen – Alertbox) and I agree – I think no statement can be objective. Perhaps the circumstances are not realistic or not reasonable for the person. Or maybe the person himself is not really in the “situation,” or he is being influenced by other factors (trying to please the tester for example). Or maybe he is trying to succeed with the test rather than trying and failing, which tells us so much more.

When all three perspectives (do, say, make) are explored together, we are able to understand the experience spectrum of the “normal” user/customer we are working for.

Jesse James Garrett said: “User experience is not about how a product works on the inside. User experience is about how it works on the outside, where a person comes into contact with it and has to work with it” (J. J. Garrett – The Elements of User Experience).

areas of experiences

Experiences

Areas of experience: different areas which affect the quality of communication


UXD-P – personal and individual

When we talk about experiences, we take the individual into consideration, including the subjective occurrences felt by the person who has the “confrontation” with what we want them to use. Experiences are momentary and brief – sometimes they are part of a multi-layered process or they are on their own.

Normally such know-how is learned as part of something, or on its own, and is remembered in the same way – but that’s not always the case, and then the person deals with the situation in a different way. If we look at their experience as a continuum, the user brings their experiences of the past to the interaction in the present and adds their hopes for the future. That future could be: to interact with their banking in a safe and secure way.

flow of experiences

Flow of Experience

Flow of experience: the individual user/customer is always in the present – they act in the present. They are influenced by former experiences and current expectations.

UXD-P takes the users’ views, behavior, and interactions into account to figure out the emotional relationship between them and the thing we have built. For the most part these "people" and their experiences are unknown. It requires an appreciation of the customer: their journey, their personal history and their experiences.

It is the collective set of experiences – in the online world, the offline world, even tiny little things (e.g., “My coffee was cold this morning”) – that affects their experience of the products and the companies that represent them. It is about appreciating the individual user’s unmet needs, wants, capabilities and desires in a contextual way. It’s a box of experiences including the things the user saw, did and felt. (BBC Two aired a programme on rational thought on 12 February 2008 at 9pm. Highlights include loss complex, post-decision rationalization, priming and precognition: http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/tx/decisions/highlights/ )

Experiences and expectations meet in the present. Both are inseparably combined, and every action we take takes both into consideration. When a person uses an application, he tries to understand what happens. He will always try to relate this to his past experiences. The moment is also tightly coupled to his expectations and personal outlook.

At this point of “present” I think of the UX honeycomb of Peter Morville [1] …

Morville’s “honeycomb”

[Figure: the UX honeycomb – Peter Morville (Facets of the User Experience)]

In the present, we have to deliver to the individual user and his specific task the best answers to the following questions.

  • Is the application useful for the individual user and his specific task?
  • Is the application usable for the individual user and his specific task?
  • Is the application desirable for the individual user and his specific task?
  • Is the application valuable for the individual user and his specific task?
  • Is the application accessible? Available to every individual user, regardless of disability?
  • Is the target findable for the individual user and his specific task?
  • Is the application credible for the individual user and his specific task?

In UXD-P, the whole team has to take the users’ views of the GUI and the interactions into account to figure out the emotional relationship between the brand and potential customers. It requires a common appreciation of the customer journey and their personal history: not only with the product and similar products, but also with similar experiences.


UXD-P – teamwork and cooperation

The first stage in discovering – inventing or designing for the experience – is to take a new viewpoint on the people who buy and use the products and services we are designing. This is a bird’s-eye view; step by step we also have to use the “mouse view,” a detailed view from the user’s perspective, and as we develop the application we have to switch between these views. Our main desire is to respect, value, and understand the continuum of experiences and expectations our users have.

UXD-P can sometimes be a slippery term, given all the other terms that float around: interaction design, information architecture, human-computer interaction, human factors engineering, usefulness, utility, usability and user interface design. People often end up asking, “What is the difference between all these fields, and which one do I need?” If UXD aims to describe the user’s satisfaction, joy or success with an application, product or website, however we specify it, there are a few key essentials that need to be tackled, and I have to point to the UX honeycomb of Peter Morville [1] a second time. Each of these points, as outlined above, makes up a considerable component of the user experience. Each is made effective by the design contributions of the following elements:

  • Usefulness is based upon utility and usability. The product is able to give exactly the right kind of service – what the user expects from it: the joy of reaching my aims, and the joy of doing so easily.
  • Information architecture is in charge of the clarity of the information and features, lack of confusion, a short learning curve and the joy of finding.
  • Interaction design is essential for a successful and overall satisfying experience. It has to answer questions of workflow, logic, clarity, and simplicity of the information.
  • Visual design is responsible for the clarity of the information and elements, simplicity of tools and features, a pleasant or interesting appearance of the interface, the visual hierarchy, and the joy of the look and feel.
  • Accessibility is a common term used to describe how easy it is for people to use applications or other objects. It is not to be confused with usability, which describes how easily an application, tool or object can be used by any type of user. One meaning of accessibility specifically focuses on people with disabilities: physical, psychological, learning, among others. Even though accessibility is not an element of its own, it plays a role in the whole user experience. People tend to gravitate to something that is easier to use, regardless of who it was designed for.

The UXD innovation process is a nonlinear spiral of divergent and convergent activities that may repeat over time. Any design process begins with a vision – this applies particularly to the UX process. A vision, however, is not enough to start design. As I mentioned before, we always face different circumstances, users and roles. Therefore, it is critical to accurately understand the end user’s needs and requirements – his whole experience and expectations. The UX process relies on iterative user research to understand users and their needs.

The most common failure point in UX processes is transferring the perspective of users to UI design. The key is to define interaction first, without designing it. First, all the research (on the user, product and environment) has to be organized and summarized in a user research composition. This leads to user profiles, work activities, and requirements for the users. The user research composition feeds directly into use cases, which show the steps to accomplish task goals and the content needed to perform interactions. Completed use cases are validated with the intended user population – a checkpoint to see if the vision is being achieved and the value is clear to users and the project team.

The next step is to design the user interface, generated directly from the interaction definition. A primary concern here is not to get locked into a single solution too early. To keep the project on time, this step should be split into two parts: rough layout, and exact and detailed design. The rough layout allows experimentation and rapid evaluation; detailed design provides exacting design and behavior previews of the final application that specify what is to be coded. Iterative user evaluations at both stages make it fast and effective to improve the GUI through design feedback, rapid iterative evaluations, and usability evaluations.

UX workflow cycle


design workflow – workcycle – workspiral


UXD-P – Gathering the elements

The diagram below presents the relationship of the elements above:

elements of UXD-P

Elements of UXD-P


Lewin’s equation

Lewin’s Equation, B = f (P,E) ( B – Behaviour; f – Function; P – Person; E – Environment ), …

… is a psychological equation of behaviour developed by Kurt Lewin. It states that behaviour is a function of the person and his or her environment [2].
There is a desired behaviour that we need to encourage, but we have no control over the person; so, via interaction design, information architecture and interface design, we control the environment and thereby generate the desired behaviour (see reference: books.google.com).


UXD-P – many steps to go but every step is worth it

How do we involve our team, our customer and our user/consumer? We can start at different points, but I like to think about the circumstances first. Where do we come from? Where are we? Where will we go? And who is “we”? “We” means each person involved in the project. In the centre of every effort stands the user. To bring the user, with his personal experiences and expectations, into the project, the design team and the customer need a binding glue/tool/instrument. I believe this is the role of personas of the “target users/consumers” in the process of UXD-P. If there are no personas, the second-best choices are scenarios or workflows (based on a certain user/person).

In most cases, the management point of view is also the view of our customer. It includes the user’s/consumer’s age, income, gender and other demographics. The perspective of UXD-P is to look at behaviour, goals and attitude.

To get a realistic persona, we first have to identify the target users. In my experience, defining the users and consumers is not only our client’s task – we have to support him. During identification and characterization we have to go beyond demographics and try to understand what drives individual users and consumers to do what they do, in as much detail, in quantity and quality, as necessary and possible – as I mentioned above. The approach and the complexity of the characterization depend on the tasks, project and functionalities. In parallel with the very personal description, we need a “picture” of the environment. For each persona we must define their business and/or private concerns and aims. Do they want to research a product to purchase later? Are they primarily concerned about not wasting time? Do they just want to purchase something online as easily and quickly as possible?

Based on these personas we can formulate, discuss and prove scenarios – from the very beginning of the project, during the project, and as a check or analysis at the end of the project.


UXD-P – my blueprint of a schedule – "to-dos" and deliverables

We are always asking, and being asked: what are the deliverables? Throughout my career as an IA, UX planner and designer, as well as during my study of architecture and town planning, I have constantly asked myself the following questions:

  • What kind of project is it? What are the key points?
  • What should our steps and milestones be in the project?
  • What should our/my deliverables be?
  • How can we/I explain the main idea?

I have realized that if I do not answer these questions before creating a deliverable, I waste more time and deadlines slip.

The deliverables are not for us. The deliverables are a means of communication with several people: managers, decision makers, clients, designers, front-end developers, back-end developers, etc. I have the feeling we overlook this from time to time. After thinking about the project, I have to ask myself where my deliverables and other efforts will fit within the design process. The following diagram describes different lines of work, each leading to questions that line must answer. Based on these questions and topics, I outline the basis, the basics and the deliverables, and which skills and abilities are necessary for each.

[Image 6]

schedule of UXD-P (larger version: 1238px × 1030px)


UXD-P – my conclusion

I studied architecture and town planning. Just as town planning and architecture aren’t only for architects and art lovers, the internet isn’t only for computer specialists and developers. Just as towns and cities are for their inhabitants, and architecture is for the users of a building, so products and applications are for the user, the customer, the member – not for the people who build them.

In every kind of process we should act as a team, but in the process of UXD-P it is absolutely essential that we think in parallel, with the same focus. We have to act as a team, although every team member is a kind of advocate: an advocate of the budget, of the client, of utility, of usability, of the look and feel, of the brand and, finally, of the user himself. Because at the end of the project, our user/customer is the final judge.

Good design is not only the interface, or the look and feel, or the technology, or the hardware, or how it works. It is every detail: the structure, the labelling, the border of a button, a little icon. Finally, it is the sum of every element. I believe that the shared vision of a group of creators has more potential than individual creativity. And that is the point where creativity meets expectation. UXD-P will change how we look at IA and design, and the process of getting to a well-designed product.

The people who use the applications or other objects we invent are the real “architects” of the “architecture” – the real “inventors” of the design. The more we know about our users, the more likely we are to meet their needs.

As the capabilities of interactive applications and the internet grow, more and more consumers use these applications and their various possibilities in new and different ways. We must think about all aspects of the user experience.

And I will ask you once again: is it just one person who is responsible, or is it the team that is in charge of UXD-P?
Personally, I believe UXD-P is the process of planning and designing for user experiences (and so I think it is the team that is in charge), but the overview has to rest with an experienced planner, as a kind of captain.


The most common cause of an ineffective website (one that doesn’t deliver value to both the business and its intended constituents) is poor design. The product has to support both the functions and the experiences. A lack of clear organization, navigation, and brand values and mood means that people will have an unintentional and possibly bad experience, rather than one that meets the business’s relationship objectives for each individual. User experience design and planning is a fundamental component of the creation of successful digital products, applications and services.

UXD-P is UX design and planning – in my estimation there are distinctions between design and planning.

Design is usually considered in the context of the arts and other creative efforts. When I think of design in the UX process, it focuses on the needs, wants, and limitations of the end user of the designed goods, but mainly on the visual parts and the mood. A designer has to consider the aesthetic and functional parts and many other aspects of an application or a process.

The planning part provides the framework. The term "planning" describes the formal procedures used in such endeavors: the creation of documents, diagrams, etc., to discuss the important issues to be addressed, the objectives to be met, and the strategy to be followed. The planning part is responsible for organizing and structuring to support utility, findability and usability.

I strongly believe that both parts – design and planning – have to work closely together. Every team member should have the ability to think cross-functionally and to anticipate consequences of activities in the whole context.

I’ve often seen timelines like this …

[Image 8]

… and this doesn’t work for UX design and planning. I prefer a timeline that looks like this:

[Image 9]

… to develop UX design and UX planning.

And in the center of this team and of this process should stand the leading person – the user!

[Image: basis points of UXD-P]


[1] UX honeycomb of Peter Morville – semanticstudios.com publication


[2] The Sage Handbook of Methods in Social Psychology, by Carol Sansone, Carolyn C. Morf and A. T. Panter – Google Books; Amazon.com


Personas and the Role of Design Documentation

Written by: Andrew Hinton

In User Experience Design circles, personas have become part of our established orthodoxy. And, as with anything orthodox, some people disagree on what personas are and the value they bring to design, and some reject the doctrine entirely.

I have to admit, for a long time I wasn’t much of a believer. Of course I believed in understanding users as well as possible through rigorous observation and analysis; I just felt that going to the trouble of "creating a persona" was often wasted effort. Why? Because most of the personas I’d seen didn’t seem like real people as much as caricatured wishful thinking.

Even the personas that really tried to convey the richness of a real user were often assimilated into market-segment profiles — smiling, airbrushed customers that just happened to align with business goals. I’d see meeting-room walls and PowerPoint decks decorated with these fictive apparitions. I’m ashamed to say, even I often gave in to the illusion that these people — like the doe-eyed "live callers" on adult phone-chat commercials — just couldn’t wait for whatever we had to offer.

More often than not, though, I’d seen hard work on personas delivered in documentation to others downstream, where they were discussed for a little while during a kick-off meeting, and then hardly ever heard from again.

Whenever orthodoxy seems to be going awry, you can either reject it, or try to understand it in a new light. And one way to do the latter is to look into its history and understand where it came from to begin with — as is the case with so much dogma, there is often a great original idea that, over time, became codified into ritual, losing much of the original context.

The Origin of Personas

When we say "persona", designers generally mean some methodological descendant of the work of Alan Cooper. I remember when I first encountered the idea on web-design mailing lists in 1999. People were arguing over what personas were about, and what was the right or wrong way to do them. All most people had to go on was a slim chapter in Cooper’s "The Inmates are Running the Asylum" and some rudimentary experience with the method. You could see the messy work of a community hammering out their consensus. It was as frustrating as it was interesting.

Eventually, practitioners started writing articles about the method. So, whenever I was asked to create personas for a project, I’d go back and read some of the excellent guides on the Cooper website and elsewhere that described examples and approaches. As a busy designer, I was essentially looking for a template, a how-to guide with an example that I could just fill in with my own content. And that’s natural, after all, since I was "creating a persona" to fulfill the request for a kind of deliverable.

It wasn’t until later that Alan Cooper himself posted a short essay on "The Origin of Personas." For me it was a revelation. A few paragraphs of it are so important that I think they require quoting in full:

I was writing a critical-path project management program that I called “PlanIt.” Early in the project, I interviewed about seven or eight colleagues and acquaintances who were likely candidates to use a project management program. In particular, I spoke at length with a woman named Kathy who worked at Carlick Advertising. Kathy’s job was called “traffic,” and it was her responsibility to assure that projects were staffed and staffers fully utilized. It seemed a classic project management task. Kathy was the basis for my first, primitive, persona.

In 1983, compared to what we use today, computers were very small, slow, and weak. It was normal for a large program the size of PlanIt to take an hour or more just to compile in its entirety. I usually performed a full compilation at least once a day around lunchtime. At the time I lived in Monterey California, near the classically beautiful Old Del Monte golf course. After eating, while my computer chugged away compiling the source code, I would walk the golf course. From my home near the ninth hole, I could traverse almost the entire course without attracting much attention from the clubhouse. During those walks I designed my program.

As I walked, I would engage myself in a dialogue, play-acting a project manager, loosely based on Kathy, requesting functions and behavior from my program. I often found myself deep in those dialogues, speaking aloud, and gesturing with my arms. Some of the golfers were taken aback by my unexpected presence and unusual behavior, but that didn’t bother me because I found that this play-acting technique was remarkably effective for cutting through complex design questions of functionality and interaction, allowing me to clearly see what was necessary and unnecessary and, more importantly, to differentiate between what was used frequently and what was needed only infrequently.

If we slow down enough to really listen to what Cooper is saying here, and unpack some of the implications, we’re left with a number of insights that help us reconsider how personas work in design.

1. Cooper based his persona on a real person he’d actually met, talked with, and observed.
This was essential. He didn’t read about "Kathy" from a market survey, or from a persona document that a previous designer (or a separate "researcher" on a team) had written. He worked from primary experience, rather than re-using some kind of user description from a different project.

2. Cooper didn’t start with a "method" — or especially not a "methodology"!
His approach was an intuitive act of design. It wasn’t a scientific gathering of requirements and coolly transposing them into a grid of capabilities. It came from the passionate need of a designer to really understand the user — putting on the skin of another person.

3. The persona wasn’t a document. Rather, it was the activity of empathetic role-play.
Cooper was telling himself a story, and embodying that story as he told it. The persona was in the designer, not on paper. If Cooper created a document, it would’ve been a description of the persona, not the persona itself. Most of us, however, tend to think of the document — the paper or slide with the smiling picture and smattering of personal detail — as the persona, as if creating the document is the whole point.

4. Cooper was doing this in his "spare time," away from the system, away from the cubicle.
His slow computer was serendipitous — it unwittingly gave him the excuse to wander, breathe and ruminate. Hardly the model of corporate efficiency. Getting away from the office and the computer screen were essential to arriving at his design insights. Yet, how often do you see design methods that tell you to get away from the office, walk around outside and talk to yourself?

5. His persona gained clarity by focusing on a particular person — "Kathy".
I wonder how much more effective our personas would be if we started with a single, actual person as the model, and were rigorous about adding other characteristics — sticking only to things we’d really observed from our users. Starting with a composite, it’s too easy to cherry-pick bits and pieces from them to make a Frankenstein Persona that better fits our preconceptions.

Personas are actually the designer’s focused act of empathetic imagination, grounded in first-hand user knowledge.

The biggest insight I get from this story? Personas are not documents, and they are not the result of a step-by-step method that automagically pops out convenient facsimiles of your users. Personas are actually the designer’s focused act of empathetic imagination, grounded in first-hand user knowledge.

It’s not about the documents

Often when people talk about “personas” they’re really talking about deliverables: documents that describe particular individuals who act as stand-ins or ‘archetypes’ of users. But in his vignette, Cooper isn’t using personas for deliverables — he’s using them for design.

Modern business runs on deliverables. We know we have to make them. However, understanding the purposes our deliverables serve can help us better focus our efforts.

Documentation serves three major purposes when designing in the modern business:

1. Documentation as a container of knowledge, to pour into the brains of others.

By now, hopefully everyone reading this knows that passing stages of design work from one silo to the next simply doesn’t work. We all still try to do it, mainly because of the way our clients and employers are organized. As designers, though, we often have to route around the silo walls. Otherwise, we risk playing a very expensive version of "whisper down the lane," the game you play as kids where the first kid whispers something like "Bubble gum is delicious" into another’s ear, and by the end of the line it becomes "Double dump the malicious."

Of course there are some kinds of information you can share effectively this way, but it’s limited to explicit data — things like world capitals or the periodic table of elements. Yet there are vast reservoirs of tacit knowledge that can be conveyed only through shared experience.

If you’ve ever seen the Grand Canyon and tried to explain it to friends back home, you know what I mean. You’d never succeed with a few slides and bullet points. You’d have to sit down with them and — relying on voice, gesture and facial expression — somehow convey the canyon’s unreal scale and beauty. You’d have to essentially act out what the experience felt like to you.

And even if you did the most amazing job of describing it ever, and had your friends nearly mirroring your breathless wonderment, their experience still wouldn’t come close to seeing the real thing.

I’m not saying that a persona description can’t be a useful, even powerful, tool for explaining users to stakeholders. It can certainly be highly valuable in that role. I’m only saying that if you’re doing personas only for that benefit, you’re missing the point.

2. Documentation as a substitute for physical production.

Most businesses still run on an old industrial model based on production. In that model, there’s no way to know if value is being created unless there are physical widgets coming off of a conveyor belt — widgets you can track, count, analyze and hold in your hand.

In contrast, knowledge work – and especially design – has very little actual widget-production. There is lots of conversation, iteration, learning, trying and failing, and hopefully eventual success. Design is all about problem solving, and problems are stubbornly unmeasurable — a problem that seems trivial at the outset turns out to be a wicked tangle that takes months to unravel, and another that seemed insurmountable can collapse with a bit of innovative insight.

Design is messy, intuitive, and organic. So if an industrial-age management structure is to make any sense of it (especially if it’s juicing a super-hero efficiency approach like Six-Sigma), there has to be something it can track. Documents are trackable, stackable, and measurable. In fact, the old "grade by weight" approach is often the norm — hence the use of PowerPoint for delivering paper documents attenuated over two hundred bulleted slides, when the same content could’ve fit in a dozen pages using a word processor. The rule seems to be that if the research and analysis fill a binder that’s big enough to prop your monitor to eye level, then you must’ve done some excellent work.

In the pressure to create documents for the production machine, we sap energy and focus away from designing the user experience. Before you know it, everything you do — from the interviews and observations, to the way you take notes and record things, the way you meet and discuss them after, and the way you write your documentation — all ends up being shaped by the need to produce a document for the process. If your design work seems to revolve mainly around document deadlines, formatting, revision and delivery, stop a moment and make sure you haven’t started designing documents for stakeholders at the expense of designing experiences for users.

Of course, real-world design work means we have to meet the requirements of our clients’ processes. I would never suggest that we all stop delivering such documentation.

Part of the challenge of being a designer in such a context is keeping the industrial beast happy by feeding it just enough of what it expects, yet somehow keeping that activity separate from the real, dirty work of experiencing your users, getting them under your skin, and digging through design ideas until you get it right.

3. Documentation as an artifact of collaborative work and memory.

While the first two uses are often necessary, and even somewhat valuable, this third use of documentation is the most effective for design — essentially a sandbox for collaboration.

These days, because systems tend to be more interlinked, pervasive and complex, we use cross-disciplinary teams for design work. What happened in Cooper’s head on the golf course now has to somehow happen in the collective mind of a group of practitioners; and that requires a medium for communication. Hence, we use artifacts — anything from whiteboard sketches to clickable prototypes.

The artifacts become the shorthand language collaborators use to "speak design" with one another, and they become valuable intuitive reminders of the tacit understanding that emerges in collaborative design.

Personas, as documents, should work for designers the way scent works for memories of your childhood.

Because we have to collaborate, the documentation of personas can be helpful, but only as reminders. Personas, as documents, should work for designers the way scent works for memories of your childhood. Just a whiff of something that smells like your old school, or a dish your grandmother used to make, can bring a flood of memory. Such a tool can be much more efficient than having to re-read interview transcripts and analysis documents months down the road.

A persona document can be very useful for design — and for some teams even essential. But it’s only an explicit, surface record of a shared understanding based on primary experience. It’s not the persona itself, and doesn’t come close to taking the place of the original experience that spawned it.

Without that understanding, the deliverables are just documents, empty husks. Taken alone, they may fulfill a deadline, but they don’t feed the imagination.

Playing the role

About six months ago, my thoughts about this topic were prompted by a blog post from my colleague Antonella Pavese. In her post, she mentions the point Jason Fried of 37 Signals makes in "Getting Real" that, at the end of the day, we can only design for ourselves. This seems to fly in the face of user-centered design orthodoxy – and yet, if we’re honest, we have to realize the simple scientific fact that we can’t be our users, we can only pretend to be. So what do we do, if we’re designing something that doesn’t have people just like us as its intended user?

Antonella mentions how another practitioner, Casey Malcolm, says to approach the problem:

To teach [designers] how to design usable products for an older population, for example, don’t tell designers to take in account seniors’ lower visual acuity and decreased motor control. Let young designers wear glasses that impair their visual acuity. Tie two of their fingers together, to mimic what it means to have arthritis or lower motor control.

Antonella goes on:

So, perhaps Jason Fried is completely on target. We can only design for ourselves. Being aware of it, making it explicit can make us find creative ways of designing for people who are different from us… perhaps we need to create experience labs, so that for a while we can live the life of the people we are designing for.

At UX Week in Washington, DC this summer, Adaptive Path unveiled a side project they’d been working on — the Charmr, a new design concept for insulin pumps and continuous monitors that diabetics have to constantly wear on their bodies. In order to understand what it was like to be in the user’s skin, they interviewed people who had to use these devices, observed their lives, and ruminated together over the experience. Some of the designers even did physical things to role-play, such as wearing objects of similar size and weight for days at a time. The result? They gained a much deeper feel for what it means to manage such an apparatus through the daily activities the rest of us take for granted — bathing, sleeping, playing sports, working out, dancing, everything.

Personas aren’t ornaments that make us more comfortable about our design decisions. They should do just the opposite.

One thing a couple of the presenters said really struck me — they said they found themselves having nightmares that they’d been diagnosed with diabetes, and had to manage these medical devices for the rest of their lives. Just think — immersing yourself in your user’s experience to the point that you start having their dreams.

The team’s persona descriptions weren’t the source of the designers’ empathy — that kind of immersion doesn’t happen from reading a document. Although the team used various documentation media throughout their work – whiteboards and stickies, diagrams and renderings – these media furthered the design only as ephemeral artifacts of deeper understanding.

And that statement is especially true of personas. They’re not the same as market segmentation, customer profiling or workflow analysis, which are tools for solving other kinds of problems. Neither do personas fit neat preconceptions, use-cases or demographic models, because reality is always thornier and more difficult. Personas aren’t ornaments that make us more comfortable about our design decisions. They should do just the opposite — they may even confound and bedevil us. But they can keep us honest. Imagine that.

References:

  • Alan Cooper, “The Origin of Personas”:http://www.cooper.com/insights/journal_of_design/articles/the_origin_of_personas_1.html
  • Jason Fried, “Ask 37 Signals: Personas?”:http://www.37signals.com/svn/posts/690-ask-37signals-personas
  • Antonella Pavese, “Get real: How to design for the life of others?”:http://www.antonellapavese.com/archive/2007/04/249/
  • Dan Saffer, “Charmr: How we got involved”:http://www.adaptivepath.com/blog/2007/08/14/charmr-how-we-got-involved/

_*Author’s Note:* In the months since the first draft of this article, spirited debate has flared among user-experience practitioners over the use of personas. We’ve added a few links to some of those posts below, along with links to the references mentioned in the piece. I’d also like to thank Alan Cooper for his editorial feedback on my interpretation of his Origins article._

  • Peter Merholz, “Personas 99% bad?”:http://www.peterme.com/?p=624
  • Joshua Porter, “Personas and the advantage of designing for yourself”:http://bokardo.com/archives/personas-and-the-advantage-of-designing-for-yourself/ and “Personas as tools”:http://bokardo.com/archives/personas-as-tools/
  • Jared Spool, “Crappy personas vs. robust personas”:http://www.uie.com/brainsparks/2007/11/14/crappy-personas-vs-robust-personas/ and “Personas are NOT a document”:http://www.uie.com/brainsparks/2008/01/24/personas-are-not-a-document/
  • Steve Portigal, “Persona Non Grata”:http://interactions.acm.org/content/?p=262, interactions, January/February 2008

Building a Data-Backed Persona

Written by: Andrea Wiggins

Incorporating the voice of the user into user experience design by using personas in the design process is no longer the latest and greatest new practice. Everyone is doing it these days, and with good reason. Using personas in the design process helps focus the design team’s attention and efforts on the needs and challenges of realistic users, which in turn helps the team develop a more usable finished design. While completely imaginary personas will do, it seems only logical that personas based upon real user data will do better. Web analytics can provide a helpful starting point to generate data-backed personas; this article presents an informal 5-step process for building a “persona of the people.”

In practice, outcomes indicate that designing with any persona is better than with no personas, even if the personas used are entirely fictitious. Better yet, however, are personas that are based on real user data. Reports and case studies that support this approach typically offer examples incorporating data into personas from customer service call centers, user surveys and interviews. It’s nice work if you can get it, but not all design projects have all (or even any!) of these rich and varied user data sources available.

However, more and more sites are now collecting web analytic data using vendor solutions or free options such as Google Analytics. Web analytics provides a rich source of user data, unique among the forms of user data that are used to evaluate websites, in that it represents the users in their native habitat of use. Despite some drawbacks to using web analytics that are inherent to the technology and data collection methods, the information it provides can be very useful for informing design.

Google Analytics is readily accessible and offers great service for the price, so for the sake of example, the methods described here will refer to specific reports in Google Analytics. Any web analytics solution will provide basic reporting similar to Google Analytics, give or take a few reports, so using a different tool will just require you to determine which reports will provide data equivalent to the reports mentioned here.

To illustrate the process, an example persona design scenario is included in the description for each of the five steps:

Kate is an independent web design contractor who is redesigning the website of a nonprofit professional theater company. She has hardly any budget, plenty of content, and many audiences to consider. The theater’s website fills numerous functions: it advertises the current and upcoming plays for patrons; provides patrons information about ticketing and the live theater experience; announces auditions; specifies playwright manuscript and design portfolio requirements for theater professionals; recruits theater intern staff; serves as the central repository of collected theater history in the form of past play archives and press releases; advertises classes and outreach activities; and attempts to develop a donor base as well. As she gathers requirements, Kate decides to use the theater’s new Google Analytics account as a data source for building personas.

Step One: Collect Data

After Google Analytics has been installed on a site, you must wait for data to accumulate. Sometimes you will have the good fortune to start a project that has already been collecting data for months or years, but when this isn’t the case, try to get as much data as you can before extracting the reports you will use to build personas. Ideally, you want to have enough data for reporting to have statistical power, but not all sites generate this level of traffic. As a rule of thumb, less than two weeks of data is not sufficient for any meaningful analysis. One to three months of the most recent data is much more appropriate.
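This rule of thumb is easy to check mechanically. The sketch below is hypothetical — the thresholds simply encode the guideline above (less than two weeks is insufficient; one to three months is appropriate), and the function name is invented for illustration:

```python
from datetime import date

def data_sufficiency(start, end):
    """Rough check of whether a profile's date range holds enough data,
    following the rule of thumb: < 2 weeks is too little, and one to
    three months of recent data is much more appropriate."""
    days = (end - start).days
    if days < 14:
        return "insufficient: less than two weeks of data"
    if days < 30:
        return "marginal: aim for one to three months"
    return "adequate"

# Two months of accumulated data passes the rule of thumb.
print(data_sufficiency(date(2008, 1, 1), date(2008, 3, 1)))  # adequate
```

In practice you would read the date range off the profile itself; the point is only that the waiting period is worth checking before you extract reports.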

If it is reasonable, try to set up two profiles to filter on new and returning visitors. While some Google Analytics reports do allow segmentation, profile filtering on new versus returning visitor status gives you the best access to the full array of reports for each visitor segment. If this setup can be arranged early in data collection, then you can later draw on a profile that contains only new visitors to determine the characteristics of your personas who are new visitors, and likewise for returning visitors.

Kate has been given administrator privileges in the theater’s Google Analytics account for the duration of her contract. The theater has just one profile that includes all site traffic, so she starts off by making two new profiles with filters to include new visitors in one profile and returning visitors in the other. Kate knows that she needs a decent sample of site data, so she monitors the profiles weekly to make sure that the data is accumulating. She starts designing her personas using the existing Google Analytics profile (all visitors), and checks back later on the custom profiles to see if the segmented data can provide any new insights to add to her personas.

Step Two: Determine How Many Personas to Use

Next, determine how many personas to use–generally no less than three and rarely more than seven or eight. This gives you the number of blank slates across which to proportionately distribute the user characteristics that you extract from Google Analytics reports. If there are four personas, each will be assigned the characteristics of 25% of the site audience in each report; if five personas, each represents 20% of the site audience. Despite the fact that you’re working with statistics, you don’t have to be exacting in proportionately representing user segments; sometimes it is very important, for business reasons, to strongly represent a small user segment.
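The arithmetic behind this step can be sketched in a few lines. This is a hypothetical helper, not part of any tool: it splits the audience evenly across personas, but lets you pin a small, strategically important segment to a fixed share, as the text allows:

```python
def persona_shares(num_personas, overrides=None):
    """Return the audience share each persona represents.

    overrides maps a persona index to a fixed share (e.g. a small but
    business-critical segment); the remainder is split evenly among
    the other personas.
    """
    overrides = overrides or {}
    remaining = 1.0 - sum(overrides.values())
    even_split = remaining / (num_personas - len(overrides))
    return [overrides.get(i, even_split) for i in range(num_personas)]

# Four personas: each represents 25% of the site audience.
print(persona_shares(4))  # [0.25, 0.25, 0.25, 0.25]

# Five personas, with persona 0 pinned to a 10% niche segment.
print(persona_shares(5, {0: 0.10}))
```

The override mechanism mirrors the caveat above: proportional representation is a starting point, not a requirement.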

After thinking carefully about the many functions that the site has to fill, Kate looks at the Top Content report in Google Analytics to see what pages get the most traffic. She notices that most of the top pages are related to current shows, tickets and directions, and decides that she will have at least one persona represent a first-time patron who plans to travel from out of town. The other pages that are popular include the “About Us,” “People,” and “Classes” pages; “Auditions” is a little further down the page, but well above “Support Us.” Kate determines that she will create another persona to represent people interested in joining the theater company. Kate knows that fund development is important to the theater, but it doesn’t appear to be all that important to the website audience, so she decides to create another patron persona who has attended several plays and is interested in becoming a donor. She feels that these three roles can represent the audience the theater is most interested in reaching, and starts creating a persona document for each of them. She names her personas: Regina is the first-time out-of-town patron, Monica is the would-be theater participant, and Rex is the returning patron.

Step Three: Gather Your Reports

After allowing some data to accumulate, the next step is to acquire the Google Analytics reports, whether you’re interacting directly with the application yourself or someone else is providing you with reports. If you are not the person extracting data, make sure that you receive the PDF exports of reports, as these contain summary data that is not present in some of the other export formats. Whether or not you have profiles that are filtered on new versus returning visitor segments, you will be interested in the same handful of reports:

  • Visitors Overview Report. In one convenient dashboard-style screen, you can get the percentage of “new visits,” or visits by new visitors, and a snapshot of other visitor characteristics.
  • Browsers and OS Report. While you can look at browsers and operating systems separately in other individual reports, it usually makes more sense to look at them in combination in the Browsers and OS Report. Typically only a handful of browser and operating system combinations are required to represent well over 90% of the site’s visitors.
  • Map Overlay Report. To use this report, which provides a great deal of detail on the geographic origins of site visits, you will need to do just a little bit of math. Divide the number of visits from the top country or region (whichever is of greater use to you) by the total number of visits to get the percentage of visits from that geographical area. This allows you to determine the proportions of domestic and international visits. For the visits from your country, you will want to drill down to the city level and select a few cities from the top ranks of the list, keeping in mind that big cities will statistically generate more traffic than small ones. For your international visitors, choose from the top cities in the countries that bring the most visits.
  • Keywords Report. This report shows the queries that bring users to your site. When you look at the search engine query terms, ask yourself, “What are our users looking for? What type of language do they use when searches bring them to our site?” This gives you a starting point to think about user motivations and goals.
  • Referring Sites Report. Like the Keywords Report, the Referring Sites Report gives you an opportunity to look for answers to questions like, “Where do our users come from? Are they reaching our site from search engines, other sites, or just appearing directly with no referrer, as returning visitors are more likely to do?”

If you have the segmented profiles set up, extract the same reports from both of these profiles, and get the Visitors Overview report from an unfiltered profile.
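The "little bit of math" the Map Overlay report requires is a single division. The numbers below are made up purely for illustration; only the calculation itself comes from the report description above:

```python
def visit_share(area_visits, total_visits):
    """Share of total visits originating from one geographic area,
    per the Map Overlay arithmetic: area visits / total visits."""
    return area_visits / total_visits

# Invented example figures: 4,000 of 8,000 visits come from Michigan,
# i.e. about half the audience is in-state.
print(f"{visit_share(4000, 8000):.0%}")  # 50%
```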

Kate starts looking for report data to build her personas. She has already generated user goals for her three personas, but the goals are pretty general, so she hopes to find more specific characteristics that are based on the real user population. Kate consults the Visitors Overview report and finds that about 75% of the site’s visits in the last month were from new visitors; she decides that the Regina and Monica personas will be new visitors to the site and quickly brainstorms a few questions that she thinks they might have, based on their goals, that motivate their site visits. The last persona, Rex, will be a returning visitor.

Kate knows that the overwhelming majority of patrons are local because it is a regional theater company. She checks the Map Overlay report and sees that at the state level, about half of the visitors come from Michigan, where the theater is located. She decides that Monica comes from another state, and picks New York because it’s in second place behind her state, and because of the level of activity of the theater community in New York City. Kate drills down to view the traffic from Michigan, and chooses the top city for Rex’s home–the city is near the theater, so this makes intuitive sense. For Regina, who is planning to travel a little further, she selects the #4 city, which is about an hour away, and is a much bigger city. The visitors from that city have longer visits and a lower bounce rate, so she feels these characteristics would match well with Regina’s goal of planning an out-of-town visit to the theater. Coming from that city, she will also want to have dinner and stay the night at a local bed-and-breakfast, so Kate jots down these additional goals for Regina.

Since two of her personas are new visitors, Kate looks up the Traffic Sources Overview and then the Referring Sites and Keywords reports. There’s a lot of search engine referral traffic, and some strong referrers among regional event listings sites. She decides that Regina got to the site from an event listings site that refers a lot of traffic, and that Monica arrived from a Google search on the phrase, “auditions in Michigan.” Kate thinks that a logical reason Monica would be searching for auditions in Michigan is because she’s planning to move there from New York, so Kate adds this detail to Monica’s persona.

Step Four: Fill in the Blanks

The next step is to “fill in the blanks” from the report data. Make a template for each persona, and first fill in whether they are a new or returning visitor. If you have segmented profiles on new versus returning visitor status, draw the remaining characteristics of your new visitors evenly from the new visitors profile, and likewise for the returning visitors. When you have distributed the other statistics (browser, operating system, and geographical location) among your persona templates, review them against the unfiltered “all visitors” profile for a reality check to make sure you have not unintentionally over-represented a user characteristic, which is one hazard of using segmented data. If you have no preconceptions about user goals, you can distribute the report characteristics randomly at this point, as there is not necessarily much meaningful interplay between the statistics for new/returning status, geographic location, and browser/OS. Alternately, using a goal-oriented approach as in the example, you can select persona characteristics from the user data that make sense with the goals you have established.
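One way to picture the "fill in the blanks" step is as assigning report rows to persona templates. Everything in this sketch — the persona names, the field names, and the browser/OS shares — is invented for illustration; the only idea taken from the text is distributing the most common report characteristics across the templates so they roughly mirror the real audience:

```python
# Invented rows standing in for a Browsers and OS report (combo, share).
browser_os = [
    ("IE 7 / Windows XP", 0.48),
    ("Firefox 2 / Windows XP", 0.27),
    ("Safari 3 / Mac OS X", 0.15),
]

# Persona templates with the new/returning status already filled in.
personas = [
    {"name": "Regina", "visitor_type": "new"},
    {"name": "Monica", "visitor_type": "new"},
    {"name": "Rex", "visitor_type": "returning"},
]

# Assign the most common combinations to the personas in order, so the
# distribution roughly tracks the real audience proportions.
for persona, (combo, share) in zip(personas, browser_os):
    persona["browser_os"] = combo
    persona["audience_share_hint"] = share

for p in personas:
    print(p["name"], "->", p["browser_os"])
```

A goal-oriented assignment, as in Kate's example, would replace the simple in-order pairing with choices that fit each persona's established goals.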

Kate took a goal-oriented approach to building her personas, so she has already assigned the report data to the personas. She builds her normal persona description template with the notes she made while looking at reports and adds OS and browsers based on the Google Analytics report to each of them. Kate then starts drilling down into the Google Analytics reports’ segmentation to add more detail. She clicks on Rex’s city in the Map Overlay to check the average visit length, bounce rate, and number of pageviews in the visit, which she uses to help her think about which pages Rex would be looking at, given his goals and those averages. Visits from Regina’s city are a little longer, so Kate considers what pages might show up, and checks the event listings site that referred Regina’s visit to find out what Regina might already know before visiting the theater’s site. Kate also checks on the referrers and keywords for visits from NYC and verifies that they contain some phrases similar to the one she chose for Monica.

Step Five: Bring the Personas to Life

The fifth and final step is to breathe life into these rough skeletons of personas. This is the familiar practice of generating the rest of the fictitious biography of the user, the detailed picture of who that person is and what motivates her or him, and so on. Let your creativity take over and build off the initial characteristics from the web analytics data to create a coherent persona. For example, the assigned browsers and operating systems should guide the determination of the computer makes and models that your personas use. Use the new or returning visitor status to assign the personas a level of comfort with using your site and their motivations for the site visits. The geographic location determined from the user data can help generate appropriate user goals and challenges, as well as occupations and hobbies, which may differ for domestic and international users. The reports on Keywords and Referring Sites offer insight on visitors’ interests and motivations, albeit slightly abstracted, and are a good starter for writing usage scenarios.

Kate spends some more time fleshing out her personas, and eventually decides that she needs more information about Rex, the returning patron and would-be donor. She asks the theater for some information from their patron database about how often regular patrons from Rex’s city visit the theater. Kate also interviews the company’s Development Director to gain more perspective on the characteristics of the theater’s existing donors from the local area. After learning more about the types of donors that the theater attracts and the general giving patterns they have, Kate feels that Rex is a good representation of the kind of potential donor who would visit the theater’s website repeatedly, and adds in some additional details based on her interview with the Development Director.

If you have other sources of user data, this is a great time to work them in. Survey data can often provide useful demographics that web analytics cannot, such as users’ age, sex, and education level. Free-form answers from surveys, interviews, and focus groups are great sources of inspiration for filling in the details that make personas come to life. The Google Analytics Keywords report can sometimes reveal the very questions that bring users to your site—and where better to answer them than in the design process?

Even when there is relatively little user data available to aid in the process of persona development, leveraging the resources at hand creates a stronger design tool. The 5-step process presented here aims to provide a starting point for developing personas from web analytics user data, rather than relying solely on assumption or imagination. An evidence-based approach like this one can lend structure and credibility to using personas and scenarios in the design process. At the same time, user data and statistics must be creatively synthesized to produce a useful representation, and imagination is always required to transform a user profile into a persona.

PDF Prototypes: Mistakenly Disregarded and Underutilized

Written by: Kyle Soucy

Creating a clickable PDF to prototype a new design is not a new concept, but it is a valuable tool that is often overlooked and underutilized. While working over the years with other designers, information architects, and usability professionals, I’ve noticed that many of my colleagues hold the same misconceptions about the limitations of PDFs. Contrary to popular belief, you can do more than just create links and interactive forms in a PDF; you can also add dynamic elements such as rollovers and drop-down menus, embed audio and video files, validate form data, perform calculations, and respond to user actions. PDF prototypes can replicate most interactive design elements without a large investment of time and effort.

Debunking common misconceptions about PDFs

Misconception #1: Dynamic elements can NOT be created in a PDF.

Image rollovers and similar dynamic effects can be created in a PDF without even writing a line of code. Although they may not be typically found in a PDF, fore- and background color changes, tooltips, pop-up boxes and other common DHTML scripts can be easily simulated in a PDF. (See an example of a PDF prototype for an eCommerce website.)
Screenshot of eCommerce site PDF.

Misconception #2: PDFs are only good for prototyping page-based applications.

Many people believe that a PDF cannot be used to prototype screen-based applications, but this is simply not true. You can easily mimic certain Ajax-like functionality by updating only parts of the PDF instead of the entire page. PDFs can also hide or show certain form fields and layers based on a user’s actions. (See an example of a PDF prototype for an eCommerce checkout form.)

Note: PDF layers are only supported if the original document was made with InDesign, AutoCAD, or Visio, with compatibility set to Acrobat 6 (PDF 1.5), and with “Create Acrobat Layers” selected in the export PDF dialog box. The example above does not use layers, only hidden form fields.
Screenshot of eCommerce checkout form PDF.
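
To make this concrete, here is a sketch of the show/hide logic behind an “Ajax-like” checkout example. In Acrobat, a field action script would set `this.getField("billingBlock").display = display.hidden` using Acrobat’s JavaScript API; since that API only exists inside Acrobat, a minimal mock document stands in below so the logic is self-contained and runnable. The field names and the checkout scenario are hypothetical.

```javascript
// Mirrors Acrobat's display constants (visible = 0, hidden = 1).
const display = { visible: 0, hidden: 1 };

// Minimal stand-in for the Acrobat document object ("this" in a real
// field script), supporting only what this example needs.
const doc = {
  fields: {
    billingBlock: { display: display.visible }, // the billing address fields
    sameAsShipping: { value: 'Off' },           // Acrobat checkboxes read 'Off' when unchecked
  },
  getField(name) { return this.fields[name]; },
};

// In the real prototype this would be the checkbox's Mouse Up action:
// hide the billing address fields when "same as shipping" is checked,
// without "reloading" the page.
function toggleBilling(doc) {
  const checked = doc.getField('sameAsShipping').value === 'Yes';
  doc.getField('billingBlock').display =
    checked ? display.hidden : display.visible;
}

doc.getField('sameAsShipping').value = 'Yes';
toggleBilling(doc);
// billingBlock is now hidden; unchecking and re-running shows it again
```

The same pattern—one field action flipping the visibility of a group of other fields—is all the “partial page update” in the checkout example amounts to.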

Misconception #3: PDFs can NOT include multimedia.

Audio and video files (including Flash movies) can be directly embedded into your PDFs for enhanced interactivity. Furthermore, you can set these files to play automatically in response to specified triggers (such as when the user clicks a “Product Demo” button). With a little creativity, you can mimic just about any interaction with an interface by taking advantage of this feature. Integrating video clips into PDF usability reports is also a great way to present your findings. (See an example of a PDF usability report with integrated video.)

Note: This example PDF prototype requires version 6.0 or later of Acrobat or Adobe Reader to view the media files. Please get the latest version of Acrobat Reader. When you add a movie or sound clip to a PDF document, you choose whether the clip is available in Acrobat 6.0 or later, or in Acrobat 5.0 or earlier. If you select Acrobat 6, the movie clips can be embedded in the actual PDF document.
Screenshot of user testing report PDF.

Remote User Testing with PDF Prototypes

One of the biggest benefits of a PDF prototype is that it can be tested remotely. Paper prototyping is a wonderful method, but one of its major drawbacks is that you can’t test the prototypes remotely. Clickable PDFs bring our paper prototypes to life while still making it possible to test remotely with users early in the development cycle. For websites or computer applications, this can be a huge advantage: the user doesn’t have to stretch their imagination by playing with paper; they get to interact with the prototype on a computer, in a web browser, the same way they would with the real end product.

All of the benefits of paper prototyping and remote testing are combined when using PDF prototypes. You’re able to collect invaluable feedback from users, in their natural environment, no matter where they are located, without investing a huge amount of time or money in coding or designing your product.

When to Consider Using a PDF Prototype

Using a PDF prototype is a great option when you want to gather quick feedback on a design early in the development cycle. It’s an optimal prototyping medium if you have minimal coding, Flash, or graphic design skills. It’s important to remember that, just like a paper prototype, a PDF prototype is meant to be low-fidelity. If you want a much more detailed and functional prototype, then I would suggest creating a CSS, DHTML, or otherwise coded prototype.

Creating a PDF Prototype

Creating an interactive PDF couldn’t be easier. You will need Adobe Acrobat to convert your existing documents into PDFs; if you want to create interactive forms, you will need Adobe Acrobat Professional.

There are multiple ways to create a PDF. You can start from your preferred wireframing program: open your wireframe file, choose to print the document, and select the Adobe PDF printer. Once your document is converted, you can begin adding your desired level of interactivity to your PDF prototype using Adobe’s advanced editing tools (choose Tools > Advanced Editing).

Image of advanced editing toolbar.
Advanced editing toolbar.

If you prefer the ease and quickness of hand drawing paper prototypes, you can easily scan your drawings and convert them into interactive PDFs as well. (See example of a hand drawn PDF prototype.)
Screenshot of paper sketch PDF.

Creating Links

Creating a link to another page in your PDF prototype is very easy. Follow the steps below to learn just one of the many ways that this can be done. Links can also be set to open up a file or go to a web page.

  1. Choose Tools > Advanced Editing > Link Tool, or select the Link tool ( Link tool icon ) on the Advanced Editing toolbar.
  2. Drag around the area you want to have a link to create a rectangle. This is the area in which the link is active.
  3. In the Create Link dialog box, choose the settings you want for the link appearance.
  4. To set the link action to link to another page, select Go To A Page View, click Next, set the page number and view magnification you want in the current document or in another document, and then click Set Link.

Creating Image Rollovers

To create a rollover, you will need the image(s) you will be using already created and ready to insert into your PDF. Acrobat’s ability to hide or show form elements, and the fact that buttons can have alternate appearances determined by mouse behavior, make it possible to create a rollover. The following steps will guide you through making a button, changing its appearance (essentially turning your button into your imported image file), and adding button actions based on mouse actions.

  1. Using the Button tool ( Button tool icon ), drag across the area where you want the rollover to appear. You have now created a button.
  2. While the Button tool is still selected, double-click the button you just created to open the button dialog box.
  3. In the button dialog box, click the Appearance tab. If needed, deselect Border Color and Fill Color.
  4. Next, click the Options tab and choose Icon Only from the Layout menu.
  5. Choose Push from the Behavior menu, and then choose Rollover from the State list.
  6. Click Choose Icon, and then click Browse. Navigate to the location of the image file you want to use and then click OK to accept the previewed image as a button.
  7. Select the Hand tool and move the pointer across the button. The image you defined appears as the pointer rolls over the button.

Adding Movie Clips

Follow the steps below to add a movie clip to your PDF prototype. You will need to have the video(s) you will be adding already created and ready to insert into your PDF. Buttons can also be created in your PDF to control playing, stopping and pausing the video clip.

  1. Choose Tools > Advanced Editing > Movie Tool, or select the Movie tool ( Movie tool icon ) from the Advanced Editing toolbar.
  2. Drag or double-click to select the area on the page where you want the movie to appear. The Add Movie dialog box appears.
  3. Within the dialog box, select Acrobat 6 Compatible Media to embed your movie clip directly into the PDF Prototype.
  4. To specify the movie clip, type the path or URL address in the Location box, or click Browse (Windows) or Choose (Mac OS) and double-click the movie file.
  5. Click OK, and then double-click the movie clip with the Hand tool ( Hand tool icon ) to play the movie.

Adding Forms

A PDF form contains interactive form fields that the user can fill in using their computer. A PDF form can collect data from a user and then send that data using email or the web. After the user fills out the form, you can choose to have them print, email or submit their data online to a database.

In order to create interactive forms you will need Adobe Acrobat Professional. The steps below cover adding form fields to an existing document. You can also create a new form using Adobe Designer, an application that comes with Adobe Acrobat Professional. Designer lets you lay out a form from scratch (static parts in addition to interactive and dynamic form elements) or use a form template.

Forms toolbar
Forms toolbar.

  1. Choose Tools > Advanced Editing > Show Forms Toolbar.
  2. On the Forms toolbar, select the form field type that you want to add to your PDF.
  3. Drag the cross-hair pointer to create a form field of the required size and the Properties dialog box for that form field will appear.
  4. In the dialog box, set the form field’s property options, and then click Close.

You’ll notice a lot of different options in the Properties dialog box depending on which form element you are creating. Form elements can be marked as required to perform client-side error checking in your PDF form. Text fields can be checked for spelling and you can also specify the format of the data entered (such as number, percentage, date, time, etc).
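
As a rough illustration of the kind of client-side check Acrobat performs for you, the function below models a percentage-format rule. In a real PDF you would simply pick the format in the field’s Properties dialog, or write a custom validation script that reads `event.value` and sets `event.rc`; this standalone function is only a model of the rule, and its exact behavior (accepting 0–100, with or without a trailing “%”) is an assumption for the example, not Acrobat’s precise implementation.

```javascript
// Model of a percentage-format check: accepts "42", "42%", " 100 ",
// rejects empty input, non-numbers, and values outside 0-100.
function isValidPercentage(value) {
  const trimmed = String(value).trim().replace(/%$/, '');
  if (trimmed === '') return false; // required field left empty
  const n = Number(trimmed);
  return Number.isFinite(n) && n >= 0 && n <= 100;
}

console.log(isValidPercentage('42%'));  // true
console.log(isValidPercentage('abc'));  // false
```

In an Acrobat custom validation script, a failing check would typically set `event.rc = false` and pop up an `app.alert`, which is enough to demonstrate inline error handling to a test participant.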

Giving PDFs a Try

The ability to add dynamic elements and multimedia to a PDF makes it a great tool for rapid, low-fidelity prototyping. For me, one of the best perks of using a PDF prototype is that it’s quick and easy to make changes, which is especially useful when I want to tweak a design between rounds of user testing.

A PDF is NOT the perfect prototyping medium for all projects, but I hope that knowing the possibilities of what you can do with a PDF provides you with another tool for your prototyping toolbox.
