Straight From the Horse’s Mouth with Dan Brown


Pod-safe music generously provided by Sonic Blue.

Christina Wodtke traveled with a microphone to the IA Summit in Las Vegas this year and sat down with some of the most interesting and accomplished information architects and designers in all the land. Bill Wetherell recorded those five conversations, and now B&A is proud to bring them to you. Thanks to AOL for sponsoring these podcasts.

In this episode, Dan Brown, consultant and author extraordinaire, deftly parries Tom Wailes’ repeated calls to oust wireframes and task flows in favor of prototyping and simulations. Our stalwart hero defends mindful subversion of the status quo as the best path on many corporate and public-sector projects.

While it’s exciting to throw out the bathwater, not every baby is fed by radical innovation alone.

Thanks to Tom for taking the voice baton after his previous turn as interviewee.

We discuss…

*Conceptual vs Design Documentation*
The ideation process is where the team needs to think about creativity and innovation. As designers, we create a set of artifacts to help us communicate.

*More detail required?*
Rather than wireframes, Tom Wailes says his core artifacts are detailed prototypes, calling Dan’s wireframe-centered approach into question.

*Know more than your audience*
Dan discusses the importance of knowing not only your audience but also understanding the corporate culture in which you’ll be working and designing.

*Government Work*
Dan points out that a constraint on innovation, in his experience, is that most contracts are very specific with respect to deliverables. The challenge is creating within these set parameters. Dan provides examples of such creativity when designing wireframes.


[musical interlude]

Announcer: Boxes and Arrows is always looking for new thinking from the brightest minds in user experience design. At the IA Summit, we sat down with Dan Brown from EightShapes.

Tom Wailes: Hi, my name is Tom Wailes. I’m User Experience Director for Yahoo! Local and Maps. I’m going to be discussing with Dan Brown a few issues that came up today at the IA Summit.

Dan Brown: That sounds awesome. I look forward to it.

Tom: So a little bit of background. Dan gave a talk today, “Communication Design,” summarizing, I guess, his book and what he thinks about the design deliverables that information architects and designers have typically delivered over the years, and he made some very sensible suggestions for refining and improving them. I gave a talk with a colleague of mine from Yahoo!, Kevin Cheng, about different kinds of design deliverables, primarily storyboards, prototypes, and simulations. So I’m here to talk with Dan about where he sees these working together, or not.

So Dan, first, I kind of teased you a little bit in my talk after praising your talk. It was very good, I enjoyed it. But then I noted that you hadn’t really talked at all about prototyping or simulation, storyboarding, things like that, which is what my team has been doing a lot of, and we’ve been doing very little wireframing. So, your reactions to that?

Dan: I think it strikes me as a very important part of the work that we do as designers. I didn’t sense any sort of disconnect between your story and my story. In a sense, I was speaking to all those people who have to jump on a project after a concept has been approved and funded and need to hash out the details. At the same time, my philosophy, I guess, is meant to help people critique their own documentation, and I wonder if there’s an interesting synergy there in looking at the kinds of conceptual documents that you do and dividing them into layers, as I suggest.

So, the stuff that’s critical to the document versus the more extraneous stuff, and using that as a model for evaluating conceptual documentation as much as design documentation.

Tom: So can you clarify a little bit what you mean by conceptual documentation versus design documentation?

Dan: Sure. What I got out of your talk was that there’s this ideation process. There’s this process where we need to spend some time, spend some brain cells, thinking about what could be, or getting a better understanding of what the problem is. We create a set of artifacts to either better articulate what that problem is and help get our heads around it, or, in the case you were discussing, we’ve got this concept, this idea to improve a product or create a new one, and we need to sell it to the people who make the decisions and hold the money. That’s certainly what I got out of your talk, and it’s something that I’ve had to do a lot in my career.

Why that stuff didn’t make it into the book, I would say, is only because there’s a long list of documentation and I had to cut it off at the things that people really do every day in their jobs. And I was looking at your little video, going, “God, if I could do that every day, my job satisfaction would go –I mean, I love my job as it is, I’m self-employed, etc., etc.– but my job satisfaction would go through the roof.” So I see the conceptual stuff as trying to sell and capture big ideas about a product or product direction, whereas the design documentation elaborates on the details and provides direction to the people who have to implement it.

Tom: So one thing I didn’t really cover today, but it’s also part of our process that we’re trying –we’re experimenting and sort of making it up as we go along, frankly. It’s not just using interactive visualizations for the concept; for the detailed design right now, rather than doing wireframes or anything like that, we’re continuing with highly detailed prototypes to help us work out the finer aspects of the product, and those end up being our core documentation.

It’s not to say we won’t go on to do some wireframes later, but we’re involving the engineers right now in that, so they can start thinking about, “Oh, you want an interaction to look like that. Let me think about it.” So using those kinds of –documents is kind of a funny word to use.

Dan: Artifacts.

Tom: Artifacts, those are our core artifacts now throughout the process, not just the ideation but as we’re working through the details. So how do you react to that?

Dan: I think that’s an amazing opportunity that you have. I remember you polled people at the beginning and asked them, for example, “Do you have a 20% role in your organization that allows you to spend one day a week just trying to innovate?” And only a handful of people raised their hands, and I think if you asked them, “Could you experiment with the kinds of documentation that you do, to try and continue some of this prototyping or conceptual work throughout the life cycle of a project?” you’d get a similar number of hands.

I come from a world of government contracting, working with large Fortune 500 companies that are stuck in old-school tradition. Wireframes are, in a sense, –I know this is maybe shocking– innovation enough for them as a new kind of document. They’re used to, I imagine, 1980s IBM-style big binders of functional requirements; the idea that we can translate those into some digital format is radical in and of itself.

Can we get to a point where we’re all doing that kind of documentation? I would love that. In 10 years’ time we will be, but in 10 years’ time you guys are going to be creating a whole other kind of artifact to capture functional requirements and behaviors and all those kinds of things. Does that answer your question?

Tom: It does. Well, I think it does. So are you really saying that there are just core differences in the types of industries and the types of projects that might make it incredibly hard to break away from more traditional documentation like wireframes, flows, requirements documents, and things like that?

Dan: I’m not even sure it’s an industry thing; I just think it’s a corporate culture thing. There are some companies that are just not –one of my clients, for example, is a hospitality company. They’re not a technology company; they’re not geared toward that kind of innovation.

They grew out of this idea of selling hotel rooms to people, so that kind of culture runs throughout the organization. They have technology people there, and they are fighting an uphill battle to do the kind of innovation that you are talking about. That hill is the culture of 100 years of the hospitality industry.

Tom Wailes: Obviously I know nothing about that company and that project, but I can imagine… You talk about hospitality and selling hotel rooms. At least me, from the outside, I can imagine a great opportunity to start with some visualization and prototyping to get across some concepts, particularly since you are talking about selling. I don’t know the details of that.

So what would stop you, what would make it very hard for you to say “You know what? I’m going try something different on this project”. What are the main inhibitors for you?

Dan: Oh, I’m not afraid to try something different. But I think, as designers, we need to be responsible… I’m no Steve Jobs, so I need to be responsible about just how much I am going to push the envelope.

The kind of conceptual stuff that you are creating is working for you and your organization and your culture and the kinds of products that you are working on. I think that there are opportunities to create those kinds of artifacts and documents in other organizations but maybe not push the envelope so much.

So, on the flip side, if I were to show that to someone who is used to seeing certain kinds of documents, it may not speak to them as well. They may be saying, “Why are you wasting my time with a comic?”

There is no controversy here. I am not trying to say that I don’t think there is a place for those things. I just have not been able to cultivate a place for them with the kinds of clients that I work with.

Tom: OK, I have two comments. The first is, in our environment, people were used to wire frames and requirements documents and things like that. We had been using those but we decided just to experiment with new methods like the comic storyboarding. The reaction actually wasn’t “I don’t understand that or I don’t get that or don’t want that”. It was like “oh my gosh, can you do more of this? I can see much more clearly what the core ideas are. I can be involved in giving my opinions now”. The wireframes and other kinds of documentation are much harder to be involved in. So that would be one comment.

The second comment is we talked about starting small. In what ways could you perhaps start small? I can understand you cannot just turn your client overnight into completely new processes. You have deadlines, budgets, and things like that. But in what ways do you think you could start small in introducing new ways of working?

Dan: There are things we do all the time. That culture may have given rise to a certain kind of wireframe, and I may see opportunities to encourage them to go in a different direction.

They may start with a conventional site map, and I might move them more to a conceptual model that includes things beyond web pages. It encourages them to think about maybe incorporating their users into that picture so they have a better sense of that. So I think there are definitely small opportunities. I believe we take advantage of them as much as possible.

The other constraint I wanted to point out was that, as an outie, as someone who is not inside an organization… I mean, to a certain extent, you serve clients inside your organization. But as a complete outie, my contracts are structured to do something very specific for a particular client.

So, if they hired me to help improve a set of pages or a particular function on their site and I said, “OK, I’ll do that, but let me show you this first,” they would really not be happy with that, because they are paying me to achieve something very particular.

I am working within the constraints of that particular project scope, and I need to find a way to do the things you are talking about and sell them on big ideas. The book, Communicating Design, talks about using documentation in different contexts, and as those contexts vary, they impact the nature of the documentation itself as well. I don’t know if I answered your question.

Tom: I think you did. I’m still not entirely convinced that you can’t introduce new ways of working in a very small way where maybe you do not take any of the client’s time or maybe you only take a day. We gave some example today. I can show you some stuff later that just took two days to visualize some ideas.

So it might be something that is very lightweight, where you are not beating the client over the head with it and saying, “Oh my god, we’ve got to work this way.” It’s just like, “Yeah, we are going to do all the things we are contractually committed to doing, but, by the way, why don’t you have a look at this as well?”

Dan: I am not disagreeing with you at all. I completely think there are opportunities to do that, but my primary concern (I may get in trouble by saying this) is less about doing cool work period and more about doing cool work within the constraints that have been handed to me.

So I do want to push that envelope as much as possible but my primary concern, as a consultant, is customer service. Ultimately I can feed the kid by getting hired again. So I will do a little thing. I will show them a different kind of document. I will take their wireframes to the next level or I will show them how they can incorporate all of their flows. Who knows what it is.

I might produce a comic. We have done a couple of projects where we have done comic-like things that incorporated user commentary and very explicit screens or wireframes along with some more technical contexts. Not a comic in the true sense of the word, but something leaning in that direction. Those can be very helpful, especially when clients themselves are struggling with the scope.

That was a long rambling answer to say I agree with you.

Tom: OK, let me challenge you a little bit then. What if I was to put it to you that you would actually do better work and serve your clients better if you did less wireframing or other traditional kinds of documentation, and more prototyping, simulations and storyboarding?

Dan: I think you are right. So ha! So try and challenge that! [laughter]

I agree that there is an opportunity to do more prototyping and stuff like that. It is balancing that with the expectation of what we are going to get and what is going to work inside the organization.

In some cases, we are shielded from the development team entirely. So I am working to support the user experience team, and they are burdened with communicating with the developers. If I am going to ask them to challenge their developers, that is not very responsible on my part.

Tom: OK, thanks very much. So we sort of agree, and disagree, and agree again.

Thank you so much for your time.

Dan: I look forward to our next conversation.

Tom: Me, too.

Visio Glue: Not For Sniffing – Special Deliverable #13


Spend any time with Visio and you’ll find yourself wondering how glue works. In the real world, it’s pretty straightforward: put glue between two things and they’ll stick. Although glue is used for sticking shapes together in Visio, the metaphor ends there.

In Visio, glue is not an object. Instead, it’s a property of other objects. Whether two things stick together depends on several factors, which we’ll discuss in this article.

You can’t talk about glue without mentioning connectors: lines that stick to shapes to show a relationship between them. Connectors are one of the defining features of Visio, but their behavior is even more unpredictable than glue’s.

What follows is an inventory of Visio glue behavior, connectors, and connection points. After reading this article, the word “glue” (which appears 71 times) will look and sound very strange indeed.

Glue is directional.

  • In the real world, two objects are glued to each other. In Visio, one object is glued to another. For the purposes of this discussion, “target” refers to the object that has been glued to. “Glued object” refers to the shape that has been glued to the target. A nursery school art project involving construction paper and macaroni is perhaps the best real-world equivalent. The paper is the target and the dried noodles are the glued objects.
  • Moving the target results in the glued object moving, or shifting to remain glued. (Just like a macaroni project, where moving the paper moves all the macaroni attached to it. This enables such projects to appear on refrigerators all over suburbia.)
  • Moving the glued object results in the glue being broken. The original target remains where it is. The metaphor breaks down here because in the real world, two objects glued together move together.
  • When a 1-D object and a 2-D object are glued to each other, the 2-D object is always the target, no matter what technique is used to glue them together.
  • Distinguishing between the target and the glued object is no easy task. Click on the target, and there is no indication that there are any objects glued to it. Click on the glued object, however, and you’ll see what it’s glued to, as well as the type of glue used. Type of glue? Read on…
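The asymmetry above can be captured in a few lines of code. This is a toy model of the behavior just described, not Visio’s actual object model; all class and method names here are invented for illustration.

```python
class Shape:
    """Toy model of Visio's one-directional glue (names are invented)."""

    def __init__(self, name, x=0, y=0):
        self.name = name
        self.x, self.y = x, y
        self.glued_objects = []  # shapes glued *to* this shape (it is their target)
        self.glued_to = None     # the target this shape is glued to, if any

    def glue_to(self, target):
        """Glue this shape to a target. Note the direction: it is one-way."""
        self.glued_to = target
        target.glued_objects.append(self)

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        if self.glued_to is not None:
            # Moving the glued object breaks the glue; the target stays put.
            self.glued_to.glued_objects.remove(self)
            self.glued_to = None
        for obj in self.glued_objects:
            # Moving the target drags its glued objects along with it.
            obj.x += dx
            obj.y += dy

paper = Shape("paper")               # the construction paper: a target
macaroni = Shape("macaroni", 1, 1)   # a dried noodle: the glued object
macaroni.glue_to(paper)

paper.move(5, 0)           # move the target: the macaroni comes along
print(macaroni.x)          # 6

macaroni.move(0, 2)        # move the glued object: the glue breaks
print(macaroni.glued_to)   # None
```

Note that the target drags its glued objects by adjusting their coordinates directly rather than calling their `move()`, which is exactly why target-driven movement does not break the glue, mirroring the macaroni-on-paper behavior.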

There are two types of glue…

  • When gluing a 1-D object to a 2-D object, glue behaves in two different ways. Visio refers to these as dynamic glue and static glue.
  • Think of static glue as “fixed point” glue. The glued object is affixed to the target at one point and one point only.
  • Dynamic glue is “fixed object” glue. The glued object will remain affixed to the target, but at whatever point is most convenient.
  • Clicking on a glued object shows a red endpoint. If the endpoint is a large red square, it is glued with dynamic glue. A small red endpoint with a black X indicates static glue.

  • To use dynamic glue, drag the glued object’s end point to the center of the target object. The target object will highlight with a red border.
  • If many objects are close together, you can guarantee dynamic glue by holding the CONTROL key as you drag a connector to an object.
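The two behaviors can be contrasted with a small sketch. This is purely illustrative (the functions and geometry are invented, not Visio internals): static glue always returns the same attachment point, while dynamic glue reattaches at whichever target point is most convenient.

```python
def static_attach(fixed_point, _target_center):
    """Static ("fixed point") glue: always reattach at the same point."""
    return fixed_point

def dynamic_attach(connector_end, target_points):
    """Dynamic ("fixed object") glue: reattach at whichever point on the
    target is most convenient (here: nearest the connector's endpoint)."""
    return min(
        target_points,
        key=lambda p: (p[0] - connector_end[0]) ** 2 + (p[1] - connector_end[1]) ** 2,
    )

corners = [(0, 0), (4, 0), (4, 2), (0, 2)]    # a 4x2 target rectangle
print(static_attach((4, 2), (2, 1)))          # (4, 2): never moves
print(dynamic_attach((5, 1.5), corners))      # (4, 2): the nearest corner
```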

Not all surfaces are sticky.

  • Although dynamic glue is always available, static glue may or may not be available depending on the application settings.
  • Through the “Snap & Glue” dialog box, you can determine whether a surface will glue. To get to this dialog, choose “Snap & Glue…” from the Tools menu. There are five different options in the “Glue to” list.
  • Shape Geometry: Checking this box will make the entire surface of target shapes “sticky”. If you’re familiar with Visio ShapeSheets, you can also think of this as all points defined by the Geometry sections of the ShapeSheet. If you’re not familiar with ShapeSheets, forget what I just said.
  • Guides: A shape glued to a guide will move when the guide is moved. Guides are always targets.
  • Shape Handles: Glued objects may be attached to any of the shape’s handles, the little green squares that appear on a shape when you select it.
  • Shape Vertices: Shapes’ corners are sticky. Circles are S.O.L. When you round a shape’s corners, its vertices are still considered to be the corners that meet at the intersection of the shape’s sides.

  • Connection Points: Objects can stick to areas of the shape explicitly defined as a sticky point.

Visio has hidden controls for connector behavior.

  • Moving target shapes around the page can have the unwanted side effect of disrupting perfectly placed connectors. You can prevent this by right-clicking on any connector and choosing “Never Reroute” from the menu. This makes connector behavior slightly less unpredictable, though you may still have to adjust the connectors after moving the target shapes.
  • Connector behavior can also be controlled from the Behavior dialog box, accessed by choosing “Behavior…” from the Format menu. When a connector is selected, the box has an additional tab just for that shape, which allows you to control the appearance and behavior of the connector.
  • In several of the following menus, there is a “Page Default” option. Default connector and routing options are controlled in the Layout and Routing tab of the Page Layout dialog (File > Page Setup…). These settings may also be controlled through the Lay Out Shapes dialog by choosing that option in the Shapes menu.
  • Style: The general appearance of the connector. I’m partial to “center to center”.
  • Direction: For some styles of connector, a direction is implied. This menu becomes available when Flowchart, Tree, Organizational Chart, or Simple is chosen from the Style menu.
  • Reroute: Matches the options in the connector right-click menu (described above) and indicates the level of control granted to Visio to alter the connector paths.
  • Appearance: Probably the best discovery when I stumbled across this dialog box. Creates curved connectors with eccentricity lines.
  • Line Jumps define the rules for using and displaying line jumps – breaks in a line when it intersects another. Line jumps symbolize the distinctness of each line. I prefer to create diagrams where lines do not cross because line jumps simply add visual noise.

Connection points are like bellybuttons.

  • Although most connections occur between a 1-D object (like an arrow) and a 2-D object (like a box), it is possible to glue 2-D objects to each other without grouping them.
  • Connection points, the little blue Xs attached to shapes, define points on a shape that can be glued to. As stated previously, however, the target object does not have to have connection points to glue something to it. For example, if you have “vertices” turned on in the Snap & Glue dialog box, you can glue connectors to a target shape’s corners.
  • Connection points come in several varieties; they can be inward, outward, or both. To change the type of connection point, right-click on it with the connection point tool.
  • Inward connection points can have other shapes glued to them. Inward connection points designate the object as the target object.
  • Outward connection points are glued to other shapes. They are the glued objects.
  • Connection points that are inward and outward can be both targets and glued objects.

  • To understand these concepts, create a couple of shapes with different kinds of connection points and play around. For example, draw two rectangles. Choose the connection point tool. Select one of the rectangles. CTRL-click with the connection point tool to add connection points to the rectangle. Do the same with the other rectangle. Now change the direction of the connection points by right-clicking on each point.
  • Notice that when you drag a shape’s INWARD connection point to another shape’s OUTWARD connection point, they won’t glue. Do it the other way and they’ll stick together.
  • With the two rectangles glued together try moving the target shape, and then try moving the glued shape. Moving the target will cause the glued shape to move as well. Moving the glued shape will cause it to come un-glued.
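The direction rules above reduce to a single predicate: the dragged endpoint must be able to give glue (outward or both) and the point it lands on must be able to take it (inward or both). A sketch with invented names, just to restate those rules:

```python
# Toy model of Visio's connection-point direction rules (names invented).
INWARD, OUTWARD, BOTH = "inward", "outward", "both"

def can_glue(dragged_point, target_point):
    """True if dragging `dragged_point` onto `target_point` will stick."""
    gives_glue = dragged_point in (OUTWARD, BOTH)  # can act as the glued object
    takes_glue = target_point in (INWARD, BOTH)    # can act as the target
    return gives_glue and takes_glue

print(can_glue(OUTWARD, INWARD))   # True: outward sticks to inward
print(can_glue(INWARD, OUTWARD))   # False: the reverse won't stick
print(can_glue(BOTH, BOTH))        # True: in/out points work both ways
```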

Visio glue is one of the application’s more puzzling concepts. It doesn’t behave like real-world glue and can be unpredictable. This inventory of glue features attempts to tame the madness.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Lost in Translation: IA Challenges in Distributing Digital Audio

“The main challenge facing network audio devices is how to provide remote access to the music library… this looks like a job for an information architect!”

With each new advancement in digital media come new ways to consume and distribute it, and new and different challenges for information architecture. For example, several new devices on the market are designed to distribute digital audio from a computer to audio systems in other rooms of the house. These devices connect to your home network through a standard Ethernet cable or wi-fi, routing music from your computer to your stereo using standard audio connections.

The main challenge facing these devices is how to provide remote access to the music library. While sitting at a computer, you have the benefit of using a keyboard, mouse and screen to interact with software like iTunes or WinAmp. Since network audio devices need to sit on the shelf with your stereo, they do not have a full display, and the only means of interaction is a remote control. In other words, this looks like a job for an information architect!

This new paradigm for accessing music libraries presents at least two information architecture challenges:

  1. How do users find a song in their music library?
  2. How do users know what’s playing and what’s coming up?

The challenges are made even more difficult by several factors:

  1. Limited display size
  2. Limited availability of metadata
  3. Users’ expectations—people are used to browsing through a CD library

This article looks at how three devices on the market today address these IA challenges. Two of these devices, RokuLabs’ Soundbridge and Slim Devices’ Squeezebox, have a screen on the shelf unit. The display on each of these devices is limited to two lines of text, and the remote controls are configured for navigation. On the other hand, Sonos’ device uses a different approach, putting the display in the remote control. Because of this, Sonos’ remote looks like a large iPod with a color display, while the device that networks the music has no display at all.

Design Philosophies

Sean Adams created the first generation Squeezebox in 2001 by hacking together some hardware and software. From that first foray into distributed digital music grew a large community grounded in the open source culture. Slim Devices made their server software open source, and there are now more than 50 developers working on it worldwide. This approach has led to constant gradual improvement.

Slim Devices' Squeezebox
Dean Blackketter, Slim Devices’ CTO, says that although the community is the key to adding new features, he monitors all changes to the software before they are officially released. This allows Slim Devices to ensure that any changes to the interface stick with their style guide. Blackketter appreciates the open source approach because it allows people to work on the interface quirks that bother them the most; he told a story about someone who found the timing of the scroll a little off, and wrote a new scrolling algorithm. Blackketter frequently uses the “friends and family” approach to test the usability of these upgrades.

Slim Devices uses no formal user-centered design methodology and maintains no tools beyond a style guide. Blackketter says that the company has internalized the personas of their customers. The management team came to an implicit agreement over the life of the device that their target audience consists of highly technical people—users who like playing with the device—and their spouses—people who just want to listen to music.

Like Slim Devices, RokuLabs’ design philosophy does not depend on formal user testing. Many of the team members at RokuLabs came from ReplayTV, the main competitor to TiVo, and the designers at RokuLabs depend on their previous experience in networked media devices to provide insight into usage. Mike Cobb, RokuLabs’ senior engineer, says their experience with ReplayTV provided many lessons for the user experience of the Soundbridge.

The user experience of iTunes also drove the design philosophy for Soundbridge, since the unit was meant to be an extension of that software; RokuLabs sought to make the interactions similar to those of iTunes or the iPod. One key difference is the interaction model of the remote control: while Squeezebox uses the “right arrow” button to make a selection, Soundbridge users must push a “select” button. RokuLabs’ design rejects the use of navigation as selection. In this way, it resembles the iPod, which uses a one-dimensional navigation device (the wheel) and forces users to physically make a selection (by pushing the center button).

RokuLabs also had the benefit of not being the first to market. They played with early versions of the Squeezebox and decided what they liked and didn’t like. One thing they noticed was that the experience seemed geared to tech-savvy users, and RokuLabs wanted a more mass market device.

The newest entrant is Sonos, whose unit shipped in January 2005. I spoke to Mieko Kusano, the director of product management who says that although the idea for Sonos came from its founder, they spent a lot of time defining their target market, which led to creating personas. Sonos also employed a simple ground rule: their designers were not allowed to talk about what “I” want. Instead, all design decisions had to be made within the context of the personas. Kusano says the personas were useful for making the process more concrete, and they gave the company a common platform. She advocates doing as many user studies as you can. “Every time we had something new to show,” said Kusano, “we brought users in.”

Initial user research drove a couple of key design decisions, including putting the display on the remote and focusing on distributing music to many rooms in the house. Having decided in early user studies to put the screen on the remote, they developed a method for prototyping new remote controls using a PDA. They could program the PDA to display different screens and then test them with their users.

The second decision—focusing on multiroom audio distribution—motivated the design of the remote control itself. Sonos’ remote boasts the fewest buttons. Many functions use “soft keys” (buttons that change their function depending on state), but key functions are escalated to physical buttons. Besides volume, playback, and navigation, there are only two other buttons: Music and Zones. The Music button brings users to the menu where they can select music, and the Zones button brings users to the menu to select which room to program. All other controls (for example, shuffle, repeat, music queuing, etc.) are presented on the screen.

As Sonos neared their launch date, they did frequent in-home testing, taking beta units to customers’ houses and observing them. They watched users as they went through the out-of-box-experience, the set-up, and use of the unit. Sonos’ approach represents a departure from the other two philosophies, and I was eager to see how the structure of information would differ among them.

Browsing Music

Before digging into the navigation scheme, I want to set out the underlying conceptual structure for each system, which is the same across all three and resembles that of the iPod. (Squeezebox was around before iPod, and was the first unit to employ this structure.) Songs live in a music library. They are “moved” to a queue of songs to play. Users may move songs one at a time or implicitly by selecting a “natural grouping” of songs—an album or an artist, for example. Conceptually, a music player’s key interaction is moving songs from library to queue. At any given time, users need to know what song is currently playing and what songs will be coming up. They also need to navigate the library to facilitate moving songs to and from their queue.

I don’t know if this is the best structure, but it appears to be employed across the board. Even though the underlying structure is consistent, it’s possible for each system to present a different mechanism for navigating the library and moving songs from library to queue. Possible, but unfortunately not true: despite having differing design philosophies, all three devices use nearly identical information architectures, all of which resemble the iPod’s structure. The root menu of each system varies slightly, but one option takes users to a familiar menu:

  • Browse Albums
  • Browse Artists
  • Browse Composers
  • Browse Genres
  • Browse Songs

In Sonos’ system, this menu is called “Music Library”; SoundBridge calls it “Browse.” Selecting any of the options from this menu will take users to an alphabetical list of albums, artists, etc. Each entry represents a group of songs. Users can move the entire group to the play-queue, or can “open” the group to look at individual songs.

Looking at all the songs in a group, users can select a track and play it, add it to the queue, or get more information about it. Specifics vary depending on the system. SoundBridge takes you to a list of options, the first of which is “play songs starting with this one,” allowing users to select the group of songs by selecting one song inside the group.

When compared directly, the core information architectures of each are virtually indistinguishable. Each album, genre, artist, and composer is a separate category and each track fits into one of each. There are relationships between the categories:

  • Genre → Artist → Album
  • Bluegrass → Del McCoury Band → It’s Just the Night

The problem is that music is much more complicated than this architecture, even if it does account for some of the nuances of music libraries. For example, an artist or album can belong to multiple genres:

  • Folk → Eva Cassidy → Songbird
  • Popular → Eva Cassidy → Songbird

Another problem with the architecture is that artists’ names may be rendered differently, depending on what they’re working on:

  • Bela Fleck & the Flecktones → UFO TOFU
  • Bela Fleck and Edgar Meyer → Music for Two
  • Edgar Meyer/Bela Fleck/Mike Marshall → Uncommon Ritual

Each of these instances of Bela Fleck is rendered differently in the architecture, because the architecture is conceived as a straight hierarchy.
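The fragmentation is easy to demonstrate: when the artist is a flat text string, grouping by it treats every credit variation as a distinct artist. A small Python sketch using the Bela Fleck examples above:

```python
from collections import defaultdict

# Each distinct artist string becomes its own category, so the three
# Bela Fleck credits from the article land in three unrelated bins.
tracks = [
    {"artist": "Bela Fleck & the Flecktones", "album": "UFO TOFU"},
    {"artist": "Bela Fleck and Edgar Meyer", "album": "Music for Two"},
    {"artist": "Edgar Meyer/Bela Fleck/Mike Marshall", "album": "Uncommon Ritual"},
]

by_artist = defaultdict(list)
for track in tracks:
    by_artist[track["artist"]].append(track["album"])

# Three "artists," even though Bela Fleck plays on every track; no
# single browse category gathers all three albums.
```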

“All the problems with navigation can be traced back to a single central issue: lack of data. Creating more complex structures depends on having more comprehensive information about the music.”

All the problems with navigation can be traced back to a single central issue: lack of data. Creating more complex structures depends on having more comprehensive information about the music. Because the artist is rendered as a simple text field, the systems cannot match up “Bela Fleck & the Flecktones” with “Edgar Meyer/Bela Fleck/Mike Marshall.” Using the systems’ browse features alone, I would not be able to find every track in my library on which Bela Fleck performs. The systems’ search features afford some improvement, but they still depend on having good metadata.

Searching Music

The appalling state of music metadata is no secret. Other authors have already explored the limitations of the available metadata with respect to jazz, a genre that “goes beyond the ‘Great Man’ theory and recognizes the influence of side players…” Whether other genres of music have as rich a metadata landscape as jazz is immaterial. Liner notes from any album in any genre hold more information than currently captured in most digital audio systems. All three manufacturers highlighted in this article believe the lack of good metadata is a crisis facing the entire industry. However, they all feel that once the industry cracks the nut, their devices will be prepared to address it.

Search on the Squeezebox and SoundBridge operates as you would expect. Select a search field from a menu, enter keywords video-game-hall-of-fame style with the arrow keys, and get a list of results. The extra step of selecting a field (e.g., Search Artists) seems pointless, but SoundBridge engineer Mike Kobb explains:

[I]f I want to find tracks by “Barenaked Ladies”, it’s only a few key presses to choose “Search Artists,” then enter “ba.” The same 2-letter search would find too many items if it were done as a keyword search. I believe making the initial selection and then entering a smaller term is generally quicker than entering enough letters in a keyword search to get a small result set.

This makes sense from a technical point of view: allow people to limit the scope of their search so they don’t need to enter as many letters with the arrow keys. This approach solves one issue with navigation. So long as “bela” appears in the artist field, I can do a search to find all Bela Fleck’s music in my library. On the other hand, entering “be” to see all Bela Fleck tracks seems like an enormous conceptual leap from browsing a library of CDs. In other words, if the task is “get a list of all Bela Fleck’s tracks,” my inclination is to browse by artist—kind of like what I would do in real life.
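The scoped search Kobb describes amounts to a prefix match against a single field. A minimal Python sketch, with an illustrative artist list:

```python
# Scope first, then type a short prefix: choosing "Search Artists" and
# entering "ba" already narrows a large library to a handful of names.
artists = [
    "Barenaked Ladies",
    "Beatles, The",
    "Bela Fleck & the Flecktones",
    "Del McCoury Band",
]

def search_artists(prefix):
    """Prefix match against the artist field only."""
    p = prefix.lower()
    return [a for a in artists if a.lower().startswith(p)]
```

A two-letter prefix over one field stays precise, where the same two letters as a keyword search across all fields would return far too many items.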

The third device, Sonos, does not offer a search mechanism. They intend to offer it in the future, but provide no rationale as to why it wasn’t included in the initial release.

Knowing Where You Are

Digital music players give us two virtual spaces: the library and the queue. Knowing your “location” in the library is relatively easy because a mental image of the virtual space is readily available. When navigating the library, users are focusing on the task at hand. The use case for the queue is a different story; users put the queue together and leave it to do its thing. Only occasionally does the queue become the focus of attention after the initial set-up. All three units have a default view called “Now Playing,” in which the display shows information about the track that’s currently coming out of the stereo. Usually, that’s the name of the track and the amount of time left on the song.

On shelf-bound displays, SoundBridge and Squeezebox both give you “one-click” access to the next song. On SoundBridge, simply push the down arrow on the remote and you’ll see what’s next in the queue. Keep pushing the down arrow, and you’ll scroll through the queue. Sonos offers a bit more information, but not much. The “Now Playing” display shows the title of the next song, and getting to the entire queue is just a click away.

When looking at the queue on Sonos, the large up-close display offers a broader view, providing more context. Think about using a CD: the liner notes give you a complete track listing; you can see the whole thing and get information like song length. The displays of the shelf-bound devices offer only a limited window into the queue. Sonos’ display offers more information because you can see more of the queue. Still, the experience is not quite the same as looking at a set of liner notes, because it lacks all the other information.

Is it fair to compare the user experiences of digital and analog worlds? Until music players carve out a new set of user behaviors, their designers don’t have much choice. People are used to interacting with their personal music collections in a certain way, and deviating too far may slow the adoption of new technologies.

Supporting User Behaviors

With only a few nit-picky exceptions, the three devices generally do a good job of supporting three basic scenarios:

  • I want to play an album and I know which one.
  • I want to play an album by an artist whose name I know.
  • I want to play a specific song and I know its album/artist/genre.

As an end-user, you’ll find these tasks pretty easy once you get the hang of the IA and the interaction model of the remote control. If you want to create a mix on the fly, things can get a little clunky as you run through the last task several times over.

Moving back and forth between your music library and the current queue requires gestures that may be difficult for users to get used to. Also, the idea of a queue is unique to this interaction model. If you’re doing the DJ thing and playing random songs for your friends, you may stack up a bunch of CDs to go through, but the queue is in your head and easily modified.

Each of these scenarios depends on user knowledge. If you know the artist or album, you can easily narrow down the library. Things get difficult when you don’t know the name of the song, or when you know the name of the artist, but not which variation of their name is the correct one.

Browsing is another user behavior that’s been neglected. There’s an aspect to browsing a collection of CDs that’s lost when translated to an iTunes-like environment. People don’t keep their entire music library in their head, and the ability to browse is crucial. Because the browse features on these systems are pre-divided into Track, Artist, Album, and Genre, “browsing” is limited to only text-based information.

Browsing a long list of album names is not the same thing as browsing jewel case spines. Color, typography and organization of the jewel cases give more information than just the album name; I may know that the Yonder Mountain String Band song I want is on their latest album which has a brown spine with orange lettering. The black spine with white lettering is their earlier album. I may not know the names of these albums, just the look of their spines. This free-browsing of a physical CD library is a nut not yet cracked by the industry. To be fair, this is a serious challenge: how do you support existing behaviors when users are used to browsing by more than just the names of albums or artists?

On the other hand, a virtual environment enables behaviors unimaginable in the physical world. Wouldn’t it be great if I could play tracks:

  • Based on how much I listen (or don’t listen) to them
  • Based on how often I play them sequentially
  • That my wife has marked as a favorite
  • That my kids did NOT mark as a favorite
  • Featuring certain kinds of instruments or vocalists
  • That have a special place in music history (like the “definitive” newgrass song)
  • That have been tagged by other listeners with particular keywords
  • I usually play on this day of the week or year
  • That feature a specified combination of musicians

“Virtual spaces with robust metadata models enable the kind of serendipitous browsing you’d find on IMDB.”

As online services emerge that compile this and other information, network audio players will need to tap into that metadata to enrich the music-playing experience. Virtual spaces with robust metadata models enable the kind of serendipitous browsing you’d find on IMDB, or the “social networking” of music-oriented community sites. Music libraries are ripe for this kind of experience, and the proliferation of these players could be the catalyst to bring about the change.
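Every item on the wish list above reduces to filtering a library on richer metadata. Here is a minimal Python sketch; the fields (play counts, per-listener favorites, free-form tags) are invented for illustration, since no current device stores them:

```python
# Hypothetical metadata-rich tracks; these fields do not exist on
# today's devices, which is exactly the point.
tracks = [
    {"title": "A", "plays": 40, "favorites": {"wife"}, "tags": {"newgrass"}},
    {"title": "B", "plays": 2, "favorites": {"kids"}, "tags": {"ballad"}},
    {"title": "C", "plays": 15, "favorites": {"wife", "kids"}, "tags": {"newgrass", "live"}},
]

def tracks_where(predicate):
    """Return titles of tracks matching an arbitrary metadata predicate."""
    return [t["title"] for t in tracks if predicate(t)]

# "Based on how much I (don't) listen to them"
rarely_played = tracks_where(lambda t: t["plays"] < 5)
# "That my wife has marked as a favorite" and "my kids did NOT"
wife_likes_kids_skip = tracks_where(
    lambda t: "wife" in t["favorites"] and "kids" not in t["favorites"])
# "Tagged by other listeners with particular keywords"
tagged_newgrass = tracks_where(lambda t: "newgrass" in t["tags"])
```

Once the metadata exists, the navigation problem becomes a query problem, which is far more tractable.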


There is something very cool about storing all your music on a single server and being able to play it in any room in the house. Homeowners have an option for whole-house audio that, while still bearing a hefty price tag, doesn’t come close to the cost of “old school” systems. (The cheapest network audio systems are only a few hundred dollars, but you need a unit AND a stereo for each room.) The wireless network is much more appealing than running miles of cable through your walls.

When these manufacturers sought to create a whole-house audio system, they each started with slightly different ideas for the user interface problem. For Slim Devices, the pioneer, it was whether it could be done at all. The others each chose a different aspect: the remote, multiple zones, the display. The purpose of this article is not to recommend one device over another (there are many more than these three). The point is, none of these three devices demonstrate any innovation in the underlying information architecture.

Network audio technology is faced with a chicken-and-egg situation. Innovative IA in audio devices like these will be limited by the available metadata. At the same time, industry fears of piracy will limit the amount of metadata supplied with the music. Until the adoption of audio devices reaches critical mass, the industry won’t face pressure from consumers to expand the quality of data, but audio device adoption may stall without more innovative navigation methods.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Toggling Shapes in Visio: Special Deliverable #12

“Employing a continuation node that toggles means literally flipping a switch to go from one state to another when you’re moving shapes around on your sitemap.”

The last Special Deliverable introduced several Visio techniques, including ShapeSheets and formulas. This issue will expand on those ideas, showing how to create a widget with a toggle built into the shape’s context menu. A toggle-able shape is useful when an element is repeated in your diagram but can exist in one of two states.

To illustrate this, we’ll use one of the shapes from Jesse James Garrett’s Visual Vocabulary–the continuation node–which can appear in either the horizontal or the vertical state.

Continuation Node

Although the Visio stencil for the Visual Vocabulary includes a shape for each state, it can be clumsy to switch from one to the other as you rearrange your site maps or flows. Employing a continuation node that toggles means literally flipping a switch to go from one state to another when you’re moving shapes around on your sitemap.

The basic idea

Any shape in Visio is composed of one or more “geometries.” Each geometry represents a different component of the shape. Most shapes have just one geometry, but some have two or more. If you followed the Visio tutorial in Special Deliverable #11, you’ll recall that to create the annotation shape, we combined a circle and the corner of a square. Each of these is a separate geometry.

The difference between combined shapes and grouped shapes
A shape with multiple geometries is different from a group of shapes. Each shape in a group maintains a unique identity and has its own set of properties. When you change the formatting of a group of shapes, you are really assigning the new property to each shape en masse. You can also still change the properties of individual shapes within the group.

When shapes are combined they become one shape, sharing all properties. In a combined shape, Visio can still distinguish between the different parts of the shape. We will take advantage of this feature to create a toggle-able shape.

A toggle-able shape has multiple geometries, each of which can be turned on or off depending on the state of the toggle. Our continuation node will have geometries representing the horizontal format and the vertical format. The toggle will turn off the horizontal geometries when the vertical ones are turned on, and vice-versa.

There are therefore three main steps to creating a toggle-able shape:

  1. Create all the possible states of the shape and combine into a single shape
  2. Add the toggle to the shape as an item in the context menu and define its behavior
  3. Adjust the visibility of the geometries based on toggle state

Step 1. Create all the possible states of the shape and combine into a single shape

The continuation node will end up with four geometries: two for the horizontal brackets and two for the vertical brackets. When you create the four brackets, make sure that each of them is a continuous line by clicking and dragging each leg starting on the end point of the previous leg. Arrange the brackets as they will appear in the final shape.

The four geometries of the continuation node.

Because the horizontals overlap with the verticals, they will appear to be a single rectangle. Now, select all four brackets and choose Combine from the Shape > Operations menu. When you now select the shape, it will appear as if it is a single rectangle, and that all the brackets have been lost. Fear not, the geometries are still hidden in the shape.

Step 2: Add the toggle to the shape as an item in the context menu and define its behavior

There are two parts to this step and both occur in the ShapeSheet. Besides adding the menu item, we will need a place to store the current state of the shape. Since the state is binary (one of two possible values) we will use a Boolean (true-false) variable to store this information. In the next step we associate each value with a different state.

  1. Show the ShapeSheet by selecting the shape and then choosing Show ShapeSheet from the Window menu. Notice that the ShapeSheet has four Geometry sections. (Recall that a section in a ShapeSheet represents a different aspect of the shape.) In the next step we will learn how to distinguish which section corresponds to which bracket.
  2. For the toggle, the ShapeSheet needs two additional sections. Right-click anywhere on the gray area of the ShapeSheet and choose Insert Section… from the context menu. From the dialog box, put checks next to “User-defined cells” and “Actions.” Click OK.
    Insert Section dialog
  3. The User-Defined Cells section is a place where we can store information about the shape that does not appear by default. This is where we’ll store information about the state of the shape. First, give the variable a friendly name by clicking on the red “User.Row_1” label and typing “state.” We can now refer to this variable from functions with “User.state.”
    User defined cells, User State
  4. Give User.state its initial value by entering TRUE into the Value column.
    User state equals True
  5. The Actions section is what allows us to add items to the shape’s context menu. There are two critical cells: Action and Menu. Action specifies the function to execute when the menu item is chosen. Menu specifies the language to appear in the context menu. For Menu, enter “Toggle Horizontal/Vertical” or some equally dry indication of the purpose.
    Action cells
  6. It is in the Action column where the magic happens. In this cell, we’ll use a function that swaps the current value of User.state with the opposite value. Type the following into the Action cell:

    =SETF(GETREF(User.state),NOT(User.state))

    The SETF() function sets the formula of the cell specified in the first argument. The GETREF() function allows us to refer to the cell itself, and not its value. Using GETREF() is required as the first argument in SETF(). The second argument of SETF() defines what the new formula should be–in this case, the opposite of what it is right now.

You can try it out now. Keep the ShapeSheet open, right-click on the shape and choose “Toggle Horizontal/Vertical” from the context menu. You’ll see the value in User.state change from TRUE to FALSE. Do this until it no longer amuses you.

Shape toggling and User State changing from True to False.

Entering formulas into ShapeSheet cells
The sections of the ShapeSheet resemble Excel spreadsheets and can be a little finicky about having data entered into them. Once you’ve entered a formula or value, be sure to hit TAB or RETURN, or use the arrow keys to move to another cell.

Clicking to another cell will not work. When you click on a cell while another one is active, Visio enters a cell reference into the active cell. This can be confusing and annoying.

Step 3: Adjust the visibility of the geometries based on toggle state

Now we’ll move onto the Geometry sections of the ShapeSheet and modify the property that controls the visibility of the bracket. The visibility will be a function of our new User.state variable.

  1. Each Geometry section represents a different bracket, but Visio does not help us distinguish them. To ascertain which Geometry section refers to which bracket, you need to click on one of the cells in the numbered rows of the section. These rows describe the shape using a series of directional commands (like “MoveTo” and “LineTo”). Click on the first cell of the first numbered row of the section Geometry 1. In the drawing, you’ll see a small square appear on the shape. This shows you what part of the shape this row describes.

    Geometry section

    Hit the down arrow key and move through the rows of the Geometry 1 section. The square on the drawing will move around. As it does, you should be able to discern which bracket is being described. You may want to make a little reference for yourself.
    Bracket reference sketch

  2. Once you have established which section represents which bracket, you need to put the following formulas into the Geometry.NoShow cells. Make sure that you use the same formula for both the horizontal brackets and the OPPOSITE formula for the vertical brackets. In this example, assume Geometry sections 2 and 3 represent the horizontal brackets and Geometry sections 1 and 4 represent the vertical brackets. For Geometry2.NoShow and Geometry3.NoShow use:

    =NOT(User.state)

    For Geometry1.NoShow and Geometry4.NoShow use:

    =User.state

As you enter these formulas you’ll see one set of the brackets disappear, depending on the setting of User.state. Now when you choose “Toggle Horizontal/Vertical” in the shape context menu, the brackets will switch orientation.
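The toggle logic of Steps 2 and 3 can be modeled outside Visio. This Python sketch is an analogy, not Visio’s object model, and it assumes (as in the example above) that Geometry sections 2 and 3 are the horizontal brackets; which pair is visible when User.state is TRUE depends on how you drew the shape:

```python
class ContinuationNode:
    """Toy model of the toggle-able shape's ShapeSheet logic."""

    def __init__(self):
        self.state = True  # User.state, initially TRUE

    def toggle(self):
        # Models SETF(GETREF(User.state),NOT(User.state)) in the Action cell
        self.state = not self.state

    def no_show(self, geometry):
        # NoShow=True hides a geometry. The horizontal brackets (sections
        # 2 and 3) and vertical brackets (1 and 4) use opposite formulas.
        if geometry in (2, 3):
            return not self.state  # =NOT(User.state)
        return self.state          # =User.state

node = ContinuationNode()
visible_before = [g for g in (1, 2, 3, 4) if not node.no_show(g)]
node.toggle()  # "Toggle Horizontal/Vertical" from the context menu
visible_after = [g for g in (1, 2, 3, 4) if not node.no_show(g)]
```

Because the two pairs use opposite formulas of the same variable, exactly one orientation is visible at any time.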

By creating an improved version of Jesse’s continuation node, you’ve had an opportunity to explore Visio’s ShapeSheets and formulas. You also used the following techniques:

  • Combining shapes to create a new shape with shared properties
  • Inserting sections into a ShapeSheet
  • Creating a user-defined variable for storing additional shape data
  • Adding a command to the context menu
  • Using formulas for changing the value of a user-defined variable
  • Modifying the NoShow property of a shape’s geometry

With these techniques you can create other toggle-able shapes (What about a checkbox that can be checked? Or a folder icon that can appear in both the opened and closed state?) and you can use these techniques to create shapes with other behaviors.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Wireframe Annotations in Visio : Special Deliverable #11

“Remember in the first Matrix movie, at the very end when Neo started knowing he was The One? He looked around saw streams of numbers, the building blocks of the Matrix – at once a terrifying and awe-inspiring view of the world.”

Few information architects tap the full power of Visio. For the IA, Visio is a means to an end—a mechanism for capturing some ideas on paper before they are transformed into graphics, HTML, and code. Even so, the information architecture community can take advantage of some of Visio’s advanced features to make developing documentation more efficient.

This article introduces several techniques in the context of wireframe annotations. At the conclusion, you will have learned to create an annotation widget, and you will also have learned several facets of Visio you may not have been aware of.

The widget consists of two parts: the annotation shape, which points out the feature of the wireframe; and the footnote shape, which contains the reference for the annotation.

Creating the annotation widget requires three steps:

  1. Creating the annotation shape and the footnote shape
  2. Establishing a relationship between the annotation and the footnote
  3. Changing the behavior of the annotation

Step 1: Creating the annotation shape and the footnote shape
In this step, you create two shapes: a basic circle for the footnote and a circle with a pointer for the annotation. Although the shapes themselves are basic, we’ll use some advanced shape-operation techniques.

  1. Draw a circle that’s .25” in diameter and make a copy. One circle will be the annotation shape and one will be the footnote shape.


  2. Draw a square that’s .25” on a side and rotate it 45 degrees. (You can do this by opening the Size & Position Window from the View menu. Select the square and type 45 into the Angle field.)


  3. Position the square directly over the circle that will be the footnote shape. Make sure both the circle and the square are selected. Choose Fragment from the Shape > Operations menu. This operation breaks up the two shapes into component pieces.



  4. Delete three of the square’s corners. Select the circle and the fourth corner and choose Combine from the Shape > Operations menu.



You’re done with step 1. Now you should have two shapes on the page: a plain circle and a circle with a pointer on it.

Step 2: Establishing a relationship between the annotation and the footnote

In this step, you’ll teach your footnote shape to mimic whatever text you type in the annotation shape. This way, if you renumber your annotations, the footnotes will automatically renumber. This step introduces a few techniques: naming shapes, inserting fields, using Visio formulas, and using Visio shape references in formulas.

  1. To establish a relationship between these two shapes, you need to name them. Naming shapes is easy enough: select the shape and then choose Special… from the Format menu. (How the name of a shape relates to Format > Special is beyond me, but the nuances of Visio are for another discussion.) The Special dialog box includes a field for a name. Name the footnote shape “footnote” and the annotation shape “annotation.” This way, there can be no confusion.
  2. Now, select the footnote shape and choose Field… from the Insert menu. The Field Chooser appears.
  3. Click Custom Formula in the left-hand column. The Custom Formula field at the bottom of the dialog will become active. The field already has an equal sign, which lets Visio know that a formula is coming up.
  4. AFTER the equal sign, enter the following formula:

    SHAPETEXT(annotation!TheText)

  5. Click OK.

The SHAPETEXT function returns the text of the referenced shape. In the function’s arguments, we have specified the name of the shape (“annotation”) and the reference to the shape’s text property (“!TheText”). This seems redundant, but the SHAPETEXT function requires it.

You’re done with Step 2. Now you can type a number into the annotation shape and it will appear in the footnote shape as well. For example, select the annotation shape and type “4”. The “4” will appear in both shapes. Be sure you type the number into the annotation shape (the one with the pointer). If you type it into the footnote shape, you will lose the Custom Formula reference and will have to re-enter it.
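The relationship can be modeled as a text property that is either a stored value or a live formula. This Python sketch is an analogy, not Visio’s actual object model; the class and attribute names are invented:

```python
class Shape:
    """Toy stand-in for a Visio shape with a text property."""

    def __init__(self, name):
        self.name = name
        self._text = ""
        self._formula = None  # a callable, like a Custom Formula field

    @property
    def text(self):
        # A live formula wins over stored text, as an inserted field does.
        return self._formula() if self._formula else self._text

    @text.setter
    def text(self, value):
        # Typing directly into the shape clobbers the formula: the same
        # hazard the article warns about with the footnote shape.
        self._formula = None
        self._text = value

annotation = Shape("annotation")
footnote = Shape("footnote")
# Analogous to the Custom Formula SHAPETEXT(annotation!TheText)
footnote._formula = lambda: annotation.text

annotation.text = "4"  # the footnote now mirrors the annotation
```

The model also shows why typing into the footnote is destructive: the setter discards the formula, severing the link permanently.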

What is a ShapeSheet?
Remember in the first Matrix movie, at the very end, when Neo started knowing he was The One? He looked around and saw streams of numbers, the building blocks of the Matrix – at once a terrifying and awe-inspiring view of the world. A ShapeSheet is to a Visio drawing as Neo’s view is to the Matrix: the numbers behind the façade. (Coincidentally, I’m terrified and awestruck by Visio.)

Any given shape in Visio is described by a collection of formulas. These formulas are captured on the ShapeSheet. When you adjust a shape – change its height or format the text, for example – you are actually changing the formulas behind the scenes. In some cases, it makes more sense to adjust the formulas themselves, and tapping the full extent of Visio’s power means becoming familiar with ShapeSheets.

Step 3: Changing the behavior of the annotation

The shapes as they stand right now are pretty useful, and will make the internal bookkeeping of wireframe annotations a little easier. This last step will make the annotation shape more elegant. This step introduces several techniques related to ShapeSheets, the backbone of any Visio drawing.

There are three adjustments you need to make to the annotation shape: the text block, the shape rotation, and the orientation of the text.

To adjust the text block, select the shape and then choose the Text Block Tool. You may have to click the arrow next to the Text Tool to find the Text Block Tool. The Text Block Tool allows you to change where text appears relative to the shape. By default, the text block occupies the entire rectangle of the shape.

With the shape selected with the Text Block Tool, change the text block to occupy only the circle, dragging the right-hand handle of the rectangle to form a square over the circle. Now when you type text in the annotation shape, it will appear centered on the circle.

To adjust the rotation, select the shape and then choose the Rotation Tool. Notice that the center point is not centered on the circle. This is because the default formula for the rotation point is the geometric center of the entire shape. Move the pointer over the center rotation point. The pointer will change to a small circle.

Click and drag the rotation point to the center of the circle.

Now test the rotation by grabbing one of the rotation handles (the green circles at the corners). The shape will rotate around the center of the circle.

Notice that the text rotates with the shape. By default, the rotation of the text block matches the rotation of the shape. To correct the orientation of the text, we need to adjust the angle of the text block, forcing it to stay absolutely zero regardless of the shape’s rotation.

To adjust the text orientation, you need to make a change in the ShapeSheet. First, select the annotation shape and then choose Show ShapeSheet from the Window menu. The screen will split, with one part showing the original drawing and the other part displaying the ShapeSheet of the annotation shape. A ShapeSheet is made up of sections. Each section addresses a different aspect of the shape and appears as a table made up of cells.

The cell that controls the rotation of the text is in the Text Transform section. Scroll through the ShapeSheet until you find this section. If you cannot find the section, you may need to add it to the ShapeSheet: right-click in the ShapeSheet and select Insert Section… from the context menu. Be sure to right-click in the dark gray area. Put a check next to Text Transform and click OK. (If Text Transform is grayed out, that means it’s already in the ShapeSheet and you just need to have your eyes checked. This happens to me frequently. Very frequently.)

In the Text Transform section is a cell called TxtAngle. At this point it is set to 0 degrees. This may seem right, but that number is not an absolute measurement. Instead, it is measured relative to the angle of the overall shape. Therefore, the appropriate formula for this cell is:

    =-Angle

(Don’t forget the minus sign!) Angle is the name of another cell, the cell that defines the angle of the overall shape. Because the TxtAngle cell is measured relative to the angle of the shape, setting it to the negated shape angle keeps the text at an absolute angle of zero.

You can now close the ShapeSheet and rotate the annotation till the cows come home. The text will remain upright and readable. The cows, I’m afraid, may not.
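The arithmetic behind the fix is simple: the text’s absolute angle is the shape’s angle plus the relative TxtAngle, so pinning TxtAngle to the negated shape angle keeps the sum at zero. A quick check in Python:

```python
def absolute_text_angle(shape_angle, txt_angle):
    # TxtAngle is measured relative to the shape, so the text's absolute
    # angle is the sum of the two.
    return shape_angle + txt_angle

# With TxtAngle pinned to -Angle, the text stays upright at any rotation.
upright = all(absolute_text_angle(a, -a) == 0 for a in (0, 45, 90, 210))
```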


Exercise for the reader
You may want to lock the text of the footnote shape so you don’t accidentally overwrite the field that automatically matches the annotation text. Although Visio has a dialog box for protecting different aspects of a shape (under Format > Protection…), the shape text is not one of those aspects. There is a way to do this using ShapeSheets.

This little exercise gives you a handy tool that allows you to place annotations without losing track of your numbering scheme. The tool also allows you to rotate the annotation pointer without having to adjust the text every time you turn it. Having built this tool, you now have some experience with Visio shape operations, formulas, and ShapeSheets.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Representing Content and Data in Wireframes: Special Deliverable #10

“Information architects sometimes do not repeat data but invent more of it.”Visio practically groaned as I opened the wireframes for my current project, which were in something like the twentieth revision. It was the usual story—poorly defined requirements and business rules—and my project folder was fast becoming the poster child for Feature Creep Flu. To hang all the versions of the wireframes up side-by-side would reveal something like the storyboard for Memento.

Anyway, as meticulous as the project manager and I were in going through the wireframes to ensure they looked “clean,” things are always dirtier in the cold light of day (read: during the presentation to the client). Although it went well enough, the hang-ups in this meeting were over the examples used in the wireframes, requiring additional explanations to clarify functionality. Until that moment, I had not given much thought to the kinds of sample data and content I used in the wireframes.

Typically, sample data and content in wireframes is repetitive and invented:
Sample data and content

During my presentation, a table similar to this one stopped the client in his tracks. Was it a list of the same address over and over? Given the circumstances—and that the requirements had changed so much—this was not an unreasonable question.

Information architects sometimes do not repeat data but invent more of it, so the address book above might also contain entries for Jane Doe, Homer Simpson, and Mickey Mouse. Invented data or content is essentially meaningless, representing an archetype of the kinds of information expected to appear in different areas.

Using repetitive and/or invented data, however, can confuse and mislead stakeholders in five different ways.

  • Misrepresent rules and behavior
  • Misrepresent what the user sees
  • Shift focus from the design
  • Misrepresent the data’s impact on the page layout
  • Misrepresent the scope of the fields

To illustrate all these, we’ll look at one of the most data-rich screens available on the Web: the shopping cart.
Data rich screen in a shopping cart

  1. Misrepresenting rules and behavior:
    In a word, the math in our shopping cart doesn’t add up.
  2. Misrepresenting what the user sees:
    This order has two destinations and users can click the second destination to see what’s going there. Because the dummy address is repeated, however, it does not accurately illustrate what the user will see.
  3. Shifting focus from the design:
    If dummy data ends up being inaccurate (“Hey, widgets don’t come in black!”), stakeholders can become more focused on the data than on the architecture.
  4. Misrepresenting data’s impact on page layout:
    Using exclusively short examples does not accurately show the designer what he or she will have to accommodate in the page layout. Frequently this leads to some dummy data like, “ThisIsAVeryLongNameToShowWhatLongNamesLookLike.” Which is just weird.
  5. Misrepresenting field scope:
    An address field can take so many different forms (apartment numbers, international addresses, ZIP+4, etc.) that no dummy data can accurately capture all the variations.

No doubt each of these problems can be solved individually: use numbers that add up, use two different dummy addresses, etc. But coming up with a comprehensive, unified strategy to represent data and content can make wireframes easier to create and present. That is, the examples selected for a wireframe should tell a single, complete story.

The Universe of Sample Data
A cursory review of some wireframes out there reveals five different kinds of sample data and content, listed here from the most concrete to the most abstract:

Actual    7220 Wisconsin Ave, Suite 300, Bethesda, MD 20814
Dummy     123 Main Street, Anytown, ST 22222
Labeled   [Address1-30] [City-30] [ZIP-5] (numbers indicate field lengths)
Symbolic  for dates, MM/DD/YY or something equivalent
Greek     Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Morbi.

No one kind of sample data is better than any other kind. Indeed, like most things, it depends. In this case, the type of information, the disposition of the client, and the amount of detail required would all influence how examples are displayed.
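To make the distinctions concrete, here is a minimal Python sketch (my own illustration, not from the article) that renders a hypothetical FirstName field in each of the five kinds; the field name, length, and example values are all assumptions:

```python
def sample(kind, name="FirstName", length=20):
    """Render one field as each of the five kinds of sample data."""
    if kind == "actual":
        return "Jane"                       # real data pulled from the client
    if kind == "dummy":
        return "Jane Doe"                   # invented but realistic
    if kind == "labeled":
        return f"[{name}-{length}]"         # names the field and its length
    if kind == "symbolic":
        return "X" * length                 # shows the "shape" of the data
    if kind == "greek":
        return "Lorem ipsum dolor sit amet"[:length]
    raise ValueError(f"unknown kind: {kind}")

print(sample("labeled"))    # [FirstName-20]
print(sample("symbolic"))   # a run of 20 X's
```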

Some advantages and disadvantages to each kind of sample data:

Actual
  Advantages: Recognizable by stakeholders. Offers the most accurate depiction of what users might see.
  Disadvantages: May be difficult to get enough actual data to populate all areas. May not address all possible variations of data.

Dummy
  Advantages: Easy to generate examples. Closely resembles what users might actually see.
  Disadvantages: May be confused with actual data. May not address all possible variations of data.

Labeled
  Advantages: Describes the content of the data.
  Disadvantages: May be difficult to explain to stakeholders. Different data may be represented by the same variable names.

Symbolic
  Advantages: Can show the “shape” of the data.
  Disadvantages: Could clutter the wireframe. May be difficult to distinguish between different types of data.

Greek
  Advantages: Easy to generate examples. Avoids distracting from the interface.
  Disadvantages: Represents prose well, but may not represent other kinds of data effectively.

With the universe of sample data codified, information architects need only a mechanism for deciding which type is best for different applications. A hard-and-fast formula is perhaps not appropriate, but I’ve devised four strategies for typical documentation problems.

Prose
Greek text is most appropriate for representing long blocks of prose. Where a description of the content is necessary, I justify and dim the greek while superimposing copy direction over it.

Greek text representing long blocks of prose

Tables and Lists
Because the data in tables and lists tend to include repetition of type, using dummy data can confuse stakeholders if they take this to mean that the real content (not just the type) is repeated. Using actual data in a table may help, but comes with all the disadvantages of using actual content (finding it, ensuring it represents all variations, etc.). After some experimentation, I decided to use exclusively labeled data:

Labeled data in a table

Annotations must accompany such a table to indicate the rules for populating it.

Dates
If a Web application depends on dates, the wireframes should use actual dates and employ them consistently. The project I mentioned at the beginning of this article was a scheduling application. As the wireframes evolved over several weeks, the date examples were not applied consistently. Some screens showed sample dates from May and others from August, which made narrating the scenarios very difficult.

To approach this issue on my final round of revisions, I first listed all possible scenarios (schedule new event, change existing event, etc.) and then identified key milestones (first login, first scheduled event, subsequent login, etc.). With these dates defined up front, the wireframes told a more coherent story.

Date data present an additional problem since they can appear in several formats. Wireframes can address this problem by specifying a format on a cover sheet. Symbolic sample data is frequently useful for specifying date content. The symbol should match the format:

Sample Date Appropriate Symbol
7/26/04 M/D/YY (the single M and D specify using one digit where possible)
07/26/2004 MM/DD/YYYY
Jul 26, 2004 MMM DD, YYYY (the three Ms indicate using the three-letter month abbreviation)
July 26, 2004 MMMM DD, YYYY (the four Ms specify using the full month name)
Monday, July 26, 2004 DDDD, MMMM DD, YYYY (the four Ds BEFORE the month specify spelling out the name of the day)
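These symbols map naturally onto date-formatting codes in most languages. Here is a hedged Python sketch with an assumed mapping to strftime codes; the single-digit M/D/YY form is omitted because unpadded strftime codes are platform-dependent:

```python
from datetime import date

# Assumed mapping from the article's date symbols to Python strftime codes.
SYMBOL_TO_STRFTIME = {
    "MM/DD/YYYY":          "%m/%d/%Y",
    "MMM DD, YYYY":        "%b %d, %Y",      # three-letter month abbreviation
    "MMMM DD, YYYY":       "%B %d, %Y",      # full month name
    "DDDD, MMMM DD, YYYY": "%A, %B %d, %Y",  # spelled-out day of week
}

def render(symbol, d):
    """Render date d according to one of the symbolic formats above."""
    return d.strftime(SYMBOL_TO_STRFTIME[symbol])

print(render("MMMM DD, YYYY", date(2004, 7, 26)))  # July 26, 2004
```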

Unique and Non-Unique Data
Using labeled sample data presents a challenge because a variable name can represent more than one piece of information. For example, in an address book application, [FirstName] could represent the name of the address book owner or the name of someone in the address book. There are two strategies for dealing with this situation:

  1. For data that is unique, always use actual or dummy data. In the address book example, the first name of the owner would always be rendered as “Jane,” for example. Non-unique data could then use the labeled format (e.g., [FirstName-20]) without conflicting with unique data.
  2. Using the labeled data format, visually distinguish unique and non-unique data. For example, when referring to a specific first name, the field could appear with braces instead of brackets: {FirstName-20}.
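The second strategy can be sketched in a few lines of Python; the field name and length are illustrative:

```python
def label(name, length, unique=False):
    """Format a labeled field: braces for unique data, brackets for non-unique."""
    open_ch, close_ch = ("{", "}") if unique else ("[", "]")
    return f"{open_ch}{name}-{length}{close_ch}"

print(label("FirstName", 20, unique=True))  # {FirstName-20}
print(label("FirstName", 20))               # [FirstName-20]
```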

Sample data can make or break a wireframe, whose purpose is typically to illustrate architecture and interaction. Poorly selected sample data can end up clouding the wireframe or distracting stakeholders from its purpose. By codifying the types of sample content they employ in their deliverables, information architects can create a coherent narrative to illustrate a website’s functionality.

These days, rather than try to think of sample data, I use the labeled format almost exclusively. (Combined with Visio’s stencils, this makes keeping the wireframes up-to-date very easy.) If, later in the process, it becomes appropriate to include more concrete sample data, it’s easy enough for me to go in and change [FirstName-20] to Jane or John.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but he wishes he were.

Dan Brown has been practicing information architecture and user experience design since 1994. Through his work, he has improved enterprise communications for Fortune 500 clients, including US Airways, Fannie Mae, First USA, British Telecom, Special Olympics, AOL, and the World Bank. Dan has taught classes at Duke, Georgetown, and American Universities and has written articles for the CHI Bulletin, Interactive Television Today, and Boxes and Arrows.

The Information Architecture of Email


“Gmail revealed to me my email behavior — something I hadn’t previously given much thought.”

At least several times a year, I try (I really do) to set up folders to sort my email. I am an information architect, after all. Setting up folders is, according to my job description, my area of expertise. Actually, I suck at setting up folders for email.

Email is hard to sort into a strict taxonomy because:

  1. Most messages could live in more than one category.
  2. Personal and business priorities may shift several times a year, rendering email taxonomies obsolete.

Gmail, Google’s foray into the free email market, attempts to address these inherent limitations. Typical of Google, they avoided putting out just another email service. Put aside the controversy about the privacy invasion, and Google’s email interface is remarkably innovative. (My exposure to email handlers is limited to Outlook, Outlook Express, Entourage, and various online email services. Gmail’s approach may be old news to you.)

Gmail revealed to me my email behavior — something I hadn’t previously given much thought. By making certain things easier (and others more difficult), Gmail showed me how “typical” email applications weren’t necessarily designed according to how I used them.

Messages in threads

I’ve already mentioned categorizing emails, a behavior most email programs expect users to do. Instead, Gmail bundles messages together in threads. A reply to one of your messages, therefore, does not appear as a separate message in the long queue of messages in your inbox. Instead, it simply is appended to the end of the thread. In the inbox, the thread is highlighted in bold to show that there is a new message.

By keeping all the messages together in a single thread, it’s easier to follow a conversation. More importantly, it doesn’t bog down the inbox with lots of messages with the same subject line.

Google introduced some nice interface elements in both the inbox and in the message view to make it easy for users to adapt to this unusual approach.

In the inbox, threads with new messages get promoted to the top and the total number of messages in the thread is indicated near the subject line. One of my favorite features is that the “From” field indicates the last three people to contribute to a thread. So if I’m emailing with my wife on something related to our home renovation, the From field would show “me, Sarah (7),” showing that we’ve exchanged seven messages.

Screenshot of thread in the inbox

In the case of emailing with friends throughout the day, the From field might show “Nate, James, Eric (5),” showing that of the five messages exchanged, Nate, James, and Eric were the last three contributors. (It may be worthwhile to have some indicator of whether they were the ONLY contributors to a thread or if there were more.)
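As a rough guess at the display logic (my own sketch, not Gmail’s actual algorithm), the From field described above might be built from a thread’s chronological list of senders like this:

```python
def from_field(senders):
    """senders: chronological list of sender names, one per message."""
    recent = []
    for sender in reversed(senders):   # walk newest to oldest
        if sender not in recent:
            recent.append(sender)
        if len(recent) == 3:           # keep only the last three contributors
            break
    recent.reverse()                   # restore chronological order
    names = ", ".join(recent)
    count = len(senders)
    return f"{names} ({count})" if count > 1 else names

print(from_field(["Tom", "Nate", "James", "Eric", "Eric"]))  # Nate, James, Eric (5)
```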

When displaying the thread itself, Gmail shows the messages in the order they were received, with the oldest one at the top. This may seem inconvenient, but Gmail hides all but the headers of the older emails, so the newest email is easily above the fold. The header includes the name of the person who posted the thread, a little teaser from the message, and the date it was posted. Clicking on the header of any given message reveals it without leaving the screen.

In a sense, threading messages is like putting them in folders, with each folder being a different thread. Messages, therefore, are pre-categorized, removing that burden from the user. In my case, this is an enormous relief, since I’m not much of a Message Categorizer. (There is no way, according to Gmail Help, to ungroup a message from its thread. But we’ll see shortly that it is unnecessary.)

The reply function appears at the bottom of the page, which could present problems if you’re looking at an especially long message. (The problems with unsnipped quoted messages become readily apparent.) With short messages, this doesn’t present a problem because the reply function is sufficiently above the fold and scrolling is unnecessary.

The annoyance of occasionally having to scroll to reply revealed that I’m equally likely to hold off replying to a message as I am replying to it right away. Sometimes, I’ll wait until later in the day or week to have time to reply to it. For a long message on Gmail, this means scrolling down to the bottom of the page to reach the reply function on a message I’ve already read.

Archive it and forget it

Another behavior I’m loath to admit is that I’m a packrat, and email servers and I have never gotten along because of it. At least once a week, I get scolded that I’ve used up too much space in my inbox. My Yahoo! Mail account goes weeks at a time with a warning message at the top of every page, a bright red bar shouting “98%” at me.

Gmail was designed for us packrats. Besides giving users a gigabyte of storage, Google introduced an “archive” feature with their online email. Although Gmail allows users to delete messages, it instead encourages them to archive messages. In fact, the two most prominent buttons on the inbox page are Archive and Report Spam. The interface for viewing a thread of messages adds “Back to Inbox” to this list, but nothing else. Putting an item in the trash is a task buried in a drop-down menu.

Archiving is Google’s answer to inbox management. Typical email programs expect users to manage their inboxes by removing messages to folders. Every time I need to categorize a message, however, I need to make a decision about where it goes. To paraphrase Steve Krug, “Don’t make me make a decision.” Call me lazy (you wouldn’t be the first), but I shouldn’t have to make a decision every time I get an email. It’s a lot of brain power for not a lot of value. Just because I put something somewhere doesn’t make it easier to retrieve later.

Because there are no folders, Gmail’s inbox could easily become unwieldy, but a message in Gmail exists in one of two places: inbox or archive. For those threads that are no longer active, but you want to hang onto, you can archive them. Putting a thread in the archive simply puts it in storage and removes it from the inbox. If you get another message in an archived thread, the thread appears again in the inbox.

By default, Gmail shows only the inbox, but the “All Mail” link on the left hand bar reveals every thread currently stored, even those you’ve started but to which you haven’t gotten a response.

By archiving messages, you might think they’re essentially gone. You might as well have trashed them. After all, how easy is it to find something in your attic if you haven’t put it in a labeled box? Google, however, includes a handful of powerful features (including its search engine) that render the email attic as neat and tidy as your local library.

Searching, stars, and spam

Google’s familiar search box appears at the top of every page, though the button “I’m feeling lucky” is replaced by “Search Mail.” You can enter any search term and Gmail will return any threads that include the search terms. Clicking on any of the threads from the search results will reveal the thread. Those messages in the thread that contain the search term are expanded in the thread view, and the search term itself is highlighted in those messages.

Gmail’s search engine is, of course, fast. While a search on my wife’s name in Gmail took less than a second, a comparable search in Outlook Web Access took nearly 15 seconds. Searching on multiple terms leads to similar results. When it comes to email, I prefer searching to browsing, especially when Google is under the hood.

After explaining these basics to my friend Eric, he indicated that he was wary because he uses a lot of filters and subscribes to a lot of mailing lists that are automatically sorted into separate folders. He had a good point, so I looked at some of the other email management features offered by Gmail. Despite the lack of folders, Gmail does give users different ways of marking and categorizing messages.

The method that requires the least amount of thought is to mark the message with a star. In other email applications, this would be like setting a flag. The purpose of starring a message is to give it some priority, and making it easily findable. A white star appears next to every message, whether in the inbox or in viewing the thread. Clicking on the white star turns it yellow and adds a star to the message. A link on the left bar allows users to see all the messages they’ve starred.

For those who feel they would miss folders, Gmail offers labels, a way of categorizing messages. Labeled messages do not disappear from the inbox unless they’ve been archived. Instead, a message’s label appears adjacent to the subject line. A list of the user’s labels also appears in the left bar. Clicking on one of the label names shows all the messages with that label.

Like most email applications, Gmail has filtering functionality, allowing users to apply rules to messages as they arrive. There are four actions a filter can do to a message: trash it, archive it, star it, or label it. Once users establish filtering criteria, they can select any number of these actions.

To test Gmail’s ability to deal with mailing lists, I subscribed to a new mailing list (for people who play the mandolin, a new hobby of mine) and applied a filter. New messages that arrive from coMando are labeled and automatically archived. Mailing list messages, therefore, do not clutter my inbox. At the same time, they are automatically grouped together under the correct label. Clicking on the “coMando” label on the left bar allows me to see all the mailing list messages.

I was pleased to see that when new mailing list messages arrived, the label name appeared in boldface to show that there were new messages, even though they were sent straight to my archive and not in the inbox. The number of new messages also appeared in parentheses next to the label name. The label, in other words, behaved as folders do in other email applications.
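The four filter actions can be sketched against an assumed message structure; nothing here reflects Gmail’s real internals, and the addresses are made up:

```python
def apply_filter(message, matches, actions):
    """Apply filter actions (trash, archive, star, label) when criteria match."""
    if not matches(message):
        return message
    if actions.get("trash"):
        message["trashed"] = True
    if actions.get("archive"):
        message["in_inbox"] = False    # archived mail leaves the inbox
    if actions.get("star"):
        message["starred"] = True
    if actions.get("label"):
        message.setdefault("labels", []).append(actions["label"])
    return message

# The mailing-list setup described above: archive and label in one rule.
msg = {"sender": "list@comando.example.org", "in_inbox": True}
is_list_mail = lambda m: m["sender"].startswith("list@")
apply_filter(msg, is_list_mail, {"archive": True, "label": "coMando"})
print(msg["in_inbox"], msg["labels"])  # False ['coMando']
```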

No review of a new email application would be complete without looking at its spam-handling capabilities. Despite having Gmail for only a month, I’m already receiving spam—30 messages in the last week. Gmail’s left bar has a “spam” link to show you all the email you’ve received that has automatically been categorized as spam. In the last several days, none of my personal email was mistakenly categorized as unsolicited mail. On the other hand, at the beginning of the week, I received three unsolicited emails in my inbox. Since then, however, none has made it to my inbox.

Although it’s a little disconcerting that I’m receiving unsolicited messages while barely anyone has my new email address, Gmail’s ability to handle spam seems as good as any other email application.


As Gmail comes out of beta, Google may find itself with a product that users are slow to adopt. People may find the subtle change in the email paradigm more dramatic than Google anticipated. Perhaps this speaks to the dangers of bad design: a bad product can just as easily become entrenched as rejected, such that when a better one comes along, users are reluctant to adopt it.

It may be difficult to think of email applications as “bad design,” and before I started using Gmail it never occurred to me that they were. On the other hand, Google’s different approach to email has led to some stark revelations about my email behavior. At the most basic level, managing email — an activity whose necessity rates somewhere between scheduled car maintenance and eating — requires too much thinking under current models. Users may be pleased to have to “think less.”

The paradigm shift, however, will be the least of Google’s problems. With its search engine advertising practices under constant scrutiny, Google faces myriad new issues by attaching targeted advertisements to emails, potentially a gross invasion of privacy. At the same time, the advertisements for mandolin dealers and instructors that come attached to posts to the mandolin mailing list are almost as valuable as the posts themselves.


The Visual Vocabulary Three Years Later: An Interview with Jesse James Garrett


Special Deliverable #9

In October 2000, Jesse James Garrett introduced a site architecture documentation standard called the Visual Vocabulary. Since then, it has become widely adopted among information architects and user experience professionals. The Visual Vocabulary is a simple set of shapes for documenting site architectures. In conceiving the vocabulary, Jesse sought to create a system that was “tool-independent”—that is, readily adaptable to any diagramming software as well as any medium (pen and paper, dry-erase, etc.). The vocabulary was also designed to be portable, fitting easily on letter-sized paper for convenient printing.

Despite the unassuming approach Jesse took in promoting the vocabulary—he posted it to his website—it has earned a reputation as a useful tool for the practicing information architect. So useful, in fact, that it has been incorporated as a template in several diagramming software packages, most notably OmniGraffle. Jesse has evolved the vocabulary over time, welcoming contributions and extensions from people all over the world. Through the work of others, the vocabulary has been translated into seven languages beyond English and is summarized in a cheat sheet.

More information about the Visual Vocabulary may be found at Jesse’s website.

B&A: How has the Visual Vocabulary changed in the last three years?

JJG: It hasn’t changed as much as I expected. When I released the vocabulary in 2000, it still seemed to be in flux—some of the elements were fairly new additions, and I figured it was likely that there would be more in short order. But, in retrospect, the vocabulary was actually more mature than I realized at the time.

B&A: What element or innovation of the vocabulary are you most proud of?

JJG: I think my favorite aspect of the system is the emphasis on practicality throughout its design. At that time, the mainstream school of thought held that any respectable information architect should be producing color deliverables in a professional diagramming or drawing application, and if you want to do any serious, large-scale architecture work, for God’s sake go get yourself a plotter. I saw the resources my clients tended to have, and went in the opposite direction: I wanted to enable anybody with a copy of PowerPoint and a cheap black-and-white inkjet to solve the same kinds of problems.

B&A: Why do you think no other IA documentation standards have emerged in the last three years?

JJG: I suspect that there are a lot of people out there who have cooked up their own ways to express complex (and not so complex!) architectural concepts. They just can’t publish them without making their bosses angry. So I think there are a lot of standards in use out there—they just aren’t public.

B&A: There are several other kinds of IA and UX documents—wireframes, content inventories, personas, etc.—do you think there’s room in the industry for standards for these?

JJG: I think there’s room, but there isn’t necessarily a strong need. In my work, at least, I haven’t encountered a case where I thought a deliverable could be substantially improved by the development of a universal standard.

I’m not dogmatic about the need for standards. Documentation standards only help us to the extent that they enable us to communicate complex concepts without having to invent new means of expression for each new problem. Every once in a while I get email from someone asking me to look at a diagram and tell them if it’s compliant with my system. Although I love seeing examples of what people are doing with my work, I always tell them not to worry about what I think. It doesn’t matter whether your diagrams pass the “JJG validator”—what matters is whether they successfully communicate your ideas to your colleagues.

B&A: What makes the site architecture a deliverable that “could be substantially improved by…a universal standard?”

JJG: Architecture is an abstraction—the information that has to be conveyed is largely conceptual, not concrete like the interface details you might find in a wireframe. So you don’t have the luxury of a straightforward means of representation that you can rely on to be self-evident to your audience. Additionally, the nature of architecture work—describing interrelationships among information and interaction elements—really cries out for visual representation. Having a standard visual way to express those relationships means the architect can spend less time grappling with representing the architecture and more time refining it. Plus, it gives us a common language for sharing our work with our peers, which is important and necessary to the maturation of the discipline.

B&A: Have you found any design problems the Visual Vocabulary cannot represent?

JJG: I haven’t ever encountered a system I couldn’t describe in the vocabulary. Of course, some concepts are harder to draw than others. One attribute of the vocabulary is that the complexity of the representation is proportional to the complexity of the system being described. Simple and straightforward systems have diagrams that are easy to read; systems in which a lot of variables are being juggled and conditions evaluated make for diagrams that take a good amount of attention to create and read.

B&A: What makes the site architecture such an appealing deliverable for clients?

JJG: I often refer to the architecture diagram as a “trophy deliverable”—of everything involved in a project, it’s the one most likely to be pinned up proudly on a manager’s wall. I think there are two reasons it has such a strong appeal. First of all, it’s a visual deliverable. Written documentation just doesn’t have the same visceral impact. Secondly, it’s often the only deliverable that provides a high-level view of the project. Frequently this is the one document that most comprehensively answers the question, “What exactly are we building here?”

B&A: What techniques do you use when presenting site architectures to clients?

JJG: I always take the time to walk them through the architecture. By the time I’m ready to present the architecture, I have a pretty clear idea of what parts they’re likely to approve without a second thought, what parts will require a little careful framing to get them to understand where I’m coming from, and what parts will really need the hard sell. I plan the walk-through accordingly: get them started with easy stuff everybody can agree on, then work them up to the hard questions by showing them around some areas that introduce the more difficult considerations involved in the architecture.

B&A: The last three years have seen the emergence of more complex websites. What effect has this growth had on IA and specifically IA documentation?

JJG: With the proliferation of large-scale, complex, dynamic sites, IA has moved (further, some would say) into the realm of the abstract. Instead of specifying specific links between specific pages, we’re often developing rules by which such links—or even the pages themselves—can be generated automatically. The documentation has had to adapt to that increasing abstraction. I often find myself using the vocabulary to diagram navigational relationships between abstract classes of pages, rather than specific elements with unique URLs.

B&A: Would you consider introducing a new vocabulary specifically geared toward more abstract problems?

JJG: That’s an area worth exploring, for sure. I don’t yet know where those explorations might lead.

B&A: Is there a Visual Vocabulary book in the works?

JJG: Running Adaptive Path doesn’t leave me a lot of time these days to consider writing another book. But suffice to say there’s a good reason there isn’t much detail on the vocabulary in my first book, The Elements of User Experience. I’d want to make sure I took the time to do it right.

B&A: What’s the best feedback you’ve gotten on the Visual Vocabulary?

JJG: A number of people have emailed me with alternate approaches to this or that aspect of the system. It’s great to see the ways in which people have adapted the system to their needs. Sometimes they’re a little anxious about how I might react to their tinkering, but my advice is always: “Do what works.” Take the parts that can help you do your job, don’t worry about the rest. If you have a different way of expressing the same idea that everyone on your team understands, use it.

B&A: Have you seen an example of work where the author used the vocabulary in a way you hadn’t thought of before?

JJG: There are people out there using the vocabulary to document systems many times larger and more complex than anything I had worked on when I was developing it. I had the idea in the back of my mind that the system should be modular and scalable, but I never imagined the sheer complexity of some of the systems people are now maintaining using the vocabulary.

B&A: The Visual Vocabulary has become so entrenched that its templates are shipping with diagramming products. Did you anticipate this response?

JJG: I think I was more surprised than anyone when applications started supporting the vocabulary. I didn’t know about any of these products before they were released. I literally did a double-take the first time I saw the IA stencil in OmniGraffle. It just goes to show you that things have a life of their own once you release them into the world. You never know where they’ll end up.

B&A: What’s the most requested update to the Visual Vocabulary?

JJG: The vocabulary currently treats the page as something of a black box; anything that happens within or between elements of a page can’t be represented in the system. Among people who deal with problems like pages that are dynamically assembled based on certain conditions, there’s a desire to see the vocabulary extended to address this page-level logic. It’s an interesting problem. I’m not convinced that the vocabulary is the right tool to solve it, but I’d like to take a crack at it.

B&A: Can you give an example of this kind of problem?

JJG: Page logic comes into play when you have content or design elements that change depending on conditions. If a navigational element differs based on user type, or based on some other defined condition, the Visual Vocabulary can represent that. But it’s not designed to describe other aspects of the page that might change based on conditions.

B&A: You’ve devised and distributed other tools for UX professionals—the Nine Pillars, The Elements of User Experience. Do all these tools comprise part of a larger whole, or are they meant to be used independently?

JJG: There’s a sense in which the “Nine Pillars” grew out of the Elements, although that path was not as direct as it might seem. The vocabulary developed on a separate track, following the first IA cocktail hour in San Francisco. We did a deliverables show-and-tell, and my diagrams spurred a lot of questions. I started out composing an email answering those questions, and that eventually became the full-blown description of the system I posted to my site. I wish I could say there’s a master plan at work here, but really all I’ve ever done is pursue answers to questions I found interesting.

B&A: What questions are haunting you these days?

JJG: I’m still haunted by the big question I raised in “ia/recon:” What happens at that point when the “miracle occurs,” when we turn our knowledge and intuition about people into information architectures? How can we, as individuals and as a community, develop the skills to make those conceptual leaps?

B&A: How do you think the Visual Vocabulary will change in the next three years?

JJG: I don’t expect the vocabulary itself to change all that much. I would expect it to be joined by other, similar systems for describing other facets of a user experience solution. The vocabulary was never meant to stand alone anyway—as central as architecture diagrams are to my work, I always considered the Visual Vocabulary one particularly handy tool in my toolkit. I look forward to having more!

Dan Brown has been practicing information architecture and user experience design since 1994. Through his work, he has improved enterprise communications for Fortune 500 clients, including US Airways, Fannie Mae, First USA, British Telecom, Special Olympics, AOL, and the World Bank. Dan has taught classes at Duke, Georgetown, and American Universities and has written articles for the CHI Bulletin, Interactive Television Today, and Boxes and Arrows, an online magazine dedicated to information architecture. In March 2002, Dan participated in a panel discussion on the creation of information architecture deliverables at the annual IA Summit in Baltimore. He also presented a poster entitled, “Where the Wireframes Are: The Use and Abuse of Page Layouts in the Practice of Information Architecture.” Currently, Dan leads the Information Design and Content Management group within the office of e-Government for the Transportation Security Administration, a federal agency dedicated to protecting freedom of movement in the US.

Deliverables and Methods: Special Deliverable #8

“User experience tasks and activities were aimed at answering these questions: Who is the user? How do they interact with the system? Does the system work?”

Toward a universal methodology
When I was at the consultancy-that-shall-not-be-named, I worked with a talented group of user experience consultants as part of a multi-office initiative to establish the user experience practice. The purpose of the practice was to define standards and tools that could be employed across the firm. The standards and tools had to be flexible to accommodate the range of approaches resulting from the Frankensteinian merger of offices from around the world. At the same time, we wanted a product that could be proprietary to the firm.

In our efforts to define a methodology, we decided to boil our process down to three essential questions. User experience tasks and activities were aimed at answering these questions:

  • Who is the user?
  • How do they interact with the system?
  • Does the system work?

These questions gave us a framework that all offices could use to map their methodologies. Those offices that employed a “Discovery” phase could say that those activities mapped to the “Who is the user?” question. Offices using the Rational Unified Process could correlate those activities to these questions. We imagined that these questions could be asked in any order, in case a project called for a quick interaction design or for testing the usability of a legacy system. We imagined that the questions could be asked iteratively, to form increasingly lucid responses.

In hindsight, this approach appears too simple. Indeed, with the growth of the user experience field, I do not think our questions would have scaled to accommodate the complexity of either large systems or long-term projects. Either way, however, this approach allowed us to place our deliverables in context. Ultimately, a deliverable or document is only as valuable as the activity it supports.

To date this column has focused on how to make deliverables more effective, either through their content or through the tools used to create them. For this issue, I would like to explore the relationship between deliverables and methodology. Unfortunately, this calls for a definition of IA methodology, a task that may rival the definition of IA itself as the hardest question in our field.

To define methodology, I’ll look at activities and issues. These two components are not mutually exclusive. Activities describe what information architects do and issues are what they do it with.

The two activities of IA method
I have always thought of methodology, in these circles, as consisting of two activities: understanding the problem and solving the problem. Most “creative” methodologies are documented in this way, with a series of steps or phases leading up to a conception of the problem (creative brief, for example), followed by a series of steps or phases leading to the creation and implementation of a solution (one part of which may be a style guide).

For example, in the “old days” a design firm would be approached by a client who wanted a commerce-enabled website to sell patriotic ice cream flavors. (Alas, strange consumer goods abound in times of war.) This is a general statement of the problem. The firm would spend days or weeks (in the government, months) to elaborate on the problem: everything from the kinds of ice cream to the expected sales to the legal implications of potentially shipping ice cream across the country.

Ultimately, all this information contributes to the team’s overall understanding of the problem and sets the bar for the final product. Transitioning to problem-solving activities, the firm must make a series of decisions: the look of the homepage, the information collected at checkout, the way the website database hooks into the fulfillment vendor’s database. Each design decision is evaluated against the problem statement, and the firm must convince the client that the decisions they made effectively solve the problem. Ultimately, the distinction between understanding the problem and solving the problem comes down to the difference between What and How.

If you haven’t yet had a What and How conversation with any of your clients, it goes a little something like this:

Client: I want tabs across the top of my homepage, like my favorite site, [fill in high-profile ecommerce site here].

You: We can look at using tabs, but we first need to establish the main purpose of the site.

Client: Can the tabs be green?

You: Once we figure out the main navigation categories, we can make some decisions about how the page should look. But we can’t even figure out navigation categories until we understand the kinds of information you’d like to make available.

Client: We have a lot of information, but I only want one row of tabs.

You: [After writing down: “has lots of information.”] The issues you’re bringing up will help us describe HOW the site needs to look. We need to first understand WHAT the site needs to do. We can look at the HOW (the design of the pages) only after we establish the WHAT.

For “waterfall” methodologies–those that consist of phases occurring in a linear series–this framework meshes nicely. The first several phases are dedicated to understanding the problem, and the last several phases are dedicated to solving the problem. So-called “iterative” methodologies follow this structure as well, though they are composed of multiple conception-solution cycles. Each cycle tends to be limited in scope or time. Ultimately, having solved enough of the small problems, the iterative methodology will solve the larger problem.

The three issues of IA method
Distinguishing between these two main activities, however, is not enough. For each activity, the information architect must consider several issues. Each person may define these issues differently, but for the sake of simplicity, I’ll use business, users, and content as the primary issues information architects must address. In understanding the problem, the information architect must understand it from these three aspects.

Likewise, a solution must also address these three aspects. Any given system must have a business case, a marketing plan, and a content strategy. While information architects may be responsible only for the last of these, the aspects cannot exist independently of one another. (More often than not, IAs find themselves involved with all three.)

Where deliverables fit in
What I’ve presented is, no doubt, an oversimplification of methodology, but it provides a useful framework for considering deliverables. Any deliverable serves one of two purposes: helping the design team understand the problem, or documenting the solution itself.

In the first category of deliverables, a document may help set the context through business goals or user descriptions; or it may help put a stake in the ground with respect to scope by showing the breadth of the existing system. Documents in the second category capture the decisions made by the information architect–metadata for a content management system, structure of browse navigation, or a task flow for a checkout process.

At the same time, any given deliverable is addressing at least one of three issues: business, users, or content.

                                 Business        Users           Content
Understanding the problem
Solving the problem

Considering deliverables in this framework leads to some interesting insights:

  • There is not necessarily a linear progression left-to-right. The three aspects are mutually exclusive but not independent of one another. Indeed, in order to come to a complete understanding of the business problem, one must also have a complete understanding of the user and content aspects of the problem.
  • There is not necessarily a linear progression up-and-down. In other words, having an understanding of the problem from the user aspect does not immediately imply that a solution from the user aspect will solve the problem.
  • Only a complete understanding of the problem, across all the issues, will lead to a complete solution. A good content solution is dependent on understanding all aspects of the problem.
  • A single deliverable does not necessarily live in one, and only one, box. A single deliverable can capture multiple aspects across a single activity. A sitemap, for example, may indicate how different users can access the information.
  • BUT: a single deliverable should not attempt to both understand and solve the problem.

More on this last point: About a year ago, I did a concept model (a simple sitemap-like document showing the relationships between different concepts) for a client that described different “content types” for their organization. The client had never seen anything like it, nor had they attempted to define content types for the organization before. It’s easy to see how this could be part of the solution, and would fall into the second category of deliverables on the second row of the table above.

On the other hand, the deliverable did not describe how the ultimate solution would behave or look. It did not ask the client to do anything differently with their content. Its value, instead, was to help the team (both consultant and client) get their arms around the scope of content produced by the organization. Sometimes a good understanding of the problem masquerades as a solution.

Mapping deliverables to this methodology framework allows IAs to understand the purpose of their deliverables and clarify their role in evolving towards a solution. Admittedly the distinctions used here to describe methodology may be helpful for clients, but an oversimplification for professional information architects. On the other hand, even if your methodology includes more subtle distinctions in activities and issues, your deliverables must be geared toward getting closer to a solution. The deliverable may be the solution itself or a step in that direction. Understanding how your deliverables fit into the larger picture–by mapping them to a particular activity and issue–will make them more effective communications.
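As a rough illustration of that mapping, here is a short sketch in Python. The deliverable names and their placements in the grid are hypothetical, chosen only to show the activity-and-issue bookkeeping; nothing like this appears in the column itself.

```python
# The two activities and three issues from the framework above.
ACTIVITIES = ("understand", "solve")
ISSUES = ("business", "users", "content")

# Hypothetical deliverables mapped into the framework (illustrative only).
# Each deliverable serves exactly one activity but may span several issues.
deliverables = {
    "business goals brief": ("understand", {"business"}),
    "personas": ("understand", {"users"}),
    "content inventory": ("understand", {"content"}),
    "sitemap": ("solve", {"users", "content"}),
    "checkout task flow": ("solve", {"users"}),
}

def cell(activity, issue):
    """List the deliverables that land in one box of the 2x3 grid."""
    return sorted(
        name
        for name, (act, issues) in deliverables.items()
        if act == activity and issue in issues
    )

# Which deliverables document the solution from the users' aspect?
print(cell("solve", "users"))  # ['checkout task flow', 'sitemap']
```

Laying deliverables out this way makes gaps visible at a glance: an empty box in the grid is a prompt to ask whether that aspect of the problem, or of the solution, has been addressed at all.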


IA Library Quick Reference: Special Deliverable #7

“One look at my bookshelf and most innocent cube-visitors think, ‘This guy really spends a lot of money on books.’”

By now you’ve acquired all the essential IA books for your bookshelf. I like having these books around because they make me feel important, and smarter than I really am. One look at my bookshelf and most innocent cube-visitors think, “This guy really spends a lot of money on books.”

A good professional book, besides making your bookshelf impressive, can:

  1. Offer a good brush-up on the basics
  2. Help put into words ideas you’ve had
  3. Suggest introductions for topics you’re not familiar with
  4. Provide alternate perspectives on topics you are familiar with
  5. Inspire

It’s this last reason that I like having these books around. Even when I’m not working on an information architecture problem, I like picking up Blueprints or Polar Bear and browsing through it. More often than not, I find something that gets my creative juices flowing.

On the other hand, it’s when I’m stuck on an IA problem that I really need these books to help me find a way out of it. And when I’m stuck on a problem, I hardly have time to kick my feet up on my desk and browse idly through. That’s where this issue’s Special Deliverable column comes in. In this column, you’ll find an overview of three IA books from a deliverables point of view. The purpose of this article is not to say whether one book is better than another, or even to comment on the overall quality of the books, but to provide a guide to what kind of deliverables information you can find in each book, and where.

The three books reviewed are:

Information Architecture: Blueprints for the Web. Christina Wodtke. New Riders, New York: 2002.

Information Architecture for the World Wide Web. Louis Rosenfeld and Peter Morville. O’Reilly and Associates, Boston: 2002.

Practical Information Architecture: A Hands-On Approach to Structuring Successful Web Sites. Eric L. Reiss. Addison-Wesley, New York: 2000.

In the course of this article, I will abbreviate these references as Blueprints, Polar Bear, and Practical IA, respectively.

Author’s Disclaimer: Some of my work appears in Blueprints.

The three books represent a variety of attitudes toward deliverables, and these distinctions highlight that the product of information architecture is intangible. No book flat-out states that “professional” or “formal” deliverables are preferable to informal ones. Of the three, however, Blueprints discusses both documents geared toward thinking through problems and documents geared toward communicating ideas to non-IAs. Practical IA offers almost no advice on the “formal” IA deliverable, but does suggest several pen-and-paper methods for thinking through ideas. In contrast, Polar Bear seems much more focused on formal documentation.

Only Polar Bear and Blueprints have full chapters dedicated to documentation. Practical IA does have a chapter entitled “Getting it down on paper,” but the intent of this chapter is not to demonstrate or explain formal deliverables.

Your preference on preparing “formal” or “client-ready” deliverables depends entirely on your working style, and the demands of your client.

All three books define wireframes similarly–as representations of a single screen or page–and acknowledge that they can “cross the line” between information architecture and visual design. Polar Bear has an entire section dedicated to wireframes (pp. 283-289) in its chapter on deliverables (chapter 13, pp. 270-304). Blueprints’ section on wireframes (pp. 284-289) also appears in its chapter on deliverables (chapter 9, pp. 246-290). Practical IA never goes beyond a brief definition in the glossary in the first chapter.

Blueprints dives right in and offers a crash course on building wireframes. The approach is terse, but it includes several detailed examples. Blueprints also shows the final screen design that came out of the sample wireframes. To address the potential conflict with visual designers, Blueprints offers three suggestions. Chapter 10 of Blueprints includes a case study that illustrates how a wireframe fits into the overall IA process.

Polar Bear offers a more detailed look at the definition of wireframes and spins them as architectural tools. By creating these scratch layouts, suggests Polar Bear, the IA can identify structural or navigation issues. Polar Bear also explains that wireframes can vary in fidelity, and presents low, medium, and high fidelity examples. (The examples in Blueprints, however, do a better job illustrating annotated wireframes.) Although Polar Bear does not offer any sort of troubleshooting or basic process for creating wireframes, it does list five guidelines for creating them.

None of the books go into any detail about how to use tools to create wireframes, so if you’re looking for tips on Visio, PowerPoint, or OmniGraffle, you’ll have to look elsewhere.

The table below summarizes the three books’ treatment of wireframes. An XX marks a book as an exceptional source of information on that topic.

WIREFRAMES                       Blueprints              Polar Bear     Practical IA
                                 pp. 284-289             pp. 283-289    p. 13
General Definition               X                       XX             X
Process for creating             X
Examples                         XX                      X
Troubleshooting                  X
Tool how-to
Guidelines                                               X
Pros and Cons                                            X
Shown in context of IA process   X (through case study)  X

Site Maps
What Blueprints calls Site Maps, Polar Bear calls “blueprints,” and Practical IA calls “written outlines.” All three books offer a comprehensive definition of this essential deliverable, though with slight variations from book to book. The practicing IA would do well to look at all three sources when preparing a sitemap.

Blueprints treats sitemaps (pp. 272-283) in its chapter on deliverables. While it spins sitemaps as a method for showing relationships between pages, Blueprints indicates that a sitemap can show other aspects of a website, including whether pages are static or dynamic. Blueprints discusses two main components of the sitemap: the layout and the “visual vocabulary.” For the layout, Blueprints offers four alternatives and provides simple examples of each. For visual vocabulary, Blueprints offers several typical shapes. The section on sitemaps concludes with three examples, meant to show the variation between the approaches of three information architects.

While Blueprints focuses on the layout-vocabulary distinction, Polar Bear distinguishes high-level from detailed blueprints. For high-level sitemaps (pp. 272-278), Polar Bear spells out the process for creating them, showing how the purpose of a high-level sitemap is to explain the abstract concepts that form the foundation of the information architecture. Although it does not provide a step-by-step approach, Polar Bear does walk through examples of high-level and detailed sitemaps (pp. 279-280). Since information architects face so many different scenarios, Polar Bear provides a nice array of examples. Polar Bear advocates simplicity in sitemaps (a point which I would do well to take) and offers strategies for keeping sitemaps in check.

Practical IA begins its section on “written outlines” (pp. 99-100) with: “A surprising amount can be accomplished using an ordinary word processor.” Ultimately, Practical IA advocates using an outline to begin the sitemapping process, but concludes that a diagram is preferable when presenting it.

Once again, none of the books offered any documentation on specific tools.

As controversial as wireframes are, the three books disagree more on the purpose and implementation of sitemaps. Perhaps there is a tacit understanding in the IA community that sitemaps are “owned” by the information architect, which is why the controversy focuses on wireframes. With the community’s tacit agreement on sitemaps, however, comes the unfortunate side effect that none of the books address sitemaps’ pros and cons or typical pitfalls.

SITEMAPS                         Blueprints              Polar Bear     Practical IA
                                 pp. 272-283             pp. 272-280    pp. 99-100
General Definition               X                       X              X
Process for creating                                     X
Examples                         X                       XX             X
Troubleshooting                                          X
Tool how-to
Guidelines                                               X
Pros and Cons                                            X
Shown in context of IA process   X (through case study)  X              X

Content Inventories
Each book has a different take on content inventories, although each uses them to get its arms around the domain of information. In Blueprints, a content inventory is a tool for analyzing each page of the site. Polar Bear, on the other hand, uses content inventories to catalog the “chunks” as content migrates from its original medium into your information architecture. The distinction is subtle, but it significantly alters the spin each book puts on content inventories.

Blueprints calls content inventories the “single most painful job of information architecture.” (Also, a “Sisyphean task,” for those SAT word freaks.) As difficult as Blueprints’ introduction to Content Inventories makes them out to be, its advice for creating one (pp. 267-271) is pragmatic and helpful. The book describes a three-step process for creating content inventories and gives a nice example of a few rows from a content inventory spreadsheet.
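To make the shape of such a spreadsheet concrete, here is a minimal sketch in Python. The column names and rows are hypothetical (riffing on the patriotic ice cream example, purely for illustration), not reproduced from Blueprints.

```python
import csv
import io

# Hypothetical columns; real inventories often also track owner,
# format, last-updated date, and a keep/revise/kill decision.
FIELDS = ["id", "page_title", "url", "content_type", "notes"]

rows = [
    {"id": "1.0", "page_title": "Home", "url": "/",
     "content_type": "landing", "notes": "entry point"},
    {"id": "1.1", "page_title": "Flavors", "url": "/flavors",
     "content_type": "catalog", "notes": "dynamically assembled"},
    {"id": "1.1.1", "page_title": "Stars & Stripes Swirl", "url": "/flavors/swirl",
     "content_type": "product", "notes": "needs updated photo"},
]

# Emit the inventory as CSV so it opens in any spreadsheet tool.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The hierarchical numbering scheme (1.0, 1.1, 1.1.1) is the detail that keeps the Sisyphean task manageable: it lets a row in the spreadsheet point unambiguously back to a node in the sitemap.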

Polar Bear combines its account of Content Inventories with content mapping–the process of reconciling the content inventory with an information architecture. Polar Bear’s account of content inventories is much more generous (no references to tragic Greek myths), but does not offer a process for understanding the scope and range of content. Within the context of content mapping, a content inventory is a “byproduct.”

With no specific deliverable to speak of, Practical IA nonetheless dedicates an entire chapter (pp. 31-40) to the process of cataloging content. The book offers ingenious ways of keeping track of countless pieces of information. Practical IA also explores methods for brainstorming content to flesh out the breadth and depth.

CONTENT INVENTORIES              Blueprints     Polar Bear     Practical IA
                                 pp. 267-271    pp. 289-293    pp. 31-40
General Definition               X              X              X
Process for creating             XX                            X
Examples                         X              X              X
Tool how-to
Guidelines                                                     X
Pros and Cons
Shown in context of IA process                  X

Other Highlights
From a deliverables perspective, each book has different strengths.

Blueprints includes several deliverables I had never even heard of, like Sitepath Diagramming (pp. 248-252), which is meant to show the relationship between a site’s users and its information architecture. Unlike most other documentation techniques, this approach includes users as a critical piece, which can help information architects ensure that they’ve addressed all user needs. Blueprints spins Sitepath Diagramming as a brainstorming technique, but turning it into a formal deliverable should be a fairly straightforward exercise.

Polar Bear is by far the most comprehensive, detailed account of information architecture available. Even the most advanced IAs will find something new in here. From a deliverables perspective, Polar Bear is strongest on client relationship issues. Whatever deliverable you are working on, Polar Bear offers explicit advice on how to explain and present it to clients. This book is useful for client relationships implicitly as well, by providing complete descriptions of the deliverables and their purpose in the IA process. Such language is begging to be referenced for those of us who get tongue-tied in front of clients.

The strength of Practical IA is in its simplicity. While it may not account for the depth of IA activities, it does show the breadth. It is the perfect book for lending to people new to the field. When your own deliverables become complex, Practical IA offers a common foundation for people generally unfamiliar with IA concepts.

If there is a single book that offers a comprehensive view of IA deliverables, complete with descriptions, samples, guidelines, and tool tips, it has not yet been written. (Given demand, we may never see such a volume on our shelves.) With these three staples in an IA’s library, however, no information architect should be lacking inspiration.

For the next column, Special Deliverables will look at the relationship between deliverables and IA methodologies.
