Straight From the Horse’s Mouth with Dan Brown


Pod-safe music generously provided by Sonic Blue.

Christina Wodtke traveled with microphone to the IA Summit in Las Vegas this year and sat down with some of the most interesting and accomplished information architects and designers in all the land. Bill Wetherell recorded those five conversations, and now B&A is proud to bring them to you. Thanks to AOL for sponsoring these podcasts.

In this episode, Dan Brown, consultant and author extraordinaire, deftly parries Tom Wailes’ repeated calls to oust wireframes and task flows in favor of prototyping and simulations. Our stalwart hero defends mindful subversion of the status quo as the best path in many corporate and public-sector projects.

While it’s exciting to throw out the bathwater, not every baby is fed by radical innovation alone.

Thanks to Tom for taking the voice baton after his previous turn as interviewee.

We discuss…

*Conceptual vs Design Documentation*
The ideation process is where the team needs to think about creativity and innovation. As designers, we create a set of artifacts to help us communicate.

*More detail required?*
Rather than using wireframes, Tom Wailes says that his core artifacts are more detailed prototypes, calling Dan’s wireframe-centered approach into question.

*Know more than your audience*
Dan discusses the importance of knowing not only your audience but also understanding the corporate culture in which you’ll be working and designing.

*Government Work*
Dan points out that a constraint on innovation, in his experience, is that most contracts are very specific with respect to deliverables. The challenge is creating within these set parameters. Dan provides examples of such creativity when designing wireframes.


[musical interlude]

Announcer: Boxes and Arrows is always looking for new thinking from the brightest minds in user experience design. At the IA Summit, we sat down with Dan Brown from EightShapes.

Tom Wailes: Hi, my name is Tom Wailes. I’m User Experience Director for Yahoo! Local and Maps. I’m going to be discussing with Dan Brown a few issues that came up today at the IA Summit.

Dan Brown: That sounds awesome. I look forward to it.

Tom: So a little bit of background. Dan gave a talk today, “Communication Design”, summarizing, I guess, his book and what he thinks about some of the design deliverables that information architects and designers have typically delivered over the years, and making some very sensible suggestions for refining and improving them. I gave a talk with a colleague of mine from Yahoo!, Kevin Cheng, about different kinds of design deliverables, primarily storyboards, prototypes, and simulations. So I’m here to talk with Dan about where he sees these working together, or not.

So Dan, first, I kind of teased you a little bit in my talk after praising your talk. It was very good, I enjoyed it. But then I noted that you hadn’t talked at all about prototyping or simulation, storyboarding, things like that, which is what my team has been doing a lot of, and we’ve been doing very little wireframing. So, your reactions to that.

Dan: It strikes me as a very important part of the work that we do as designers. I didn’t sense any sort of disconnect between your story and my story. In a sense, I was speaking to all those people who have to jump on a project after a concept has been approved and funded and need to hash out the details. At the same time my, I guess, philosophy is meant to help people provide a critique of their own documentation, and I wonder if there’s an interesting synergy there in looking at the kinds of conceptual documents that you do and dividing them into layers, as I suggest.

So very important stuff that’s critical to the document versus the more extraneous stuff, and using that as a model for evaluating conceptual documentation as much as design documentation.

Tom: So can you clarify a little bit what you mean by conceptual documentation versus design documentation?

Dan: Sure. What I got out of your talk was there’s this ideation process. There’s this process where we need to spend some time just spending some brain cells to think about what could be, or to get a better understanding of what the problem is. We create a set of artifacts to either better articulate what that problem is and help get our heads around it, or, in the case that you were discussing, we’ve got this concept, we’ve got this idea to improve a product or create a new product, and we need to sell it to the people who make the decisions and hold the money. That’s certainly what I got out of your talk and it’s something that I’ve had to do a lot in my career.

Why that stuff didn’t make it into the book, I would say, is only because there’s a long list of documentation, and I had to cut it off at the things that people really do every day in their jobs. And I was looking at your little video, going, “God, if I could do that every day, my job satisfaction would go through the roof.” I mean, I love my job as it is, I’m self-employed, etc., etc. So I see the conceptual stuff as trying to sell and capture big ideas about a product or product direction, whereas the design documentation elaborates on the details and provides direction to the people who have to implement it.

Tom: So one thing I didn’t really cover today, but it’s also part of our process that we’re trying, experimenting with, and sort of making up as we go along, frankly: it’s not just using interactive visualizations for the concept, but also for the details we’re designing right now. Rather than doing wireframes or anything like that, we’re continuing with detailed prototypes to help us work out the finer aspects of the product, and that ends up being our core documentation.

It’s not to say we won’t go on to do some wireframes later, but we’re involving the engineers right now in that, so that they can start thinking, “Oh, you want an interaction to look like that. Let me think about it.” So using those kinds of… documents is kind of a funny word to use.

Dan: Artifacts.

Tom: Artifacts, those are our core artifacts now throughout the process, not just the ideation but as we’re working through the details. So how do you react to that?

Dan: I think that’s an amazing opportunity that you have. I remember you polled people at the beginning and asked them, for example, “Do you have a 20% role in your organization that allows you to simply try to innovate one day a week?” And only a handful of people raised their hands. I think if you asked them, “Could you experiment with the kinds of documentation that you do, to try to continue some of this prototyping or conceptual work throughout the life cycle of a project?” you’d get a similar number of hands.

I come from a world of government contracting and working with large Fortune 500 companies that are stuck in old-school tradition, where wireframes are, in a sense (I know this may be shocking) innovation enough as a new kind of document. They’re used to, I imagine, 1980s-IBM-style big binders of functional requirements; the idea that we can translate those into some digital format is radical in and of itself.

Can we get to a point where we’re all doing that kind of documentation? I would love that. In 10 years’ time maybe we will be, but in 10 years’ time you guys are going to be creating a whole other kind of artifact to capture functional requirements and behaviors and all those kinds of things. Does that answer your question?

Tom: It does. Well, I think it does. So are you really saying that there are core differences in the types of industries and projects that might make it incredibly hard to break away from more traditional documentation like wireframes, flows, requirements documents, and things like that?

Dan: I’m not sure it’s even an industry thing; I just think it’s a corporate culture thing. One of my clients, for example, is a hospitality company. They’re not a technology company; they’re not geared towards that kind of innovation.

They grew out of this idea of selling hotel rooms to people, so that kind of culture runs throughout the organization. They have technology people there, and they are fighting an uphill battle to do the kind of innovation that you are talking about. That hill is the culture of 100 years of the hospitality industry.

Tom Wailes: Obviously I know nothing about that company and that project, but I can imagine… You talk about hospitality and selling hotel rooms. At least from the outside, I can imagine a great opportunity to start with some visualization and prototyping to get across some concepts, particularly since you are talking about selling. I don’t know the details of that.

So what would stop you, what would make it very hard for you to say, “You know what? I’m going to try something different on this project”? What are the main inhibitors for you?

Dan: Oh, I’m not afraid to try something different. But I think, as designers, we need to be responsible… I’m no Steve Jobs, so I need to be responsible about just how much I am going to push the envelope.

The kind of conceptual stuff that you are creating is working for you and your organization and your culture and the kinds of products that you are working on. I think that there are opportunities to create those kinds of artifacts and documents in other organizations but maybe not push the envelope so much.

So, if I were to show that to someone who, on the flip side, is used to seeing certain kinds of documents, it may not speak to them as well. They may be saying, “Why are you wasting my time with a comic?”

There is no controversy here. I am not trying to say that I don’t think there is a place for those things. I have not been able to cultivate a place for those things with the kinds of clients that I work with.

Tom: OK, I have two comments. The first is, in our environment, people were used to wireframes and requirements documents and things like that. We had been using those, but we decided to experiment with new methods like the comic storyboarding. The reaction actually wasn’t, “I don’t understand that,” or, “I don’t get that,” or, “I don’t want that.” It was like, “Oh my gosh, can you do more of this? I can see much more clearly what the core ideas are. I can be involved in giving my opinions now.” The wireframes and other kinds of documentation are much harder to be involved in. So that would be one comment.

The second comment is we talked about starting small. In what ways could you perhaps start small? I understand you cannot convert your client overnight to completely new processes. You have deadlines, budgets, and things like that. But in what ways do you think you could start small in introducing new ways of working?

Dan: There are things we do all the time. That culture may have given rise to a certain kind of wireframe, and I may see opportunities to encourage them to go in a different direction.

They may start with a conventional site map, and I might move them more to a conceptual model that includes things beyond web pages. It encourages them to think about maybe incorporating their users into that picture so they have a better sense of that. So I think there are definitely small opportunities. I believe we take advantage of them as much as possible.

The other constraint I wanted to point out was that, as an outie, as someone who is not inside an organization… I mean, to a certain extent, you serve clients inside your organization. But as a complete outie, my contracts are structured to do something very specific for a particular client.

So, if they hired me to help improve a set of pages or a particular function on their site and I said, “OK, I’ll do that, but let me show you this first,” they would really not be happy with that, because they are paying me to achieve something very particular.

Working within the constraint of that particular project scope, I need to find a way to do the things you are talking about and sell them on big ideas. The book, Communicating Design, talks about using documentation in different contexts, and as those contexts vary, they impact the nature of the documentation itself as well. I don’t know if I answered your question.

Tom: I think you did. I’m still not entirely convinced that you can’t introduce new ways of working in a very small way, where maybe you do not take any of the client’s time, or maybe you only take a day. We gave some examples today. I can show you some stuff later that took just two days to visualize some ideas.

So it might be something that is very lightweight, where you are not beating the client over the head with it and saying, “Oh my god, we’ve got to work this way.” It’s just, “Yeah, we are going to do all the things we are contractually committed to doing, but, by the way, why don’t you have a look at this as well?”

Dan: I am not disagreeing with you at all. I completely think there are opportunities to do that, but my primary concern (I may get in trouble by saying this) is less about doing cool work period and more about doing cool work within the constraints that have been handed to me.

So I do want to push that envelope as much as possible but my primary concern, as a consultant, is customer service. Ultimately I can feed the kid by getting hired again. So I will do a little thing. I will show them a different kind of document. I will take their wireframes to the next level or I will show them how they can incorporate all of their flows. Who knows what it is.

I might produce a comic. We have done a couple of projects where we have done comic-like things that incorporated user commentary and very explicit screens or wireframes along with some more technical contexts. Not a comic in the true sense of the word, but something leaning in that direction. Those can be very helpful, especially when clients themselves are struggling with the scope.

That was a long rambling answer to say I agree with you.

Tom: OK, let me challenge you a little bit then. What if I was to put it to you that you would actually do better work and serve your clients better if you did less wireframing or other traditional kinds of documentation, and more prototyping, simulations and storyboarding?

Dan: I think you are right. So ha! So try and challenge that! [laughter]

I agree that there is an opportunity to do more prototyping and stuff like that. It is balancing that with the expectation of what we are going to get and what is going to work inside the organization.

In some cases, we are shielded from the development team entirely. So I am working to support the user experience team, and they are burdened with communicating with the developers. If I am going to ask them to challenge their developers, that is not very responsible on my part.

Tom: OK, thanks very much. So we sort of agree, and disagree, and agree again.

Thank you so much for your time.

Dan: I look forward to our next conversation.

Tom: Me, too.

Visio Glue: Not For Sniffing – Special Deliverable #13


Spend any time with Visio and you’ll find yourself wondering how glue works. In the real world, it’s pretty straightforward: put glue between two things and they’ll stick. Although glue is used for sticking shapes together in Visio, the metaphor ends there.

In Visio, glue is not an object. Instead, it’s a property of other objects. Whether two things stick together depends on several factors, which we’ll discuss in this article.

You can’t talk about glue without mentioning connectors: lines that stick to shapes to show a relationship between them. Connectors are one of the defining features of Visio, but their behavior is even more unpredictable than glue’s.

What follows is an inventory of Visio glue behavior, connectors, and connection points. After reading this article, the word “glue” (which appears 71 times) will look and sound very strange indeed.

Glue is directional.

  • In the real world, two objects are glued to each other. In Visio, one object is glued to another. For the purposes of this discussion, “target” refers to the object that has been glued to. “Glued object” refers to the shape that has been glued to the target. A nursery school art project involving construction paper and macaroni is perhaps the best real-world equivalent. The paper is the target and the dried noodles are the glued objects.
  • Moving the target results in the glued object moving, or shifting to remain glued. (Just like a macaroni project, where moving the paper moves all the macaroni attached to it. This enables such projects to appear on refrigerators all over suburbia.)
  • Moving the glued object results in the glue being broken. The original target remains where it is. The metaphor breaks down here because in the real world, two objects glued together move together.
  • When a 1-D object and a 2-D object are glued to each other, the 2-D object is always the target, no matter what technique is used to glue them together.
  • Distinguishing between the target and the glued object is no easy task. Click on the target, and there is no indication that there are any objects glued to it. Click on the glued object, however, and you’ll see what it’s glued to, as well as the type of glue used. Type of glue? Read on…
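The directional behavior described above can be captured in a toy model. This is hypothetical illustration code, not Visio’s object model or API: moving the target drags its glued objects along, while moving a glued object breaks the glue and leaves the target in place.

```python
# Toy model of Visio's directional glue (illustrative only, not Visio's API).
class Shape:
    def __init__(self, name, x=0, y=0):
        self.name = name
        self.x, self.y = x, y
        self.glued_to = None      # the target this shape is glued to
        self.glued_objects = []   # shapes glued to this one

    def glue_to(self, target):
        self.glued_to = target
        target.glued_objects.append(self)

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        if self.glued_to is not None:
            # Moving the glued object breaks the glue; the target stays put.
            self.glued_to.glued_objects.remove(self)
            self.glued_to = None
        else:
            # Moving the target drags every glued object with it.
            for obj in self.glued_objects:
                obj.x += dx
                obj.y += dy

paper = Shape("paper")              # the target (construction paper)
macaroni = Shape("macaroni", 1, 1)  # the glued object (dried noodle)
macaroni.glue_to(paper)

paper.move(5, 0)                    # glued object follows the target
print(macaroni.x)                   # -> 6

macaroni.move(0, 3)                 # moving the glued object breaks the glue
print(macaroni.glued_to)            # -> None
```

The asymmetry in `move` is the whole point: glue knowledge lives on the glued object, which matches the observation that clicking the target shows nothing while clicking the glued object reveals the connection.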

There are two types of glue…

  • When gluing a 1-D object to a 2-D object, glue behaves in two different ways. Visio refers to these as dynamic glue and static glue.
  • Think of static glue as “fixed point” glue. The glued object is affixed to the target at one point and one point only.
  • Dynamic glue is “fixed object” glue. The glued object will remain affixed to the target, but at whatever point is most convenient.
  • Clicking on a glued object shows a red endpoint. If the endpoint is a large red square, it is glued with dynamic glue. A small red endpoint with a black X indicates static glue.

  • To use dynamic glue, drag the glued object’s end point to the center of the target object. The target object will highlight with a red border.
  • If many objects are close together, you can guarantee dynamic glue by holding the CONTROL key as you drag a connector to an object.
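The static/dynamic distinction can also be sketched in code. Again, this is a hypothetical sketch, not Visio’s implementation: a statically glued endpoint stays at one fixed point on the target, while a dynamically glued endpoint re-resolves to whichever attachment point is most convenient.

```python
# Sketch of static ("fixed point") vs dynamic ("fixed object") glue.
# Illustrative only; not Visio's actual routing algorithm.

def endpoint(target_points, connector_origin, glue="dynamic", fixed=None):
    """Return where a connector's glued endpoint lands on the target."""
    if glue == "static":
        return fixed  # affixed to one point and one point only
    # Dynamic glue: pick the target point nearest the connector's far end.
    return min(
        target_points,
        key=lambda p: (p[0] - connector_origin[0]) ** 2
                      + (p[1] - connector_origin[1]) ** 2,
    )

# A square target with attachment points at its four side midpoints.
points = [(5, 0), (10, 5), (5, 10), (0, 5)]

# Static glue stays put no matter where the other end of the line goes.
print(endpoint(points, (20, 5), glue="static", fixed=(5, 0)))  # -> (5, 0)

# Dynamic glue slides to the most convenient side.
print(endpoint(points, (20, 5)))   # -> (10, 5)
print(endpoint(points, (5, -20)))  # -> (5, 0)
```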

Not all surfaces are sticky.

  • Although dynamic glue is always available, static glue may or may not be available depending on the application settings.
  • Through the “Snap & Glue” dialog box, you can determine whether a surface will glue. To get to this dialog, choose “Snap & Glue…” from the Tools menu. There are five different options in the “Glue to” list.
  • Shape Geometry: Checking this box will make the entire surface of target shapes “sticky”. If you’re familiar with Visio ShapeSheets, you can also think of this as all points defined by the Geometry sections of the ShapeSheet. If you’re not familiar with ShapeSheets, forget what I just said.
  • Guides: A shape glued to a guide will move when the guide is moved. Guides are always targets.
  • Shape Handles: Glued objects may be attached to any of the shape’s handles, the little green squares that appear on a shape when you select it.
  • Shape Vertices: Shapes’ corners are sticky. Circles are S.O.L. When you round a shape’s corners, its vertices are still considered to be the corners that meet at the intersection of the shape’s sides.

  • Connection Points: Objects can stick to areas of the shape explicitly defined as a sticky point.

Visio has hidden controls for connector behavior.

  • Moving target shapes around the page can have the unwanted side effect of disrupting perfectly placed connectors. You can prevent this by right-clicking on any connector and choosing “Never Reroute” from the menu. This makes connector behavior slightly less unpredictable, though you may still have to adjust the connectors after moving the target shapes.
  • Connector behavior can also be controlled from the behavior dialog box, accessed by choosing Behavior… from the Format menu. When a connector is selected, the box has an additional tab, just for this shape. This allows you to control the appearance and behavior of the connector.
  • In several of the following menus, there is a “Page Default” option. Default connector and routing options are controlled in the Layout and Routing tab of the Page Layout dialog (File > Page Setup…). These settings may also be controlled through the Lay Out Shapes dialog by choosing that option in the Shapes menu.
  • Style: The general appearance of the connector. I’m partial to “center to center”.
  • Direction: For some styles of connector, a direction is implied. This menu becomes available when Flowchart, Tree, Organizational Chart, or Simple is chosen from the Style menu.
  • Reroute: Matches the options in the connector right-click menu (described above) and indicates the level of control granted to Visio to alter the connector paths.
  • Appearance: Probably the best discovery I made when I stumbled across this dialog box. Creates curved connectors with eccentricity lines.
  • Line Jumps define the rules for using and displaying line jumps – breaks in a line when it intersects another. Line jumps symbolize the distinctness of each line. I prefer to create diagrams where lines do not cross because line jumps simply add visual noise.

Connection points are like bellybuttons.

  • Although most connections occur between a 1-D object (like an arrow) and a 2-D object (like a box), it is possible to glue 2-D objects to each other without grouping them.
  • Connection points, the little blue Xs attached to shapes, define points on a shape that can be glued to. As stated previously, however, the target object does not have to have connection points to glue something to it. For example, if you have “vertices” turned on in the Snap & Glue dialog box, you can glue connectors to a target shape’s corners.
  • Connection points come in several varieties; they can be inward, outward, or both. To change the type of connection point, right-click on it with the connection point tool.
  • Inward connection points can have other shapes glued to them. Inward connection points designate the object as the target object.
  • Outward connection points are glued to other shapes. They are the glued objects.
  • Connection points that are inward and outward can be both targets and glued objects.

  • To understand these concepts, create a couple of shapes with different kinds of connection points and play around. For example, draw two rectangles. Choose the connection point tool. Select one of the rectangles. CTRL-click with the connection point tool to add connection points to the rectangle. Do the same with the other rectangle. Now change the direction of the connection points by right-clicking on each point.
  • Notice that when you drag a shape’s INWARD connection point to another shape’s OUTWARD connection point, they won’t glue. Do it the other way and they’ll stick together.
  • With the two rectangles glued together, try moving the target shape, and then try moving the glued shape. Moving the target will cause the glued shape to move as well. Moving the glued shape will cause it to come un-glued.
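The inward/outward rule above boils down to a small compatibility check. The sketch below is hypothetical illustration code (not Visio’s API): the dragged point must give glue (outward) and the other point must accept it (inward).

```python
# Toy version of the connection-point gluing rule (not Visio's API).

def can_glue(dragged_point, other_point):
    """dragged_point is moved onto other_point; return True if they stick."""
    gives_glue = dragged_point in ("outward", "inward & outward")
    accepts_glue = other_point in ("inward", "inward & outward")
    return gives_glue and accepts_glue

print(can_glue("outward", "inward"))   # -> True: outward glues onto inward
print(can_glue("inward", "outward"))   # -> False: won't stick this way
print(can_glue("inward & outward", "inward & outward"))  # -> True: both roles
```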

Visio glue is one of the application’s more puzzling concepts. It doesn’t behave like real-world glue and can be unpredictable. This inventory of glue features attempts to tame the madness.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Lost in Translation: IA Challenges in Distributing Digital Audio

“The main challenge facing network audio devices is how to provide remote access to the music library… this looks like a job for an information architect!”

With each new advancement in digital media come new ways to consume and distribute it, and new and different challenges for information architecture. For example, several new devices on the market are designed to distribute digital audio from a computer to audio systems in other rooms of the house. These devices connect to your home network through a standard Ethernet cable or wi-fi, routing music from your computer to your stereo using standard audio connections.

The main challenge facing these devices is how to provide remote access to the music library. While sitting at a computer, you have the benefit of using a keyboard, mouse and screen to interact with software like iTunes or WinAmp. Since network audio devices need to sit on the shelf with your stereo, they do not have a full display, and the only means of interaction is a remote control. In other words, this looks like a job for an information architect!

This new paradigm for accessing music libraries presents at least two information architecture challenges:

  1. How do users find a song in their music library?
  2. How do users know what’s playing and what’s coming up?

The challenges are made even more difficult by several factors:

  1. Limited display size
  2. Limited availability of metadata
  3. User’s expectations—people are used to browsing through a CD library

This article looks at how three devices on the market today address these IA challenges. Two of these devices, RokuLabs’ Soundbridge and Slim Devices’ Squeezebox, have a screen on the shelf unit. The display on each of these devices is limited to two lines of text, and the remote controls are configured for navigation. On the other hand, Sonos’ device uses a different approach, putting the display in the remote control. Because of this, Sonos’ remote looks like a large iPod with a color display, while the device that networks the music has no display at all.

Design Philosophies

Sean Adams created the first generation Squeezebox in 2001 by hacking together some hardware and software. From that first foray into distributed digital music grew a large community grounded in the open source culture. Slim Devices made their server software open source, and there are now more than 50 developers working on it worldwide. This approach has led to constant gradual improvement.

Slim Devices' Squeezebox
Dean Blackketter, Slim Devices’ CTO, says that although the community is the key to adding new features, he monitors all changes to the software before they are officially released. This allows Slim Devices to ensure that any changes to the interface stick with their style guide. Blackketter appreciates the open source approach because it allows people to work on the interface quirks that bother them the most; he told a story about someone who found the timing of the scroll a little off, and wrote a new scrolling algorithm. Blackketter frequently uses the “friends and family” approach to test the usability of these upgrades.

Slim Devices uses no formal user-centered design methodology and maintains no tools beyond a style guide. Blackketter says that the company has internalized the personas of their customers. The management team came to an implicit agreement over the life of the device that their target audience consists of highly technical people—users who like playing with the device—and their spouses—people who just want to listen to music.

Like Slim Devices, RokuLabs’ design philosophy does not depend on formal user testing. Many of the team members at RokuLabs came from ReplayTV, the main competitor to TiVo, and the designers at RokuLabs depend on their previous experience in networked media devices to provide insight into usage. Mike Cobb, RokuLabs’ senior engineer, says their experience with ReplayTV provided many lessons for the user experience of the Soundbridge.

The user experience of iTunes also drove the design philosophy for Soundbridge, since the unit was meant to be an extension of that software; RokuLabs sought to make the interactions similar to those of iTunes or the iPod. One key difference is the interaction model of the remote control: while Squeezebox uses the “right arrow” button to make a selection, Soundbridge users must push a “select” button. RokuLabs’ design rejects the use of navigation as selection. In this way, it resembles the iPod, which uses a one-dimensional navigation device (the wheel) and forces users to physically make a selection (by pushing the center button).

RokuLabs also had the benefit of not being the first to market. They played with early versions of the Squeezebox and decided what they liked and didn’t like. One thing they noticed was that the experience seemed geared to tech-savvy users, and RokuLabs wanted a more mass market device.

The newest entrant is Sonos, whose unit shipped in January 2005. I spoke to Mieko Kusano, the director of product management who says that although the idea for Sonos came from its founder, they spent a lot of time defining their target market, which led to creating personas. Sonos also employed a simple ground rule: their designers were not allowed to talk about what “I” want. Instead, all design decisions had to be made within the context of the personas. Kusano says the personas were useful for making the process more concrete, and they gave the company a common platform. She advocates doing as many user studies as you can. “Every time we had something new to show,” said Kusano, “we brought users in.”

Initial user research drove a couple of key design decisions, including putting the display on the remote and focusing on distributing music to many rooms in the house. Having decided in early user studies to put the screen on the remote, they developed a method for prototyping new remote controls using a PDA. They could program the PDA to display different screens and then test them with their users.

The second decision—focusing on multiroom audio distribution—motivated the design of the remote control itself. Sonos’ remote boasts the fewest buttons. Many functions use “soft keys” (buttons that change their function depending on state), but key functions are escalated to physical buttons. Besides volume, playback, and navigation, there are only two other buttons: Music and Zones. The Music button brings users to the menu where they can select music, and the Zones button brings users to the menu where they select which room to program. All other controls (for example, shuffle, repeat, music queuing, etc.) are presented on the screen.

As Sonos neared their launch date, they did frequent in-home testing, taking beta units to customers’ houses and observing them. They watched users as they went through the out-of-box-experience, the set-up, and use of the unit. Sonos’ approach represents a departure from the other two philosophies, and I was eager to see how the structure of information would differ among them.

Browsing Music

Before digging into the navigation scheme, I want to set out the underlying conceptual structure for each system, which is the same across all three and resembles that of the iPod. (Squeezebox was around before iPod, and was the first unit to employ this structure.) Songs live in a music library. They are “moved” to a queue of songs to play. Users may move songs one at a time or implicitly by selecting a “natural grouping” of songs—an album or an artist, for example. Conceptually, a music player’s key interaction is moving songs from library to queue. At any given time, users need to know what song is currently playing and what songs will be coming up. They also need to navigate the library to facilitate moving songs to and from their queue.

I don’t know if this is the best structure, but it appears to be employed across the board. Even though the underlying structure is consistent, it’s possible for each system to present a different mechanism for navigating the library and moving songs from library to queue. Possible, but unfortunately not true: despite having differing design philosophies, all three devices use nearly identical information architectures, all of which resemble the iPod’s structure. The root menu of each system varies slightly, but one option takes users to a familiar menu:

  • Browse Albums
  • Browse Artists
  • Browse Composers
  • Browse Genres
  • Browse Songs

In Sonos’ system, this menu is called “Music Library”; SoundBridge calls it “Browse.” Selecting any of the options from this menu will take users to an alphabetical list of albums, artists, etc. Each entry represents a group of songs. Users can move the entire group to the play-queue, or can “open” the group to look at individual songs.

Looking at all the songs in a group, users can select a track and play it, add it to the queue, or get more information about it. Specifics vary by system. SoundBridge takes you to a list of options, the first of which is “play songs starting with this one,” allowing users to select the group of songs by selecting one song inside it.

When compared directly, the core information architectures of each are virtually indistinguishable. Each album, genre, artist, and composer is a separate category and each track fits into one of each. There are relationships between the categories:

  • Genre → Artist → Album
  • Bluegrass → Del McCoury Band → It’s Just the Night

The problem is that music is much more complicated than this architecture, even if it does account for some of the nuances of music libraries. For example, an artist or album can belong to multiple genres:

  • Folk → Eva Cassidy → Songbird
  • Popular → Eva Cassidy → Songbird

Another problem with the architecture is that artists’ names may be rendered differently, depending on what they’re working on:

  • Bela Fleck & the Flecktones → UFO TOFU
  • Bela Fleck and Edgar Meyer → Music for Two
  • Edgar Meyer/Bela Fleck/Mike Marshall → Uncommon Ritual

Each of these instances of Bela Fleck is rendered differently in the architecture, because the architecture is conceived as a straight hierarchy.

“All the problems with navigation can be traced back to a single central issue: lack of data. Creating more complex structures depends on having more comprehensive information about the music.”

All the problems with navigation can be traced back to a single central issue: lack of data. Creating more complex structures depends on having more comprehensive information about the music. Because the artist is rendered as a simple text field, the systems cannot match up “Bela Fleck & the Flecktones” with “Edgar Meyer/Bela Fleck/Mike Marshall.” Using the systems’ browse features alone, I would not be able to find every track in my library on which Bela Fleck performs. The systems’ search features afford some improvement, but they still depend on having good metadata.
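The flat-text-field problem is easy to demonstrate. Below is a hypothetical sketch, using the Bela Fleck examples from above, of how exact-string grouping splinters one musician across three artist entries, and how richer metadata (a per-track performer list, which none of these devices actually has) would fix it:

```python
# Sketch of why a flat "artist" text field breaks browsing.
# The track data mirrors the examples in the article.

tracks = [
    {"artist": "Bela Fleck & the Flecktones", "album": "UFO TOFU"},
    {"artist": "Bela Fleck and Edgar Meyer", "album": "Music for Two"},
    {"artist": "Edgar Meyer/Bela Fleck/Mike Marshall", "album": "Uncommon Ritual"},
]

# Browsing groups tracks by exact string match, so the three renderings
# of Bela Fleck land in three separate artist entries:
by_artist = {}
for t in tracks:
    by_artist.setdefault(t["artist"], []).append(t)
# by_artist has three keys, even though one musician appears on every album.

# With richer metadata -- a list of performers per track -- a single
# query finds everything Bela Fleck plays on:
richer = [
    {"performers": ["Bela Fleck", "the Flecktones"], "album": "UFO TOFU"},
    {"performers": ["Bela Fleck", "Edgar Meyer"], "album": "Music for Two"},
    {"performers": ["Edgar Meyer", "Bela Fleck", "Mike Marshall"], "album": "Uncommon Ritual"},
]
fleck = [t for t in richer if "Bela Fleck" in t["performers"]]
# fleck now covers all three albums.
```

The structured version is trivial to query; the hard part, as the next section notes, is that nobody supplies the data.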

Searching Music

The appalling state of music metadata is no secret. Other authors have already explored the limitations of the available metadata with respect to jazz, a genre that “goes beyond the ‘Great Man’ theory and recognizes the influence of side players…” Whether other genres of music have as rich a metadata landscape as jazz is immaterial. Liner notes from any album in any genre hold more information than currently captured in most digital audio systems. All three manufacturers highlighted in this article believe the lack of good metadata is a crisis facing the entire industry. However, they all feel that once the industry cracks the nut, their devices will be prepared to address it.

Search on the Squeezebox and SoundBridge operates as you would expect. Select a search field from a menu, enter keywords video-game-hall-of-fame style with the arrow keys, and get a list of results. The extra step of selecting a field (e.g., Search Artists) seems pointless, but SoundBridge engineer Mike Kobb explains:

[I]f I want to find tracks by “Barenaked Ladies”, it’s only a few key presses to choose “Search Artists,” then enter “ba.” The same 2-letter search would find too many items if it were done as a keyword search. I believe making the initial selection and then entering a smaller term is generally quicker than entering enough letters in a keyword search to get a small result set.

This makes sense from a technical point of view: allow people to limit the scope of their search so they don’t need to enter as many letters with the arrow keys. This approach solves one issue with navigation. So long as “bela” appears in the artist field, I can do a search to find all Bela Fleck’s music in my library. On the other hand, entering “be” to see all Bela Fleck tracks seems like an enormous conceptual leap from browsing a library of CDs. In other words, if the task is “get a list of all Bela Fleck’s tracks,” my inclination is to browse by artist—kind of like what I would do in real life.
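The scoped-search trade-off Kobb describes can be sketched quickly. The data here is hypothetical; the point is that a two-letter prefix against one field narrows far better than the same two letters matched across every text field:

```python
# Sketch of scoped prefix search vs. unscoped keyword search.
# All names are illustrative.

artists = ["Barenaked Ladies", "Bela Fleck", "The Band", "Abba"]
# An unscoped search would also sweep in album and track titles:
all_text = artists + ["Back in Black", "Bad", "Barcelona"]

def search_artists(prefix):
    """Scoped search: match the prefix against artist names only."""
    return [a for a in artists if a.lower().startswith(prefix)]

def keyword_search(term):
    """Unscoped search: substring match across every text field."""
    return [s for s in all_text if term in s.lower()]

# Two letters suffice when the scope is narrow...
scoped = search_artists("ba")      # only "Barenaked Ladies"
# ...but the same two letters flood an unscoped search:
unscoped = keyword_search("ba")    # six matches in this tiny library
```

With arrow-key text entry, every avoided letter matters, which is why the field-first design wins despite the extra menu step.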

The third device, Sonos, does not offer a search mechanism. They intend to offer it in the future, but provide no rationale as to why it wasn’t included in the initial release.

Knowing Where You Are

Digital music players give us two virtual spaces: the library and the queue. Knowing your “location” in the library is relatively easy because a mental image of the virtual space is readily available. When navigating the library, users are focusing on the task at hand. The use case for the queue is a different story; users put the queue together and leave it to do its thing. Only occasionally does the queue become the focus of attention after the initial set-up. All three units have a default view called “Now Playing,” in which the display shows information about the track that’s currently coming out of the stereo. Usually, that’s the name of the track and the amount of time left on the song.

On shelf-bound displays, SoundBridge and Squeezebox both give you “one-click” access to the next song. On SoundBridge, simply push the down arrow on the remote and you’ll see what’s next in the queue. Keep pushing, and you’ll scroll through the queue. Sonos offers a bit more information, but not much: the “Now Playing” display shows the title of the next song, and getting to the entire queue is just a click away.

When looking at the queue on Sonos, the large up-close display offers a broader view, providing more context. Think about using a CD: the liner notes give you a complete track listing; you can see the whole thing at once and get information like song length. The displays of the shelf-bound devices offer only a limited window into the queue. Sonos’ display offers more information because you can see more of the queue. Still, the experience is not quite the same as looking at a set of liner notes, because it lacks all the other information they carry.

Is it fair to compare the user experiences of digital and analog worlds? Until music players carve out a new set of user behaviors, their designers don’t have much choice. People are used to interacting with their personal music collections in a certain way, and deviating too far may slow the adoption of new technologies.

Supporting User Behaviors

With only a few nit-picky exceptions, the three devices generally do a good job of supporting three basic scenarios:

  • I want to play an album and I know which one.
  • I want to play an album by an artist whose name I know.
  • I want to play a specific song and I know its album/artist/genre.

Once you get the hang of the IA and the interaction model of the remote control, these tasks are pretty easy for an end-user. If you want to create a mix on the fly, though, things can get a little clunky as you run through the last task several times over.

Moving back and forth between your music library and the current queue requires gestures that may be difficult for users to get used to. Also, the idea of a queue is unique to this interaction model. If you’re doing the DJ thing and playing random songs for your friends, you may stack up a bunch of CDs to go through, but the queue is in your head and easily modified.

Each of these scenarios depends on user knowledge. If you know the artist or album, you can easily narrow down the library. Things get difficult when you don’t know the name of the song, or when you know the name of the artist, but not which variation of their name is the correct one.

Browsing is another user behavior that’s been neglected. There’s an aspect to browsing a collection of CDs that’s lost when translated to an iTunes-like environment. People don’t keep their entire music library in their head, and the ability to browse is crucial. Because the browse features on these systems are pre-divided into Track, Artist, Album, and Genre, “browsing” is limited to only text-based information.

Browsing a long list of album names is not the same thing as browsing jewel case spines. Color, typography and organization of the jewel cases give more information than just the album name; I may know that the Yonder Mountain String Band song I want is on their latest album which has a brown spine with orange lettering. The black spine with white lettering is their earlier album. I may not know the names of these albums, just the look of their spines. This free-browsing of a physical CD library is a nut not yet cracked by the industry. To be fair, this is a serious challenge: how do you support existing behaviors when users are used to browsing by more than just the names of albums or artists?

On the other hand, a virtual environment enables behaviors unimaginable in the physical world. Wouldn’t it be great if I could play tracks:

  • Based on how much I listen (or don’t listen) to them
  • Based on how often I play them sequentially
  • That my wife has marked as a favorite
  • That my kids did NOT mark as a favorite
  • Featuring certain kinds of instruments or vocalists
  • That have a special place in music history (like the “definitive” newgrass song)
  • That have been tagged by other listeners with particular keywords
  • I usually play on this day of the week or year
  • That feature a specified combination of musicians

As online services emerge that compile this and other information, network audio players will need to tap into that metadata to enrich the music-playing experience. Virtual spaces with robust metadata models enable the kind of serendipitous browsing you’d find on IMDB, or “social networking” among listeners. Music libraries are ripe for this kind of experience, and the proliferation of these players could be the catalyst that brings about the change.
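Every item on the wish list above is an ordinary query once the metadata exists. A hypothetical sketch (the fields — play counts, tags, per-person favorites — are invented for illustration; no shipping player records them):

```python
# Sketch of queries that richer metadata would enable. The fields
# (play_count, tags, favorite_of) are hypothetical.

tracks = [
    {"title": "It's Just the Night", "play_count": 42,
     "tags": {"newgrass"}, "favorite_of": {"wife"}},
    {"title": "UFO TOFU", "play_count": 3,
     "tags": {"newgrass", "jazz"}, "favorite_of": set()},
    {"title": "Songbird", "play_count": 17,
     "tags": {"folk"}, "favorite_of": {"wife", "kids"}},
]

# "Based on how much I don't listen to them":
rarely_played = [t["title"] for t in tracks if t["play_count"] < 5]

# "That my wife has marked as a favorite":
wife_favorites = [t["title"] for t in tracks if "wife" in t["favorite_of"]]

# "That my kids did NOT mark as a favorite":
not_kid_picks = [t["title"] for t in tracks if "kids" not in t["favorite_of"]]

# "Tagged by listeners with particular keywords":
tagged_newgrass = [t["title"] for t in tracks if "newgrass" in t["tags"]]
```

The queries are one-liners; the scarce ingredient is the data itself, which is exactly the chicken-and-egg problem the conclusion describes.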


There is something very cool about storing all your music on a single server and being able to play it in any room in the house. Homeowners have an option for whole-house audio that, while still bearing a hefty price tag, doesn’t come close to the cost of “old school” systems. (The cheapest network audio systems are only a few hundred dollars, but you need a unit AND a stereo for each room.) The wireless network is much more appealing than running miles of cable through your walls.

When these manufacturers sought to create a whole-house audio system, they each started with slightly different ideas about the user interface problem. For Slim Devices, the pioneer, it was whether it could be done at all. The others each chose a different aspect: the remote, multiple zones, the display. The purpose of this article is not to recommend one device over another (there are many more than these three). The point is that none of these three devices demonstrates any innovation in the underlying information architecture.

Network audio technology is faced with a chicken-and-egg situation. Innovative IA in audio devices like these will be limited by the available metadata. At the same time, industry fears of piracy will limit the amount of metadata supplied with the music. Until the adoption of audio devices reaches critical mass, the industry won’t face pressure from consumers to expand the quality of data, but audio device adoption may stall without more innovative navigation methods.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Toggling Shapes in Visio: Special Deliverable #12

“Employing a continuation node that toggles means literally flipping a switch to go from one state to another when you’re moving shapes around on your sitemap.”

The last Special Deliverable introduced several Visio techniques, including ShapeSheets and formulas. This issue will expand on those ideas, showing how to create a widget with a toggle built into the shape’s context menu. A toggle-able shape is useful when an element is repeated in your diagram but can exist in one of two states.

To illustrate this, we’ll use one of the shapes from Jesse James Garrett’s Visual Vocabulary–the continuation node–which can appear in either the horizontal or the vertical state.

Continuation Node

Although the Visio stencil for the Visual Vocabulary includes a shape for each state, it can be clumsy to switch from one to the other as you rearrange your site maps or flows. Employing a continuation node that toggles means literally flipping a switch to go from one state to another when you’re moving shapes around on your sitemap.

The basic idea

Any shape in Visio is composed of one or more “geometries.” Each geometry represents a different component of the shape. Most shapes have just one geometry, but some have two or more. If you followed the Visio tutorial in Special Deliverable #11, you’ll recall that to create the annotation shape, we combined a circle and the corner of a square. Each of these is a separate geometry.

The difference between combined shapes and grouped shapes
A shape with multiple geometries is different from a group of shapes. Each shape in a group maintains a unique identity and has its own set of properties. When you change the formatting of a group of shapes, you are really assigning the new property to each shape en masse. You can also still change the properties of individual shapes within the group.

When shapes are combined they become one shape, sharing all properties. In a combined shape, Visio can still distinguish between the different parts of the shape. We will take advantage of this feature to create a toggle-able shape.

A toggle-able shape has multiple geometries, each of which can be turned on or off depending on the state of the toggle. Our continuation node will have geometries representing the horizontal format and the vertical format. The toggle will turn off the horizontal geometries when the vertical ones are turned on, and vice-versa.

There are therefore three main steps to creating a toggle-able shape:

  1. Create all the possible states of the shape and combine into a single shape
  2. Add the toggle to the shape as an item in the context menu and define its behavior
  3. Adjust the visibility of the geometries based on toggle state

Step 1. Create all the possible states of the shape and combine into a single shape

The continuation node will end up with four geometries: two for the horizontal brackets and two for the vertical brackets. When you create the four brackets, make sure that each of them is a continuous line by clicking and dragging each leg starting on the end point of the previous leg. Arrange the brackets as they will appear in the final shape.

The four geometries of the continuation node.

Because the horizontals overlap with the verticals, they will appear to be a single rectangle. Now, select all four brackets and choose Combine from the Shape > Operations menu. When you now select the shape, it will appear as if it is a single rectangle, and that all the brackets have been lost. Fear not, the geometries are still hidden in the shape.

Step 2: Add the toggle to the shape as an item in the context menu and define its behavior

There are two parts to this step and both occur in the ShapeSheet. Besides adding the menu item, we will need a place to store the current state of the shape. Since the state is binary (one of two possible values) we will use a Boolean (true-false) variable to store this information. In the next step we associate each value with a different state.

  1. Show the ShapeSheet by selecting the shape and then choosing Show ShapeSheet from the Window menu. Notice that the ShapeSheet has four Geometry sections. (Recall that a section in a ShapeSheet represents a different aspect of the shape.) In the next step we will learn how to distinguish which section corresponds to which bracket.
  2. For the toggle, the ShapeSheet needs two additional sections. Right-click anywhere on the gray area of the ShapeSheet and choose Insert Section… from the context menu. From the dialog box, put checks next to “User-defined cells” and “Actions.” Click OK.
    Insert Section dialog
  3. The User-Defined Cells section is a place where we can store information about the shape that does not appear by default. This is where we’ll store information about the state of the shape. First, give the variable a friendly name by clicking on the red “User.Row_1” label and typing “state.” We can now refer to this variable from functions with “User.state.”
    User defined cells, User State
  4. Give User.state its initial value by entering TRUE into the Value column.
    User state equals True
  5. The Actions section is what allows us to add items to the shape’s context menu. There are two critical cells: Action and Menu. Action specifies the function to execute when the menu item is chosen. Menu specifies the language to appear in the context menu. For Menu, enter “Toggle Horizontal/Vertical” or some equally dry indication of the purpose.
    Action cells
  6. It is in the Action column where the magic happens. In this cell, we’ll use a function that swaps the current value of User.state with the opposite value. Type the following into the Action cell:

    =SETF(GetRef(User.state),NOT(User.state))

    The SETF() function sets the formula of the cell specified in the first argument. The GETREF() function allows us to refer to the cell itself, and not its value. Using GETREF() is required as the first argument in SETF(). The second argument of SETF() defines what the new formula should be–in this case, the opposite of what it is right now.

You can try it out now. Keep the ShapeSheet open, right-click on the shape and choose “Toggle Horizontal/Vertical” from the context menu. You’ll see the value in User.state change from TRUE to FALSE. Do this until it no longer amuses you.

Shape toggling and User State changing from True to False.

Entering formulas into ShapeSheet cells
The sections of the ShapeSheet resemble Excel spreadsheets and can be a little finicky about having data entered into them. Once you’ve entered a formula or value, be sure to hit TAB or RETURN, or use the arrow keys to move to another cell.

Clicking to another cell will not work. When you click on a cell when another one is active, Visio enters a cell reference into the active cell. This can be confusing and annoying.

Step 3: Adjust the visibility of the geometries based on toggle state

Now we’ll move onto the Geometry sections of the ShapeSheet and modify the property that controls the visibility of the bracket. The visibility will be a function of our new User.state variable.

  1. Each Geometry section represents a different bracket, but Visio does not help us distinguish them. To ascertain which Geometry section refers to which bracket, you need to click on one of the cells in numbered rows of the section. These rows describe the shape using a series of directional commands (like “MoveTo” and “LineTo”.) Click on the first cell of the first numbered row of the section Geometry 1. In the drawing, you’ll see a small square appear on the shape. This shows you what part of the shape this row describes.

    Geometry section

    Hit the down arrow key and move through the rows of the Geometry 1 section. The square on the drawing will move around. As it does, you should be able to discern which bracket is being described. You may want to make a little reference for yourself.
    Bracket reference sketch

  2. Once you have established which section represents which bracket, you need to put the following formulas into the Geometry.NoShow cells. Make sure that you use the same formula for both the horizontal brackets and the OPPOSITE formula for the vertical brackets. In this example, assume Geometry sections 2 and 3 represent the horizontal brackets and Geometry sections 1 and 4 represent the vertical brackets. For Geometry2.NoShow and Geometry3.NoShow use:

    =NOT(User.state)

    For Geometry1.NoShow and Geometry4.NoShow use:

    =User.state

As you enter these formulas you’ll see one set of the brackets disappear, depending on the setting of User.state. Now when you choose “Toggle Horizontal/Vertical” in the shape context menu, the brackets will switch orientation.
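Stripped of Visio specifics, the toggle is just a Boolean driving two complementary visibility formulas. Here is a hypothetical Python sketch of that logic (following the geometry numbering assumed in the step above, with sections 2 and 3 horizontal; which orientation maps to TRUE is an arbitrary convention):

```python
# Python sketch of the toggle-able shape's logic. User.state is a
# Boolean; each geometry's NoShow property (True = hidden) is a
# formula over that state. Geometries 2 and 3 are taken to be the
# horizontal brackets, 1 and 4 the vertical ones.

state = True  # mirrors User.state

def no_show(geometry):
    """Visibility formula: horizontal and vertical brackets use opposite formulas."""
    horizontal = geometry in (2, 3)
    return (not state) if horizontal else state

def toggle():
    """Mirrors the context-menu action: swap User.state with its opposite."""
    global state
    state = not state

# Initially the horizontal brackets show (their NoShow is False)...
visible_before = [g for g in (1, 2, 3, 4) if not no_show(g)]
toggle()
# ...and after toggling, only the vertical ones do.
visible_after = [g for g in (1, 2, 3, 4) if not no_show(g)]
```

Because the two formulas are exact opposites, exactly one orientation is visible at any moment, which is the whole trick behind the combined shape.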

By creating an improved version of Jesse’s continuation node, you’ve had an opportunity to explore Visio’s ShapeSheets and formulas. You also used the following techniques:

  • Combining shapes to create a new shape with shared properties
  • Inserting sections into a ShapeSheet
  • Creating a user-defined variable for storing additional shape data
  • Adding a command to the context menu
  • Using formulas for changing the value of a user-defined variable
  • Modifying the NoShow property of a shape’s geometry

With these techniques you can create other toggle-able shapes (What about a checkbox that can be checked? Or a folder icon that can appear in both the opened and closed state?) and you can use these techniques to create shapes with other behaviors.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.

Wireframe Annotations in Visio : Special Deliverable #11

“Remember in the first Matrix movie, at the very end when Neo started knowing he was The One? He looked around and saw streams of numbers, the building blocks of the Matrix – at once a terrifying and awe-inspiring view of the world.”

Few information architects tap the full power of Visio. For the IA, Visio is a means to an end—a mechanism for capturing some ideas on paper before they are transformed into graphics, HTML, and code. Even so, the information architecture community can take advantage of some of Visio’s advanced features to make developing documentation more efficient.

This article introduces several techniques in the context of wireframe annotations. At the conclusion, you will have learned to create an annotation widget, and you will also have learned several facets of Visio you may not have been aware of.

The widget consists of two parts: the annotation shape, which points out the feature of the wireframe; and the footnote shape, which contains the reference for the annotation.

Creating the annotation widget requires three steps:

  1. Creating the annotation shape and the footnote shape
  2. Establishing a relationship between the annotation and the footnote
  3. Changing the behavior of the annotation

Step 1: Creating the annotation shape and the footnote shape
In this step, you create two shapes, one a basic circle for the footnote and one a circle with a pointer for the annotation. Although we use basic shapes, we use some advanced shape operations techniques.

  1. Draw a circle that’s .25” in diameter and make a copy. One circle will be the annotation shape and one will be the footnote shape.


  2. Draw a square that’s .25” on a side and rotate it 45 degrees. (You can do this by opening the Size & Position Window from the View menu. Select the square and type 45 into the Angle field.)


  3. Position the square directly over the circle that will be the footnote shape. Make sure both the circle and the square are selected. Choose Fragment from the Shape > Operations menu. This operation breaks up the two shapes into component pieces.



  4. Delete three of the square’s corners. Select the circle and the fourth corner and choose Combine from the Shape > Operations menu.



You’re done with step 1. Now you should have two shapes on the page: a plain circle and a circle with a pointer on it.

Step 2: Establishing a relationship between the annotation and the footnote

In this step, you’ll teach your footnote shape to mimic whatever text you type in the annotation shape. This way, if you renumber your annotations, the footnotes will automatically renumber. This step introduces a few techniques: naming shapes, inserting fields, using Visio formulas, and using Visio shape references in formulas.

  1. To establish a relationship between these two shapes, you need to name them. Naming shapes is easy enough. Select the shape and then choose Special… from the Format menu. (How the name of a shape relates to Format Special is beyond me, but the nuances of Visio are for another discussion.) The Special dialog box includes a field for a name. Type a name for each of your shapes. Name the footnote shape “footnote” and the annotation shape “annotation.” This way, there can be no confusion.
  2. Now, select the footnote shape and choose Field… from the Insert menu. The Field Chooser appears.
  3. Click Custom Formula in the left-hand column. The Custom Formula field at the bottom of the dialog will become active. The field already has an equal sign, which lets Visio know that a formula is coming up.
  4. AFTER the equal sign, enter the following formula:

    SHAPETEXT(annotation!TheText)

  5. Click OK.

The SHAPETEXT function returns the text of the referenced shape. In the function’s arguments, we have specified the name of the shape (“annotation”) and the reference to the shape’s text property (“!TheText”). This seems redundant, but the SHAPETEXT function requires it.

You’re done with Step 2. Now you can type a number into the annotation shape and it will appear in the footnote shape as well. For example, select the annotation shape and type “4”. The “4” will appear in both shapes. Be sure you type the number into the annotation shape (the one with the pointer). If you type it into the footnote shape, you will lose the Custom Formula reference and will have to re-enter it.
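The essence of Step 2 is that the footnote stores no text of its own: its field is a formula that re-reads the annotation’s text whenever it is evaluated. A hypothetical Python sketch of that relationship (the function names stand in for Visio’s machinery and are not real Visio APIs):

```python
# Sketch of the annotation/footnote relationship: the footnote's
# "text" is a formula evaluated on demand, the way the Custom Formula
# field re-evaluates SHAPETEXT(annotation!TheText).

shapes = {"annotation": {"text": "4"}}

def shapetext(name):
    """Stand-in for Visio's SHAPETEXT function: return a shape's current text."""
    return shapes[name]["text"]

def footnote_text():
    """The footnote's field: computed from the annotation, never stored."""
    return shapetext("annotation")

shapes["annotation"]["text"] = "7"  # renumber the annotation...
# ...and the footnote follows automatically, with no copy to update.
```

This is also why typing directly into the footnote shape is destructive: it replaces the formula with a literal value, severing the link.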

What is a ShapeSheet?
Remember in the first Matrix movie, at the very end when Neo started knowing he was The One? He looked around and saw streams of numbers, the building blocks of the Matrix – at once a terrifying and awe-inspiring view of the world. A ShapeSheet is to a Visio drawing as Neo’s view is to the Matrix: the numbers behind the façade. (Coincidentally, I’m terrified and awestruck by Visio.)

Any given shape in Visio is described by a collection of formulas. These formulas are captured on the ShapeSheet. When you adjust a shape – change its height or format the text, for example – you are actually changing the formulas behind the scenes. In some cases, it makes more sense to adjust the formulas themselves, and tapping the full extent of Visio’s power means becoming familiar with ShapeSheets.

Step 3: Changing the behavior of the annotation

The shapes as they stand right now are pretty useful, and will make the internal bookkeeping of wireframe annotations a little easier. This last step will make the annotation shape more elegant. This step introduces several techniques related to ShapeSheets, the backbone of any Visio drawing.

There are three adjustments you need to make to the annotation shape: the text block, the shape rotation, and the orientation of the text.

To adjust the text block, select the shape and then choose the Text Block Tool. (You may have to click the arrow next to the Text tool to find it.) The Text Block Tool allows you to change where text appears relative to the shape. By default, the text block occupies the entire rectangle of the shape.

With the shape selected with the Text Block Tool, change the text block to occupy only the circle by dragging the right-hand handle of the rectangle to form a square over the circle. Now when you type text in the annotation shape, it will appear centered on the circle.

To adjust the rotation, select the shape and then choose the Rotation Tool. Notice that the center point is not centered on the circle. This is because the default formula for the rotation point is the geometric center of the entire shape. Move the pointer over the center rotation point. The pointer will change to a small circle.

Click and drag the rotation point to the center of the circle.

Now test the rotation by grabbing one of the rotation handles (the green circles at the corners). The shape will rotate around the center of the circle.

Notice that the text rotates with the shape. By default, the rotation of the text block matches the rotation of the shape. To correct the orientation of the text, we need to adjust the angle of the text block, forcing it to stay at an absolute angle of zero regardless of the shape’s rotation.

To adjust the text orientation, you need to make a change in the ShapeSheet. First, select the annotation shape and then choose Show ShapeSheet from the Window menu. The screen will split, with one part showing the original drawing and the other part displaying the ShapeSheet of the annotation shape. A ShapeSheet is made up of sections. Each section addresses a different aspect of the shape and appears as a table made up of cells.

The cell that controls the rotation of the text is in the Text Transform section. Scroll through the ShapeSheet until you find this section. If you cannot find the section, you may need to add it to the ShapeSheet: right-click in the ShapeSheet and select Insert Section… from the context menu. Be sure to right-click in the dark gray area. Put a check next to Text Transform and click OK. (If Text Transform is grayed out, that means it’s already in the ShapeSheet and you just need to have your eyes checked. This happens to me frequently. Very frequently.)

In the Text Transform section is a cell called TxtAngle. At this point it is set to 0 degrees. This may seem right, but that number is not an absolute measurement; it is measured relative to the angle of the overall shape. Therefore, the appropriate formula for this cell is:

    =-Angle

(Don’t forget the minus sign!) Angle is the name of another cell, the one that defines the angle of the overall shape. Because the TxtAngle cell acts relative to the angle of the shape, setting it to the negative of that angle keeps the text at an absolute angle of zero, no matter how the shape is rotated.
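The arithmetic behind the compensation is worth spelling out: the text’s absolute angle is the shape’s angle plus the text block’s relative angle, so a relative angle equal to the negative of the shape’s angle always sums to zero. A tiny sketch:

```python
# Sketch of the TxtAngle compensation. Relative angles compose by
# addition, so TxtAngle = -Angle pins the text upright for any rotation.

def absolute_text_angle(shape_angle):
    txt_angle = -shape_angle           # the ShapeSheet formula =-Angle
    return shape_angle + txt_angle     # shape angle + relative text angle

# Rotate the shape to 45, 180, anywhere: the text's absolute angle stays 0.
```

Because the formula references the Angle cell rather than a fixed number, Visio re-evaluates it on every rotation, so the correction is automatic.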

You can now close the ShapeSheet and rotate the annotation till the cows come home. The text will remain upright and readable. The cows, I’m afraid, may not.


Exercise for the reader
You may want to lock the text of the footnote shape so you don’t accidentally overwrite the field that automatically matches the annotation text. Although Visio has a dialog box for protecting different aspects of a shape (under Format > Protection…), the shape text is not one of those aspects. There is a way to do this using ShapeSheets.

This little exercise gives you a handy tool that allows you to place annotations without losing track of your numbering scheme. The tool also allows you to rotate the annotation pointer without having to adjust the text every time you turn it. Having built this tool, you now have some experience with Visio shape operations, formulas, and ShapeSheets.

Dan Brown is not the Dan Brown who wrote The Da Vinci Code, but wishes he were.