Learning, Doing, Selling: 2006 IA Summit Wrapup: Sunday

“A major point of interest in both these panels was the near total absence of discussion relating to Visio or OmniGraffle.”

Wireframes: A comparison of purposes, process, and products
Anders Ramsay, Dave Heller, Jeff Lash, Laurie Gray, Todd Warfel
Conference description

and

Wireframing Challenges in Modern Web Development
Nathan Curtis, Bill Scott, Livia Labate, Thomas Vander Wal, Todd Warfel
Conference description

Reviewed by: Anders Ramsay

Wireframes were the focus of two back-to-back panels at the Summit.

The first panel provided an overview of different approaches to producing wireframes, in the form of five short presentations followed by a brief Q&A. Jeff Lash, who moderated the first panel, led off by clarifying that it was not a debate about the best wireframing methods, but rather an opportunity to learn about new techniques.

Wireframe panel

Photo credit: Javier Velasco

Todd Warfel gave the first presentation, describing the use of paper prototypes to test different designs, in which users often provide feedback by making notes directly on the printouts. Todd then presented InDesign and Illustrator as a powerful combination, particularly for designers who already know these tools and do not know HTML. While the environment may require more initial setup, Todd said it allows for rapid maintenance and extensive reuse of previously specified elements.

Dave Heller continued with a discussion of using Flash as a wireframing platform for Rich Internet Applications. Describing time as “a primary piece of your canvas” in interaction design, Dave compared passive models, such as storyboarding, with more dynamic environments, such as Norpath, Visual Studio, and iRise. He then presented Flash as the strongest best-of-both-worlds alternative for designing rich interaction, thanks to its powerful yet low-cost combination of a drawing environment and support for defining complex behaviors. A key drawback to Flash, Dave clarified, is that it doesn’t print well and is therefore not well suited for documenting a design. Contrasting with Dave’s rich-media discussion, Anders Ramsay presented XHTML wireframes, an approach focused on structure and semantic markup.

Using a visual comparison, Anders showed how a module appearing on a drawing-based wireframe, such as the header area, would correspond to a <div> tag with the id "header" in the corresponding XHTML. He intentionally showed the code view of the XHTML to emphasize the distinction between this approach and HTML wireframes, which often use the browser page more as a whiteboard. Anders clarified that the model requires earlier involvement by visual designers, who work on look and feel in parallel with the IA, either directly in the CSS or using whatever tool is convenient for them. Anders listed annotations as a weakness in the XHTML model, but also noted that the need for annotations is reduced, since XHTML is inherently self-describing.
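The markup itself wasn't shown in detail from the stage, so the fragment below is only a hypothetical sketch of what such an XHTML wireframe module might look like; the element names and ids are invented for illustration.

```html
<!-- Hypothetical XHTML wireframe fragment: structure and semantics only.
     Look and feel would live in a separate CSS file owned by the visual
     designer, who can restyle these regions without touching the markup. -->
<div id="header">
  <h1>Site Name</h1>
  <ul id="global-nav">
    <li><a href="#">Home</a></li>
    <li><a href="#">Products</a></li>
    <li><a href="#">Contact</a></li>
  </ul>
</div>
```

Because each region's tag names what it is rather than how it looks, much of what a separate annotation would normally explain is carried by the markup itself.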

Jeff Lash followed with a discussion of UI specifications, describing a model based on Word documents containing screen shots and annotations. A key advantage of this model, Jeff stated, is that it can be used regardless of the technology used to produce the prototype, and that it can serve as a comprehensive record of the user interface. Downsides of the tool included that production can be time-consuming and that management of multiple iterations can be difficult.

Laurie Gray concluded the presentation portion with an overview of major prototyping tools, describing their purpose as “needing to explain concepts quickly to a variety of people.” Laurie compared open-source alternatives to more traditional tools, such as using The Gimp instead of Photoshop or Nvu instead of Dreamweaver, and then described how her organization had settled on the Axure prototyping tool, with its support for generating both functional prototypes and Word-based specifications. The major issues raised during the Q&A that followed were reuse and the application of agile development concepts to user interface design. Both Dave Heller and Anders Ramsay clarified that the models presented do not exist in a vacuum; rather, they are created in the context of sitemaps, conceptual diagrams, and other artifacts.

“The audience raised concerns that patterns might stifle creativity, but both Todd and Bill made the case for how patterns can specify behaviors without dictating presentation.”

Because the presentations in the first panel ran long, leaving little time for questions from the audience, the Q&A format of the second panel complemented it well. Moderated by Thomas Vander Wal, the panelists responded to questions both from Thomas and from members of the audience. A major theme revolved around documenting rich interaction. In line with this, Bill Scott presented an “Interesting Moments” grid, which serves to document micro-states: fine-grained interactions that often leverage multiple interface elements working in concert. He used drag and drop, as it appears in the Yahoo! pattern library, as an example. Bill also discussed new models for prototyping rich interaction, such as creating animation using the tweening feature in Photoshop CS2.

Continuing the theme of documenting patterns, Todd Warfel presented samples from the rBuilder tool used at Message First, discussing how patterns can be integrated into wireframes, and showed how business users are able to efficiently make design changes by switching from one pattern to another. The audience raised concerns that patterns might stifle creativity, but both Todd and Bill made the case for how patterns can specify behaviors without dictating presentation.

Nathan Curtis discussed architecting one’s wireframing environment for scalability and reuse, for example by specifying elements that appear on multiple templates in a single place and cross-referencing them elsewhere. Nathan also stressed the importance of maintaining version histories, and recommended publishing and versioning specification documents separately from the original illustrations incorporated into them.

A major point of interest in both these panels was the near total absence of discussion relating to Visio or OmniGraffle, which remain the more commonly used tools. This is likely reflective of a trend in which information architects and those in related fields are responding to increasingly complex web sites with new and more advanced models for specifying them.

Ambient Findability
Peter Morville
Conference description

Reviewed by: Jorge Arango

Wireframe panel

Photo credit: Erin Malone

Peter’s talk was based on (and served as an introduction to) his book Ambient Findability, an important and influential work that I (embarrassingly) admit to not having read yet. Despite his soft-spoken demeanor, Peter comes across as an engaging, witty, and highly professional presenter. Some of the ideas in his talk are a call to action for people who care about the design of information spaces in the 21st century: the increasing blurring of the lines between information environments and the “real” world, the expanding scope of search in our everyday lives, “smart” networked objects, and the question of how information architects can help people make sense of all of this.

Clearly we need to be giving serious thought to this stuff, as it will have an important—perhaps a defining—impact on what it means to live a productive human life in the 21st century. Ambient Findability is now in my reading queue.

“How do you convince content contributors and others with different priorities that metadata should be used and should be accurate?”

Metadata Games: Cutting the Metacrap
Karen Loasby
Conference description

Reviewed by: Hallie Willfert

“People are lazy… Short of breaking fingers or sending out squads of vengeful info-ninjas to add metadata to the average user’s files, we’re never gonna get there” – Cory Doctorow

The journalists at the BBC are not lazy, says Karen Loasby, they just have different priorities. How do you convince content contributors and others with different priorities that metadata should be used and should be accurate?

Karen shared four suggestions:

  1. Convince them that metadata is for them. Let writers know they will benefit from applying good metadata to their stories, because with good metadata their stories will appear more appropriately in search results.
  2. Convince them that metadata is also for the audience. Let them know that the readers of the site will find more relevant articles if the articles are tagged correctly.
  3. “Confound them.” A meeting to talk about the importance of metadata sounds really boring. Make sure it isn’t.
  4. Bribe them. Karen says doughnuts work really well.

To prove the possibility of point number 3, Karen and her team from the BBC took us through two different games to play that conveyed the importance of metadata in a fun and creative way.

Wireframe panel

Photo credit: Javier Velasco

Game one: metasnap
This game involves splitting the group into two teams. Team one plays the role of the author and team two plays the role of the searcher (team two can be made up of one or more people). Each team receives a matching deck of cards, each with a picture and space for search phrases. The authors tag each card/picture in whatever way seems appropriate. Once the authors have finished tagging their cards, the searcher picks a card from their own pile and “searches” out loud for the picture on the card. The searcher’s goal is to get just one image as a result. The authors then tell them whether they have made an exact match to one of the terms on the authors’ cards. If yes, they win; if no, they lose.

For instance, if the searcher, wanting information on Queen Elizabeth, searches just for “Queen”, then many results might appear: one for Queen à la Freddie Mercury, Queen Elizabeth II, Mary Queen of Scots….

What we learned from this game is that language is a messy affair. Free-text searches put the pressure on the searcher. Tagging content has to take into consideration homonyms, variations in language, and granularity. Considering all this, completely automating metadata would be difficult.

Game number two: metascoop
Metascoop is all about content reuse. Each team is given a blank storyboard, and some extra assets (photos, sidebars, related advertisements, related content lists). Using the assets available, the team is instructed to write a story that is supported by those assets.

Proving that a picture is worth at least 1000 words, each of the eight teams at the Summit created stories that explored different aspects of the relationship between mutton, Sir Paul McCartney, the Royal Family, raising sheep, formal events, Julian Lennon, organic cuisine, and weddings.

And what lessons could we learn from this game? Reusing content can be a creative activity (though I’m sure a little fact-checking goes on at the BBC) and automation that is driven by metadata could save time.

Emotion, Arousal, Attention and Flow: Chaining Emotional States to Improve Human-Computer Interaction
Trevor Van Gorp
Conference description

Reviewed by: Jorge Arango

Trevor’s presentation addressed an issue that I haven’t heard discussed much in our midst: the use of emotions in design. And yes, by emotions he means joy, disgust, love, longing, etc. He argues that these emotions comprise the “experience” bit of the phrase “user experience”, and presents a framework we can use to employ them in our design processes.

One of the first challenges posed by this idea is how to define emotions. Trevor proposes an “Emotional State” diagram, which places emotions on two axes: one stretching from anxiety to boredom, and the other from unpleasant to pleasant. Different emotional states fall at some point in this diagram, some quite extreme, others less so. In the middle are emotions that fall in what he defines as a “flow area”, where people are most effective.

Trevor presented examples of designs that elicit particular emotional reactions in people, contrasting products such as a huge black Dodge truck with a yellow VW Beetle. Clearly these items elicit an emotional reaction, but Trevor argues that effective design requires more than this: it requires a designed approach to state chaining, or the smooth transition between one emotional state and another. He showed an example of how one emotional state (frustration) can be transformed through planned stages to a more useful state (curiosity, motivation to learn).

The presentation concluded with an example of a mobile application UI that iterated through different designs attempting to elicit specific emotions from users. Bottom line: this is very interesting work that holds a lot of promise for further exploration.

Communicating Concepts through Comics
Kevin Cheng and Jane Jao
Conference description

Reviewed by: Javier Velasco

Figures

Photo credit: Liz Danzico

Kevin and Jane unveiled the power of comics as a communication tool for experience design. Comics are very good at helping readers focus either on a particular area of the interface or on the off-screen emotional reaction of the user. They explained how they did this with their clients, how it allowed the clients to feel freer to make comments, and how it helped them understand the design as an experience.

They then went on to explain to us how we could all make these kinds of comics to develop and document our designs, even if we forgot how to draw decades ago. It was a strong, clean presentation, and very useful to take back home.

“Dan Brown’s thoughts about a different metaphor for content management systems (CMS) are revolutionary. At a conference as full of innovative ideas as the IA Summit ’06, that’s really saying something.”

New Approaches to Managing Content
Dan Brown
Conference description

Reviewed by: Fred Beecher

Dan Brown’s thoughts about a different metaphor for content management systems (CMS) are revolutionary. At a conference as full of innovative ideas as the IA Summit ’06, that’s really saying something.

Dan asked the audience about our experience with CMSs, which bore out his next statement: “CMSs suck!” The reason for this, Dan said, is twofold. First, the underlying metaphor that CMSs are based on is wrong. Second, labor is not distributed appropriately between the humans and computers involved in content management. So to fix the problem, we need to replace the metaphor and redistribute the labor.

Dan then showed us how content management is currently based on the metaphor of business as a factory: there are Products, which follow a Process, which is guided by People with particular responsibilities. The problem with this is that it forces us to think linearly, when business may not be linear at all. Information as a product is open, not closed and discrete like a product in a factory.

A more appropriate metaphor, Dan said, is an organic one. “Business is a living entity,” he said. We speak of it in terms of growing, dying, and nourishing. We can think of content as nutrients, people as catalysts, and workflow as an organic process. Despite display issues, Dan clearly described a graphic that illustrates his point. A “seed” of information is planted in the system, and a ring appears around the seed when an action is performed on that content (as the rings of a tree indicate its growth and change). We can access each “ring” to get the details of the nature of the action and the person who performed it.

Discussing the division-of-labor aspect of the CMS problem, Dan said that too much of the decision-making power has been given to the computer, when humans could handle that kind of responsibility much better. We need to think of computers and content management systems as decision-making aids, not the arbiters of the decisions themselves. He gave the example of Abraham Lincoln composing the Gettysburg Address: Abe types the speech into a single text window and chooses the contexts this content will be used in. Selecting any context allows Abe to tag any section of his content with contextually appropriate tags, enabling the content to be handled differently in different contexts.

New Approaches to Managing Content, continued

Reviewed by: Donna Maurer

Dan’s session was entitled “New approaches to managing content.” Just another content management talk? Far from it.

The underlying idea behind this session was to use some of George Lakoff’s principles to examine content management in a new way. He explained that the predominant underlying metaphor of content management is that of “business as a factory.” The use of this metaphor means that we (and content management systems) approach content creation and publishing in a particular way–that of a factory, where individuals are responsible for creating content, others for approving content and yet others for publishing it.

Dan suggested, as a way to reframe, that we could use the metaphor of business as a living entity. Using this metaphor, more than one person can be involved in content creation (without prescriptive rules), and the content can grow organically. The organisation can enforce the rules instead of the computer. Templates can become living scenarios.

The intent was not to change the metaphor of content management now, but to show that it can be reframed. A great suggestion from an audience member was to use the metaphor of a family, which could also produce interesting approaches.

This was a great session for examining a different approach to thinking about a problem.

Stone Age Information Architecture (Or, You Say Cat, I Say Cat)
Alex Wright
Conference description

Reviewed by: Chris Baum

At one time, our ancestors lived in isolated, small bands of hunter-gatherers. During the Ice Age, the lack of food drove these groups together, creating an explosion of symbolic systems to ease communication and increase chances of survival. These symbol systems became the method by which they formed increasingly complex social relationships, eventually becoming societies and nations.

In his presentation, Stone Age Information Architecture, Alex Wright wants us to be aware of how the symbolic languages formed during this time are still embedded in our thinking patterns and, as an extension, affect the practice of information architecture.

For example, if you see a picture of a feline, a quarter, or a laptop, your brain automatically creates the following hierarchical classifications:

animal > mammal > cat > tabby cat > brown mackerel tabby domestic longhair
money > coin > quarters > 1932 Quarter > 1932 D-PCGS
computer > personal computer > laptop > Toshiba laptop > Toshiba Portege R100

-from Stone Age Information Architecture, Alex Wright, IA Summit, March 26, 2006

All people will have at least the first three levels of these classifications. During his research, Wright has found that these patterns seem universal. They are not something that’s been written down or studied; the classifications are implicit in the language.

He posits that these “folk taxonomies” (not to be confused with folksonomies), or shared instinctive classifications, are the basis of how our minds structure information so that it makes sense to us instantly.

Wright’s examination highlights that while some have this utopian image of tag clouds forming magically into grassroots classifications, we need to be aware of the underlying constructs that drive our social impulses. The rise of the social network is really a resurgence of the symbolic networks–arising not from the patterns and knowledge of written history, but rather in the patterns of the oral and tribal social traditions.

We’re already seeing glimmers of these ideas in trust systems (ratings, reputation points, etc.) as we try to negotiate social situations with people whom we must trust, but whom we do not know well or at all.

Wright is doing the community a great service by exploring these ideas. Armed with this different angle on human cognition, analyzing user research for these patterns can help us create experiences reflective of the folk taxonomies, rather than in spite of them.

Object-Oriented Design
Ann Rockley
Conference description

Reviewed by: David Sturtz

Ann Rockley’s presentation took the concept of object-oriented design and applied it to content with an emphasis on increasing reuse of information. She suggested that this approach is particularly applicable to those organizations using XML-based systems, delivering content through multiple channels, or wishing to cut the time required to produce and deliver content. Employing object-oriented design strategies can also profoundly reduce translation costs.

The information architect’s role in the move towards increased content reuse begins with determining the structure of content through content modeling. A content audit may be then used to analyze the existing material and pinpoint those places where reuse can happen. Ann suggested creating a reuse map, charting out the various applications for each piece of content.

As a unified content framework is developed, she highlighted the importance of determining the correct level of granularity and of defining metadata relating specifically to reuse and promoting internal findability. Standardized formats, including DITA, DocBook, and SCORM, may provide a head start in some situations, but attention should be paid to the amount and type of customization necessary.

Ann closed with a number of concepts that suggest a variety of concerns in planning for content reuse. Opportunistic reuse relies on a conscious effort to find and reuse content objects. At the other end of the spectrum, systematic reuse draws on personalization or recommendation technology to offer up appropriate content for use. Locked and derivative reuse each allow differing levels of control over whether copies of items may be made and how they may be used. Nested reuse involves creating larger content objects and then selectively using portions according to their context. Finally, reuse governance reminds designers to consider issues related to owners, editors, notifications, and approvals.

Mind-shift: is IA equipped for Web 2.0?
Michael Arrington, Dan Brown, Kevin Lynch, Brandon Schauer, Gene Smith
Conference description

Reviewed by: Fred Beecher

The purpose of this panel was to discuss the potential impact of Web 2.0 on IAs, the changes that IAs may have to make to accommodate this new paradigm, and the mindset necessary to succeed within it. The members all represented different voices. Kevin was the voice of the developer. Dan was the voice of the IA. Gene was the voice of the user experience generalist. Brandon stood in for Michael Arrington, who had to cancel, to represent the voice of the venture capitalist.

Dan felt that Web 2.0 would have negligible impact on IAs. After all, we will still be trying to meet user needs, dealing with unpredictable amounts and types of information, and attempting to make user participation meaningful through contextual structure. Kevin felt that IAs would no longer be constrained to the idea of the page: content could be an interaction or a very small, discrete chunk of information. Gene also felt that this would be the case, adding the observation that we will now have to account for aggregate data displays. There was some discussion about how it’s relatively common now for people to consume content without ever visiting the originating site.

Flickr user model

Photo credit: Liz Danzico

All the panelists agreed that IAs would still be using the same skills. However, each of them felt that we would need to add new skills as well. Kevin felt that lack of trust will become an issue, and that we will need to be cognizant of technological content consumers, such as recommendation engines, that help people who have 200 RSS feeds figure out what to pay attention to. Some helpful skills he identified were database fluency and helping developers understand users. Dan felt that we would probably need to hone our skills around findability and usefulness. He also echoed Kevin’s observation that we will need to better understand how content is being used. Gene spent some time emphasizing the importance of content modeling, something the audience indicated they felt was crucial.

Addressing the question of mindset, Dan felt that, again, not much change would be required. We will have to figure out how to show our usefulness, however, in an environment hostile to IA (in reference to the now-infamous 37signals “no IAs” comment). Kevin felt that, in addition to thinking of places and things, we would also need to think of streams and flows. He also reiterated his point about human beings no longer being the sole meaningful consumers of content. Gene echoed this sentiment.

This panel and the discussion it raised were very eye-opening. Of the two Web 2.0 panels I attended, this one was definitely the more valuable.

IA for Efficient Use and Reuse of Information
Thomas Vander Wal
Conference description

Reviewed by: Donna Maurer

Thomas started this presentation with a reminder that people live within the real world, not on the web, and that most of their information use is in the real world. He reminded us that information is not only found and used, but re-used, and that much of the re-use takes place in the real world. In order to design for re-use we need to analyze the type of information we have, think about what people do beyond the first use, understand the context where information is used and what actions follow use.

Thomas discussed a range of standards (from open-source to proprietary) that we can use to share information.

This was a good, forward-looking presentation and I intend to explore some of the ideas and offer better information use for next year’s IA Summit.

“Theories created must fit the data, data must not be made to fit the theories.”

In Search of Common Grounds: Introducing Grounded Theory to IA
Lada Gorlenko
Conference description

Reviewed by: Donna Maurer

I was excited to see this on the program as I have been using a variant on grounded theory to analyze user research data.

Lada explained how the results of grounded theory (which comes from social science research) are rooted in the behaviours, words and actions of those in the study. Theories created must fit the data, data must not be made to fit the theories.

She provided a good overview of data collection and analysis methods. The presentation slides are very detailed and will provide a good overview for those who were not able to attend the session.

Clues to the Future: What the users of tomorrow are teaching us today (Or, In Millsberry We Trust)
Andrew Hinton
Conference description

Reviewed by: Chris Baum

Presentations like Andrew Hinton’s Clues to the Future make you hope for a day when all questions are so interesting. We try to argue for “innovation” in our day-to-day work; even Business Week sports a section solely about innovation. Still, we struggle to get the “innovation” past simplifying the content, sites, and functionality over-produced during the Boom.

Hinton made a very strong case that the ground-shaking innovation is happening right now, driven by teens and their technological environment. He encouraged us to look at gaming environments, especially MMOGs (Massively Multiplayer Online Games), for direction in how we design information spaces and use technology for social interaction.

Even after considering them seriously, I could not easily poke holes in these ideas. He presented research, backed it up with numbers (both populations and money), and examined how the interfaces innovate to let users do what they need to do.

Throughout the talk, Hinton projected humility even as he reinforced his authority on these subjects. It was one of the most interesting and well-thought-out presentations that I saw at the Summit, and his personable demeanor further reinforced his argument: he did not seem eager to convince us of his position so much as to unpeel some very intriguing ideas.

Download the presentation and leave him a note. It will be well worth your while.

Bonus Points: Hinton helped the audience “experience” his talk. He mentioned at the start that he would be providing all of the materials along with his speaker’s notes so that we could engage in the presentation rather than trying to capture it.

“There are ways to use existing, business-friendly data to make your personas into a tool that can be adopted by people outside of the UX team.”

Bringing More Science to Persona Creation
Steve Mulder, Ziv Yaar
Conference description

Reviewed by: Hallie Willfert

Steve Mulder has a confession to make: at one time he wasn’t using personas. Why not? Well, he felt that 1) he didn’t have a way to put them explicitly to use, and 2) he was “making stuff up.” His session took us through ways to bolster the qualitative data that often makes up the ‘meat’ of a persona by integrating quantitative data that will satisfy the most business-y of managers and marketers.

A typical process for building personas involves scoping out the goals and attitudes of the intended audiences and adding behavioral data pulled from user interviews and field studies. Steve’s process adds more concrete data gathered from market segmentation, log files, CRM data, and user surveys. When the hard data is added, you are able to test the assumptions your soft data made: do the personas hold up? Are there tweaks needed to make the personas more accurate?

I too have a confession: I am not a statistician, and I will make a mess of it if I try to regurgitate some of what Steve talked about. What I can say is that Steve took us through some very impressive-looking analysis, and my notes tell me to “find clusters in the data that can be developed into personas” and to “force segmentation by an attribute.” However, I can’t tell you how to do that.
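For readers curious what "finding clusters in the data" might look like mechanically, here is a minimal, self-contained sketch: plain k-means over made-up survey scores. The attributes, the scores, and the choice of two clusters are all invented for illustration; this is not Steve's actual method or data.

```python
# Hypothetical sketch: grouping survey respondents into persona candidates.
# Attributes and scores are invented; real work would use CRM/log/survey data.
import random

# Each respondent: (price_sensitivity, tech_comfort), scored 1-10 in a survey.
respondents = [
    (2, 9), (1, 8), (3, 9), (2, 7),   # tech-comfortable, not price-driven
    (9, 2), (8, 3), (9, 1), (7, 2),   # price-driven, less tech-comfortable
]

def kmeans(points, k, iterations=20, seed=0):
    """Plain k-means: assign each point to its nearest center, then re-average."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if the cluster is empty).
        centers = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

centers, clusters = kmeans(respondents, k=2)
for center, members in zip(centers, clusters):
    print(f"persona candidate around {center}: {len(members)} respondents")
```

Each resulting cluster center is a candidate persona skeleton (e.g. "price-driven, lower tech comfort") that would then be fleshed out with the qualitative goals-and-attitudes data.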

Nevertheless, my take-home point from this talk was that there are ways to use existing, business-friendly data to make your personas into a tool that can be adopted by people outside of the UX team. The marketing department and other business stakeholders will be much more receptive to using personas as a tool to guide the business if you can show that the personas fit the data that has been relied upon for years.

The Impact of RIA on Design Processes
Matthew Moroz, Jeanine Harriman, Jenica Rangos, Christopher Follett
Conference description

Reviewed by: Tom Braman

Mentoring

Photo credit: Javier Velasco

I was feeling smug upon entering “The Impact of RIAs on Design Processes.” Other sessions confirmed I’d been doing information architecture right. User research? Check. Wireframes? Check. Etc? Check. Then comes Garrick Schmitt, west coast user experience lead for Avenue A | Razorfish, knocking me out of my comfort zone with his talk on Rich Internet Applications.

“RIAs challenge everything we’ve done,” Schmitt announced. In 12 to 24 months, he said, tools such as wireframes, processes such as page-by-page user flows, even roles such as information architect will cease to exist. “We believe RIAs are the future of the internet experience.”

Yikes. What’s a soon-to-be-extinct IA gonna do?

Not to worry, said Schmitt. After walking attendees through several company RIAs (including Disney Weddings, where the newly engaged apparently can reduce a nine-month offline nightmare to a nine-minute online snap), Schmitt said that the average IA will evolve into a new role, either interaction designer on steroids, interactive data strategist (determining what data goes where), or both.

But we’ll have to play taps for our tools: sitemaps really have no place when there’s only one “web page,” to use another apparently soon-to-die metaphor. Wireframes and traditional design specs, too. In their place will be hierarchical data inventories, occasional HTML mockups, and (here’s the critical one) crude-to-high-fidelity prototypes that user-experience teams rely upon as living, morphing design specs throughout the design phase.

“Design data, not pages,” Schmitt told the audience. Dang. All this, after I’d mastered the tricks of information architecture in a page-by-page world. Alas, we evolve.

