Icon Analysis

“She gave up the search for the mouse settings icon in seconds and opted to just use the ridiculously over-sensitive mouse.”

An icon search task that lasts longer than anticipated can result in user annoyance or even premature abandonment. I once changed the mouse settings on my laptop to be overly sensitive, and had a colleague use it to show me a data analysis technique she had been working on. She immediately noticed and asked permission to change the settings. At my resolution of 1400×1050, the icons in the Windows control panel folder render at 16×16 pixels. In addition, I had the list pre-sorted by comment rather than application name. Not used to these settings or dealing with mouse preferences, she gave up the search for the mouse settings icon in seconds and opted to just use the ridiculously over-sensitive mouse while demonstrating her analysis technique.

You may think she was justified if she was only using my system for a short time. If so, you’d be surprised to know this was no small demo! It went on for almost half an hour. She surfed the web to retrieve various files, used several applications, accessed her FTP space to download some of her own work, and showed the technique twice with different sets of user data. A scientist and user throughout, she sprinkled obscenities about the mouse amongst her thoughtful discussion of data analysis. I was astonished, and now far too afraid to tell her I had fooled with the mouse on purpose.

Two weeks later, I was discussing the analysis technique with another coworker and he said, “By the way, I heard your mouse is all messed up. I can fix that if you want.” Bad human-computer interaction (HCI) experiences travel fast! The issue could have been avoided if only the mouse settings icon had been more identifiable.

Inability to discriminate one icon from another, or to find an icon in a set, can be far more disastrous than my anecdote above. Systems used by first responders in hazardous-materials incidents (see MARPLOT, for example) rely on icon design to signify entity classification (e.g., a small icon of a schoolhouse) and the level of critical danger to an entity (e.g., a school icon is painted red on a map). Immediately recognizing danger to a school amongst lumber yards, garbage dumps, and plant nurseries is imperative; any time slip in the search-and-discrimination task could delay notification and evacuation of hundreds of children. How, then, can we diagnose problems with icons that fail in this regard?

Search and discrimination of icons

The human visual system is a complex mechanism that encodes information using many channels in two major pathways. The magnocellular pathway (M pathway, or “big neurons”) contains channels sensitive to gross shape, luminance, and motion. The parvocellular pathway (P pathway—“small neurons”) contains channels sensitive to color and detailed shape (Nicholls et al., 1992). In order to discriminate between two different visual signals—icons, in our case—the signals encoded in the available channels must differ beyond some threshold. A common distinguishing technique is color. For example, try to find the red network settings icon on the right in figure 1.

Fig 1

Figure 1: Original icon list shown in the Windows control panel (left) and the same list with the network icon highlighted red for a feature-based search (right).

Searching by some distinguishing feature like color is called (not surprisingly) a feature-based search. Feature-based searches are limited in a few ways: their effectiveness drops if we apply a unique color to every icon in the set, and distinguishing by color exploits purposeful differences in only one of the two visual pathways (the P pathway). Additionally, icons tend to be small in a UI, restricting differences in shape to “detailed shape” information—also encoded in the P pathway. Ideally, we would like to design icons that purposefully differ along channels in both the M and P pathways.

Fig 2

Figure 2: Original Network Connections Icon with constituent M and P pathway representations.

An elegant technique for doing this leverages the core difference between the pathways. Large neurons are less densely packed in the retina than small ones, and this difference in spatial density leads to fundamentally different encodings of the visual image. Figure 2 shows an icon that has been filtered to simulate the way it would be encoded in the M and P pathways.

Images filtered in this manner can be judged for distinctiveness along both pathway dimensions, assisting in economical discrimination and search. Distinctiveness in P pathway representations is easy enough to judge without filtering techniques; designers weigh color and detailed-shape decisions directly during the design process. The only tool a designer has for judging M pathway distinctiveness is the “squint test” (i.e., squint your eyes to obstruct sharp focus and rely mostly on dark and light values). However, the squint test is not very practical for HCI and usability assessments; spatial frequency filtering is a better tool for simulating M pathway representations of icon images for evaluation purposes.
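
To make this concrete, here is a minimal sketch of such a filter in Python with NumPy. This is not the article’s R tooling: the `lowpass` name merely echoes the rimage function, and the implementation is my own assumption about how a hard frequency cutoff of this kind works.

```python
import numpy as np

def lowpass(img, radius):
    """Zero out all spatial frequencies farther than `radius` from the
    center of the (shifted) spectrum, then transform back to image space.
    A rough stand-in for the coarse M-pathway encoding of the image."""
    f = np.fft.fftshift(np.fft.fft2(img))        # move DC term to the center
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows / 2, x - cols / 2)  # distance from spectrum center
    f[dist > radius] = 0                         # cut the high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# Toy 48x48 "icon": a bright square on a dark field
icon = np.zeros((48, 48))
icon[12:36, 12:36] = 1.0

m_view = lowpass(icon, 5)  # coarse, M-pathway-like rendition
```

The hard edges of the square survive in `icon` (the detailed P-pathway view) but are smeared into a soft blob in `m_view`.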

Spatial frequency filtering

The visual system maintains a set of scales that we associate with distance. If we see an object thought to have great size—say, a building—but that takes up little space on the retina (i.e., it looks very small), we immediately “perceive” it as being far away rather than perceiving it as a miniature building. The perception of scale is actually based on the encoding of visual spatial frequency (Schyns & Oliva, 1994).

This is interesting because you can encode images in specific spatial frequencies (Schyns & Oliva, 1999). View figure 3 from a foot away. Now stand back and view it from farther—say, 10 feet. Up close it is difficult to make out the image of Bill Frist in the center image. From farther away the image of Hillary Clinton disappears altogether in the center image. However, at both distances the outer images of Frist and Clinton are easily discernible. This phenomenon is based on our inability to perceive high-frequency information from greater distances; if the image has no distinctive low-frequency component, it simply disappears when viewed from a distance.

Fig 3

Figure 3: Hillary Clinton (left), frequency composite of Hillary and Bill Frist (center), and original Bill Frist image (right).
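
Composites like the one in figure 3 can be sketched by splicing spectra: keep the low frequencies of one image and take the high frequencies from the other. A rough NumPy illustration follows; the random arrays merely stand in for the two photographs, and the cutoff radius is an arbitrary choice, not a value from the article.

```python
import numpy as np

def radial_mask(shape, radius):
    """Boolean mask selecting frequencies within `radius` of the spectrum center."""
    rows, cols = shape
    y, x = np.ogrid[:rows, :cols]
    return np.hypot(y - rows / 2, x - cols / 2) <= radius

def hybrid(img_low, img_high, radius):
    """Combine the low frequencies of one image with the high frequencies
    of another, in the spirit of Schyns and Oliva's composites."""
    fl = np.fft.fftshift(np.fft.fft2(img_low))
    fh = np.fft.fftshift(np.fft.fft2(img_high))
    mask = radial_mask(img_low.shape, radius)
    combined = np.where(mask, fl, fh)  # low freqs from img_low, high from img_high
    return np.real(np.fft.ifft2(np.fft.ifftshift(combined)))

a = np.random.default_rng(0).random((64, 64))  # stand-in for the Clinton image
b = np.random.default_rng(1).random((64, 64))  # stand-in for the Frist image
h = hybrid(a, b, radius=8)
```

Viewed up close, the high-frequency content (from `b`) dominates perception; at a distance only the low-frequency content (from `a`) survives.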

Not surprisingly, we hold specific spatial frequency registers for icons. Just as the color and shape choices for an icon design should be unique, so too should the frequency composition of the design. When a user searches through a UI to compare or find icons, his or her eyes jump all over the screen. The points where the eyes land are called fixation points, while the sharp eye movements between them are referred to as saccades. Users see only roughly 1.5 degrees of visual angle in sharp focus (roughly the size of your thumbnail held at arm’s length); the rest of the image is processed in the M pathway and at lower spatial frequencies. At each fixation point, most of the icons in a UI fall outside of those 1.5 degrees. The key is to filter the icon images to ensure that they differ in low spatial frequency so as to preserve their uniqueness during visual search. (Filtering methods discussed here are based on the work of Loftus and Harley, 2005, who used filtering to create representations of faces at a distance.)
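
As a back-of-the-envelope check on that 1.5-degree window, the on-screen span it covers follows from simple trigonometry. The 96 pixels-per-inch density below is an assumed display value for illustration, not a figure from the article.

```python
import math

def foveal_span_pixels(distance_in, ppi=96, angle_deg=1.5):
    """Width, in pixels, of the ~1.5-degree foveal window at a given
    viewing distance, assuming a flat screen of `ppi` pixels per inch."""
    width_in = 2 * distance_in * math.tan(math.radians(angle_deg) / 2)
    return width_in * ppi

print(round(foveal_span_pixels(24)))  # about 60 pixels at 24 inches
```

So at a typical two-foot viewing distance, sharp focus covers a patch only about 60 pixels across; everything else on the screen is seen through the M pathway.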

The technique I show here requires the R environment (R Development Core Team, 2005) and the add-on package “rimage” (Windows, Linux, and OS X versions are available). Once you have downloaded and installed R, you can install the “rimage” add-on from within the R program. (On Windows: start R, then select Packages » Install package(s), choose a mirror from the dialog, and select the “rimage” package.)

Filtering instructions

After the R program is set up and the rimage package has been installed, you are ready to start. Collect the set of icons you wish to analyze and put them all into a single image using your favorite image-editing program, as shown in figure 4. Save the image as a JPEG.

Fig 4

Figure 4

Start the R program. Load the rimage library and the icon collection image into R using the following commands in the console window:

> library(rimage)
> icons <- read.jpeg("address to your file")

Here, “address to your file” is the full path to the saved icon-collection image. Make sure to enclose the address in quotes and use “/” rather than “\” to signify subdirectories. Mine looks like this:

> icons <- read.jpeg("C:/Documents and Settings/Queen/My Documents/icon-collection.jpg")

Press Enter and then view the image in a display window by typing:

> plot(icons)

Resize the window so that the images are full scale and not distorted. Now we’ll filter the images:

> plot(normalize(lowpass(icons,27.8)))

Fig 5

Figure 5: Filtered icon set

Some explanation is necessary here. The number 27.8 defines the radius of the frequency filter in frequency space. I’ll spare you the math lesson and give you a short table of radii calculated from icon size and user distance from the screen (calculations based on size-distance-invariance equations; see Gilinsky, 1951).

Icon Pixel Dimensions   Viewer Distance   Radius
128×128                 18 in.            98.8
128×128                 24 in.            74.0
128×128                 36 in.            49.4
48×48                   18 in.            37.0
48×48                   24 in.            27.8
48×48                   36 in.            18.5
32×32                   18 in.            24.7
32×32                   24 in.            18.5
32×32                   36 in.            12.3
16×16                   18 in.            12.3
16×16                   24 in.            9.2
16×16                   36 in.            6.2
Using this table, you can see that I chose the radius 27.8 by assuming the icons are 48×48 pixels and the viewer is two feet from the screen. As a general practice, filter the icons using every setting that might actually occur at use time and make sure the icons remain sufficiently distinct (no studies elaborate on what counts as sufficient, so be overly cautious).
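
For icon sizes or distances not listed, note that the table’s radii scale almost exactly linearly with icon size divided by viewing distance; a constant of roughly 13.9 reproduces every row. That constant is inferred from the table itself, not derived from Gilinsky’s equations, so treat the sketch below as an approximation.

```python
def filter_radius(icon_px, viewer_distance_in, k=13.9):
    """Approximate lowpass radius for a given icon size (pixels) and viewing
    distance (inches). The constant k is fitted to the article's table rather
    than derived from the size-distance-invariance equations."""
    return k * icon_px / viewer_distance_in

print(round(filter_radius(48, 24), 1))  # 27.8, matching the table entry used above
```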

I feel compelled to note that spatial frequency filtering is very different from just blurring the image; blurring removes detail that the M pathway relies on for recognition. Figure 6 shows the very different results of frequency filtering and blurring.

Fig 6

Figure 6: The filtered image (left) is far more representative of what a user actually sees than the blurred image (right). Extreme differences can be seen in icons with tight detailed patterns, such as the second icon on the bottom row.
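
The difference can be demonstrated numerically: a sharp frequency cutoff removes a fine pattern entirely, while a spatial-domain blur merely attenuates it. Below is a small NumPy sketch; the checkerboard stands in for a tightly patterned icon, and the box blur is a crude stand-in for generic blurring.

```python
import numpy as np

def fft_lowpass(img, radius):
    """Hard low-pass in frequency space (the kind of filtering used above)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    f[np.hypot(y - rows / 2, x - cols / 2) > radius] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def box_blur(img, k=5):
    """Simple separable box blur: spatial-domain smoothing, not a hard cutoff."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, "same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, "same"), 1, out)

# Fine checkerboard: pure high-frequency structure over a uniform mean
icon = np.indices((48, 48)).sum(axis=0) % 2.0

low = fft_lowpass(icon, 6)  # the fine pattern collapses to its flat mean
blur = box_blur(icon)       # the pattern is attenuated but still present
```

The frequency-filtered version flattens to what the M pathway actually retains of such a pattern, while the blurred version keeps residual detail that the M pathway would not carry.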

How effective are spatial frequency unique icons?

The following short study shows the benefits of using icons that have unique low-spatial-frequency compositions. Ten users were shown 20 icon images (called “trial icons”) of varied size. With each trial icon, they were simultaneously presented with two additional icon images and asked to click on the one that matched the trial icon, as shown in figure 7. Response times were recorded. The idea was to see whether low-frequency-unique icons were easier to identify and therefore produced faster response times.

Fig 7

Figure 7: Experiment screenshot

With each presentation of a trial icon, the match icon (fig. 7, right) and distracter icon (fig. 7, left) had either similar or different low-frequency compositions. The response-time data was then analyzed to determine whether having all three icons share similar low-frequency compositions slowed responses. If responses were slower, the match icon was assumed to be more difficult to identify. A box plot of the resulting dataset is shown in figure 8.

Fig 8

Figure 8: Dataset box plot

As you can see, on average, users identified icons with unique low-spatial-frequency compositions faster than those with compositions similar to the distracter icons. In fact, three-quarters of the time, the fastest response times under normal conditions were only about average when frequency differences were present. The frequency-unique icons produced identification times that were almost a half second faster. Summed over every icon-search task during use time, that difference adds up to quite a bit of what could be critical decision-making time. Unique low-frequency compositions in icon designs make a noticeable difference.

References

  • MARPLOT: http://archive.orr.noaa.gov/cameo/marplot.html
  • Nicholls, J.G., Martin, A.R., & Wallace, B.G. (1992). From Neuron to Brain (3rd ed.). Sinauer.
  • Schyns, P.G., & Oliva, A. (1994). From blobs to boundary edges: Evidence for time- and spatial-scale-dependent scene recognition. Psychological Science, 5, 195–200.
  • Schyns, P.G., & Oliva, A. (1999). Dr. Angry and Mr. Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition, 69, 243–265.
  • Loftus, G.R., & Harley, E.M. (2005). Why is it easier to identify someone close than far away? Psychonomic Bulletin & Review, 12(1), 43–65.
  • Gilinsky, A.S. (1951). Perceived size and distance in visual space. Psychological Review, 58, 460–482.
  • R Development Core Team (2005). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, http://www.R-project.org.

28 comments

  1. This kicks ass. I don’t have any better words.

    Very technical, but now we can evaluate the usability of our icons. Sweet!

  2. Great piece. I love the analysis of M versus P channels.

    But I would add a ‘third dimension’. This is illustrated in figure 1: the red icon on the right is clear to see. But not because of either the M or P channel. It’s clear to see because it’s red and the other icons aren’t. Color is a great way to ‘lead’ the eye even before the M or P channels kick in.
    I myself recently designed some icons for an image editor. In the design, the selection tools are distinguished from the other tools by giving them a green overall cast. They are easy to distinguish because their color is different, not because of their shape or details. Colors work primarily in the overall view of the whole design. You can’t attribute their effect to a single icon only.

  3. I have studied Image Processing and used filtering conceptually. It is inspiring to see practical application of such significance explained so clearly.

    The explanation of “M and P pathways,” the use of the box plots, the reasoning, and the conclusion are all exemplary: both as a lucid presentation of a complex subject and as a model for presenting any complex subject lucidly.

  4. This is great! I have some unique icon sizes that I would like to use this for. Could you provide the equation you used to calculate the radius?

    Also, do you have any suggestions for how the icons should be spaced in the trial file? If the icons are close together in reality, should they be close together in the trial file (which would possibly blur them together)?

  5. Great work here; I’d really like to see more things of this nature… specifically how to actually design icons which will bring out the necessary differences for quick identification.

    I work on in-vehicle navigation systems, and we’re definitely more than attuned to the ‘driving condition’, where the system needs to be usable to a point, but with as little distraction as possible. (Honestly, much of our particular interface is disabled while driving, but what is functional is primarily large icons with supporting short text.)

    I’m really thankful for this kind of work being done.

  6. I just mentioned in an email to Dustin Hamilton that he hit the nail on the head, distance filtering of icons is an evaluative technique — not a design technique. My knee-jerk reaction of how to coerce this method into a design tool is to create a plug-in for Photoshop or GIMP (preferably) that would allow an auto-updated window of the design space filtered at set distances. That way all design decisions (small and large) could be made in the context of the filtered frequency views. I imagine there is someone out there with enough talent and time to create such a tool. It seems like it would be useful.

  7. I’ll be brave and admit I don’t understand everything here — the prose is clear, but my brain is small.

    My gut impression is that designers rely too much on icons. They seem more difficult to distinguish the more graphically elaborate and three-dimensional they become. The greater the number of distinctions icons try to make, the less successfully they convey those distinctions, especially when viewed in the M channel.

    It seems difficult to provide sufficient discrimination when there are numerous icons. I don’t have a sense of how many unique low-frequency icons can be displayed at once. Could one have 20 icons that all have sufficiently unique frequencies so that users could tell any pair of them apart?

  8. I’m not sure whether designers rely too much on icons, though I know they are a popular way to represent actions (cut/paste) and categories (media file vs. Word document) – which requires care (e.g., “is this icon on the web an action or a category, or both?”).
    The necessity of having 20 visually unique and distinguishable icons sounds like quite a design problem … the main problem being, “why is that a necessity?” Suppose we cut the 20 to 15 and partition them out. For example, we have a web application with 3 modes, and each mode has a toolbar holding 5 icons. The user only sees 5 icons in the toolbar at a time, and we will decide that icons appearing in menus are preclassified differently in the mind of the user than icons that persist in the UI. The point isn’t that a user can distinguish between a low-frequency representation of an icon in the first toolset and one in the second. The point is that they can easily distinguish between the currently available tools (we leave the problems arising from modal issues to another type of analysis). Finally, suppose we would rather not rely on icons (after all, they can be expensive) and opt for well-thought-out labels in our UI. Believe it or not, labels have low-frequency components as well! The large letterforms of capitals, ascenders, and descenders (e.g., “X”, “t”, “g”), coupled with the amount of kerning and line space, contribute to the recognition of labels. We use the P channel (specifically, “detailed shape”) to read labels yet rely on low-frequency components to aid the identification of a word. Some reading research suggests that we don’t actually “read” all the letters in a word; rather, we recognize a word by its large distinguishing visual features. We could argue that large visually distinguishing features are low-frequency components. Does this make sense (and perhaps improve the usefulness of this technique for you)?

  9. Great stuff! One of the best articles I’ve read in a while.
    This type of research strengthens the theory of how the brain works from ‘On Intelligence’ by Jeff Hawkins, which I’ve been reading. He suggests that our brain makes predictions and processes information at the same time. In this case, our brain makes a prediction of what the icon is from the blurry details received and from our own memory.

    I’m interested to know if there’s research on familiar icons/signs together with spatial frequency. I think what the user already knows adds a big difference to recognition; e.g., red and ‘!’ usually mean warning or danger. The common shapes of the ‘arrow’, ‘home’, and ‘refresh’ icons are already in our memory. I’ve personally experienced the difference it makes: when Yahoo decided to flip its icons horizontally, even though the elements in the icons were the same, I just couldn’t recognise them. E.g., if we put the chimney of the house icon on the left side, it just doesn’t look right.

  10. Yes, I agree, Rex. The current state of knowledge in neuroscience and cognition supports the mix of top-down (recognizing/identifying) and bottom-up (sensing/describing). Following the discussion into icons and signs, you could look up the works of Joan Peeck, “The Role of Illustrations in Processing and Remembering Illustrated Text,” and Robert B. Kozma’s “Learning With Media.” I think those are good references for looking at icons and illustrations. They aren’t neuroscience, so you won’t find any discussion of retinal architecture, but they are chock-full of wisdom about how users learn and absorb information from media. As for research specific to icons and spatial frequency – as far as I know – you just read it! If others have found articles of this nature, post them: I’d be interested.

  11. Matt, thanks for your detailed response. You asked why 20 icons might be necessary. I agree for navigation purposes one could partition along the lines you suggest. But it is common to have many icons even on tool bars. I must have 25 on my browser, with just one add-in to the browser’s main navigation.

    What is very common as well is the use of icons in rows of data, especially in enterprise applications. Think about icons in a list of emails, where you can delete or forward an item from the list. It saves space to show an icon only. But the number of icons is not limited to a small sub-set. I think SAP have over 200 icons available for use, but the distinctions between them are minute, and it must be a burden to learn what they represent if there are too many.

    I agree with your point about the visual form of text presenting similar issues, but I would speculate that the peripheral recognition of text is superior to that of icons, simply because we are so familiar with letters and their combinations that we can “fill in” meaning more easily than with an idiosyncratic icon.

  12. Wow. This is very good! Although, I am sad to find out that my squint test is no good 🙂 So, making good icons is very important. Granted. But I can’t help wondering whether graphic designers shouldn’t take this article and substitute the word “icon” with “billboard” or “print ad” or “TV commercial” or… er… well, you get the idea. Good work.

  13. Matt, fantastic article.
    It would certainly be interesting to see more research done on the effects of deliberately categorising similarly functioned icons together using P/M channels rather than proximity.

  14. Matt, I like the work. Have you compared the M & P pathways with other icon-related work? There is work by Barnard and May on icon design that shows positive results for users interpreting icon structure (at the cognitive rather than the perceptual level). Their work seems concerned with user interpretation of icons after perceptual processing; however, that is the next stage of recognition and discrimination.

  15. Great article, Matt! I am also co-vice chairing an HCI group here in Ottawa – CapCHI – http://www.capchi.org/. Some of the members are Masters and PhD students completing their thesis in HCI at the Hot Lab at Carleton University who I think would find great value in this article.

  16. This is one of the best articles I’ve read here. Quantitative analysis, if you will, for an area typically ruled by the heavy hand of subjectivism. The only other material that I’ve read with solid theories on this subject (and this may give you an idea of how much more I need to read on this), from a different angle, was from the classic “Designing Visual Interfaces” by Mullet and Sano.

    I’d like to second the comment made by Dustin Hamilton calling for use of this theory as an integral part of design technique. For instance, it would be highly valuable to set the direction in the experience-planning phase, rather than after GDs run even their first iteration.

    Nevertheless, while I completely see the value of this technique, when you’re on a tight schedule/budget, this type of analysis tends to become a luxury. Perhaps you have a few examples of real-world systems that employ optimal iconography (produced utilizing these techniques or even luckily complying with these theories by way of intuition alone)? Case studies reinforcing best practices have done well for me as part of a conceptual business sell when you don’t have time or budget for your own analysis.

  17. I googled around for research information on spatial frequency filtering, and I found several bits of information from academic institutions that described various blurring methods (Gaussian, box, triangle, Bartlett, average) as low-pass filters. So I tried to find information specifically describing how the low-pass filter in ‘rimage’ works, and I didn’t turn up anything. Matt, do you know where to find information about what goes on under the hood of rimage’s low-pass filter? If I could find that documentation, I could probably get some programmer friends of mine to implement a tool for it. Or I may even be able to come up with a way of doing it myself.

  18. Interesting analysis, and as another person mentioned, a commendable attempt to quantify something so inherently subjective. I especially enjoyed the explanation of how our brain processes visual information via the M/P pathways.

    My only concern with such an analysis is that the data derive from such a small sample (i.e., 10 users). In a number of usability studies where ‘gross level’ user behaviour and performance is measured, a sample of 8–12 users is sufficient to uncover 80% of the issues. Any more users and you start getting diminishing returns.

    With the short study that you conducted, you’re measuring user comprehension and performance at a very detailed level of user manipulation (i.e., icon matching within a limited spatial area). Such a study, in my mind, would require a significantly larger sample of users (e.g., 30 minimum to 50 maximum) in order to derive any meaningful data. And the users would have to come from a cross-section of society rather than a narrowly defined group.

    Another thing I question about such a study, although very insightful, is whether we’re giving something more importance than necessary. In this case, your study makes the argument that icons need to take into account a specific formula in order to be more effective communication devices. I’m not sure icons as we know them via desktop and software applications can ever meet that goal. If anything, they are mere supplementary devices which provide some method of visual distinction at a more human level in an ‘inhumane environment’ – the computer. This is evidenced by the Windows icons displayed in Figure 4.

    Many of the icons consumers/users encounter in the computer environment are meant to serve more as visual relief or variety amid a sea of text — or a more colorful bullet point, if you will. Expecting such small visual devices to achieve more would be to reach for an even lower frequency level than you describe, on par with Chinese characters, which evolved over five thousand years from pictograms to distilled line patterns.

    So in short, maybe we’re asking too much of icons. And if not, maybe they need to evolve into much ‘lower frequency’ visual devices that convey meaning with a minimum of visual noise. All in all, though, wonderful analysis, and thanks for sharing.

  19. I appreciate the word of caution from nemrut dagi, but I have to say that I think the proposed line of thinking only sounds reasonable because it is based on a use of icons which is flawed in nature. He said, “Many of the icons consumers/users encounter in the computer environment are meant to serve more as visual relief or variety amid a sea of text—or a more colorful bullet point if you will.” True. But one could argue those icons are poorly designed.

    Good icons should do more than constitute “visual relief.” They should support the typography surrounding them and help readers absorb and retain information better. Every one of us can attest from personal experience that icons and graphics can do as much to convey information as a good line of type. Remember the last time you saw an unfamiliar icon on an alien interface, but intuitively discerned its function because it looked so obvious?

    All in all, Nemrut Dagi has a good point, but I think that the light shone by Matt Queen’s Analysis could lend more to intelligent design than we may even realize.

  20. Well – I have some response to perform here! Economically, and in order of appearance:

    Andrews:
    Good point about text. There is less variation and more standardization in text patterns, resulting in easier pattern matching from peripheral-vision images—so long as you read the language and have a healthy vocabulary, and perhaps are familiar with domain-specific terms, the font choices are typical or familiar, etc. However, 25 icons in the browser is a lot. I use the back, forward, refresh, stop, “kill the tab” (in Firefox), minimize, maximize, and close-the-window icons, and I suppose you could make an argument for the Firefox window logo. While we’re at it, I suppose you could also argue the fringe use of icons in the bookmarks menu (folder vs. doc), but the window logo and bookmarks icons are marginal arguments and could be classed outside of browsing behaviors for most of use time (I would argue, based on my behaviors anyhow). That’s a thorough 11, and while there may be more (in the context of web pages and such), you could also argue that that is different software. Though I wouldn’t try to prescribe the appropriate quantity of icons for general cases, I feel most applications employ a reasonable number yet allow them to persist in inappropriate ways. A good technique (by counterexample): for icons representing local actions for, say, a list item, have the icons appear when the user places the mouse in the vicinity of the list item. The icon doesn’t persist, serves as good reinforcement (your mouse has to be here when you do this), provides feedback (i.e., “you mean this list item, right?”), and classes the local options in a different cognitive mode for the user. I’ve seen that work well in apps before.

    Zapata:
    So long as you maintain good form, your squint test is still alright 🙂 — Good points about applicability, yet try hard not to use this technique for evil (I’m envisioning all sorts of subliminal brain washing attempts involving low spatial frequency).

    Laird:
    Categorizing by function starts to get into looking at other components of the icon rather than sticking strictly to perceptual acuity. This is a great idea because it would start to draw relationships between the various components of an icon. I’ll say more about this later I suspect as I make my way down the list here.

    Roast:
    I checked out Barnard and May (I assume you meant, “Modelling User Performance in Visually Based Interactions”). I got stuck reading several of those pubs—great stuff. The issues they were teasing out were far more in the territory of what Laird was proposing above. There is a bit more resilience in investigating those issues because of the extra muck involved with higher cognitive processing. The analysis I’ve proposed here mixes some of that muck in the breakfast cereal though doesn’t it? A user has to compare a perceived pattern (low level) to one in stored memory (pretty high level) in order to respond in a measurable fashion. That isn’t the largest problem here though – I’ll say more about this later.

    Parks:
    I checked out your site and it looks like you have some interesting work going on there! I also looked at the school/dept. pages and about Ottawa and your local area too. I just have to visit! I’m told Paesani’s Caffe on Preston is a must as well.

    Chee:
    I also read Mullet and Sano—great work. If you liked that, you would also enjoy the early works of Cleveland and McGill, and, for some cool background reading (and weird stories), a book called “Wet Mind” by Kosslyn. I don’t have any case studies involving this technique or similar methods. When I get a chance I’ll look around at some systems and make some notes, so I’ll get back to you and post what I’ve got.

    Houx:
    First, thanks for taking this on! Second, thanks for emailing me and alerting me to the mounting list of comments here! Finally, the R code simply uses the fast Fourier transform to convert the image to frequency space (it does this in two dimensions for the image’s sake), then cuts the high frequencies out using our radius argument—then inverse Fourier transforms the result back to image space. You may have already figured this out, but if you just type “lowpass” on the command line in R, it gives you the code used. The image has to be normalized first, but the R code will guide you to the appropriate source-code file that actually does the pass filtering, and you can also chase down the fast Fourier implementation from there. Email me with any progress, questions, and whatnot if you’ve got ’em.

    Darren:
    That sounds like you’ve got it. The M pathway is only sensitive to “gross” form, while the P pathway is sensitive to sharp detail, owing to the difference in size of the receptor parts (anatomy). And the image you linked is, in fact, from Schyns and Oliva.

    Dagi:
    The 10 users weren’t supposed to be a sample size – they were supposed to be an “example size” (Emma Rose really knows how to sell that line). This is the way I look at it. Icons have several components: perceptual, semantic, contextual, etc. Each of these components can be engineered and optimized. The optimization will almost always involve relationships between components. Ex. “this graphic results in less confusion [semantic] on a button vs. on a map [context]” (of course, you run into the problem of trying to measure “confusion”, but that is a different story). A research endeavor often follows the same process, and so does this one. To really make this a useful experiment design, the problem to solve, as I have stated before, isn’t stats and sample sizes and populations, etc. The real problem is the difficulty of striking a good, robust relationship between the engineered difference in low-frequency patterns and response-time standard deviations. Ex. “The icons used in group A differed in low spatial frequency components, which led to a corresponding drop in overall response time and in the deviation of times among group A user trials.” The glaring problem with that statement is the “different in low spatial frequency” part. How different are we talking here? What’s more, how does “really” different relate to a standard deviation of X? Without robust relationships here we end up ascribing causality when the evidence only suggests correlation (the responses were just faster; it didn’t actually have to do with frequency at all – in fact it could have just been ordering effects, like “more practice”). To use an analogy, in order to put low spatial frequency improvements to the test we might construct an obstacle course for our low-frequency-unique bicycle (stay with me here). If our bike can make it through the course in reasonable time (by, say, an alpha level of X) then our technique will be provable.
    We need to realize that it’s the robust relationships between the bike parts that will prove this bike’s worth. In other words, we will not be able to negotiate the necessary turns if our bike has handlebars made out of cheese. In fact, no amount of doping is going to help us – it’s the cheese handlebars. So as I stated before, it’s the quantification of pattern difference that is the problem – that is the cheese handlebars.
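    To make the cheese concrete: one purely illustrative way to put a number on “different in low spatial frequency” would be to compare the low-frequency FFT magnitudes of two icons directly. This is a hypothetical metric I’m sketching here, not a validated measure – finding one that robustly predicts response-time deviations is exactly the open problem.

```python
import numpy as np

def low_freq_distance(img_a, img_b, cutoff_frac=0.1):
    """Hypothetical metric: Euclidean distance between the
    low-frequency FFT magnitudes of two equal-sized grayscale icons."""
    def low_band(img):
        img = np.asarray(img, dtype=float)
        # Normalize so overall brightness/contrast don't dominate.
        img = (img - img.mean()) / (img.std() + 1e-12)
        spec = np.fft.fftshift(np.fft.fft2(img))
        rows, cols = img.shape
        y, x = np.ogrid[:rows, :cols]
        r = np.hypot(y - rows / 2, x - cols / 2)
        mask = r <= cutoff_frac * np.hypot(rows / 2, cols / 2)
        return np.abs(spec[mask])  # magnitudes of the kept band
    a, b = low_band(img_a), low_band(img_b)
    return float(np.linalg.norm(a - b))
```

    Identical icons score 0; the open question, of course, is what a score of X means for a user’s search time.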
    You make some good points about the current use of information graphics in software. You are very right. However, we could rephrase your statement, “maybe we’re asking too much of icons” to, “maybe we’re asking too much of traffic signs.” I generally hold that even if something like an icon is meant as nothing more than a trendy marketing mechanism, it still must conform to the same tests of effectiveness as those on our HAZMAT maps – icons that must maintain an extreme economy and effectiveness of communication for users who are frantic and desperately trying to hold composure while making the best decisions with the information available. As I’m packing up the proverbial soap box, I’ll admit that some find this attitude to be a bit drastic and opt for, “these are good until the users show us they aren’t.” Of course, how do you know if you don’t measure? And, how do you know you are measuring what you think you are measuring (internal validity of the analysis)?

    I guess that was Dagi and Houx part 2.

  21. It seems to me the icons you’re talking about (within applications) are more like in-building signs (there are tons of them in the field of facilities management, and hundreds in the domain of airport and train station signs alone), while icons used on Web pages have a lot in common with traffic signs.

    By the way, I’m always struck, even shocked, at how little icons, or general shapes and colors, are used on traffic signs in the US compared to my home province. When I cross the border into the US I find myself snowed under by signs with too much verbiage. My mind has to give full attention to reading all that text, which is dangerous while driving and creates missed exits, or the opposite.

  22. Just a few points on colour blindness to get the facts straight:
    Hardly anyone is truly colour blind. Some people have a deficiency in their colour perception, but that’s it.
    This means colour is a useful medium for getting information across, even to the colour blind. There are some plugins for Photoshop out there that can alter an image to match the perception of several types of colour deficiency. But taking the statistics into account, you’ll only have to deal with deutans and protans anyway.
    See: http://www.colorjinn.com/en/6oncolour/grey/2/2.html

  23. Vaillancourt:
    This has kept my attention ever since you posted it. In-building signs vs. traffic signs is a difficult analogy to strike with app icons and web icons for me. In-building signs are oftentimes markers of paths (Ex. “An exit is this way”, “Escalators are here”) or places (Ex. “This is the bathroom/telephone”). There is temporary signage (Ex. “Wet floor/paint”), advertising (Ex. “get Cingular free for the first month!”), and maps (i.e. the “you are here” diagrams). But here is the part I can’t get my head around – all the same (save the “you are here” maps, in most circumstances) can be said of traffic signs. Here may reside the issue. When I say traffic signs, I’m actually thinking of “transportation graphics”. Tell me your line of thinking on this; it is really interesting, and might come to bear on some work I’m doing right now (meaning that I am now both types of “interested”).

    Asselbergs:
    Good point. Is the purpose of the colorjinn plugins to simulate how the composition would look through the eyes of a color blind individual? And, are you affiliated with this group in any way?

  24. This IS good stuff! However, a plug-in tool giving real-time (or rapid) filtered feedback on icon designs may sound like a good idea, but it may also carry unwanted consequences. Back in the 1990s, when James Noble and I were researching real-time feedback of design and usability metrics, we noticed designers in many cases optimizing to the metric(s) at the expense of good judgement or experience. We all know design is multifaceted, and we say we wouldn’t be so naive as to let one factor dominate our decision making, but the effect is real and works even unconsciously. Decades ago Gerry Weinberg demonstrated that designers and developers optimize to whatever is measured, and to the most immediate feedback, at the expense of everything else.

  25. This article is fantastic. It provides scientific methodologies for evaluating the efficacy of visual communication as represented in icons. I used to lead a team of designers who were responsible for developing icons displayed on medical equipment, both hardware and software. There was constant debate among the designers over the granularity of detail that should be incorporated into a 16×16 pixel icon. Many times excessive detail was a detriment to search tasks. This information will be helpful to my former colleagues. Thank you for your thorough description.

  26. Very interesting article. It is always good to have the design supported with science. I also think that an additional problem with icons lies not in the ability to recognize or distinguish one icon from another, but in understanding what the icons mean. Suppose I have never seen a set of icons before and now have to figure out what each icon means. I cannot rely on recognition in this case because I’ve never seen these icons. Is there any good literature on improving the understandability of icons?

Comments are closed.