Why Is That Thing Beeping? A Sound Design Primer


“The sounds a product makes are there to contribute to its usability, enjoyment, and brand identity—in some cases in more compelling ways than its form or functionality.”

As designers, we tend to spend a lot of time talking about the visual identity of a project, but who thinks about its audible identity? Do we need to consider it at all? Art forms such as theater, film, and video games have grown to include carefully considered sounds and are clearly better off for it. By learning to include audio as an important design parameter in web or product design, we might achieve the same successful results.

While a composer or sound designer’s concerns can seem esoteric to the visually oriented design world, we can engage these team members on some familiar territory when we need to work together. In composing sounds, the basic parameters of good design and process always apply. These parameters will be key in learning how to incorporate new sounds and new team members into a project.

Why make a sound?

Historically, sound has been used in everything from animal communication to computer-human interfaces to warn us that something bad is about to happen: a loud sound warns you that you’re about to be squashed by a garbage truck, for example. This may seem obvious, but it’s central to the discussion of audio feedback in any interface. Though they’re not life-threatening warnings, the sounds a product makes are there to contribute to its usability, enjoyment, and brand identity—in some cases in more compelling ways than its form or functionality.

First, a short history of sound design.

A short history of buzzers

Even if we don’t hear them regularly, historical acoustic sources of sounds (such as whistles, church bells, and trains) are still very much with us, so much so that we often think about and define modern digital sounds metaphorically in terms of their old counterparts. To make a noise without electricity, you had to strike, blow, shake, or otherwise vibrate something that would resonate. Even today, the universe of possible sounds is closely linked to how the technology of sound production has evolved. Understanding some of this evolution can give you a valuable perspective on today’s sounds.

Telephone rings

Classic telephone ringer
This is the sound of a metal bell and Bakelite.

Typical modern phone ring
This phone has a speaker and a ringer circuit that’s somewhat imitative of a classic ringer.

Early cell phone ring
Early cell phone rings had a grainy digital quality, and the ringer circuit was still primitive.

Modern polyphonic ringtone
New cell phones can play multiple tones at once, and typically have a less abrasive tone. This is an example of an abstract contemporary ring that I am partial to.

Consider the evolution of the telephone ringer from an electro-mechanical bell to a plastic mini-computer that can chirp the Mexican Hat Dance. Though the modern possibilities for ringtones are astounding, it’s helpful to think about what was so striking about the way telephones used to ring. A mechanical telephone receives a voltage on its line that tells it to ring, and provides just enough power to repeatedly slap a tiny hammer against a metal bell to produce a ringing sound. The familiar telephone ring is all about producing the maximum sound pressure level in the air from minimum voltage. The bell produces a torrent of energy in the frequencies that our ears are most sensitive to. It’s likely that no one will ever get to invent something this elegant or pervasive again.

Old hand-cranked air raid sirens, church bells, and organ pipes were designed based on the energy one person could deliver by cranking, shaking, or blowing. The qualities of these historical sounds were entirely dependent upon how they were produced. Listen to the four different telephone rings in the sidebar and note how the sound production technology influences the tone of each ring.

There were many innovations in electronic sound during the early 20th century, but until the 1950s it was impractical for any product that wasn’t a radio to produce an amplified, electronically generated sound. Reproducing even the simplest electronic tone required bulky and expensive vacuum tubes, transformers, and speakers. The beep-filled modern age began when post-war researchers learned to laminate a newly developed ceramic to a disc of sheet metal, creating the now-ubiquitous piezoelectric buzzer. Finally, product designers had an efficient, low-power way to make any device emit a tone. Most importantly, unlike telephones, vacuum tubes, and police sirens, it was so cheap to manufacture that it could be included in a toy robot.

The landscape changed again when the miniature transistor replaced the vacuum tube and it became feasible to include a versatile tone generator and amplifier as part of a larger product. Eventually this technology shrank to the size of a penny and now doesn’t cost much more. An inexpensive, flat integrated circuit and rudimentary plastic speaker can now be included in a greeting card.

Designers had little control over the quality of an electronic sound until the 1980s. Many innovative attempts to expand the vocabulary of electronic sounds in commercial products were unsuccessful. Texas Instruments developed a chip that contained all the features necessary for any electronic device to speak, but the robotic quality of the voice was so whimsical that it only found use in the popular line of Speak & Spell toys.

Finally, digital memory and personal computer technology introduced the possibility of including pre-recorded sounds as part of an interface. Yet the constraints of weight, size, battery power, and manufacturing costs continue to delineate the sound design possibilities for many devices. Most consumer products use tiny plastic speakers that cost a few cents to manufacture. While theoretically capable of reproducing any sound, these runty transducers are better suited to emitting the familiar high-pitched chirps and beeps that make up the modern vocabulary of digital devices.

The transmission medium is and always has been a factor in a sound’s effective volume and legibility. To understand some of the challenges of designing sounds for tiny buzzers, listen to the audio examples in this sidebar.

Learn to listen! Legibility and musicality

A sound is “legible” if the designer’s intent is properly understood by the listener. And listening—like all cognition—is based upon our existing mental model of what things ought to sound like. This model is based largely on two things: our perception of basic acoustic phenomena in the physical world and our experience listening to music.

Buzzer noises
The half-inch buzzer used in these examples is typical of the kind used in low-budget electronic products.

Music
This is a recording of this buzzer trying to reproduce music. Obviously this doesn’t work so well, though the song is identifiable.

Frequency sweeps
As the tones rise and fall, you can hear that the buzzer can only properly reproduce a narrow range of frequencies.

Infernal beeping
This buzzer was designed to reproduce a 2,000 Hz square wave. This type of sound would require very few electronic parts to produce, and would be audible across the room. (A rough synthesis sketch of this tone follows the sidebar.)

Fancy beeping
If more sophisticated circuitry is practical in a product, it is possible to program much richer tones.
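To get a feel for how little it takes to produce the “infernal beeping” described above, here is a minimal sketch that renders a one-second 2,000 Hz square wave to a WAV file using only Python’s standard library. The duration, sample rate, and output file name are my own choices for illustration, not part of the original audio examples.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # samples per second
    FREQUENCY = 2000      # Hz, the tone described in the sidebar
    DURATION = 1.0        # seconds

    frames = bytearray()
    for n in range(int(SAMPLE_RATE * DURATION)):
        t = n / SAMPLE_RATE
        # A square wave is just "fully on or fully off," which is why it
        # takes so little circuitry (or code) to produce.
        sample = 0.5 if math.sin(2 * math.pi * FREQUENCY * t) >= 0 else -0.5
        frames += struct.pack("<h", int(sample * 32767))

    with wave.open("square_2000hz.wav", "wb") as f:
        f.setnchannels(1)           # mono
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(bytes(frames))

Play the resulting file through a small laptop speaker at low volume and you will hear a close cousin of the cheap piezo buzzer: all the energy is concentrated in the piercing upper range where our ears are most sensitive.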

What we are capable of hearing has been well documented by years of diligent psychoacoustic research. Our ears are poor at detecting the absolute values of sound, such as pitch, volume, and duration. Audible communication consists of changing patterns, and this is what our ears are sensitive to: minute modulations of pitch and sound quality. Sound quality or “color” is a primary consideration in communication. We listen to the different qualities of consonants and vowels in a song sung at any pitch or volume to understand the words.

Just as our sensitivity to pitch is conditioned by our understanding of music, so are our instinctive emotional reactions to different sounds. We are conditioned to feel “warned” when a device we’re using emits any sound other than something we recognize as music. When scientists invented the piezo buzzer, only John Cage found use for it in the concert hall. Its abrasive nature has become an iconic warning sound.

Sounds that are not intended to be warnings most often resemble what we consider culturally to be musically pleasant. Certain patterns of tones will be pleasing (musicians call these consonant intervals), and often form major chords. The timbres of pleasing sounds are most often like a musical instrument such as a flute or piano, rather than the metallic sound of an alarm. Think of the difference between a buzzer-style doorbell and an old-fashioned “bing bong” doorbell. Compared to the buzzer, the ringing bell sounds like a friend, rather than an unwelcome intruder, at the door.

The harmonic interval used in this doorbell example is also important. Listen to the sound clips in the doorbell sidebar and note how different the bell sounds when the pitch relationships are altered.
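If you would like to hear the comparison without a doorbell handy, here is a rough sketch that synthesizes two decaying sine tones in sequence: first the descending major third described in the sidebar (E down to C), then a tritone (E down to B-flat) as my own contrasting example. The exact frequencies, decay rate, and file names are assumptions for illustration.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    def chime(frequencies, seconds_per_note=0.8):
        """Return 16-bit mono frames for a sequence of decaying sine tones."""
        frames = bytearray()
        for freq in frequencies:
            for n in range(int(SAMPLE_RATE * seconds_per_note)):
                t = n / SAMPLE_RATE
                envelope = math.exp(-4 * t)  # simple bell-like decay
                sample = 0.6 * envelope * math.sin(2 * math.pi * freq * t)
                frames += struct.pack("<h", int(sample * 32767))
        return bytes(frames)

    def write_wav(name, frames):
        with wave.open(name, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(SAMPLE_RATE)
            f.writeframes(frames)

    write_wav("doorbell_major_third.wav", chime([659.26, 523.25]))  # E5, then C5
    write_wav("doorbell_tritone.wav", chime([659.26, 466.16]))      # E5, then B-flat 4

The first file should sound like a friend at the door; the second sounds more like the door is about to be kicked in.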

Attention, time, and fatigue

Context is a critical consideration for all design work, but what does context mean for sound design? Here are some considerations:

  1. New sounds a designer introduces must compete with existing environmental sounds.
    We experience sound in time, and consequently we have difficulty listening to two things at once. While visual designers talk about relative visual weight, the analogous issue in audio is masking. One sound can completely hide another when heard simultaneously, a condition that leaves us overly sensitive to intrusive, unwelcome, and especially insistent sounds. We are more offended when a loud car tears down the street than when an ugly one does.
  2. Designers must find sounds that do not become tired-sounding when we hear them often.
    A sound’s musicality (or lack thereof) is the main consideration in the sound’s likelihood of fatiguing the listener. Musical sounds are easier to absorb over a long period of time, and provide a natural background for the other sounds in our life. You can listen to a simple chord progression or a well-loved piece of music for a long time before it becomes tiring. But if you were to walk into the office and begin counting to one hundred in a loud monotone, you might only make it to thirty before someone strangled you.
  3. Current research is focused on understanding people’s assumptions about what their environment ought to sound like.
    This kind of research provides product designers with empirical data on what sounds and volume levels are considered acceptable. Much of the discussion regarding the development of sound in consumer products is focused on how to avoid annoying the products’ users. Both of these issues are extremely important in predicting how new products will be received by customers when the devices start making noise. Sound is simply unavoidable, and ensuring that it is inoffensive to customers is often the primary consideration.

Usability and identity

It is very common for poor sound reproduction, such as a bad phone connection, to adversely affect user experience. It is also routine for unwelcome or loud sounds to damage an experience. It is rare, however, for even the most poorly executed sound design to adversely affect the usability of a product. Audio doesn’t usually muddle a UI, but this has more to do with how it is typically used than with our skill in applying it.

Doorbell noises

Doorbell sample
This bell plays the musical note E, then C below it, sounding like a major third.

This interval on piano
In Western music a major third is considered pleasant, but with some tension: the right emotion when someone’s at your door.

Other intervals
Listen to these examples of other intervals, and note how inappropriate they seem in comparison. The first two seem to resolve without the same sense of urgency, while the last two seem too dramatic and weird.

Since audio feedback is an unreliable communicator of complex information (at least of anything other than panic), it is often relegated to a secondary or illustrative role. Sound effects of this nature are often assigned to the relatively benign role of affirmative feedback. Positive sounds are often used to signal successful task completion, such as saving a file or taking a picture. Since these sounds have only to signal the completion of a task that the user knows is in progress, there are few problems with ambiguity. This predictability gives the designer great latitude in crafting these sounds, permitting creativity in expressing a product’s brand attributes.

Affirmative sounds can go a long way towards delineating the brand of an otherwise featureless desktop application without introducing any difficult user interface issues. One instant messaging client called Adium uses an icon of a cartoon duck as its mascot. As you might expect, it also makes whimsical little quacks and squawks when your buddies log in and message you. Yes, it’s more fun to use than its blander competitors.

But you can have too much of a good thing. When the designer gets carried away with affirmative audio feedback, the result is something film music professionals call “Mickey Mousing.” In the popular animation styles of the 1930s, every tiny bit of on-screen action was reflected in the soundtrack. To use a modern example, is it really necessary for Microsoft Word to whoosh a paragraph away when you cut it, when the paragraph has so obviously just left the page? This type of illustrative nonsense does little to deepen the experience of using the product and dilutes the effect of important sounds. The effect on a person sitting next to you may even be worse.

I have this reaction because I have been conditioned by years of blissfully silent word processing in that era after typewriters and before the cartoon noises of Microsoft Office. Mature products that make a lot of noise, such as cars and vacuum cleaners, have years of accumulated sonic baggage. Product designers work within basic boundaries established by a long history of market presence. Conformance to these expectations, or lack thereof, is a powerful design statement. A new product’s sounds may make it instantly familiar, or even nostalgic. There is also quite a tradition of styling products “futuristic” by including appropriately space-age sounds. In extreme cases, a signature sound is an integral part of a product’s appeal. Who would want a silent Harley Davidson?

New products such as desktop applications and cell phones are only beginning to assume the same basic vocabulary of sounds in the popular consciousness. As people form expectations, variations from the expected begin to differentiate products. Even more importantly, we expect their sounds to evolve appropriately along with their physical form and changing role in our daily life. The touch-tone phone evolved from a basic technical requirement into a multitude of subtle (and not-so-subtle) sound effects triggered by the new interactions people now have with their phones. Ultimately, an effective sound design becomes a natural extension of the product’s user experience and feels as logical as the clunk of setting a glass on the table.

Talk it over

“Talking about music is like dancing about architecture, but sound design isn’t music and experienced practitioners use a universal design vocabulary that should be familiar.”

The equipment and technical knowledge required for synthesizing sounds from scratch are inaccessible to most outside the sound design field. So developing audio for a new product usually requires working with a sound designer, another collaborator who speaks a different language. Unless they come from an agency background, assume that the audio engineers you work with will not be intimately familiar with your regular design process. If you have expectations for how the sound design process will work, define them up front. In my experience, the discipline has the potential to be quite unstructured. There are other industries, however, in which musicians and sound designers must collaborate with different creative groups, and we can use these as models.

In the film industry, for example, composers and audio professionals of all stripes work closely with many other groups in a traditional manner. The brief that a movie director gives a composer is organized as a set of deliverables called cues. Each cue is a piece of music that begins at a certain time in the film, lasts N seconds, and has a specific emotional message to communicate to the audience. The success of the score is judged by the functionality of each cue, as well as the cues’ coherence as a complete body of work supporting the arc of the film. This approach may work well for experience designers on a DVD menu design, a CD-ROM project, or an interactive kiosk. An example of a requirement used in this type of process might be: “When the user steps up to the ticket kiosk, it plays a 10-second passage welcoming the passenger to the airport.”
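If it helps to make such a brief concrete, here is one hypothetical way to capture a cue as structured data, so that each deliverable records when it starts, how long it lasts, and the emotional message it must carry. The field names and the kiosk values are my own, not an industry-standard schema.

    from dataclasses import dataclass

    @dataclass
    class Cue:
        name: str
        starts_when: str          # the moment in the experience that triggers the cue
        duration_seconds: float   # how long the passage lasts
        emotional_message: str    # what it should communicate to the audience

    kiosk_welcome = Cue(
        name="kiosk-welcome",
        starts_when="the user steps up to the ticket kiosk",
        duration_seconds=10.0,
        emotional_message="a warm, unhurried welcome to the airport",
    )

A list of these objects doubles as the deliverables checklist the sound designer works against.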

If a narrative approach is inappropriate, then most often the situation involves an application in which each sound can be thought of as an audio icon. Many consumer electronics products fall into this category. In this case, the same brief you might give a visual designer to develop an icon set is appropriate for a sound designer as well. Each icon must have an appropriate level of legibility or learnability, and reinforce the product’s identity. A common example of this type of process may be a request for an “off sound,” one that clearly should not be confused with the sound the gizmo makes when turned on.

Information architects with experience collaborating with visual designers should already have a feeling for the type of brief that is most effective. It describes the purpose and the meaning of each sound, without presupposing the exact qualities or implementation thereof.

A last note about process: When reviewing and critiquing possible sounds for a project, it is important to listen to them in a realistic context. Just as it doesn’t make sense to critique a business card at poster size, so should you listen to how that chirp sounds through the right plastic speaker, in an appropriately noisy environment. Experienced sound designers will present sounds for review with this in mind.

How do you describe it?

Talking about music is like dancing about architecture, but sound design isn’t music, and experienced practitioners use a universal design vocabulary that should be familiar. Still, talking about the specific qualities of different sounds can be challenging. Most people have a handle on pitch, duration, and volume. When describing a sound’s texture, however, people begin using a lot of made-up words and obscene vocalizations.

There is no universally accepted “color wheel” of sound, but I’ll share mine. Two people will never agree upon absolutes, so it’s practical to talk about sounds in relative terms like “louder,” “deeper,” and “more metallic.” A simple diagram I use has proven very helpful for getting people to express the relative texture of sounds.

 

A circular diagram showing headphones in the center surrounded by the words metal sounds, wood sounds, pure sounds, and skin sounds.
Figure 1: My “color wheel” of natural timbres

Figure 1 illustrates a continuum of sound through the basic, naturally occurring timbres. Starting at the bottom, “pure sounds” describe tones generated electronically, or acoustic sounds with few overtones, like flutes. Moving counterclockwise, the sound becomes more “woody,” like a classical guitar, and then more “metallic,” like a piano. Eventually the fundamental tone disappears and you have the sound of a cymbal, or a car door. Around the other side are percussive sounds produced by drums or “skin” sounds. This diagram is hardly comprehensive, but it is a useful starting point when discussing the qualities of natural sounds.
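To hear the continuum rather than just read about it, here is a sketch that renders the same pitch three ways with additive synthesis: a “pure” lone sine wave, a “woody” tone with a few decaying harmonics, and a “metallic” tone whose partials are not whole-number multiples of the fundamental. The partial ratios and amplitudes are rough illustrative guesses, not measurements of real instruments.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100
    DURATION = 1.5       # seconds
    BASE_FREQ = 440.0    # A4

    def tone(partials):
        """Sum sine partials given as (frequency ratio, amplitude) pairs."""
        frames = bytearray()
        for n in range(int(SAMPLE_RATE * DURATION)):
            t = n / SAMPLE_RATE
            envelope = math.exp(-2 * t)  # gentle decay
            sample = sum(amp * math.sin(2 * math.pi * BASE_FREQ * ratio * t)
                         for ratio, amp in partials)
            frames += struct.pack("<h", int(0.3 * envelope * sample * 32767))
        return bytes(frames)

    def write_wav(name, frames):
        with wave.open(name, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(SAMPLE_RATE)
            f.writeframes(frames)

    write_wav("pure.wav", tone([(1.0, 1.0)]))                               # sine only
    write_wav("woody.wav", tone([(1.0, 1.0), (2.0, 0.4), (3.0, 0.2)]))      # harmonic overtones
    write_wav("metallic.wav", tone([(1.0, 1.0), (2.76, 0.6), (5.40, 0.4)])) # inharmonic partials

Listening to the three files in order is a quick way to calibrate a team on what “warmer” or “more metallic” means before anyone starts arguing about adjectives.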

Your water is boiling

With some products, thinking about the sound is essential. But sound can be another dimension to explore in order to achieve the design goals for almost any project. If crucial brand attributes and other qualities are lost in a design through the inevitable compromises that occur during its march to realization, it can be possible to rediscover them through sound design. This type of give-and-take is routine in film, where there is almost always a constant flow of elements that are on screen, audible, or both. So I encourage you now to tune up your ears and exercise the same creativity the next time you get to make some noise.

For More Information

Human-Computer Interaction Handbook (Chapter 12), Andrew Sears and Julie Jacko (Department of Computing Science, University of Glasgow)

5 comments

  1. Great article – here are some additional links that are useful to people interested in both HCI/ID and sound.

    ICAD – International Conference on Auditory Display
    http://www.icad.org/ (lots of papers, both academic and professional, on sound design, many from an HCI perspective)

    Design Sonore – Sound Design Workshop held every two years by the French Acoustic Society.
    http://www.design-sonore.org/
    A review of this year’s workshop, which I wrote, may give a better indication of the areas covered in this workshop:
    http://richie.idc.ul.ie/eoin/trips/Paris04-SoundDesign/sounddesign04_tripreport.htm

  2. Great article.

    If you want to read about sound design for film, you can visit http://www.filmsound.org, which has glossaries and articles by prominent sound designers (such as Randy Thom, who won the 2005 sound editing Oscar for The Incredibles) as well as leading academics.
