While you’re humming along to the Talking Heads, I’d like to consider another group who can listen to the Talking Heads without really hearing them.
For a person with a cochlear implant, a surgically implanted device that restores hearing to someone who is profoundly deaf, listening to music isn't the rich sensory experience a hearing person enjoys. The implant can convey the catchy rhythm of "Once in a Lifetime," and David Byrne's lyrics come through, but the melody is distorted and lost as the implant smooths the signal into something that resembles speech.
A person with a cochlear implant is outfitted with an internal electrode array that winds through the inner ear, along with an external microphone and processor. Sounds captured by the microphone are handled by the processor, which converts them into digital information. The experience of hearing rests in the processor, a device that has shrunk from a bulky belt pack to a slim pocket-sized unit, or even a component built into the microphone.
In the processor, sound is coded by an algorithm that optimizes the user's understanding of speech. The part of the sound wave the implant encodes to represent speech is known as the envelope, and this process works well: people with cochlear implants can keep up with conversations, particularly in quiet environments. But with music, detailed information about the instruments, pitches, and melodies is contained beneath the envelope, in the fine structure of the sound wave. These nuances are just not captured by the cochlear implant.
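The envelope-versus-fine-structure distinction can be sketched in code. The toy example below is illustrative only, not any actual implant coding strategy: it builds a signal whose slow loudness modulation (the envelope) rides on a fast 440 Hz carrier (the fine structure), then recovers the envelope by rectifying and smoothing, discarding the pitch detail in the carrier, much as an implant's speech-oriented coding does.

```python
import math

# Illustrative sketch (not an actual cochlear-implant algorithm):
# separate a sound into a slow "envelope" and fast "fine structure".
fs = 8000                       # sample rate, Hz (assumed for the toy)
n = 4000                        # 0.5 s of audio
t = [i / fs for i in range(n)]

# True envelope: a slow 4 Hz loudness modulation (what gets encoded).
env_true = [0.5 + 0.4 * math.sin(2 * math.pi * 4 * ti) for ti in t]
# Fine structure: a 440 Hz carrier (the pitch detail that is discarded).
signal = [e * math.sin(2 * math.pi * 440 * ti) for e, ti in zip(env_true, t)]

def extract_envelope(x, win=200):
    """Full-wave rectify, then smooth with a moving average.

    win=200 samples (25 ms) spans exactly 11 carrier cycles here, so the
    average of |sin| tends to 2/pi; scaling by pi/2 recovers the
    envelope's amplitude.
    """
    rect = [abs(v) for v in x]
    half = win // 2
    out = []
    for i in range(len(rect)):
        lo, hi = max(0, i - half), min(len(rect), i + half)
        out.append(sum(rect[lo:hi]) / (hi - lo) * math.pi / 2)
    return out

env_est = extract_envelope(signal)
```

Away from the edges, `env_est` tracks the slow 4 Hz modulation closely, while the 440 Hz detail is gone: the "what" of the loudness contour survives, the "what note" does not.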
A new scheme for processing melodies could improve, for cochlear implant users, what many see as an undeniable perk of occupying the hearing world. As Fan-Gang Zeng, Ph.D., of the University of California, Irvine, presented at last week's annual meeting of the Acoustical Society of America, the cochlear implant could get at the fine structure of music using a perceptual tool called spectral constancy.
Think about spectral constancy in terms of color, another form of sensory input. When you look at an image online, your brain ignores fine differences in color so that you perceive colors the same way, regardless of differences in illumination or brightness. Blue is still blue, whether it's the color of a neon sign or the water in Greece.
Hearing works the same way: a vowel is still a vowel whether it's yelled or whispered. A cochlear implant that processes sound this way gets at both the envelope and the fine structure of sound, so it doesn't sacrifice the user's hearing of music to preserve their understanding of speech. Taking advantage of spectral constancy lets the user hear speech despite changes in pitch; in this scheme, the processor's output is increased for the duration of the music.
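The yelled-versus-whispered idea can be made concrete with a small sketch. This is an analogy for the perceptual principle, not Zeng's actual algorithm: the overall shape of a sound's spectrum identifies a vowel, and dividing out the total energy makes that shape identical whether the sound is loud or quiet.

```python
import math

def dft_mag(frame):
    """Magnitude spectrum of a real-valued frame via a plain DFT."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(frame[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectral_shape(frame):
    """Normalize the spectrum so overall loudness drops out."""
    mags = dft_mag(frame)
    total = sum(mags) or 1.0
    return [m / total for m in mags]

# A toy "vowel": two formant-like components at DFT bins 5 and 20.
n = 256
frame = [math.sin(2 * math.pi * 5 * i / n) + 0.5 * math.sin(2 * math.pi * 20 * i / n)
         for i in range(n)]
whispered = [0.1 * v for v in frame]   # quiet version of the same vowel
yelled = [3.0 * v for v in frame]      # loud version of the same vowel

shape_q = spectral_shape(whispered)
shape_l = spectral_shape(yelled)
```

After normalization, `shape_q` and `shape_l` are the same curve: the 30-fold loudness difference disappears while the "formant" pattern that identifies the vowel is untouched, which is the constancy the brain exploits.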
While a cochlear implant can be life-changing by many measures, quality-of-life research such as work on music perception still has room to make a difference for users and their enjoyment of the audible world. I think this room for improvement is exciting because, while the initial implantation of the device into the cochlea requires major surgery, tweaks to the encoding scheme happen externally.
In other words, a better perception of music would be an update to the software, not the hardware, of the implant. With a visit to the audiologist (and some practice getting used to the new sounds), a cochlear implant user’s perception of the world can change dramatically.
About the Author: Allison Bland aims to make health information more accessible on the web through her work on Cancer.Net and as a graduate student in Georgetown University's Communication, Culture and Technology program. This post is inspired by a recent project on the future of cochlear implants. Connect with Allison on Twitter.
The views expressed are those of the author and are not necessarily those of Scientific American.