
SwM meets #Sfn11 Day One: Words, Pitch, and Rhythm




Words, pitch, and rhythm. How do these three elements meld together in your brain when you listen to the sung lyrics of a song?

Julia Groh of the Max Planck Institute Leipzig explored these questions during her poster session on the first day of the Society for Neuroscience conference. She explained that most studies on this topic compare the brain's response to speech with its response to singing. The flaw in those comparisons is that many elements of musical song can exist in varying degrees in speech. Is speaking in a rhythmic monotone more similar to singing or to speaking? Is varying the pitch of a sentence more like regular speaking or more like singing? How can the brain tell when something is being spoken versus when it is being sung?

One great example of the grey area between speech and song is Diana Deutsch's "Sometimes behave so strangely" illusion. Check out how, when the phrase is repeated, it begins to sound as if it is being sung (hat tip to this 2007 episode of Radiolab), an effect that even fifth graders can perceive.




To address the question of how the brain responds to voices in the grey area between speaking and singing, Groh designed several different stimuli to play for the subjects of her study.

She took a short song and created three levels of stimuli to test how the brain reacted to each:

1. Song versus speech that included words, pitch, and rhythm (a recording of the song's lyrics being sung vs. a recording of the same lyrics spoken at a natural speaking pitch with the same musical rhythm, kind of like rapping the words of a song like "Happy Birthday").

2. Song versus speech that included pitch and rhythm but no words (a recording of the song's melody being hummed without words vs. a recording of the song being "hummed" at a natural speaking pitch with the same musical rhythm, kind of like rapping "Happy Birthday" without the words).

3. Song versus speech that included just rhythm (a recording of the song's musical rhythm being "hummed" in a wordless monotone vs. a recording of the song being "hummed" without the musical rhythm in a wordless monotone, kind of like being the worst rapper in the world: no words, no variation in the tone of your voice, and a rhythm [or colloquially, a "flow"] just like regular talking).

The idea behind making these different kinds of recordings of the same song was to separate the three elements of singing and speech from each other. This way Groh would be able to tell which particular element activated the subjects' brains, either on its own or in combination with another element of sound. After creating all six of these recordings herself, she stuck her subjects in an fMRI brain scanner, played them the songs and speech at each level, and took a look at the effects of removing either words, pitch, or both from each recording. She suspected that the different types of song and speech recordings might cause different patterns of brain activity. Her suspicions were confirmed by results that showed different patterns of brain activation for spoken words than for words in song.
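To make the design concrete, here is a minimal sketch in Python of the six recordings and which elements each retains. The condition labels and feature names are my own, purely for illustration; they are not from Groh's poster.

```python
# Illustrative summary of the 2 (song vs. speech) x 3 (level) stimulus
# design described above. Labels and feature names are mine, not Groh's.
conditions = {
    # Level 1: words present; the song and speech versions differ in pitch
    ("level_1", "song"):   {"words": True,  "pitch": "melody",   "rhythm": "musical"},
    ("level_1", "speech"): {"words": True,  "pitch": "speaking", "rhythm": "musical"},
    # Level 2: no words (hummed); the two versions again differ in pitch
    ("level_2", "song"):   {"words": False, "pitch": "melody",   "rhythm": "musical"},
    ("level_2", "speech"): {"words": False, "pitch": "speaking", "rhythm": "musical"},
    # Level 3: wordless monotone; the two versions differ only in rhythm
    ("level_3", "song"):   {"words": False, "pitch": "monotone", "rhythm": "musical"},
    ("level_3", "speech"): {"words": False, "pitch": "monotone", "rhythm": "speaking"},
}

# Each song/speech pair differs in exactly one element, so comparing the
# brain's responses within a pair isolates that element's contribution.
for (level, voice), f in conditions.items():
    print(f"{level} {voice:6s}: words={f['words']}, pitch={f['pitch']}, rhythm={f['rhythm']}")
```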

The two parts of the brain that showed differential activation were the intraparietal sulcus (IPS) and the inferior frontal gyrus (IFG). Groh found that the IPS was more activated when subjects listened to the singing pitch of words than to the speaking pitch of words. In the IFG, the left side of the brain was more responsive to words when they were spoken than when they were sung, and more responsive to pitch changes in natural speech than to pitch changes in song. The IFG on the right side of the brain reacted in just the opposite way: it was more responsive to sung words and to the pitch of a musical voice than to spoken words and the pitch of a speaking voice. So in general, the left IFG seemed to specialize in processing pitch in speech, while the right IFG was more activated when processing pitch in music.
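The poster described differences in degree of activation rather than all-or-nothing responses. One standard way fMRI studies quantify this kind of left-right asymmetry is a laterality index; the sketch below uses that measure with invented activation numbers purely for illustration (it is not a calculation from Groh's poster).

```python
# Laterality index: LI = (L - R) / (L + R), a common fMRI measure of
# hemispheric asymmetry. LI > 0 means left-dominant, LI < 0 right-dominant.
# The activation values below are invented purely for illustration.

def laterality_index(left: float, right: float) -> float:
    return (left - right) / (left + right)

# Hypothetical mean IFG activations (arbitrary units) for two conditions:
ifg_activation = {
    "spoken words": {"left": 1.8, "right": 1.1},  # left IFG favors speech
    "sung words":   {"left": 1.2, "right": 1.9},  # right IFG favors song
}

for condition, act in ifg_activation.items():
    li = laterality_index(act["left"], act["right"])
    side = "left" if li > 0 else "right"
    print(f"{condition}: LI = {li:+.2f} ({side}-dominant)")
```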

These findings align with previous studies suggesting that structures on the left side of the brain handle language processing while structures on the right side handle musical pitch. Even though Groh found these differences in the activation of the IFG and IPS, she still found that both speech and singing activated both brain regions; the difference is in the degree of activation of each region. The fact that the left and right IFG show different degrees of activation to speech and song suggests that each side is more attuned to one type of auditory stimulus than the other. The study also pointed to the IPS as a brain region that responds more strongly to singing pitch than to speaking pitch. This brings us just a little bit closer to understanding what happens when your brain hears the words of a song and the sound of a sentence.

Poster session, Saturday, November 12, 2011, Society for Neuroscience conference: Groh et al., "Patterns in song and speech," Max Planck Institute Leipzig, 92.14/VV25.

Images:

"Lyrics" by Flickr user Anna Oates under Creative Common licensing.

"Experimental Design" by Julia Groh from her poster at Society for Neuroscience 2011.

"Inferior Frontal Gyrus" from Wikimedia Commons.

About Princess Ojiaku

Hey there! I'm a graduate student at the University of Wisconsin-Madison in the Neuroscience and Public Policy program. I'm also a musician who played in two bands in North Carolina, one called Pink Flag and another called Deals. My personal passions are science, music, and cycling as transportation.

I got into science as a kid while tagging along and watching my mom do experiments in her lab. I found that while I loved science, I didn't want to be alone in an ivory tower, crunching data that few others would understand. I also noticed that many other people thought science was this scary and incomprehensible entity of obscurity. When I realized that there were people working to make science fun and accessible to everyone, I knew that this was exactly what I wanted to do. The two things I find the most immensely interesting and continually impressive are music and neuroscience, so these are the topics that I'll focus on in my blog. Philosophy and politics are my second loves, so I might pop in an occasional post on these topics as well. Ultimately I am here to share things that give me wonder. I hope that reading Science with Moxie gives you a bit of that wonder too.
