We all know how it feels to get lost in a great book. Sometimes the characters and emotions can seem every bit as real as those of our everyday lives. But what’s happening in our brains as we dive into those pages? How is it different from what happens as we experience real life – or is it really so different after all?
This week, a team led by Leila Wehbe and Tom Mitchell of Carnegie Mellon University’s machine learning department published a paper that provides partial answers to those questions. Their findings offer new insight into how we read literature, with the help of an unexpected tool: a machine learning algorithm.
A novel approach
Since reading comprehension is a highly complex process, earlier studies tried to break that process down and focus on just one aspect at a time: mapping fMRI signatures associated with processing a single word or sentence, for example. “It’s common to study reading in a very controlled way,” Wehbe explains. “It’s usually not like reading a book; usually the stimulus consists of out-of-context sentences designed specifically for the experiment.”
While those experiments have yielded useful insights into individual facets of reading comprehension, they’ve never given us a clear picture of how the process works as a whole.
This latest study, on the other hand, takes an entirely different approach. Researchers scanned the brains of volunteers as they read a chapter of an exciting novel, then broke the reading process down into its component parts. The result, they say, is the world’s first integrated model of how our brains process written words, grammar and stories.
Parsing and prediction
The researchers started by gathering a group of eight volunteers and recording their brain activity in an fMRI scanner as they read Chapter 9 of Harry Potter and the Sorcerer’s Stone (the scene where Harry and his friends take their first flying lesson) for 45 minutes.
In the second phase of the study, the investigators fed the volunteers’ fMRI data into a computer program they’d written. They’d designed their algorithm to look for patterns of brain activity that appeared when the volunteers read certain words, specific grammatical structures, particular characters’ names and other aspects of the story – a total of 195 different “story features.”
The researchers then had their program predict which part of the chapter a person was reading, based solely on his or her brain activity. To make these predictions, the program used the patterns of activity it had learned to associate with each of the story features. Using all 195 story features, the program guessed which of two passages was being read with 74 percent accuracy, significantly better than the 50 percent expected by chance.
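The logic of that prediction test can be illustrated with a toy sketch. To be clear, this is not the paper’s actual pipeline: it uses synthetic random data in place of real story features and fMRI scans, and it assumes a simple ridge-regression mapping from features to voxel activity. Given an observed scan and two candidate passages, it guesses the passage whose predicted activity pattern lies closer to the observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 195 story features per time window, 1000 "voxels".
n_train, n_feat, n_vox = 300, 195, 1000
W_true = rng.normal(size=(n_feat, n_vox))        # hidden feature-to-voxel map
X_train = rng.normal(size=(n_train, n_feat))     # story features (training)
Y_train = X_train @ W_true + rng.normal(scale=5.0, size=(n_train, n_vox))

# Ridge regression: learn what activity pattern each feature evokes.
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

def classify(brain_activity, feats_a, feats_b):
    """Guess which of two passages produced the observed activity:
    predict the activity each passage should evoke, pick the closer one."""
    dist_a = np.linalg.norm(brain_activity - feats_a @ W_hat)
    dist_b = np.linalg.norm(brain_activity - feats_b @ W_hat)
    return "A" if dist_a <= dist_b else "B"

# Held-out trial: activity actually generated by passage A's features.
feats_a = rng.normal(size=n_feat)
feats_b = rng.normal(size=n_feat)
observed = feats_a @ W_true + rng.normal(scale=5.0, size=n_vox)
print(classify(observed, feats_a, feats_b))
```

Repeating this two-alternative guess across many held-out passages and averaging the hits gives an accuracy score of the kind the study reports; chance performance is 50 percent because there are only two candidates.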
Finally, the researchers repeated the prediction test at every brain region, for each of the different types of story features. This allowed them to find associations between story features and the activity of different brain regions – enabling them to pinpoint which brain regions are processing which types of information. While those findings fit expectations in some ways, they were highly surprising in others.
As the researchers expected, our brains run individual words through an initial round of processing in the visual cortex (the brain area that processes all visual input) and through higher-level processing areas like the left inferior frontal gyrus, the bilateral angular gyri, the left precentral gyrus and the medial frontal cortex. But that’s only part of the story.
When the volunteers read descriptions of physical movement in the story, those descriptions modulated activity in the posterior temporal cortex and angular gyrus, regions involved in perceiving real-world movement. Different characters, meanwhile, were associated with distinct activity patterns in the right posterior superior region.
Dialogue was specifically correlated with the right temporoparietal junction, a key area involved in imagining others’ thoughts and goals. “Some of these regions aren’t even considered to be part of the brain’s language system,” Wehbe says. “You use them as you interact with the real world every day, and now it seems you also use them to represent the perspectives of different characters in a story.” The findings appear today in the journal PLOS ONE.
This all seems to confirm the existence of what researchers call the “protagonist’s perspective interpreter network” – in other words, a network of brain regions that enable us to “become” the protagonist of the story we’re reading.
If these hypotheses are right, we may be on our way not only toward a more precise neural model of language processing, but also toward a clearer understanding of how and why it can go wrong.
That’s exactly what Wehbe and Mitchell hope to study next: the many distinct ways in which language processing can go awry. “If we have a large enough volume of data,” Wehbe says, “we could isolate the specific ways in which one brain (for example, the brain of a dyslexic person) is performing differently from other brains.”
Diagnostic tools like these, the researchers hope, may someday help us design individually tailored neurological treatments for dyslexia and other reading disorders. And if those treatments prove effective, many people may in the future find it easier to get lost in the pages of a good book.