How Infants Predict Other People's Behavior

Valerie Kuhlmeier and Tania Tzelnic
Queen's University, Kingston, ON

When it comes to watching the actions of others, we all have a little Nostradamus in us. When someone begins a physical action we can often "predict" the outcome before it occurs -- that is, our eyes move to the action's end point before the actor reaches it himself. In 2003, Randy Flanagan of Queen's University in Canada and Roland Johansson of Umea University in Sweden demonstrated this in an elegant way in a paper in the journal Nature.

Adults watched an actor pick up blocks from one side of a table and stack them on the other side of the table. The action had a start point (lifting up a block), a middle point (moving the block to the other side of the table), and an end point (the location the block was placed). Using an eye-tracker, Flanagan and Johansson found that the observers' eyes did not track the hand and block's movement directly; instead, the eyes preceded the movement, landing on the end point before the block did. Interestingly, when the scene was modified such that the hand was no longer moving the blocks (they seemingly moved themselves to the other side of the table), gaze was no longer predictive.

Perhaps, then, when a human engages in this type of simple action, we predict the ultimate goal, and our eyes shoot ahead to that expected outcome. What would infants do if presented with the same events? A recent paper tackles this very question. But before getting to its findings, we should address another question: Why would we even hypothesize that infants would show predictive gaze?

The expanding world of infant cognition 
Research over the last ten years has shown that young infants can distinguish between animate and inanimate entities. When watching animate entities (for example, humans or even computer-animated objects), infants focus on aspects of behavior relevant to the entity's intentions and goals.

For example, Amanda Woodward showed infants an experimenter repeatedly reaching for and grasping one of two toys; as the experimenter repeated the action, the infants would become bored, as shown by the decreasing amount of attention they paid to the action. Then the positions of the toys were switched, and the person either grasped the old goal toy (which now required moving in a new direction) or the other, previously neglected toy (moving in the old direction). Infants as young as 6 months showed little interest when the person continued to reach for and grasp the old goal toy -- even though the arm movement changed. Yet they showed renewed interest (as indexed by increased watching time) when the person reached for the new toy. This suggested that infants recognized the reaching as a goal-directed, intentional action toward a particular object -- and found the sudden change in goal more noteworthy than a change in direction to reach the old toy.

This research indicates that infants a posteriori recognize when certain behaviors "fit" with previous behaviors they have observed -- for example, after seeing the actor grab the old goal toy, they show boredom when that action is repeated, but after seeing the actor grab a new object, they show new interest because the action does not "fit" the old scenario. Until recently, however, it remained an open question whether infants actually make a priori predictions about a person's goal-directed behavior.

Predictive gaze in infancy 
Enter Terje Falck-Ytter, Gustaf Gredeback, and Claes von Hofsten of Uppsala University in Sweden. As reported in Nature Neuroscience in 2006, they presented 6- and 12-month-old infants and adults with movies that were almost identical to the block-stacking events described above, except in this case the actress moved balls across a table and into a bucket. As you would assume, the adults' gaze predictively reached the bucket before the hand. What is exciting, though, is that the 12-month-olds' gaze did as well. Six-month-olds, on the other hand, did not show evidence of predictive gaze; their eyes reached the bucket after the hand had arrived. So even though earlier studies have shown that 6-month-olds understand goal-directed movement, they have not yet begun to predict the end states. Predictive gaze may be a developmental accomplishment that requires a few more months to achieve.

There's more to the study. Two other types of events were shown to adults and 12-month-olds. In one, the balls moved on their own into the bucket while the actress sat passively in the background. The other condition was similar, except that the balls had faces. Neither the adults nor the 12-month-olds showed predictive gaze in these two conditions. We see at least two possible interpretations that fit this finding. One interpretation is that adult and infant predictive gaze is limited to the actions of human agents; the balls with faces, though "animated," did not activate predictive gaze. This was Falck-Ytter and colleagues' interpretation.

Another interpretation -- the one that we prefer -- is that predictive gaze may be activated by nonhuman agents; these particular balls simply did not provide enough motion or behavioral cues to be considered intentional, goal-directed agents.

The mirror neuron connection
Falck-Ytter and colleagues' human-only interpretation of their results is consistent with a broader mechanism that they wish to address, namely, the Mirror Neuron System (MNS). When humans and monkeys observe others' actions, their own motor neurons tend to be activated. This has been most clearly seen in single-cell studies with monkeys, in which a group of neurons ("mirror neurons") are active both when the monkey observes an act and when it performs the same act.

So what is the connection with predictive gaze? Well, the original Flanagan and Johansson study also demonstrated that if you were actively engaged in moving the blocks yourself, your gaze would predictively arrive at the end location before your hand reached it. In other words, gaze behavior is identical whether you are observing an action or completing it yourself. Falck-Ytter and colleagues argue that since the mirror neuron system is currently assumed to allow for such matching across observation and action, it is likely involved in the type of predictive gaze they found in adults and 12-month-olds.

Furthermore, since it is assumed that the MNS is activated in humans only when we observe the actions of human or human-like agents, an MNS account of predictive gaze would require that it occur only in the hand condition of this study (that is, when the actress pushed the ball across the table), which was of course the case. We cannot really test this last point until examination of predictive gaze toward the actions of nonhuman agents is more thoroughly explored.

However, another point can be addressed at this time. Falck-Ytter and colleagues (as well as many other mirror-neuron researchers) make the strong claim that the MNS allows for action understanding; it is, they argue, the mirroring of the action that allows us to recognize and understand the goals of others through a process of internally simulating the action.

Recently, however, this type of claim has been challenged by other researchers such as Gergely Csibra, who points out that since the MNS appears to activate only for goal-directed actions, some other system "upstream" must actually determine whether the action is goal-directed.

Furthermore, Csibra argues, infants and adults can interpret actions as being goal-directed even when they cannot simulate those actions (e.g., if the action is completed by a simple computer-animated shape, or is a motion that the observer cannot replicate). It may be, then, that the mirror neuron system allows for the elaboration of one's understanding of goal-directed action, but it does not seem to be what determines whether an action is goal-directed in the first place. And as for predictive gaze, future research will be necessary to determine whether any of the elaboration the MNS may allow for entails the actual prediction of goals.

Valerie Kuhlmeier is the director and Tania Tzelnic a graduate student at the Infant Cognition Group at Queen's University.

Elsewhere: A 2005 Yale Scientific article looks at a Kuhlmeier study on how infants tell animate from inanimate objects. The Phineas Gage Fan Club (a blog) examines "Why mirror neurons isn't the whole story" -- a reaction to the many articles (including one I wrote for Scientific American Mind) emphasizing the broad explanatory power of the mirror-neuron theory. Also: other infant cognition labs around the web; and an October 2005 Scientific American Mind profile of infant-cognition researcher Elizabeth Spelke. -- Mind Matters editor David Dobbs