This question was not proposed by a mad scientist bent on world doggie domination. The idea to see whether dogs follow life-sized videos is actually entirely sensible.
Researchers studying non-human animals want to know whether their species of interest will attend to artificial stimuli—like photographs, slides or films—because if a species reliably attends to artificial stimuli, researchers gain far more control over stimulus presentation, and can even manipulate the stimulus itself to ask new questions.
For example, a few years back, Leaver and Reimchen from the University of Victoria investigated the effect of tail-docking on dog-dog interactions. Their artificial model of choice: a robot dog that looked somewhat like a Labrador Retriever. Over the course of the study, the only part of the robot dog to change was its tail, which was presented as either long or short, wagging or straight. The researchers explored whether real dogs would approach the robot dog, and under which conditions. Their main finding: when it comes to social communication, dogs prefer that other dogs have tails. More detailed summaries of the Leaver and Reimchen study appear in the references below.
What’s notable about the robot dog study is that it plays entirely on visual cues, not olfactory cues. This can throw people for a loop, because aren’t dogs driven by their noses? Sure, dogs are big into their noses, but dogs, like other species, don’t always need every sensory channel to get a sense of something. For example, you can hear a voice over the phone and know it’s a person. You could even know that it’s a specific person, like your mother. You don’t also need to see a picture of that person to know what’s going on. The same applies to other species. When a dog sees the outline of a dog, even though no olfactory cues are available, the outline could still contain something meaningful and ‘dog-like.’
Which brings us back to dogs watching television. In 2003, Pongrácz and colleagues from the Family Dog Project in Budapest set out to investigate whether dogs attend to a two-dimensional image (a person on a screen) the same way they would a three-dimensional stimulus (a real person standing in front of them). No olfactory cues; just visual cues. The specific test was whether dogs would follow a person’s ‘pointing gesture’ in both the 2D and 3D conditions.
The ‘pointing gesture’ has to be one of the most investigated areas in canine science because it’s intimately tied to sociality and interspecific communication (communication between members of different species). I tease that every day, somewhere in the world, a canine researcher is pointing for a dog. Many studies report that dogs, particularly companion dogs, are champions at following human pointing gestures to food, even when odor cues are controlled for. In the typical pointing-gesture set-up, an experimenter gets a dog’s attention and then points to one of two bowls (or pots) to their right or left. The dog is then released by the owner, and the researchers watch whether the dog goes to the bowl that was just pointed at, or does any number of other things, from not moving, to approaching the other bowl, to taking a jaunt around the room, to scratching (let’s just say that companion dogs in studies have a sense of humor). Companion dogs overwhelmingly approach the pot that was pointed at.
What do dogs do when they see a 2D image of a person on a screen pointing to a pot? “[Dogs] responded similarly to the projected image of the experimenter pointing to the pots as if he were present in the room,” write the researchers. Yes. Your dog could take direction from a life-sized TV.
But there’s more. In that initial study, dogs saw a live-feed video, which allowed for feedback between the dog and the human projection. Would dogs respond the same way to a pre-recorded, non-interactive, life-sized video? In a subsequent study from the same group, Péter and colleagues changed the set-up to a visible displacement task: dogs watched a recording of a person hiding an object in one of three locations, and could then approach one of three corresponding hiding places positioned directly in front of the screen (see the image above).
As in the earlier study, dogs played along with the life-sized image on the screen, using the pre-recorded video to find the hidden object. There was a catch, though: dogs reliably located the object only when the hiding place shown in the video was close to the screen. When they had to walk into another room to find the hidden object, their performance dropped.
The researchers suggest that when it comes to picture processing, dogs generally fall into the category of ‘confusion mode’—meaning that dogs “react the same way to the picture as to the real object.” If, on the other hand, dogs’ picture processing fell into ‘equivalence mode,’ as in humans and chimpanzees, they would “understand that the picture is a representation of the depicted object… as standing for another entity in the world.” When a 2D image refers to something that is not immediately present, an animal in confusion mode will not get what’s going on, but an animal in equivalence mode might recognize that the image stands for something else.
It’s possible that with more training, dogs could respond to 2D images in an equivalence mode. The referential nature of picture processing remains a topic of continued interest for canine researchers. We need more mad scientists to investigate.
Image: Figure 3 from Péter et al. (2013); Markus, education tv via Flickr creative commons.
References and recommended reading
Anthes, E. (2013). Dog Tails and Social Signaling: The Long and the Short of It. PLOS Blogs.
Fagot J. & C. Parron (2010). How to read a picture: Lessons from nonhuman primates, Proceedings of the National Academy of Sciences, 107 (2) 519-520. DOI: http://dx.doi.org/10.1073/pnas.0913577107
Hecht, J. Skin Deep. Looks aren’t everything, but they do play a role in communication. The Bark, Issue 71: Sep/Oct 2012
Kaminski J., Josep Call & Michael Tomasello (2009). Domestic dogs comprehend human communication with iconic signs, Developmental Science, 12 (6) 831-837. DOI: http://dx.doi.org/10.1111/j.1467-7687.2009.00815.x
Leaver S.D.A. & T.E. Reimchen (2008). Behavioural responses of Canis familiaris to different tail lengths of a remotely-controlled life-size dog replica, Behaviour, 145 (3) 377-390. DOI: http://dx.doi.org/10.1163/156853908783402894
Péter A. & Péter Pongrácz (2013). Domestic dogs' (Canis familiaris) understanding of Projected Video Images of a Human Demonstrator in an Object-choice Task, Ethology, 119 (10) 898-906. DOI: http://dx.doi.org/10.1111/eth.12131
Pongrácz P., Antal Dóka & Vilmos Csányi (2003). Successful Application of Video-Projected Human Images for Signalling to Dogs, Ethology, 109 (10) 809-821. DOI: http://dx.doi.org/10.1046/j.0179-1613.2003.00923.x