April 29, 2014
Astronauts on a routine repair mission for the Hubble Space Telescope find themselves coping with more than they bargained for in the pulse-pounding opening sequence of Alfonso Cuaron’s Oscar-winning film, Gravity. Debris from the destruction of a defunct Russian satellite kills one colleague and detaches Dr. Ryan Stone (Sandra Bullock) from the repair shuttle, sending her tumbling in a freefall through space as veteran astronaut Matt Kowalski (George Clooney) frantically shouts instructions over the comlink. Most astonishing is that Cuaron shot the scene as a seamless whole. The camera zooms in and around the screen, focusing first on one character, and then another, pulling back occasionally to capture the full jaw-dropping panoramic vista of near-earth orbit.
“It is visual poetry,” marveled director Scott Derrickson (The Day the Earth Stood Still, Sinister) when we chatted back in December, all the more noteworthy because Cuaron’s technique is in such sharp contrast to the visual style that dominates most blockbuster action movies these days, in which the average shot length is typically less than five seconds. Think Transformers, Battleship, the Bourne trilogy, or Pacific Rim, all of which feature long action sequences composed of a series of short, rapid cuts – pure sensory stimulus.
Yet Gravity’s action sequences run as long as 17 minutes without a single cut, giving the film a very different feel for audiences accustomed to a more frenetic visual pace. Small wonder the Director’s Guild of America awarded Cuaron its top prize for a feature film, and he just snagged the Oscar for Best Director this year.
For instance, here’s the opening sequence from Quantum of Solace:
Now compare the look and feel of that scene with this extended three-minute sequence from Gravity, without a single cut:
Cuaron has flirted with this approach before: he used a method called stitching to create the illusion of seamless shots in key battle scenes in his 2006 film Children of Men; Gravity takes it to the next level, thanks to the magic of blue screens and digital technology. (The zero-gravity spacewalk scenes were computer-generated, with Bullock mounted in a mechanical rig controlled by automated robots.) “I think as a director, Cuaron is tracking this idea of the power of cinema without the edit,” Derrickson told me.
That should make Salk Institute neuroscientist Sergei Gepshtein very happy. He envisions a cinematic vocabulary that has no need for cuts at all, using new tools based on an improved understanding of how the brain organizes perceptual elements. While much of his research focuses on how we see, he has become increasingly fascinated in recent years by the ways in which we don’t see. Gepshtein’s work is the subject of an article I wrote for Pacific Standard, which just appeared in their May/June issue.
I first met Gepshtein several years ago while visiting a pal at the Salk Institute, and was instantly intrigued by his potentially revolutionary ideas about making movies. But I kind of let it simmer on the back burner for a while, because it was such a tough concept to describe — to anyone, including other scientists. As Sergei says in the Pacific Standard article, we just don’t have the words, the vocabulary, to talk about this yet. We don’t know what a new vocabulary for cinema would look like, and we won’t, until filmmakers start playing around with the kinds of tools that would enable them to manipulate viewers’ visual perception at unprecedented levels. After all, our current cinematic vocabulary didn’t spring instantly into being; it evolved, as filmmakers learned about what worked and didn’t work through trial and error. Gepshtein’s approach might take some of the guesswork out of it by providing a kind of “map” of visual perception, but the new vocabulary will need to evolve, all the same.
Of course, the entire film industry is built upon a quirk of visual perception: when we view a series of images in sufficiently rapid succession, those pictures appear to move, like the flip books we played with as children. It is a remarkably robust illusion: despite inevitable variations in how different people perceive the world around them, the continuous effect created by film is universal. “Cinema is creating an experience of time and motion, even though it is all illusory,” says Derrickson. “If you put one shot after another shot, you inevitably create a third thing, and so much of the cinematic language is built upon an understanding of how the audience will create that third thing in their minds.”
Good directors are masters at manipulating the viewer’s attention, according to Jeffrey Zacks, a cognitive neuroscientist at Washington University in St. Louis and author of a forthcoming book (due in December 2014) on how movies work in the brain, called Flicker. As evidence, he points to several studies that tracked people’s eye movements and visual attention as they watched movies and raw unedited footage. While the subjects’ visual attention sometimes gravitated to the same objects in the raw footage, there was considerable variation in what caught their attention. In contrast, when they watched a scene from There Will Be Blood, almost everyone’s attention focused on the same things.
All this is well known to cognitive neuroscientists like Zacks who study what is happening in the brain when we watch our favorite films – more of a top-down approach. Gepshtein is taking a bottom-up approach. He wants to expand the filmmaker’s arsenal of tricks by devising new cinematic methods from first principles, rather than by trial and error. Ultimately, this would enable filmmakers to build a scene from the ground up and perhaps even design tailored experiences for the viewer.
We tend to think of visual perception as a series of snapshots progressing frame by frame. In film, this translates into a sequence of episodes divided by abrupt transitions (cuts).
But Gepshtein argues that, instead, there are multiple threads or elements that coexist in the mind, occasionally being brought to the foreground of conscious perception. “In conventional cinema, the story shifts from one object to another using sequences of shots, separated by cuts, each shot emphasizing different objects,” he says. He believes it is possible to create the same shifts without cuts, “by smart organization of the visual scene.”
Directors have always manipulated their images. Back in the dawn of cinema, Georges Melies (Voyage to the Moon) hand-colored several of his films, frame by frame. Digital filmmaking has made it possible to manipulate elements within the frame, pixel by pixel, providing an unprecedented level of control. Auteur director Richard Linklater put this aspect to good use when he adapted an animation technique from the 1920s called rotoscoping — tracing over footage frame by frame — to create the unique surreal look of A Scanner Darkly (2006).
Yet according to Gepshtein, filmmakers haven’t even begun to explore what might be possible with today’s digital tools; they are still telling stories with the same cinematic vocabulary they’ve always used. “The film industry rests on a narrow selection of possibilities that got discovered early on and then got canonized by the force of inertia and entrenched by film-making technology and habit,” he laments.
Why write a story about Gepshtein now? Well, he’s just completed an intriguing proof-of-principle project designed to take what works in the highly controlled environment of the laboratory and test its robustness in a real-world environment — something he was able to do with a small seed grant from the Academy of Neuroscience for Architecture. He’s been working with Alex McDowell, a production designer and self-described “world builder” known for his work on such films as Minority Report (2002), Watchmen (2009), and last year’s Man of Steel.
Their collaboration was designed to explore the conditions under which the so-called “window of visibility” (described in greater detail in the Pacific Standard article) can predict human response in large-scale environments, taking into account multiple perspectives of angle and distance. The basic concept is that certain movements that are too fast or too slow won’t make it past the brain’s initial perceptual filter. We experience this all the time without realizing it.
For instance, we know the hour hand of a clock moves, but it moves too slowly for us to perceive that motion with our eyes. We also know that each individual spoke of a spinning wheel is sending information to our eyes, but all we see is a blur. That window of visibility is plastic, morphing as needed so we can better adapt to a constantly changing environment, although mathematically, as a function, it retains a telltale boomerang shape. (It’s called the Kelly function, after D.H. Kelly, who first mapped spatio-temporal sensitivity in the 1970s.) It’s a kind of “sweet spot” of visual perception.
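If you like to think in code, the basic idea can be captured in a toy model: motion registers only when its angular speed falls inside the window. (The thresholds below are made-up illustrative numbers, not Kelly’s measured values, and the real window is a surface over spatial and temporal frequency, not a simple speed range.)

```python
# Toy "window of visibility": motion is perceived only when its
# angular speed falls between a lower and an upper threshold.
# Both threshold values are illustrative placeholders.

SLOWEST_VISIBLE = 0.05    # deg/s: anything slower looks stationary
FASTEST_VISIBLE = 2000.0  # deg/s: anything faster smears into a blur

def motion_is_visible(speed_deg_per_s: float) -> bool:
    """Return True if motion at this angular speed would be perceived."""
    return SLOWEST_VISIBLE <= speed_deg_per_s <= FASTEST_VISIBLE

hour_hand   = 360 / (12 * 3600)  # ~0.008 deg/s: too slow to see move
second_hand = 360 / 60           # 6 deg/s: comfortably visible
wheel_spoke = 10 * 360           # 3600 deg/s: too fast, seen as a blur

for name, speed in [("hour hand", hour_hand),
                    ("second hand", second_hand),
                    ("spinning spoke", wheel_spoke)]:
    status = "visible" if motion_is_visible(speed) else "not visible"
    print(f"{name}: {status}")
```

The hour hand falls below the lower edge of the window and the spinning spoke above the upper edge, which is why we see neither as motion, while the second hand sits comfortably in the sweet spot.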
McDowell had his students don head-mounted displays to reproduce Gepshtein’s perceptual experiments (done with a 2D computer display) in 3D virtual space. Then they moved to the lab of UCLA professor of architecture and urban design Gregg Lynn, where video screens are mounted on two gigantic robotic arms, enabling the screens to be moved about the space to play with viewing angle and distance. Data taken while both the viewer and the screens are in constant motion, relative to each other, will help Gepshtein hone his map of visual perception, taking into account how the window of visibility adapts in response to that movement.
The tricky part is taking everything to the next level: that of perceptual organization. This is where things get Gestalt. For instance, we perceive things that are similar — such as shapes (circles or squares) — as being grouped together; the same is true for objects in close proximity, in either space or time. The point is that optical information must first fit within the “window of visibility” in order to be perceived, and then it must be organized by the brain before it can be truly “experienced,” according to Gepshtein.
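The proximity rule, at least, is simple enough to sketch in a few lines of code. This is my own toy illustration (the threshold and the points are invented), not anything from Gepshtein’s models: dots linked by small pairwise distances fuse into one perceived group.

```python
# Illustrative Gestalt grouping by proximity: points closer than a
# threshold are treated as belonging to the same perceptual group.
from math import dist

def group_by_proximity(points, threshold):
    """Merge points into groups whose members are chained together by
    pairwise distances below `threshold` (a toy proximity rule)."""
    groups = []
    for p in points:
        merged = None
        for g in groups:
            if any(dist(p, q) < threshold for q in g):
                if merged is None:
                    g.append(p)      # join the first nearby group
                    merged = g
                else:
                    merged.extend(g)  # p bridges two groups: fuse them
                    g.clear()
        groups = [g for g in groups if g]
        if merged is None:
            groups.append([p])       # p starts a group of its own
    return groups

# Two tight clusters separated by a gap: the eye (and this rule)
# sees two groups, not six separate dots.
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(len(group_by_proximity(pts, threshold=3)))
```

Raise the threshold and the two clusters fuse into one group; shrink it and every dot stands alone — a crude stand-in for how grouping depends on spacing.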
Eventually this could help with the design of public spaces: theaters, concert halls, conference rooms, and the like. It would be possible to selectively target visual information to viewers at different locations. Imagine large flight information boards at airports that could display different messages to people depending on how close or far away they are from the display, or how fast or slow they are moving. You would see a detailed flight schedule from a short distance, for example, but those further away would see any urgent updates, such as a flight’s imminent departure or a security alert. Both messages would be present at the same time on the display, but each would only be perceptible to people at the right distance or moving at the correct speed. There would be no cross-interference. You could, in principle, achieve the same effects in a movie theater.
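There is already a well-known perceptual trick that hints at how such a display might work: so-called hybrid images, which put one message in low spatial frequencies (dominant from far away, where fine detail blurs out) and another in high spatial frequencies (dominant up close). The sketch below is my own illustration of that trick, not anything from Gepshtein’s lab; the filter width and the random stand-in “messages” are placeholders.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Blur an image by weighting its Fourier spectrum with a Gaussian,
    keeping only low spatial frequencies (cycles/pixel)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    kernel = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel))

def hybrid(far_msg, near_msg, sigma=0.05):
    """Combine the low frequencies of `far_msg` (what a distant viewer
    sees) with the high frequencies of `near_msg` (what dominates
    up close)."""
    low = gaussian_lowpass(far_msg, sigma)
    high = near_msg - gaussian_lowpass(near_msg, sigma)
    return low + high

# Placeholder "messages": in a real display these would be two
# rendered text images, not random noise.
rng = np.random.default_rng(0)
far = rng.random((64, 64))
near = rng.random((64, 64))
combined = hybrid(far, near)
print(combined.shape)
```

One image carries both signals at once; which one a viewer reads depends on which band of spatial frequencies reaches their retina, which is exactly the kind of no-cross-interference targeting described above.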
Of course, the obvious question is: why do we even need a cinema without cuts? Gepshtein’s research is pretty fundamental; he’s focusing on articulating things from first principles. But doing so should one day make it possible to incorporate his ideas into, say, existing film-production software like Maya, putting powerful new tools of perceptual manipulation in the hands of working directors. And directors have always been fascinated by the artistic possibilities of a scene with no cuts.
For instance, Alfred Hitchcock’s 1948 Rope was cleverly shot and edited to look like one unbroken take, and Alexander Sokurov’s 2002 drama Russian Ark was filmed in a single 96-minute Steadicam sequence shot. Then there is the famous opening sequence of Orson Welles’ Touch of Evil, in which a camera mounted on a crane swoops around a busy border town scene, ending with a bang. And of course, there are Cuaron’s own experiments with using longer unbroken takes for maximum impact on the viewer.
Fresh alternatives would be welcome for like-minded directors. There is a sense that despite the utility of the cut, it may also take audiences out of the story, reminding viewers, if only subconsciously, that they are watching a movie. Derrickson recalls hearing Steven Spielberg express his preference for longer takes and fewer cuts for this reason. With a single take, says Derrickson, you can “fall under the spell of a moment that does not end.” Good movies are downright magical in their ability to transport our imaginations — and neuroscience could make them even more so in the future.
Frith, U. and Robson, J.E. (1975) “Perceiving the language of films,” Perception 4: 97-103.
Germeys, F. and d’Ydewalle, G. (2007) “The psychology of film: perceiving beyond the cut,” Psychological Research 71: 458-466.
Gepshtein, S. and Kubovy, M. (2000) “The emergence of visual objects in space-time,” Proceedings of the National Academy of Sciences 97(14): 8186-8191.
Gepshtein, S. and Kubovy, M. (2007) “The lawful perception of apparent motion,” Journal of Vision 7(8):9, 1-15.
Kelly, D.H. (1972) “Adaptation effects on spatio-temporal sine-wave thresholds,” Vision Research 12: 89-101.
Kelly, D.H. (1979) “Motion and Vision II: Stabilized spatio-temporal threshold surface,” Journal of the Optical Society of America 69: 1340-1349.
Koffka, K. Principles of Gestalt Psychology. New York: Harcourt, Brace & World, Inc, 1935/1963.
Kubovy, M. and Gepshtein, S. “Perceptual grouping in space and in space-time: An exercise in phenomenological psychophysics,” Perceptual Organization in Vision: Behavioral and Neural Perspectives, M. Behrmann, R. Kimchi, and C.R. Olson (eds). Mahwah, NJ: Lawrence Erlbaum, 2003.
Murch, W. In the Blink of an Eye: A Perspective on Film Editing. Los Angeles: Silman-James Press, 2001.
Ramachandran, V.S. and Anstis, S.M. (1983) “Perceptual organization in moving patterns,” Nature 304: 529-531.
Smith, Tim J., and Henderson, John M. (2008) “Edit Blindness: The relationship between attention and global change blindness in dynamic scenes,” Journal of Eye Movement Research 2(2):6, 1-17.
Smith, T. J., Levin, D. T., and Cutting, J. (2012) “A Window on Reality: Perceiving Edited Moving Images,” Current Directions in Psychological Science 21: 101-106.
Smith, T. J. (2012) “The Attentional Theory of Cinematic Continuity,” Projections: The Journal for Movies and the Mind. 6(1), 1-27.
Smith, T. J. “Watching you watch movies: Using eye tracking to inform cognitive film theory,” Psychocinematics: Exploring Cognition at the Movies, A. P. Shimamura (ed.). New York: Oxford University Press, 2013.
von Schiller, P. (1933) “Experiments on stroboscopic alternation,” Psychologische Forschung 17: 179-214.
Wertheimer, M. (1912) “Experimental studies on seeing motion,” Zeitschrift für Psychologie 61: 161-265.
Zacks, Jeffrey M. and Magliano, Joseph. “Film, Narrative, and Cognitive Neuroscience,” Art and the Senses, D.P. Melcher and F. Bacci (eds). New York: Oxford University Press.