An entry from the Best Illusion of the Year Contest started off as a representation of the Loch Ness monster, but it has grown into one of the most intriguing, and potentially most important, illusions. The effect stems from a jumping ring: line segments arranged randomly in an annulus rotate smoothly, then periodically rescramble into a new pattern of randomly arranged line segments. Bizarrely, the rescrambling appears to viewers as a rapid backward jump in rotation, even though there is no actual motion (or direction of motion) during the rescrambling. Pretty cool.

Mark Wexler of the University of Paris V in France, who discovered the original Loch Ness effect, took third prize in the contest. He named it the Loch Ness aftereffect after a classic illusion, known to the ancient Greeks, that Robert Addams rediscovered in 1834 at the Falls of Foyers (the waterfalls that feed Loch Ness in Scotland). If you stare at the waterfalls for a while, the stationary rocks near the falling water will appear to drift upward. But unlike the waterfall effect, Wexler’s illusory motion aftereffect is 100 times faster than the inducing movement! So this is not your parents’ waterfall effect: something new is happening.

This kind of illusion is called an aftereffect because you perceive the illusion—in this case the illusory motion—only after the veridical motion has already stopped. Another example of an aftereffect occurs when you see spots after a camera flash—that’s a luminance aftereffect. Wexler, at the time of the contest, did not know much about the effect: it was new. But in the years since, he and his colleagues have had a chance to study it further: they published a paper on their results in the Proceedings of the National Academy of Sciences.

One result is that the apparent speed of the annulus’s illusory rotation varies as a function of the space between the line segments. More granularity in the sprinkles on the donut leads to faster illusory rotation. The conclusion is that large-scale image variations—what vision scientists call low spatial frequencies—are linked to higher illusory speeds. That’s why the donut with the mixed toppings seems to rotate faster near its middle (low spatial frequencies) than at its outer edge (high spatial frequencies). And when you measure it, it’s literally faster than fast: the illusory motion goes faster than the fastest motion that your visual system can see. The authors of the paper therefore renamed the Loch Ness effect the “highphi” effect, which is a scientific play on words, since the “phi effect” is the ability to see motion from two or more sequentially blinking but stationary lights (as in a theater marquee). The highphi effect demonstrates the actual speed limit of your fragile little mind. Boom! Think on that and try not to cry yourself to sleep tonight.

One of the most interesting aspects of this study is its implications for brain function. At this year’s ECVP in Barcelona I had a long discussion with Andrew Glennerster—a professor at the University of Reading, and one of the authors of the study—about those implications. In his view, the illusion stands as evidence that the brain’s function is to perform a type of computation called “Bayesian analysis” (described further below). This relates to how the brain knows what’s happening in the world. The issue may at first seem trivial: we see stuff, it happened, ’nuff said. But the problem is actually very deep, and you can intuit this if you think about the fact that you are an imperfect data collector. Your inner self, the person that is created by your brain—who watches the movie of your life from within your brain, and experiences all the joys and pains—is fed information about the world by your senses, which are limited by their resolution and signal strength, and by your own cognitive abilities (i.e., how well you pay attention to one thing versus another). So, in a word, your senses suck. They do not provide accurate data—you know that must be true because illusions exist!—so your brain fills in a lot of information about the world to make it seem real, coherent, and seamless. But how does it fill in? Does it extrapolate from the evidence in your senses, or does it infer from your internal model of the world?

Certainty about evidence from the world is clearly a real problem for your brain. Philosophers, scientists, and statisticians have thought a lot about this, and the current thinking comes down to Bayesian (inferring from a model of the world) versus Classical (extrapolating from evidence) approaches to evidence gathering. A deep discussion of Bayesian versus Classical approaches is best left to the statisticians who are currently battling over these paradigms. But allow me to summarize concisely here by oversimplifying: if the brain is a Bayesian device, then it has a theory about what is going on around it, and evidence is gathered to shore up that theory. That means that incoming data from the world—from your senses or from memory or other cognitive inputs, such as cause-effect inference—is basically used to confirm the probability that your theory is correct. The Classical view would be that the brain fundamentally has no theory: it simply collects data about what is going on and, based on the data, forms a theory of the world. Either way, your consciousness of the world is only a theory (how could it be anything else, given that your brain is a collection of microscopic sacks of salt water?). But if your brain is a Bayesian device, you have preconceived beliefs about what the world is like, whereas if your brain is a Classical device, you are seeking to know an undiscovered country. It’s a vast philosophical divide, and it’s very important because it bears on perhaps the primary function of the brain, in terms of how it collects data about the world.
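The distinction can be made concrete with a toy example. Here is a minimal sketch, in Python, contrasting the two paradigms using a biased coin as a stand-in for noisy sensory evidence. The function names, the Beta(2, 2) prior, and the numbers are all illustrative assumptions of mine; nothing here comes from the paper.

```python
# A toy contrast between Classical (frequentist) and Bayesian estimation.
# The "world" is a coin; the "evidence" is a short list of flips (1 = heads).

def classical_estimate(flips):
    """Classical/frequentist: no prior theory. The estimate of the coin's
    bias is simply the observed frequency of heads."""
    return sum(flips) / len(flips)

def bayesian_estimate(flips, prior_heads=2, prior_tails=2):
    """Bayesian: start with a theory (here a Beta(2, 2) prior, i.e. a mild
    belief that the coin is fair) and update it with the data. For a Beta
    prior with binomial data, the posterior mean is
    (prior_heads + heads) / (prior_heads + prior_tails + total flips)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

flips = [1, 1, 1, 0]  # three heads, one tail

print(classical_estimate(flips))  # 0.75  -> the data, and nothing but the data
print(bayesian_estimate(flips))   # 0.625 -> the data, pulled toward the prior belief of 0.5
```

The point of the sketch is the last two lines: given identical evidence, the Classical device reports exactly what it counted, while the Bayesian device reports a compromise between the evidence and its preexisting theory. On this view, an illusion like highphi is what a Bayesian observer produces when the prior theory, not the data collection, is wrong.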

Traditionally, scientists have taken the Classical view: hence the name. Statisticians call this the frequentist approach. You make decisions about what to do next by essentially counting the events in the world around you and predicting what will happen next based on the data from your life. Illusions—when the physical reality doesn’t match the perception—are thus caused by errors in data collection or by shortcuts the brain takes to solve specific problems based on the data from the world. But Glennerster—the Bayesian sympathizer—took the view that highphi is evidence that the brain, instead, has a model for what is happening at any given time, and that the highphi illusion occurs because the model is wrong.

You may object that the brain could be doing both Classical and Bayesian analyses (and that we must be capable of both if we can conceive of both in this discussion). And that’s true, but Glennerster is correct in thinking that something as fundamental as how we see motion in random patterns is probably not a mix of Bayesian and Classical approaches. It’s a low-level, fundamental computation in the brain, and the brain’s paradigm for solving it certainly does not involve high-level cognition: if we can show that the circuits in the brain that solve this illusion use a Bayesian—or Classical—approach, it would provide the first evidence that I know of that the brain does indeed favor one paradigm over the other.

And that’s huge. It’s of similar import to the question of whether Darwin was right about natural selection, or whether Lamarck was right that traits were inherited by offspring based on the specific needs the parent had in its life. In fact, these dichotomies—Darwin/Lamarck and Classical/Bayesian—may fundamentally be the same. Natural selection, I would argue, results in an organism that seems to have been designed for a purpose (like you and me) through fundamentally random and arbitrary processes. Like a Classical theory: stuff happens and we deal with it as it comes. Lamarckian inheritance, it follows, results in an organism based on a model of the world that is formed by prior beliefs (parental experience) of how the world works. Lamarckian offspring are thus literally designed for the purpose of dealing with their ancestors’ problems. A Bayesian approach is similar: we know that stuff is going to eventually happen, and so we are watching for it when it comes.

These approaches are two sides of the same coin. They both may have the same value in their use, but they can’t both be face up at the same time. If the brain is fundamentally based on one approach, then it’s not based on the other. That is: whether the brain is a Bayesian versus Classical device is fundamental to understanding who and what we are. It would certainly be remarkable if our brains are Bayesian (which I see as fundamentally Lamarckian), despite being the result of Darwinian natural selection.

The highphi illusion may be just the thing we need to work that out in future research.