Right now, your brain is decoding these symbols with the help of Bayes’ theorem, a formula devised by a British cleric more than 250 years ago. Or so some scientists suspect.
My last post, “Bayes’ Theorem: What’s the Big Deal?”, points out the theorem’s power and limitations. Invented by Presbyterian minister Thomas Bayes as an aid for calculating odds in games of chance, the theorem provides a way to update the plausibility of hypotheses based on new information.
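The update the theorem prescribes can be shown in a few lines. This is my own toy illustration, not an example from the post: a "loaded die" hypothesis gains plausibility after a six is observed, with all numbers invented for the sketch.

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) = P(E|H)P(H) / P(E), with one rival hypothesis.

    prior          -- initial plausibility of hypothesis H
    likelihood     -- P(evidence | H)
    likelihood_alt -- P(evidence | not-H)
    """
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis H: the die is loaded to favor sixes, P(six | H) = 0.5.
# Alternative:  the die is fair, P(six | fair) = 1/6.
prior = 0.1  # we start out doubting the die is loaded
posterior = bayes_update(prior, 0.5, 1/6)
print(posterior)  # -> 0.25: one six raises "loaded" from 10% to 25%
```

Feeding each posterior back in as the next prior is what "updating the plausibility of hypotheses based on new information" means in practice.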
The formula has spawned an enormous range of applications, especially in artificial intelligence. Inspired by these successes, some scientists conjecture that our brains employ Bayesian algorithms. If they can help a computer perceive, recognize, reason and decide, perhaps they help our brains carry out these tasks; brains are, after all, just weird, squishy computers.
Given that many brains, mine included, have a hard time grasping Bayes’ theorem, the Bayesian-brain thesis might seem surprising--and in fact it has provoked pushback. Seeking insight into the debate, last month my brain and I attended a two-day meeting at New York University, “Is the Brain Bayesian?”
The meeting was organized by philosophers Ned Block and David Chalmers of the NYU Center for Mind, Brain and Consciousness. The Center has been busy. In November, it sponsored a workshop on integrated information theory, which I critique here. Whereas integrated information theorists seek to explain consciousness, how the mind feels, Bayesians focus on cognition, what the mind does. The announcement for the NYU Bayes-bash stated:
Bayesian theories have attracted enormous attention in the cognitive sciences in recent years. According to these theories, the mind assigns probabilities to hypotheses and updates them according to standard probabilistic rules of inference. Bayesian theories have been applied to the study of perception, learning, memory, reasoning, language, decision-making, and many other domains. Bayesian approaches have also become increasingly popular in neuroscience, and a number of potential neurobiological mechanisms have been proposed. At the same time, Bayesian theories raise many foundational questions, the answers to which have been controversial: Does the brain actually use Bayesian rules? Or are they merely approximate descriptions of behavior? How well can Bayesian theories accommodate irrationality in cognition? Do they require an implausibly uniform view of the mind? Are Bayesian theories near-trivial due to their many degrees of freedom? What are their implications for the relationship between perception, cognition, rationality, and consciousness?
Good questions, which the conference aired rather than resolved. In this post, I’ll summarize the positions of the first two speakers at the meeting, who gave terrific overviews of the pros and cons, respectively, of the Bayesian-brain hypothesis. I will then declare a winner.
Kicking things off was Joshua Tenenbaum of MIT’s brain and cognitive science program, who tries to “reverse engineer” human minds and replicate their performance in computers. As he explains on his website (which links to papers on the Bayesian brain and related topics), “bringing machine-learning algorithms closer to the capacities of human learning should lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition.”
At the NYU meeting, Tenenbaum reminded us just how clever we are even as infants. If we see a tower of blocks, we instantly know whether it is stable or likely to topple. We quickly recognize faces and guess based on facial expressions what people are feeling. In situation after situation, we rapidly jump from particular facts to generalizations that can help us understand new facts and situations.
Bayesian programs can master these and countless other cognitive feats better than other artificial-intelligence approaches, Tenenbaum contended. Bayesian programs are especially effective at replicating “how we get so much out of so little”—that is, how we glean knowledge even from sparse, ambiguous data.
Shortly after the NYU meeting, The New York Times hailed research by Tenenbaum and two co-workers on a Bayesian program that “rivals human abilities.” The program recognizes hand-written characters from many different alphabets, including Greek and Sanskrit. It also generates novel characters that can pass a “visual Turing test.” Human judges had difficulty distinguishing between characters drawn by humans and by the Bayesian program.
Tenenbaum and his co-authors claim in Science that their model "captures" our capacity for "action," "imagination," "explanation" and "creative generalization." The program outperforms non-Bayesian approaches, including the much-touted "deep learning" method, which typically gleans knowledge only after sifting through large data sets. (For details on the research, see this press release.)
In an interview with Times technology reporter John Markoff, Tenenbaum emphasizes the relevance of the research to human cognition: “With all the progress in machine learning, it’s amazing what you can do with lots of data and faster computers… But when you look at children, it’s amazing what they can learn from very little data. Some comes from prior knowledge and some is built into our brain.”
Extolling Bayesian programs at NYU, Tenenbaum sounded like a proud parent. He left the podium only after repeated reminders from Chalmers, his host, that he had exceeded his time limit. Tenenbaum did temper his enthusiasm a bit. While insisting on the superiority of the Bayesian paradigm for modeling cognition, he conceded that it is probably "insufficient" and will need complementing with other approaches.
But Bayesian models might be unnecessary as well as insufficient, according to Jeffrey Bowers, who followed Tenenbaum. Unlike Tenenbaum, who exuded excitement, Bowers seemed faintly mournful during his talk, as if he hated bearing bad news.
In my previous post, I said Bayes’ theorem reminds me of the theory of evolution, since both yield nonsense as well as profound insights. Bowers, a psychologist at the University of Bristol, made the same analogy.
His presentation reprised "Bayesian just-so stories in psychology and neuroscience," a 2012 paper he co-wrote, which evokes a famous complaint by biologist Stephen Jay Gould about the flimsy, ad hoc style of some evolutionary accounts of biological traits. Gould compared such explanations to "just-so stories," fanciful tales about how the leopard got his spots and the camel his hump.
In the same way, Bowers contended, Bayesian models can replicate virtually any cognitive task, given tweaking of prior assumptions and input. They are so flexible that they are immune to falsification, much like the explanations that evolutionary psychology offers for human traits.
If anything, Bowers noted, the comparison of Bayesian and Darwinian theories is unfair to the latter. For all its faults, evolutionary psychology provides a plausible account of our irrationality: It stems perhaps from conflicts between our conscious desires and the compulsion of our selfish genes to reproduce, or from mismatches between modern environments and those in which Homo sapiens emerged.
A Darwinian perspective, Bowers said, also contradicts the Bayesian claim that the brain employs highly efficient, even “optimal” methods for carrying out cognitive tasks. Natural selection, which cobbled our brains together from pre-existing biological features, designed them to be “good enough” rather than optimal.
Other information-processing models, such as neural networks, can replicate the results of Bayesian models, Bowers added. And neuroscience, contrary to the claims of Bayesians, has provided little or no support for the idea that neurons carry out Bayesian-style processing of information.
Bowers concluded with a final ironic jab. A Bayesian analysis of the Bayesian-brain hypothesis, he suggested, reveals how weak the hypothesis is, and how susceptible Bayesians are to confirmation bias.
As I mentioned in my previous post, Bayes’ theorem implies that your hypothesis cannot be deemed credible until you have scrupulously considered all alternative explanations for your evidence. Bayesian-brain enthusiasts too often fail to heed this precept, Bowers told the NYU audience.
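Bowers's precept can be made concrete with a toy calculation (my own illustration, with invented numbers, not an example from his talk): a hypothesis that looks well confirmed against a single weak rival loses much of its support as soon as an equally good alternative is admitted into the comparison.

```python
def posterior(prior_h, like_h, alternatives):
    """Posterior for hypothesis H against a list of rivals.

    prior_h      -- prior probability of H
    like_h       -- P(evidence | H)
    alternatives -- list of (prior, likelihood) pairs for rival hypotheses
    """
    total = like_h * prior_h + sum(p * l for p, l in alternatives)
    return like_h * prior_h / total

# With one weak rival considered, H looks strong:
narrow = posterior(0.5, 0.8, [(0.5, 0.2)])
print(narrow)  # -> 0.8

# Add a second rival that explains the evidence just as well,
# and H's posterior drops sharply:
broad = posterior(1/3, 0.8, [(1/3, 0.2), (1/3, 0.8)])
print(round(broad, 3))  # -> 0.444
```

The arithmetic is trivial; the discipline of actually enumerating the rivals is the hard part, and it is the part Bowers accused Bayesian-brain enthusiasts of skipping.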
What Would Poe Think?
So who wins the Bayesian-brain debate? I hate to be so predictable, but I must give the nod to Bowers, the skeptic. My coverage of brain-and-mind research over the last few decades has left me with a strong bias against alleged breakthroughs. (See Further Reading.)
Moreover, the Bayesian-brain thesis can be boiled down to a dubious syllogism: Our brains excel at certain tasks. Bayesian programs excel at similar tasks. Therefore our brains employ Bayesian programs.
There are obvious limits to this logic. Peregrine falcons excel at flying, and so do F-15 jets. No one claims that peregrine falcons must therefore employ jet propulsion, because any fool can see that the mechanics of peregrine and jet propulsion are utterly unlike. If the analogy between our brains and Bayesian machines isn't self-evidently foolish, that's only because the mechanics of our cognition remain largely hidden from us.
I ended my previous post with a warning about the dangers of Bayes-style inference from Edgar Allan Poe, whom I happen to be re-reading lately. Researching this post, I stumbled across another apt Poe-ism.
This one addresses a key assumption of Bayesian-brainers, that we are largely rational in our choice and pursuit of goals. Poe complains that theorists of the mind too often base their conjectures not on what minds do but on what they should do. Substitute “natural selection” for “God,” and Poe’s rant would have made a fine contribution to the NYU conference:
“The intellectual or logical man, rather than the understanding or observant man, set himself to imagine designs--to dictate purposes to God. Having thus fathomed, to his satisfaction, the intentions of Jehovah, out of these intentions he built his innumerable systems of mind… It would have been wiser, it would have been safer to classify, (if classify we must), upon the basis of what man usually or occasionally did, and was always occasionally doing, rather than upon the basis of what we took it for granted the Deity intended him to do.”
Poe fans will no doubt recognize this passage from “The Imp of the Perverse,” which dramatizes how irrational our minds can be. If an imp tempts you to jump on the Bayesian-brain-wagon, (re)read Poe’s disturbing tale.