March 18, 2013
In 1956, a legion of famed scientific minds descended on Dartmouth College to debate one of mankind’s most persistent questions: Is it possible to build a machine that thinks? The researchers had plenty to talk about – biologists and mathematicians had suggested since the 1940s that nerve cells probably served as binary logic gates, much like transistors in computer mainframes.
Meanwhile, computer theorists like Alan Turing and Claude Shannon had been arguing for years that intelligence and learning could – at least in theory – be programmed into a machine of sufficient complexity. Within the next few decades, many researchers predicted, we’d be building machines capable of conscious thought.
Fifty-odd years after that first Dartmouth Conference, our sharpest supercomputers still struggle to hold basic conversations. We’ve created software that can drive our cars and predict our purchases, but the dreams of a true artificial brain – and of a working neuron-by-neuron model of the human brain itself – look even more distant than they did in the 1950s. The more we learn about how the brain works, the more interwoven and inextricable we realize its components and processes are – and the less like a computer it seems.
Take synapses, for example – the points where neurons link up and exchange information. Neuroscientists estimate that a human brain may contain about 150 trillion of them, and no two are quite identical – either to one another, or to any synapse in anyone else’s brain. On top of this complexity, every neuron in a brain is constantly learning, adapting, fine-tuning its sensitivity, tinkering with its synaptic connections – rarely wired the same way from one day to the next. In light of all this, it’s not hard to see why many scientists seriously doubt that we’ll map an entire human brain any time this century – much less engineer a digital version from scratch.
That’s not to say, though, that we can’t take concrete steps toward a human brain map – which is where the Human Connectome Project comes in.
A massive undertaking
Taking a cue from the Human Genome Project of the 1990s, the $30 million Human Connectome Project aims – ultimately – to help create a digitized map of every neuron and synapse in the human nervous system. Launched in 2011 with a team of more than 100 scientists at 10 institutions throughout the U.S. and Europe, the HCP has only taken the first few steps of its brain-mapping journey. Even so, HCP researchers have already begun releasing sizable data sets to the scientific community.
This March, the HCP published its latest results: a two-terabyte data set of functional MRI and diffusion-MRI connectivity scans of 1,200 volunteers – much of it at cutting-edge resolutions and frame rates; all of it available for free to qualifying scientists and institutions. “We’ve used some new fMRI techniques to accelerate our speed of acquisition,” says David Van Essen, a lead HCP researcher who serves as professor of neurobiology at Washington University School of Medicine in St. Louis. “These techniques allow us to capture each frame of a brain ‘movie’ in seven tenths of a second, rather than three or four seconds – so we’re capturing fMRI videos at a much higher frame rate than ever before.”
Whereas many fMRI researchers scan patients in a single scan session lasting 20 or 30 minutes, the HCP team has recorded and analyzed four hours of scans per volunteer – including one hour of “task-fMRI” scans taken as the subjects viewed moving shapes on a screen, listened to stories, did mental arithmetic, wiggled their fingers and toes, and performed other cognitive and behavioral tasks. “Our scans provide coverage of most of the brain, so we can help other researchers identify areas that are specialized for certain tasks,” Van Essen says. “But the large size of our subject pool means that our data can also be correlated with individual differences in task performance, and how those differences relate to differences in brain structure and function.” The researchers also made a point of brain-scanning families with twins (both identical and fraternal), whose cerebral similarities and differences may help scientists find some new answers in the age-old debate of “nature versus nurture.”
Orders from the top
The HCP’s new data set arrives at an opportune time. This February, the White House proposed a comprehensive brain-mapping effort tagged the “Brain Activity Map” – and although funding for a BAM project has yet to materialize, it’s clear that such an effort would benefit from contributions not only from HCP researchers, but also from computational neuroscientists around the world, many of whom are already working to build digital models of neural systems.
In fact, according to a report recently published in the journal Science, the BAM’s stated goals would extend beyond just mapping the activity of a brain’s entire complement of neurons – already an ambitious goal – and into the tasks of developing new models and tools for manipulating neural circuits at the cellular level.
Though some scientists believe this goal is realistic, many others contend that the BAM’s got serious launchpad problems – not the least of which is the yawning gap between micro-scale models of neuronal activity and macro-scale scans of overall brain activity.
“If the BAM gets funding, there will be exciting advances – but it will take years, if not decades, to move those technologies forward in the way that their advocates are dreaming about,” Van Essen says. He also points out, however, that a similar revolution took place in genomics research during the 1990s. “To make sense of those data,” he says, “new analysis methods needed to be invented; and what emerged was a world of bioinformatics. Even the smartest investigators didn’t know what we were getting into when we set out to decipher the human genome, so they had to bootstrap themselves into a realm of generating more and more powerful analysis and computational tools.”
Van Essen says he hopes, in his more optimistic moments, that we’ll eventually see a similar revolution in neuroinformatics and computational neuroscience: a revolution whose new approaches and technologies will combine micro-scale and macro-scale models of brain function. Such models could enable us not only to handle, store and visualize neurobiological data sets, but also to generate new tools for exploring and interpreting the complex circuitry of the human brain.
Where, exactly, does this latest research leave us in our quest to unravel the brain’s ultimate mysteries? “The idea of monitoring activity in the whole human brain at the level of individual cells is, I would say, far-fetched – at least in my lifetime,” Van Essen says. Building accurate computerized models of insect brains – perhaps even of rodent brains – might become feasible in the next few decades, he says. But a human brain contains more than 1,000 times as many neurons – and perhaps 10,000 times as many synapses – as the brain of the world’s smartest mouse.
What’s more, there really is no such thing as “the human connectome.” Although most of our brains are wired pretty similarly in areas that deal with sensory stimuli like sound and vision, the situation isn’t nearly so straightforward in the brain areas that have expanded most in our species’ recent evolution – the areas, in other words, that are considered crucial for uniquely human abilities like speech and self-reflection. In these newer areas of the brain, neural wiring and activity show striking differences from one person to the next.
This means that even if we manage to map every synapse of one lucky person’s brain, we still won’t have a clear picture of why your neural wiring gives you your own preferences and beliefs – any more than you can diagnose a problem with your own computer by poking around your friend’s hard drive.
For all these reasons, the Brain Activity Map is an ambitious effort that veers – at least, according to many scientists – into the realm of the quixotic. Still, there’s something to be said for chiseling away at longstanding problems like the workings of the human brain. Isaac Newton never guessed at the existence of quantum particles, but his celestial calculus freed us from the false and rigid concept of heavenly spheres. By the same token, our growing body of brain data may steer us away from outdated theories that tie us down to false premises, and toward a more integrated multi-level understanding of the human brain’s structure and function.
Twenty years ago, researchers compared the brain to a supercomputer packed with billions of microchips. At the turn of the twentieth century, it was a great steam engine; a hundred years before that, an intricate piece of clockwork. And so on, back through the millennia – until we reach the ancient Greeks, who seem to have unleashed this torrent of metaphors by likening the human brain to a catapult. In every age, the brightest scientists and philosophers find themselves tempted to describe the brain in terms of the moment’s latest technology – that is, until new technologies and brain breakthroughs turn those descriptions into clunking relics of bygone eras.
The brain and its workings, in other words, have a way of defying easy classification. Peer inside a neuron and you won’t find any binary switches or churning gears – only an ecosystem of protein structures and neurotransmitter molecules; a sub-cellular country that differs profoundly from any machine built by human hands.
Our models of the brain aren’t precisely accurate yet, but they’re more accurate – or, more to the point, less wrong – than models from 100 years ago were. We may not map every synapse of the human brain in our own lifetimes – but after all, front-page breakthroughs aren’t really what hard science is about. If we can chip away, even a little, at our unconscious misconceptions and assumptions, we’ll be doing a favor to the generations that follow our own. And that goal – in any age – is one worth striving for.