Cross-Check

Critical views of science in the news

Do Big, New Brain Projects Make Sense When We Don’t Even Know the “Neural Code”?

The views expressed are those of the author and are not necessarily those of Scientific American.





Does anyone still remember “The Decade of the Brain”? Youngsters don’t, but perhaps some of my fellow creaky, cranky science-lovers do. In 1990, the brash, fast-growing Society for Neuroscience convinced Congress to name the ’90s the Decade of the Brain. The goal, as President George Bush put it, was to boost public awareness of and support for research on the “three-pound mass of interwoven nerve cells” that serves as “the seat of human intelligence, interpreter of senses and controller of movement.”

One opponent of this public-relations stunt was Torsten Wiesel, who won a Nobel Prize in 1981 for work on the neural basis of vision. When I interviewed him in 1998 for my book The Undiscovered Mind, he grumbled that the Decade of the Brain was “foolish.” Scientists “need at least a century, maybe even a millennium,” to understand the brain, Wiesel said. “We are at the very beginning of brain science.”

I recalled Wiesel’s irritable comments as I read about big new neuroscience initiatives in the U.S. and Europe. In January, the European Union announced it would sink more than $1 billion over the next decade into the Human Brain Project, an attempt to construct a massive computer simulation of the brain. The project, according to The New York Times, involves more than 150 institutions. Meanwhile, President Barack Obama is reportedly planning to commit more than $3 billion to a similar project, called the Brain Activity Map.

Some scientists are criticizing these big initiatives in ways that remind me of Wiesel and the Decade of the Brain. The New York Times quoted brain researcher Haim Sompolinsky saying of the Human Brain Project, “The rhetoric is that in a decade they will be able to reverse-engineer the human brain in computers. This is a fantasy. Nothing will come close to it in a decade.”

The U.S. mapping project, neurologist Donald Stein told The Times, is based on a view of the brain that “is, at best, out of date and at worst simply wrong. The search for a road map of stable neural pathways that can represent brain functions is futile.”

Henry Markram of the Swiss Federal Institute of Technology, the leader of the Human Brain Project, has been bragging about his computer model, Blue Brain, for years. But as I pointed out three years ago, his computer simulations can’t perform any cognitive functions, such as seeing, hearing, remembering or deciding, so there is no way of telling whether they are capturing essential features of brains.

I compared the models of Markram and others to those plastic brains that neuroscientists like to use as paperweights. Another analogy is the “planes” that Melanesian cargo-cult tribes built out of palm fronds, coral and coconut shells after being occupied by Japanese and American troops during World War II. “Brains” that can’t think are like “planes” that can’t fly.

In spite of all our sophisticated instruments and theories, our own brains are still in many respects as magical and mysterious to us as a cargo plane was to those Melanesians. Neuroscientists can’t mimic brains because they lack basic understanding of how brains work; they don’t know what to include in a simulation and what to leave out.

Proponents of the big brain projects are comparing them to the Human Genome Project. There are two problems with that analogy. First, the Genome Project was an impressive technical achievement, but since its completion 10 years ago it has failed to deliver any significant medical breakthroughs. [See Postscript.] Second, unlike the brain projects, the Genome Project built upon a basic understanding of genetics. Decades before the Genome Project was launched, researchers deciphered the genetic code, the set of rules whereby specific sequences of base pairs in DNA generate specific proteins.

Neuroscientists have faith that the brain operates according to a “neural code,” rules or algorithms that transform physiological neural processes into perceptions, memories, emotions, decisions and other components of cognition. So far, however, the neural code remains elusive, to put it mildly.

The neural code is often likened to the machine code that underpins the operating system of a digital computer. According to this analogy, neurons serve as switches, or transistors, absorbing and emitting electrochemical pulses, called action potentials or “spikes,” which resemble the basic units of information in digital computers.

But the brain is radically unlike and more complex than any existing computer. A typical brain contains 100 billion cells, and each cell is linked via synapses to as many as 100,000 others. Synapses are awash in neurotransmitters, hormones, neural-growth factors and other chemicals that affect the transmission of signals, and synapses constantly form and dissolve, weaken and strengthen, in response to new experiences. Researchers have recently established that not only do old brain cells die, but new ones can form via neurogenesis.

Far from being stamped from a common mold, like transistors, neurons display a dizzying variety of forms and functions. Researchers have discovered scores of distinct types of neuron just in the visual system. And let’s not forget all the genes that are constantly turning on and off and thereby further altering the brain’s operation.

Assuming that each synapse in the human brain processes ten action potentials per second and that these transactions represent the brain’s computational output, the brain performs at least a quadrillion operations per second. Some supercomputers have already exceeded this information-processing capacity, encouraging claims by artificial-intelligence enthusiasts—notably Ray Kurzweil and other members of the Singularity cult—that computers will soon become vastly more intelligent than their creators.
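The arithmetic behind that estimate can be sketched in a few lines of Python. This is my own back-of-envelope illustration, assuming a conservative average of 1,000 synapses per neuron (the 100,000 figure cited above is an upper bound, not an average):

```python
# Back-of-envelope estimate of the brain's "computational output,"
# counting each synaptic transmission as one operation.
neurons = 100e9               # ~100 billion neurons
synapses_per_neuron = 1_000   # assumed conservative average (up to ~100,000)
spikes_per_second = 10        # assumed action potentials per synapse per second

ops_per_second = neurons * synapses_per_neuron * spikes_per_second
print(f"{ops_per_second:.0e} operations per second")  # prints "1e+15 operations per second"
```

With the upper-bound synapse count the total rises to 10^17 operations per second, which is why the quadrillion figure is only a floor.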

But the brain may be processing information at many levels below and above that of individual neurons and synapses. Indeed, some researchers suspect that each individual neuron, rather than resembling a transistor, is more like a computer in its own right, engaging in complex information-processing. Moreover, brains may employ many different methods of encoding information.

The first neural-code candidate was discovered in the 1920s by the British neurophysiologist Edgar Adrian. When Adrian increased the pressure on tactile neurons, they fired at an increased rate. This so-called rate code has now been demonstrated in many different animals, including Homo sapiens. But a rate code is a crude, inefficient way to convey information, akin to communicating solely by humming at different pitches.

Neuroscientists have therefore long suspected that the brain employs subtler codes. In so-called temporal codes, information is represented not just in a cell’s rate of firing but in the precise timing between spikes. For example, whereas a rate code treats the spike sequences 010101 and 100011 as identical, a temporal code assumes that the two sequences have different meanings.
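The distinction can be made concrete with a toy sketch, a deliberately crude illustration of the two example sequences above, not a model drawn from any of the projects discussed:

```python
# Toy illustration: two spike trains with the same firing rate but
# different spike timing. A rate code sees them as identical; a
# temporal code does not.
def rate_code(spike_train):
    """Rate code: only the number of spikes per time window matters."""
    return sum(spike_train)

def temporal_code(spike_train):
    """Temporal code: the exact pattern of spike timing matters."""
    return tuple(spike_train)

a = [0, 1, 0, 1, 0, 1]   # the sequences from the example above
b = [1, 0, 0, 0, 1, 1]

print(rate_code(a) == rate_code(b))          # True: both fire 3 spikes in 6 bins
print(temporal_code(a) == temporal_code(b))  # False: the timing differs
```

Any decoder that reads only firing rate cannot tell the two trains apart; a decoder sensitive to spike timing trivially can.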

On a more macro level, researchers are searching for “population codes” involving the correlated firing of many neurons. The late Francis Crick favored a code involving many neurons firing at the same rate and at precisely the same time, a phenomenon called “synchronous oscillations.” Others propose that information is carried not by spikes per se but by electromagnetic fields—generated by millions of electrochemical pulses—constantly sweeping through the brain.

So far, however, the evidence for any particular code remains tentative. The brain could utilize all these codes, or none. Complicating matters further, research on artificial cochleas and other prostheses suggests that brains may devise new codes in response to novel stimuli. Given all this confusion, you can see why some neuroscientists worry that cracking the neural code may take a long, long time. Maybe a century or longer.

Of all scientific fields, neuroscience has the most potential to produce revolutionary discoveries, with enormous philosophical as well as practical import. (Particle physics is so over.) Optimists will no doubt say that the Human Brain Project and the Brain Activity Map—by boosting funding and collaboration—might help us decipher the neural code, or codes. But I fear that these big, much-hyped initiatives will turn out to be as disappointing as the Decade of the Brain. Rather than boosting the status of neuroscience, they may harm its credibility.

Self-plagiarism alert: This post contains prose from several previous articles and from The Undiscovered Mind. These passages were so perfectly crafted that I didn’t see any point in trying to improve upon them.

Image: Saad Faruque, Flickr.

Postscript: Some readers challenge my claim that the Human Genome Project “has failed to deliver any significant medical breakthroughs.” Here’s what Nicholas Wade of The New York Times, whose reporting on genetics is if anything excessively positive, said in 2010: “Ten years after President Bill Clinton announced that the first draft of the human genome was complete, medicine has yet to see any large part of the promised benefits. For biologists, the genome has yielded one insightful surprise after another. But the primary goal of the $3 billion Human Genome Project—to ferret out the genetic roots of common diseases like cancer and Alzheimer’s and then generate treatments—remains largely elusive. Indeed, after 10 years of effort, geneticists are almost back to square one in knowing where to look for the roots of common disease.” Wade’s assessment still holds. The Genome Project was supposed to lead to gene therapies that could cure or treat diseases stemming from genetic mutations. Last summer, European health officials approved a gene therapy for a lipid-related disorder that affects about one in a million people. So far, not a single gene therapy has been approved for commercial sale in the U.S. I reiterate: the Human Genome Project has failed to fulfill its promise, and it had a much stronger scientific foundation than the new brain projects. Do I oppose funding for genetics and neuroscience? Of course not. The potential of this research is so vast that we can never stop supporting it, even if payoffs are slow in coming. But precisely because the research is so vitally important, it should be marketed honestly.

Post-Postscript: Henry Markram, in a comment below, criticizes my criticism of the Human Brain Project, for which he is “Coordinator.” He calls my views “nonsense” and “mind-boggling,” and he urges me and other critics to “elevate your discussion a little–it sounds like the house of Babylon…and just maybe we can get out of the dark ages here.” If I didn’t know Markram’s history, I might assume that his rant was posted by an imposter trying to make him look bad. But his comments recall his 2009 diatribe against Dharmendra Modha, leader of an IBM effort to model a cat brain. After Modha received some positive attention, Markram called the cat-brain model a “scam” that is “light years away from a cat brain, not even close to an ant’s brain in complexity. It is highly unethical of Mohda to mislead the public in making people believe they have actually simulated a cat’s brain. Absolutely shocking.” Okay, Modha was guilty of hype. But Modha’s hype pales beside that of Markram. Just months before he slammed Modha, Markram said at a TED Conference: “It is not impossible to build a human brain and we can do it in 10 years.” He indulges in more hype in his comments below, calling the Human Brain Project “probably the most rigorously reviewed proposal in the history of grants.” I find it, well, mind-boggling that the European Union has invested more than $1 billion in a project led by someone with so little credibility.

John Horgan About the Author: Every week, hockey-playing science writer John Horgan takes a puckish, provocative look at breaking science. A teacher at Stevens Institute of Technology, Horgan is the author of four books, including The End of Science (Addison Wesley, 1996) and The End of War (McSweeney's, 2012). Follow on Twitter @Horganism.






Comments (21)
  1. bigbopper 7:14 pm 03/23/2013

    Agree that these brain projects are not the same as the Human Genome Project. But we heard the same type of negativity when the Human Genome Project was being proposed, yet it’s been a huge success. The statement that it has not led to any medical breakthroughs is clearly incorrect: the information from whole cancer genome studies which for the first time provide a complete view of all the genetic abnormalities in a cancer cell would not have been possible without it. I don’t see why we shouldn’t try to do what we can now both in terms of mapping the brain and attempting to duplicate it in silico. We’re bound to learn a lot, and there will almost certainly be practical applications. Naysaying and carping are easy to do. But they’ve never led to any progress.

  2. zstansfi 7:22 pm 03/23/2013

    So John, your criticism of attempts to better understand the human brain is the cranky argument that we need centuries to do it? How is this anything other than retrogressive: “Don’t even strive because it will take too long.”

    Your “sophisticated” analogies are less than illuminating. Current attempts to produce brain models are “paperweights” or planes that cannot fly? Very nice attempt to address the intellectual basis of this work.

    Even your criticism of past projects is flawed. The Human Genome Project has been an enormous success: the biggest part being the realization of just how much we still have to learn about genetic function. The only way to discover this was in sequencing the code on a grand scale, rather than muddling forward without any kind of road map.

    Moreover, the “neural code” by your estimation is far more expansive than the genetic code (which contains a puny 64 possible combinations), making this a horribly uninformative analogy. Do we need to understand exactly how neural machinery produces experience to even study it? To correct your analogy: that’s like saying we need to understand how all human RNAs and proteins interact before we can even begin mapping out the human genome. You’ve got it entirely backwards. We’ve saved enormous amounts of effort in this task as a result of having sequenced the genome.

    (And then you draw Ray Kurzweil into the mix–this just helps to conflate the recently proposed Brain Activity Map with nonsense and pseudoscience, without actually illuminating whether this project is reasonable).

    If you would read the actual Brain Activity Map proposal you might realize that what researchers want to do is develop techniques which allow data to be collected from many thousands of cells at once. The exact form of this data (e.g. whether it represents a rate, temporal or synaptic code or something entirely novel) doesn’t have to be restricted in the manner that Horgan assumes. Thus, there is no reason why we need to have already solved “The Neural Code” or whatever Holy Grail you think is needed. Rather, the aim is to develop techniques which might actually allow us to understand such a code at some point in the future.

    One thing is for sure: if we balk at every big project because “we haven’t solved everything yet”, then it certainly will take hundreds of years to understand how the brain functions. Let’s address the real challenges associated with these sorts of proposals instead: are they technically possible, where will the funding come from, etc.

    Summaries of the proposal have been published at the following URLs, and I have a brief summary on my blog:

    http://www.sciencemag.org/content/339/6125/1284
    http://www.cell.com/neuron/abstract/S0896-6273%2812%2900518-1
    http://neuroautomaton.com/human-brain-map-set-for-completion-by-2030/

  3. littleredtop 8:00 pm 03/23/2013

    Spending BIG taxpayer money on any scientific project is foolishness and the Obama $3 billion big brain project is perhaps the most foolish and the most wasteful.

  4. Tanmay 9:02 pm 03/23/2013

    Yes, all the people who are investing 4 Bn big ones are stupid, and you are the only smart one, Dear Author.

  5. Chryses 9:52 pm 03/23/2013

    Dear Mr. Horgan,

    Although you are likely correct in your assessment, you are being seriously non-PC.

  6. EricHalgren 11:36 pm 03/23/2013

    Is there any evidence that Neuroscientists lost credibility as a result of the Decade of the Brain?
    Everyone agrees that the human brain is incredibly complicated and that it will take a long time to understand it. Does that mean that we should not do everything we can, including spending 1/40,000th of the annual government spending in the US to make some progress? Sure, we should spend the money wisely and avoid creating unrealistic expectations, but we can do that while expressing honestly the enthusiasm we feel to be pushing back this frontier.

  7. sjfone 8:09 am 03/24/2013

    Much work to do.

  8. Andrei Kirilyuk 9:00 am 03/24/2013

    These big science projects have the same, evident sense as all other big science projects, it’s their big money itself going to few self-chosen sages, without any real possibility of investor (finally always public) control of the result efficiency. Come on, John, we know well this unique sense of modern science spending (as well as “science support” by “activists”) – and the great one, in proportion to astronomical sums wasted by all but indeed gained by few triumphant parasites. The list of those “great” (as well as smaller) projects will be long, you know it. And it’s already a very good case when money is simply wasted for nothing, like in those last-time Fundamental Physics Prices of $3 million each going to theories explicitly recognised as futile, even by the establishment (which continues, however, its totalitarian investments in those misleading theories and yet much more in their senseless “experimental verification”). There are other, more “successful” cases, when they spend it to produce real modifications in systems whose dynamics they do not understand at all, like in genetic and ecological/climate engineering projects (but actually in all high-energy projects as well). When we are already in a huge degradation tendency, let’s use all our free resources to enrich a small caste of “intellectual” frauds and irresponsible tricksters because we don’t have enough problems yet… What about the old-fashioned criterion of real (fundamental and practical) problem solution? What about the honest competition of scientific projects and results according to this criterion (rather than subjective opinions of intrinsically corrupt “peer-review” mafia)?

  9. DivisionByZero 9:15 am 03/24/2013

    Wow! What’s with all of the whining about BAM? So far, all I have heard is that it is hard. So what? That’s why it’s worth doing. Will it take more than 10 years? Probably. But if we don’t start now, when? At worst it’s the accumulation of data and the creation of tools for analysis that can be used to generate new theories. Of course the prevailing theories of the researchers that participate in BAM will determine which data is recorded and which tools are developed but that’s no different than any other research program. But, here’s where we get to the real source of rancor in my uncharitable opinion. The whiners are whining because their pet theories didn’t get funded. Will BAM work? Maybe or maybe not. Personally I’m skeptical because despite what this article suggests it’s not even clear that thought can be reduced to computation or even modeled by it for simulation but it’s worth trying.

  10. DivisionByZero 9:31 am 03/24/2013

    Also, in case I wasn’t clear: There may be no “neural code” at all and therefore waiting for it before doing any research would prevent us from making any progress at all.

  11. jsweck 10:55 am 03/24/2013

    “Neuroscientists have faith that the brain operates according to a “neural code,” rules or algorithms that transform physiological neural processes into perceptions…”

    This is not how computers work – in that realm, software does the perception, never hardware.

  12. bgrnathan 11:57 am 03/24/2013

    HOW DO EGG YOLKS BECOME CHICKENS? (Internet Article) When you divide a cake, the parts are smaller than the original cake and the cake never gets bigger. When we were a single cell and that cell divided, the new cells were the same size as the original cell and we got bigger. New material had to come from somewhere. That new material came from food. The sequence in our DNA directed our mother’s food, we received in the womb, to become new cells forming all the tissues and organs of our body. Understand how DNA works. Read my Internet article: HOW DO EGG YOLKS BECOME CHICKENS? Just google the title to access the article.

    This article explains how DNA and cloning work.

    Babu G. Ranganathan
    (B.A. Bible/Biology)

    Author of the popular Internet article, TRADITIONAL DOCTRINE OF HELL EVOLVED FROM GREEK ROOTS

  13. marclevesque 7:54 pm 03/24/2013

    Yeah, I’m also getting the impression the BAM as presented or at this stage is misguided to say the least, and, I don’t think it can be compared to the genome project in any significant way.

    And now we need to map glia function too :

    http://blogs.scientificamerican.com/guest-blog/2013/03/07/human-brain-cells-make-mice-smart/

    On a tangent, I just stumbled on this link :

    http://bigthink.com/experts-corner/attention-ray-kurzweil-we-cant-even-build-an-artificial-worm-brain

  14. markram 3:37 am 03/25/2013

    What a load of nonsense is written in this article and in some of the comments. Torsten Wiesel is the Chairman of this so-called controversial Human Brain Project and has been involved in its formation from the beginning. So go figure that one out.

    It is mind boggling to see this level of confusion by intelligent people…so far from understanding even what is actually happening, what is being proposed, and so busy with self-affirmation.

    Before you criticize (and after doing your homework), why don’t you try to propose an alternative plan that will make a real difference…you have one?

    Do any of you realize that this is probably the most rigorously reviewed proposal in the history of grants? Four years of pruning from 120 projects with a final selection of 2 projects by a panel of 25 reviewers, that included Nobel prize winners. This is a 600 page plan. Are they all fools? Do you really believe that the EU dishes out 1B euro just like that? What kind of plan can you imagine you’d have to have to be able to prove delivery and success every 12 months – required to get the next stage of funding.

    Seriously guys elevate your discussion a little – it sounds like the house of Babylon…and just maybe we can get out of the dark ages here.

    Henry Markram
    Coordinator, Human Brain Project

  15. rshoff 12:09 pm 03/25/2013

    @Markram – You are surely correct about the confusion of intelligent (and educated) people. That is a core problem across all disciplines. This article, and articles like these, gives you an opportunity to see and hear some of the dissension and that gives you an opportunity to educate the general public and provide a better argument against misunderstandings of the project. So you could actually use the dissension to build support. In my ignorance, I came away from the article with my own feeling that the project(s) will result in good science, but that a decade will not be enough time to realize the goals.

  16. marclevesque 12:41 pm 03/25/2013

    markram -

    “so far from understanding even what is actually happening, what is being proposed”

    I’m surely open to understanding more, and maybe you can help us a little to see through the fog…

    I’ve looked here : http://www.humanbrainproject.eu/index.html : and can see the project is divided into 11 pillars one of which is the Molecular Neuroscience Pillar that states part of the work is “to integrate the data in molecular level models of neurons, glia and synapses” so the implications from my comment about glia seem to be at least acknowledged by the project.

    From what I’ve read so far, I’m seeing a lot of work and research that should obviously go on, and a lot of potentially productive and grounded speculation. But throughout the project’s website there is also a lot of wild speculation, speculation that is so disconnected or far removed from where we are today, that it can’t bring anything useful to the table, and it might even be detrimental to brain research in the future because if (as) these levels of expectation are not met, then all areas of related research could end up having more trouble marketing themselves.

    “have to be able to prove delivery and success every 12 months – required to get the next stage of funding”

    That is a good thing for sure. Though I didn’t find any specifics on the web site about conditional funding requirements, and at first glance, parts of the review and audit structure also seem to be at least a little prone to self-affirmation and bias.

  17. dubina 4:11 pm 03/26/2013

    Two negatives:

    (1) “First, the Genome Project was an impressive technical achievement, but since its completion 10 years ago it has failed to deliver any significant medical breakthroughs.”

    (2) “…the primary goal of the $3 billion Human Genome Project–to ferret out the genetic roots of common diseases like cancer and Alzheimer’s and then generate treatments–remains largely elusive.”

    ******

    One positive: (one of many)

    “The first multi-gene test that can help predict cancer patients’ responses to treatment using the latest DNA sequencing techniques has been launched in the NHS, thanks to a partnership between scientists at the University of Oxford and Oxford University Hospitals NHS Trust.”

    “The test detects mutations across 46 genes in cancer cells, mutations which may be driving the growth of the cancer in patients with solid tumours. The presence of a mutation in a gene can potentially determine which treatment a patient should receive.”

    “The researchers say the number of genes tested marks a step change in introducing next-generation DNA sequencing technology into the NHS, and heralds the arrival of genomic medicine with whole genome sequencing of patients just around the corner.”

    ******

    @ Anybody who knows,

    Please address this glaring discrepancy. I know several individuals who are thinking of shelling out 300 British pounds for the aforementioned 46 gene panel in hopes of finding better treatment than might otherwise be available on the health service conveyor belt of usual chemotherapy regimes.

  18. dubina 4:49 pm 03/26/2013

    Re “neural code candidates”, consider the following excerpt from “Brain Cells for Grandmother” by Rodrigo Quian Quiroga, Itzhak Fried and Christof Koch, published online in Scientific American 14 January 2013:

    OUR RESEARCH is closely related to the question of how the brain interprets the outside world and translates perceptions into memories. Consider the famous 1953 case of patient H.M., who suffered from intractable epilepsy. As a desperate approach to try to stop his seizures, a neurosurgeon removed his hippocampus and adjoining regions in both sides of the brain. After the surgery, H.M. could still recognize people and objects and remember events that he had known before the surgery, but the unexpected result was that he could no longer make new long-lasting memories. Without the hippocampus, everything that happened to him quickly fell into oblivion. The 2000 movie Memento revolves around a character who has a similar neurological condition.

    H.M’s case demonstrates that the hippocampus, and the medial temporal lobe in general, is not necessary for perception but is critical for transferring short-term memories (things we remember for a short while) into long-term memories (things remembered for hours, days or years). In line with this evidence, we argue that concept cells, which reside in these areas, are critical for translating what is in our awareness—whatever is triggered by sensory inputs or internal recall—into long-term memories that will later be stored in other areas in the cerebral cortex. We believe that the Jennifer Aniston neuron we found was not necessary for the patient to recognize the actress or to remember who she was, but it was critical to bring Aniston into awareness for forging new links and memories related to her, such as later remembering seeing her picture.

    Our brains may use a small number of concept cells to represent many instances of one thing as a unique concept—a sparse and invariant representation. The workings of concept cells go a long way toward explaining the way we remember: we recall Jennifer and Luke in all guises instead of remembering every pore on their faces. We neither need (nor want) to remember every detail of whatever happens to us.

    What is important is to grasp the gist of particular situations involving persons and concepts that are relevant to us, rather than remembering an overwhelming myriad of meaningless details. If we run into somebody we know in a café, it is more important to remember a few salient events at this encounter than what exactly the person was wearing, every single word he used or what the other strangers relaxing in the café looked like. Concept cells tend to fire to personally relevant things because we typically remember events involving people and things that are familiar to us and we do not invest in making memories of things that have no particular relevance.

    Memories are much more than single isolated concepts. A memory of Jennifer Aniston involves a series of events in which she—or her character in Friends for that matter—takes part. The full recollection of a single memory episode requires links between different but associated concepts: Jennifer Aniston linked to the concept of your sitting on a sofa while spooning ice cream and watching Friends.

    If two concepts are related, some of the neurons encoding one concept may also fire to the other one. This hypothesis gives a physiological explanation for how neurons in the brain encode associations. The tendency for cells to fire to related concepts may indeed be the basis for the creation of episodic memories (such as the particular sequence of events during the café encounter) or the flow of consciousness, moving spontaneously from one concept to the other. We see Jennifer Aniston, and this perception evokes the memory of the TV, the sofa and ice cream—related concepts that underlie the memory of watching an episode of Friends. A similar process may also create the links between aspects of the same concept stored in different cortical areas, bringing together the smell, shape, color and texture of a rose—or Jennifer’s appearance and voice.

    Given the obvious advantages of storing high-level memories as abstract concepts, we can also ask why the representation of these concepts has to be sparsely distributed in the medial temporal lobe. One answer is provided by modeling studies, which have consistently shown that sparse representations are necessary for creating rapid associations.

    The technical details are complex, but the general idea is quite simple. Imagine a distributed—as opposed to sparse—representation for the person we met in the café, with neurons coding for each minute feature of that person. Imagine another distributed representation for the café itself. Making a connection between the person and the café would require creating links among the different details representing each concept, but without mixing them up with others—because the café looks like a comfortable bookstore and our friend looks like somebody else we know.

    Creating such links with distributed networks is very slow and leads to the mixing of memories. Establishing such connections with sparse networks is, in contrast, fast and easy. It just requires creating a few links between the groups of cells representing each concept, by getting a few neurons to start firing to both concepts. Another advantage of a sparse representation is that something new can be added without profoundly affecting everything else in the network.

    This separation is much more difficult to achieve with distributed networks, where adding a new concept shifts boundaries for the entire network. Concept cells link perception to memory; they give an abstract and sparse representation of semantic knowledge—the people, places, objects, all the meaningful concepts that make up our individual worlds. They constitute the building blocks for the memories of facts and events of our lives. Their elegant coding scheme allows our minds to leave aside countless unimportant details and extract meaning that can be used to make new associations and memories. They encode what is critical to retain from our experiences.
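    [Editor's note: the contrast the excerpt draws between sparse and distributed codes can be illustrated with a toy sketch. This is not from the article; the population size, sparsity level, and set-of-active-neurons representation are all illustrative assumptions.]

    ```python
    import random

    random.seed(0)
    N = 10_000  # neurons in the population (illustrative number)

    def sparse_concept(k=50):
        """A concept as a small set of active neurons (a sparse code)."""
        return set(random.sample(range(N), k))

    person = sparse_concept()     # the friend we meet in the cafe
    cafe = sparse_concept()       # the cafe itself
    bookstore = sparse_concept()  # an unrelated concept

    # Two independently drawn sparse concepts barely overlap,
    # so the network can keep them apart with no effort.
    print("person/cafe overlap:", len(person & cafe))

    # Linking two concepts is cheap: recruit a handful of neurons
    # from one concept to also fire to the other.
    link = set(random.sample(sorted(person), 5))
    cafe_linked = cafe | link

    # The association changed at most 5 cells of the cafe code...
    print("cells changed:", len(cafe_linked - cafe))
    # ...and the unrelated bookstore concept is essentially untouched,
    # which is the interference a dense code could not avoid.
    print("bookstore overlap:", len(cafe_linked & bookstore))
    ```

    In a dense (distributed) code, every concept would activate a large fraction of the same neurons, so strengthening one association would unavoidably perturb many others—the "mixing of memories" the excerpt describes.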

  19. pauldenice 6:14 am 03/28/2013

    What if the most difficult problem in understanding the human brain is that what makes it so exceptional is precisely its imperfection, contrary to the computer models dear to many neuroscience researchers?

    Human brain imperfection is the open door to true "hard creativity," a term coined by Margaret Boden in "The Creative Mind: Myths and Mechanisms," Basic Books, 1992.
    Hard creativity is when "the world has turned out differently not just from the way we thought it would, but even from the way we thought it could."

    Here is a short visionary passage extracted from Isaac Asimov's Robot series, which illustrates my comment:
    Human thinking vs. machine thinking

    Isaac Asimov, Robots and Empire
    Ballantine Books, NY, 1985, page 54

    Throughout the series there appears a human detective, Elijah Baley, and two high-performing humanoid robots who serve as his deputies.

    The two robots in this book are named Giskard and Daneel. Giskard, who is trying to understand human thinking, tells Daneel: "Human beings have ways of thinking about human beings that we have not." Giskard is searching for the "Laws of Humanics," which he assumes regulate human thinking just as Asimov's famous Three Laws of Robotics completely regulate robots' thinking and actions. http://www.auburn.edu/~vestmon/robotics.html
    To that end, Giskard says he has searched whole libraries trying to discover whether such laws governing human behaviour ever existed, or whether they could be deduced from the analysis of past human behaviour.

    Giskard continues: "Every generalisation that I try to make, however broad and simple, has its numerous exceptions. Yet if such laws existed, and if I could find them, I could understand human beings and be more confident that I am obeying the Three Laws in better fashion."

    Giskard goes on: "Since detective Elijah understood human beings, he must have had some knowledge of the Laws of Humanics."
    Daneel answers: "Presumably. But he knew it through something that human beings call intuition, a word I don't understand, signifying a concept I know nothing of. Presumably it lies beyond the reason at my command."

    Giskard again: "That, and [robot memory!] Memory that doesn't work after the human fashion, of course. It lacked the imperfect recall, the fuzziness, the additions and subtractions dictated by wishful thinking and self-interest, to say nothing of the lingerings and lacunae and backtracking that can turn memory into hour-long daydreaming. It was robotic memory, ticking off the events exactly as they had happened, but in vastly hastened fashion. The seconds reeled off in nanoseconds..."

    Kurzweil's singularity is likely to be way off target if he and his team count on increased computer power, faster connectivity and ever larger and faster memories.

    A French author, B. Kullmann, a clinical neurologist with more than 30 years of experience, wrote a book a couple of years ago with the self-explanatory title "L'esprit faux" ("The Wrong Mind"), brushing aside neuroscience's computer models of a perfectly logical human brain with vast memory capabilities.

  20. pauldenice 6:34 am 03/28/2013

    Back to the question "What is a brain project worth?": I agree that in the ten-year time frame being proposed, this project makes little sense.
    However, who said that humanity should only allow a ten-year time frame for such a difficult project?
    Such complex projects could be likened to medieval cathedral-building projects: extending at times over multiple centuries, and setting goals for which the know-how or technologies didn't exist at the time the project was begun.
    Let's say that this brain project could be a twenty-first-century cathedral-building project.
    As the medieval cathedral projects did, such long-term goals will foster many new windfall technologies, know-how and scientific discoveries, and give a lifelong purpose to lots of researchers.
    An example of modern-day cathedral building was the Apollo Moon landing program. We all benefit from the technologies, both hardware and software, that were invented and developed for the Apollo project.
    Paul

  21. Chryses 11:35 am 04/13/2013

    “I find it, well, mind-boggling that the European Union has invested more than $1 billion in a project led by someone with so little credibility.”

    Oh, I don’t know about that. If you think about it, who would be better to lead a project doomed to failure than someone with so little credibility?

