Cross-Check



Critical views of science in the news

Artificial brains are imminent…not!





Scientists are on the verge of building an artificial brain! How do I know? Terry Sejnowski of the Salk Institute said so right here on ScientificAmerican.com. He wrote that the goal of reverse-engineering the brain—which the National Academy of Engineering recently posed as one of its "grand challenges"—is "becoming increasingly plausible." Scientists are learning more and more about the brain, and computers are becoming more and more powerful. So naturally computers will soon be able to mimic the brain’s workings. So says Sejnowski.

Sejnowski is a very smart guy, whom I’ve interviewed several times over the years about the mysteries of the brain. But I respectfully—hell, disrespectfully, Terry can take it—disagree with his prediction that artificial brains are imminent. Sejnowski’s own article shows how implausible his prediction is. He describes two projects—both software programs running on powerful supercomputers—that represent the state of the art in brain simulation. On the one hand, you have the "cat brain" constructed by IBM researcher Dharmendra Modha; his simulation contains about as many neurons as a cat’s brain does, organized into roughly the same architecture. On the other hand, you have the Blue Brain Project of Henry Markram, a neuroscientist at the Ecole Polytechnique Fédérale de Lausanne.

Markram’s simulation contains neurons and synaptic connections that are much more detailed than those in Modha’s program. Markram recently bashed Modha for "mass deception," arguing that Modha’s neurons and synapses are so simple that they don’t deserve to be called simulations. Modha’s program is "light years away from a cat brain, not even close to an ant’s brain in complexity," Markram complained.

Talk about the pot calling the kettle black. Last year Markram stated, "It is not impossible to build a human brain and we can do it in 10 years." If Modha’s simulation is "light years" away from reality, so is Markram’s. Neither program includes "sensory inputs or motor outputs," Sejnowski points out, and their neural-signaling patterns resemble those of brains sleeping or undergoing an epileptic seizure. In other words, neither Modha nor Markram can mimic even the simplest operations of a healthy, awake, embodied brain.

The simulations of Modha and Markram are about as brain-like as one of those plastic brains that neuroscientists like to keep on their desks. The plastic brain has all the parts that a real brain does, it’s roughly the same color and it has about as many molecules in it. OK, say optimists, the plastic brain doesn’t actually perceive, emote, plan or decide, but don’t be so critical! Give the researchers time! Another analogy: Current brain simulations resemble the "planes" and "radios" that Melanesian cargo-cult tribes built out of palm fronds, coral and coconut shells after being occupied by Japanese and American troops during World War II. "Brains" that can’t think are like "planes" that can’t fly.

In spite of all our sophisticated instruments and theories, our own brains are still as magical and mysterious to us as a cargo plane was to those Melanesians. Neuroscientists can’t mimic brains because they lack basic understanding of how brains work; they don’t know what to include in a simulation and what to leave out. Most simulations assume that the basic physical unit of the brain is the neuron, and the basic unit of information is the electrochemical action potential, or spike, emitted by the neuron. A typical brain contains 100 billion cells, and each cell is linked via dendrites and synapses to as many as 100,000 others. Assuming that each synapse processes one action potential per second and that these transactions represent the brain’s computational output, then the brain performs at least one quadrillion operations per second.
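The arithmetic behind that estimate can be spelled out in a few lines. The figures are the paragraph's own round assumptions, not measurements:

```python
# Back-of-envelope estimate of the brain's "computational output,"
# using the article's round numbers (assumptions, not measured values).
NEURONS = 100e9             # ~100 billion cells
SYNAPSES_PER_NEURON = 1e5   # up to ~100,000 connections per cell
SPIKES_PER_SECOND = 1       # one action potential per synapse per second

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * SPIKES_PER_SECOND
print(f"{ops_per_second:.0e} ops/sec")  # 1e+16 -- comfortably "at least one quadrillion"
```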

Computers are fast approaching this information-processing capacity, leading to claims by artificial intelligence enthusiast Ray Kurzweil and others that computers will soon not just equal but surpass our brains in cognitive power. But the brain may be processing information at many levels below and above that of individual neurons and synapses. Moreover, scientists have no idea how the brain encodes information. Unlike computers, which employ a single, static machine code that translates electrical pulses into information, brains may employ many different "neural codes," which may be constantly changing in response to new experiences.

Go back a decade or two—or five or six—and you will find artificial intelligence pioneers like Marvin Minsky and Herbert Simon proclaiming, because of exciting advances in brain and computer science: Artificial brains are coming! They’re going to save us! Or destroy us! Someday, these prophecies may come true, but there is no reason to believe them now.

ABOUT THE AUTHOR

John Horgan, a former Scientific American staff writer, directs the Center for Science Writings at Stevens Institute of Technology. (Photo courtesy of Skye Horgan.)

 Image: iStockphoto/RapidEye

The views expressed are those of the author and are not necessarily those of Scientific American.






Comments (50)

  1. Tucker M 11:14 am 05/14/2010

    Irrational overconfidence is no guarantee of success…but it’s frequently a prerequisite.

  2. mapmanic 1:44 pm 05/14/2010

    Setting up a rudimentary neural network with simple functions is within the realm of possibilities but to get to the level of consciousness would require a self-designed system (you know, like the one inside your cranium). We can start up a self-designing system but ultimately something akin to consciousness will arise in a "virtual evolutionary" context and we will only attain a level of understanding of how it works after the fact. Complexity, of course, is the key and the artificial intelligence resulting from such an exercise will likely think itself conscious (just like we do) because of an illusion of self. Any Buddhist monks involved in this project?

  3. Peterbart 2:24 pm 05/14/2010

    It seems to me that the issue is not whether we can simulate neurons with computer circuits. That technical feat seems solvable. Nor is the issue whether we can hook up such artificial neurons in circuits which mimic the brain. Again, that seems a task that is only moderately challenging. The BIG ISSUE is that we have not yet unravelled the neural code. This remains the biggest bottleneck of all.

    For perspective, consider what has happened in the world of genomics. In 1953, Watson and Crick described the basic code of DNA, but it took a couple of decades to determine how that code was translated into proteins, and five decades to fully sequence (or some facsimile thereof) the human genome. Even with the genome in hand, scientists are still puzzling over the complexity of how genes encode a living being.

    Building artificial neurons and neuronal circuits is nice, but without the code (and no one seems to be even close), it is unrealistic to discuss building an artificial brain.

  4. ormondotvos 2:25 pm 05/14/2010

    Having been thrilled by my first sealed, non-catwhisker diode, I’ve lived through decades of nay-sayers about what is possible, and when. I suspect that the first self-conscious computer might not want to talk with us, since we’re so poorly designed by evolution to communicate with it.

    The very real danger is that a self-aware system might ally itself with some less irrational biological system, like a tree fungus or peaceful anthropoid species, because we are such a kludge ourselves, individually, in groups, and as a species of cancer on the earth.

    There’s little noble, on balance, about the human brain…

  5. robert schmidt 4:32 pm 05/14/2010

    I think the issue is with the definition of "artificial brain". If the requirement is to build computer models that match actual brains molecule by molecule then we are a long way away from that. I don’t believe they even have an artificial cell modeled at that resolution yet. But is that necessary to create a "thinking" computer? When you look at the problem there are certainly issues that make you doubt it can ever be done. An example would be the movement of action potentials along the neuron which are mediated by ion channels in the cell membrane. If we not only have to model 100B neurons but also all those channels then we have increased the problem by many orders of magnitude.

    But perhaps some of this complexity can be reduced. All those channels can potentially be replaced by one equation. That is what we do now and the neural models still seem to work. Also, many of our neurons are used to manage systems rather than ponder philosophy. A computer doesn’t have to worry about controlling heartbeat, breathing or digestion so there is a huge chunk of brain we can slice off. Mapping to output is a huge system to model since our motor cortex connects to all our muscle fibers. But computers don’t have muscle fibers so again we can reduce this to a smaller number of servos. After selectively lobotomizing our artificial brain the problem begins to look a little easier.

    As for the notion that we don’t understand the neural code, I think that is a little disingenuous. We know the information is encoded in spikes. We understand how spikes propagate. We understand to a degree how they integrate. I’ve looked at this many times and I still don’t understand those who claim we know nothing about the neural code.

    Without a doubt the brain is complex but we are making substantial head way, if you excuse the pun, on all fields of inquiry. We have computer models of various brain regions that work. We haven’t integrated everything yet but that is partly because it is expensive. Ultimately, I’m not looking to the large scale modelers for a breakthrough, I am looking at the people designing brain prosthetics to replace damaged tissue. Once we have computer chips that are designed to work like neurons and synapses rather than software models of neurons, you will see tremendous growth in this field. Then we will have to address the question, how will we compete with machines that are built specifically to function in an environment to which humanity is so poorly adapted.

  6. selrachj 5:00 pm 05/14/2010

    It appears that brains and computers are operating under different sets of rules. If we show an American a photograph of Abe Lincoln, they can identify the 16th president in about a half second. Because neurons are chemically based, they are slow. Therefore, that half second recognition must take place in roughly 100 steps. A computer, on the other hand, requires billions of operations to find Abe Lincoln in a database. If the computer is shown just a half face, or only an eye, beard, nose combination, or a rarely seen photo, it becomes nearly impossible for it to identify Abe. A brain, on the other hand, can still come up with the right answer — probably because it holds an invariant representation of Abe that is assembled and held in neural structure in a form that is vastly different from a computer database.

    We have quite a ways to go before creating electronic machines that can replicate neural processes, especially those of the cortex. Still, I believe we are now beginning to crack that code.
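The "roughly 100 steps" figure in the comment above follows from simple division, if one assumes something like 5 ms per neural processing step (a ballpark, not a measured constant):

```python
# Serial-depth estimate behind the "about 100 steps" claim.
# The 5 ms per-step figure is an assumed ballpark, not a measured constant.
RECOGNITION_TIME_S = 0.5     # ~half a second to name Lincoln
NEURON_STEP_TIME_S = 0.005   # ~5 ms per neural "step" (assumption)

serial_steps = round(RECOGNITION_TIME_S / NEURON_STEP_TIME_S)
print(serial_steps)  # 100
```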

  7. selrachj 5:06 pm 05/14/2010

    The fear of a runaway electronic intelligence à la Terminator or The Matrix is probably not warranted. No one is going to design in modules like the amygdala and other "reptilian" features that are responsible for territoriality, reciprocity, sexual jealousy, envy, pride, lust and so on. These will be adaptive machines that can learn to drive cars and learn to navigate my living room so it can bring me a cappuccino.

  8. jtdwyer 6:18 pm 05/14/2010

    I get it now. The earlier articles proclaiming artificial brains ‘real soon’ produced comments from us pragmatists, while the opposing article brought out the hopeful enthusiasts. I guess SciAm both wins and loses either way…

    I can predict with confidence that portrayals of artificial intelligence in movies and commercials will continue to advance at an amazing pace.

  9. bestofnothing 7:06 pm 05/14/2010

    My brain doesn’t have a neural code.

  10. robert schmidt 7:07 pm 05/14/2010

    @selrachj, "It appears that brains and computers are operating under different sets of rules." That’s correct. Your example is a good one. Brains are great at pattern recognition. If you understand von Neumann machines it is hard to wrap your head around neural nets because they are so different. One way to describe them is as a computer that learns how to map arbitrary input patterns to output patterns. Modern computers are very fast serial computers whereas brains are very slow massively parallel signal processors. So people can do incredibly complex tasks such as picking up important cues from each other’s body language during a conversation to derive meaning not inherent in the spoken words. On the other hand I can buy a calculator the size of a credit card that has the ability to do math built in, whereas after 40 years I’m still learning math. Not only that but computers can do any type of arithmetic logic calculations much faster than any person. It would likely take someone decades with paper and pencil to do what my computer can do in seconds. As you said, brains are chemical and as a result slow. Signals move through the body at about 200 mph and the brain works in the range of roughly 10-100 hertz. Whereas my PC is running at 4+ billion hertz and the signals travel at close to the speed of light.
    So imagine a machine that is the best of both worlds. It is electronic but also massively parallel. One second for that machine is like 12.68 years for us. It can speak in any language, appreciate art, culture, emote, create, but it can also compute, calculating the most complex equations in seconds as well as design and test concepts in its mind alone, and then communicate those ideas in minute detail wirelessly with any other machine in the solar system. It can speak in a serial computer’s native language as effortlessly as we can speak with each other.

    Most people think that would never happen but the fact is, we want to make machines like that. Imagine having an engineer working for you that can do more work in a day than a person can do in 0.34 million years. Imagine a military that has soldiers who are that intelligent but no moral considerations about how they are deployed. Do you think they are working on machines like that to get you beers?

    As with most technologies we create we will get to a point where we can’t live without them, but they may be able to live without us.
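The speed ratio in the comment above checks out, taking the low end (~10 Hz) of the commenter's stated brain-rate range:

```python
# Verifying "one second for that machine is like 12.68 years for us."
CPU_HZ = 4e9        # the commenter's "4+ billion hertz" PC
BRAIN_HZ = 10       # low end of the commenter's 10-100 Hz range
SECONDS_PER_YEAR = 365.25 * 24 * 3600

ratio = CPU_HZ / BRAIN_HZ          # machine cycles per brain "cycle": 4e8
years = ratio / SECONDS_PER_YEAR   # one machine-second, in human-years
print(round(years, 2))  # 12.68
```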

  11. mikej77 8:03 pm 05/14/2010

    Kfreels: You have it exactly IMHO. We are looking at functional equivalence or superiority. If the machine is much better at what we do than we are, we need to look no further.
    Few say that the 787 is "still not a bird" and most have ceased shouting "get a horse".
    I would like to add that biological and electronic brains are already interbreeding and reshaping each other in every human endeavor.
    Without meeting the main point everything we think and do and what we say we believe will change.
    It may have been Nietzsche who said the last human being died a long time ago.
    With passing years this will become more apparent.

  12. BlueDusk 1:43 am 05/15/2010

    It seems like Roger Penrose was right after all…

  13. eablair 4:38 am 05/15/2010

    Marvin Minsky has said that AI research went nowhere for decades because hardly anyone was doing it. Now people are doing it and they have great tools.

    Kurzweil would say, "You’re thinking linearly; you should be thinking exponentially." Until you cogently address that issue your argument is meaningless.

    Markram would point out that his programs self organize. Most of the work is not done by humans, and human understanding of everything that is going on is not necessary and is probably impossible.

    Clifford Stoll would say, "Crow doesn’t taste so good."

    http://www.newsweek.com/id/106554

    His article from 15 years ago…

    The Internet? Bah!
    Hype alert: Why cyberspace isn’t, and will never be, nirvana

    …proved with geometric logic that the Internet would never amount to anything. Schools would never use it, we would do no shopping over it, electronic publishing would never happen. He was right that linear growth of the Internet would not bring us those things in a hundred years. Exponential growth has made the Internet what it is now… a pale shadow of what it will be like in 10 more years of exponential growth.

    Being the "wise voice of reason" has its charm for awhile but bites you later on.

  14. jtdwyer 5:34 am 05/15/2010

    eablair – Ok, now you’ve aroused my nostalgia for technology, so I ran to my library of "IEEE Transactions on Pattern Analysis and Machine Intelligence", 1984-1985. Randomly, from the abstract of "Incorporating Fuzzy Membership Functions into the Perceptron Algorithm":

    "… It is shown that the fuzzy perceptron, like its crisp counterpart, converges in the separable case. A method of generating membership functions is developed, and experimental results comparing the crisp to the fuzzy perceptron are presented."

    Ah, the good old days. If we’d just had a MIPS or two (MIPS is not the plural of MIP by the way) – we could have really made progress! Now there’s plenty of processing power and memory but not so much fundamental research. It seems the experts are now focused on assembling large scale components, having memorized the existing research inventory… I’ll be amazed when the first fully functional artificial brain is finally assembled from the erector kit, and scared to death!

  15. hs96dlw 2:12 pm 05/15/2010

    to anyone who disagrees with the notion that we don’t understand the neural code, i remember when the hebbian model of synapses and neurons, sitting in brains supported by those good old simple glial cells, was the only show in town. now it turns out that glia are neuromodulators, and there’s a lot more of them than 100 billion neurons. the whole exercise just became several orders of magnitude more complicated. still, that’s what makes the challenge so interesting!

  16. robert schmidt 5:30 pm 05/15/2010

    @hs96dlw, to my knowledge glial cells haven’t been shown to play a direct role in signal processing but rather "manage" the neuron. If I’m wrong please point me to your source so I can catch up. How that would impact models I’ll wait and see but I doubt it will mean that additional nodes will need to be added to ANNs (this is one of the problems as it contributes to the curse of dimensionality), rather the neuron model will need to account for glial function. Even if there is a minor signal processing role it will likely be local in nature and so again can be incorporated into the neuron model. For example, if we find that glial cells are responsible for many of the mechanisms supporting long-term potentiation it may be useful to understand the mechanism but it may not have any effect on the complexity of models of memory formation which have already abstracted the effects of the mechanism. So, I think it is premature to suggest that "the whole exercise just became several orders of magnitude more complicated". There is a big difference between having to model the brain in precise detail and the need to model the information processing aspects of the brain at a functional level.

    Again, I could be wrong and would be interested to know your source.

  17. jtdwyer 7:45 pm 05/15/2010

    robert schmidt – Since the dendrite structures of neurons provide distributed access to multiple synapses, each with their own chemical micro-environment of neurotransmitters regulating potentiation, with or without glial cell influence, isn’t the model required to functionally represent potential neuronal network processes already quite a bit more complex than can be represented as a simple electronic signal process?

  18. robert schmidt 9:49 am 05/16/2010

    @jtdwyer, I don’t understand your point. Both neurons and synapses are signal processors. I’m not sure what you mean by "more complex than can be represented as a simple electronic signal process".

  19. jtdwyer 11:35 am 05/16/2010

    robert schmidt – As I understand, synapses are signal processors moderated by their local chemical environment. Neurons are signal processors that process the signals from one or more of their synaptic interconnections, forming multiple, configurable neural networks.

    In contrast, electronic devices process fixed electron flows through statically defined functions, i.e., an instruction set. While varying sequences of instructions can be specified to provide many results, the chemical states of synaptic environments can dynamically alter the results of networked neural processes in response to changing conditions. I think the networked neural processes implemented by brain biology are significantly more complex and flexible than can be represented by fixed processing programs.

  20. robert schmidt 12:52 pm 05/16/2010

    @jtdwyer, "In contrast, electronic devices process fixed electron flows through statically defined functions, i.e., an instruction set" yes, but I am not saying that is how neurons and synapses work. That is not the definition of a signal processor; that is just how computer processors do it.

    "more complex and flexible than can be represented by fixed processing programs" definitely. I don’t believe that I stated that this wasn’t the case.

    I’m still not sure what the disagreement is but if it is in regards to my response to hs96dlw I’ll clarify;

    hs96dlw was implying that the new understanding about the role of glial cells would increase the complexity of neural models. I disagree for two main reasons.

    Option 1 – Glial cells may not be directly involved in signal processing. If that is the case then their role is abstracted away in the model in the same way that ion channels are reduced to a single threshold function. So my node count doesn’t increase.

    Option 2 – Glial cells may be directly involved but act locally. If that is the case my neuron or synapse model just needs to include the function of the locally acting glial cells. So again my node count doesn’t increase.

    But if glial cells were to act like neurons themselves, in other words form signal passing connections with many other cells, then I need to add those nodes to my model which would increase complexity. If I have 10 nodes all interlinked I have 100 synapses. If I now have to add 10 glial cells all interlinked I have 400 synapses. This is the curse of dimensionality. Adding a node increases the complexity exponentially. I haven’t seen any evidence to suggest that glial cells work in this way.

    And again, it depends on what your objective is: a realistic biological model or functional information processing. The title of the article does not make this clear. One of the challenges of information processing models of the nervous system is distinguishing what is important for information processing from what is done to mitigate biological limitations.
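The node-count arithmetic in the comment above is easy to reproduce (strictly the growth is quadratic rather than exponential, but the cost point stands):

```python
# Synapse count for n fully interlinked nodes, counting self-connections,
# which matches the commenter's 10 -> 100 figure.
def synapse_count(nodes):
    return nodes * nodes

print(synapse_count(10))  # 100: ten interlinked neurons
print(synapse_count(20))  # 400: add ten glial cells acting like neurons
```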

  21. jtdwyer 1:17 pm 05/16/2010

    robert schmidt – I’m sorry I can’t seem to explain that there is a qualitative distinction between ‘node’ interconnections that are strictly electronic in nature and those whose electrical connection is moderated by chemical conditions under biological influences.

    This difference is not just the quantity of nodal interconnections. Moreover, if results are produced by analog processes distributed across loosely connected, dynamically reconfigurable processing ‘nodes’ rather than fixed configurations of fixed program instructions, the results will not be exactly repeatable, as is demanded of computers.

    No one has actually ever had exactly the same thoughts or sensations under essentially repeated conditions. If computers could not produce exactly repeatable results they would be of little value.

    I hope this helps – it’s about the best I can do.

  22. robert schmidt 7:36 pm 05/16/2010

    @jtdwyer, the results from biological brains are repeatable to a degree. We can walk, talk, understand language, etc. These are all things we do fairly consistently.

    "No one actually ever had exactly the same thoughts or sensations under essentially repeated conditions" there is no way to repeat an experience. Once we have an experience, we learn from it. That means the next time we have it, even if the conditions are identical, we are different. But in the real world no two situations are exactly the same. The measure of success in biology is reproductive success not predictability. In fact predictability can result in failure as predators will be able to anticipate your actions.

    Real neurons tend to be somewhat chaotic. That is likely why instead of using a single neuron to make a decision we use a population of neurons so the responses tend to average out. This also helps when we lose a few neurons on super bowl weekend. Artificial neurons aren’t noisy so, as I said, we can reduce the complexity required by biology to offset the noise inherent in biology. The point of ANNs is that they learn. Once they learn a pattern, given the same stimuli they will generate the same output.

    "If computers could not produce exactly repeatable results they would be of little value." I disagree. In the case of arithmetic / logic operations there is only one valid response so it is important that the results are consistent. But those aren’t the only class of problems. Pattern recognition is a separate class of problem that does not require consistent results but rather appropriate ones. To understand neural networks you really need to throw away all your preconceptions based on serial processors. Trying to draw parallels only leads to confusion. These are completely different computers for completely different problems.

  23. jtdwyer 12:20 am 05/17/2010

    OK, so, catching up on academic research I lost interest in 20 years ago, along with Artificial Intelligence and Expert Systems, an Artificial Neural Network is actually an algorithmic representation of a signal processing function originally modeled after biological neuron processing, as defined more than 50 years ago. They have little association with biological neurons as they are currently understood to function, or computer hardware architectures. Artificial Neurons are simply a method of representing algorithms to be iteratively applied to the processing of information. In some ANN specification systems, `neuronal’ algorithms are even specified in an Excel spreadsheet. Those specifications are typically used to generate sequential instructions to be executed on standard computer systems.

    Much of the difficulty of discussing Neural Networks revolves around their pseudo-biological basis. The artificial neuron is actually no longer intended to functionally represent biological neurons, but merely to define a logical algorithm to be applied to an input `signal’, transforming it to an output `signal’, which may be transmitted to additional `nodes’ (process iterations) for further processing. These algorithms are not the serial processes typically applied in `standard’ programming, since their intended transformations are somewhat adaptive in accordance with their cumulative nature but they are typically implemented as `simulations’ using serially executed instructions.

    Perhaps optimally, specialized Massively Parallel Processor computer hardware implementations can be constructed, typically implemented using standard microprocessors, to allow concurrent parallel execution of iterations on multiple processors. For optimal performance, however, the number of processors physically configured should equal the number of algorithm nodes simultaneously active. Most optimally, specialized processors can be designed to maximize the performance of specific algorithms, most often used for highly specialized chess playing computers, designed to maximize the publicity generated for a computer manufacturer. These specialized processor configuration implementations are far more expensive than the more common method of `simulating’ neural networks as a serial process on a standard computer.

    I concede that specialized functions such as chess board evaluation and facial recognition may be performed more quickly using cascading, iterative/parallel computer processes than can be achieved even by experts. While a complete generalized simulation of human intelligence is most likely much further away than envisioned by enthusiasts (if any closer than it was 20 years ago), a limited simulation of, say a blog user could likely be produced much sooner. This is a classic qualifying `achievement test’ for Artificial Intelligence. It would merely have to evaluate blog entries and respond in some seemingly appropriate manner sufficient to fool bloggers. Actually, it kinda makes me wonder… is that all I am? Enough, already!

  24. jtdwyer 1:45 am 05/17/2010

    So, back to my earlier point, the Artificial Neuron model does not represent the role of local analog chemical states of neurotransmitters in propagating and modifying signals to nodes (process invocations), even as a statistical parameter. In actual neurons, the neuron process is not statistically defined, it is a dynamically determined analog process. Statistical simulations of n-nodal invocations cannot produce the results obtainable by local temporally variable chemical conditions external to the neuronal process.

    I could discuss my personal experience with elevated levels of cellular waste products and low levels of oxygen and other nutrient precursors of neurotransmitters produced by medical conditions and their effect on brain function, but I’ll spare you.

    You may have noticed similar variability arising from minor sleep disruption or deprivation, or ingestion of coffee or a large lunch, for example. These effects combine with conditional variations arising from habitual influences, environmental or genetic variations, not to mention permanent physiological differences between brains. While most of these influences may seem generally detrimental, a detrimental effect in one function may allow an extremely beneficial result in another, such as that occasional inspiration.

    Of course you could just statistically sample the population and average the averages…

  25. hs96dlw 10:36 am 05/17/2010

    "D-Serine is localized in mammalian brain to a discrete population of glial cells near NMDA receptors, suggesting that D-serine is an endogenous agonist of the receptor-associated glycine site." Michael J. Schell, Roscoe O. Brady Jr., Mark E. Molliver, and Solomon H. Snyder, "D-Serine as a Neuromodulator: Regional and Developmental Localizations in Rat Brain Glia Resemble NMDA Receptors," Volume 17, Number 5, March 1, 1997, pp. 1604-1615. Copyright 1997 Society for Neuroscience.

    "Abstract
    Glial cells throughout the nervous system are closely associated with synapses. Accompanying these anatomical couplings are intriguing functional interactions, including the capacity of certain glial cells to respond to and modulate neurotransmission. Glial cells can also help establish, maintain, and reconstitute synapses. In this review, we discuss evidence indicating that glial cells make important contributions to synaptic function."

    Glial Cells and Neurotransmission
    Daniel S. Auld and Richard Robitaille

    Copyright 2003 Cell Press. All rights reserved.
    Neuron, Volume 40, Issue 2, 389-400, 9 October 2003

    doi:10.1016/S0896-6273(03)00607-X

    Département de Physiologie, Université de Montréal, Centre de Recherche en Sciences Neurologiques, PO Box 6128 Station Centre-Ville, Montréal, Québec H3C 3J7, Canada


    "ATP is released by neurons and functions as a neurotransmitter and modulator in the CNS. Here I show that ATP released from glial cells can also serve as a potent neuromodulator, inhibiting neurons in the retina of the rat."

    The Journal of Neuroscience, March 1, 2003, 23(5):1659

    Glial Cell Inhibition of Neurons by Release of ATP
    Eric A. Newman
    Department of Neuroscience, University of Minnesota, Minneapolis, Minnesota 55455

    "ABSTRACT: Accumulating evidence has demonstrated the existence of bidirectional communication between glial cells and neurons, indicating an important active role of glia in the physiology of the nervous system. Neurotransmitters released by presynaptic terminals during synaptic activity increase intracellular Ca2+ concentration in adjacent glial cells. In turn, activated glia may release different transmitters that can feed back to neuronal synaptic elements, regulating th

  26. hs96dlw 10:51 am 05/17/2010

    "activated glia may release different transmitters that can feed back to neuronal synaptic elements, regulating the postsynaptic neuronal excitability and modulating neurotransmitter release from presynaptic terminals. As a consequence of this evidence, a new concept of the synaptic physiology, the tripartite synapse, has been proposed, in which glial cells play an active role as dynamic regulatory elements in neurotransmission."

    Glial Modulation of Synaptic Transmission in Culture
    ALFONSO ARAQUE AND GERTRUDIS PEREA
    Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain. © 2004 Wiley-Liss, Inc.

  27. robert schmidt 11:03 am 05/17/2010

    @hs96dlw, thanks. I’ll look through them. A brief review of the abstracts does seem to suggest that these effects are locally acting, which suggests we don’t need to add new nodes to networks to account for their function.

    @jtdwyer, "Artificial Neuron model does not represent the role of local analog chemical states of neurotransmitters in propagating and modifying signals" — that’s not so. The weight value assigned to synapses represents the quantity of neurotransmitter vesicles in the pre-synaptic terminal, the probability of release, and the electrical response at the post-synaptic terminal. Some models have separate variables for all three aspects, but generally only one is used. Also, some learning algorithms adjust weights based on the degree of error. This is similar to the role of acetylcholine in determining whether an experience has happened before, in which case we recall it, or is new, in which case we learn.

    You’re correct, though, that the role of ANNs is generally not to faithfully model real nervous systems but to perform an information-processing role. In this regard I think that artificial brains are actually not that far off. Since it appears that intelligence is an issue of organization more than brain size, I believe it will be possible to build intelligent, self-aware machines with orders of magnitude fewer nodes than the human brain. It’s still a complex problem, but much less complex than a biologically accurate model.
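    Robert’s description of a synaptic weight that folds vesicle count, release probability, and postsynaptic response into one number can be sketched in a few lines (a toy illustration; the function names and numbers are hypothetical, not taken from any cited model):

```python
# Toy artificial neuron: each synapse's biophysical detail is collapsed
# into a single scalar weight, as described in the comment above.

def synaptic_weight(vesicles, release_prob, postsynaptic_gain):
    """Fold three separate biophysical factors into one effective weight."""
    return vesicles * release_prob * postsynaptic_gain

def neuron_output(inputs, weights, threshold=1.0):
    """Fire (1) if the weighted input sum reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two synapses, each summarized by one number.
w = [synaptic_weight(100, 0.02, 0.5), synaptic_weight(80, 0.03, 0.4)]
print(neuron_output([1, 1], w))  # -> 1 (1.0 + 0.96 = 1.96 >= 1.0)
```

    Models that keep the three factors as separate variables, as Robert notes some do, would simply carry them along instead of multiplying them out up front.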

  28. jtdwyer 11:59 am 05/17/2010

    robert schmidt – I do not agree that the role of neurotransmitter state variability in determining brain function can be adequately represented by weighted probabilities or any number of static global variables.

    I am curious as to what level of functional capability you would consider adequate qualification as an ‘intelligent, self-aware machine’? Would an aggressive chess playing system qualify? An emotional facial recognition system?

  29. hs96dlw 1:56 pm 05/17/2010

    "…e postsynaptic neuronal excitability and modulating neurotransmitter release from presynaptic terminals. As a consequence of this evidence, a new concept of the synaptic physiology, the tripartite synapse, has been proposed, in which glial cells play an active role as dynamic regulatory elements in neurotransmission. In the present article we review evidence showing the ability of astrocytes to modulate synaptic transmission directly, with the focus on studies performed on cell culture preparations, which have been proved extremely useful in the characterization of molecular and cellular processes involved in astrocyte-mediated neuromodulation." © 2004 Wiley-Liss, Inc.

    Glial Modulation of Synaptic Transmission in Culture
    ALFONSO ARAQUE AND GERTRUDIS PEREA
    Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain

  30. hs96dlw 2:28 pm 05/17/2010

    hi robert schmidt, i’m still not sure it’s as (relatively) simple as you suggest. if glia alter the threshold at which post synaptic potentials spatially or temporally combine to trigger an action potential at the axon hillock, then one neuron could behave in, say, ten different ways to the same incoming action potentials, depending on the state of the surrounding glia. or a 100 different. or a 1000 different.
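    The scenario hs96dlw describes, one neuron responding differently to identical spike trains depending on glial state, can be mocked up by letting the firing threshold vary with a glial parameter (a toy sketch with made-up numbers, not a biophysical model):

```python
def neuron_response(inputs, weights, glial_state):
    """Same synaptic input, different outcome: the firing threshold
    shifts with the state of the surrounding glia."""
    threshold = 1.0 + 0.5 * glial_state  # glia modulate excitability
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

spikes, w = [1, 1], [0.7, 0.6]  # identical incoming action potentials
print([neuron_response(spikes, w, g) for g in (0.0, 0.5, 1.0)])  # -> [1, 1, 0]
```

    Letting the glial state range continuously, rather than over three sample values, would give the ten, hundred, or thousand distinct behaviours the comment imagines.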

  31. robert schmidt 2:51 pm 05/17/2010

    @hs96dlw, thanks for the additional resource. I still don’t see a major increase in complexity. My ANNs don’t care about which cells are involved in long-term or short-term potentiation. All I care about is how I calculate synaptic weight. If I need additional nodes to account for weight, then the complexity has increased. So far all that I’ve seen is the biological underpinnings of how this value arises. To be fair, though, I haven’t had time to review all the material. I do have models that include global and regional variables that simulate "emotional" contexts. These variables influence spike initiation and weight adjustment. So, in a way, it seems glial cell function is already included in some models.

    @jtdwyer, "I do not agree that the role of neurotransmitter state variability in determining brain function can be adequately represented by weighted probabilities or any number of static global variables." You are certainly entitled to your opinion. But what conclusions does the evidence support? I have seen no reason to conclude this isn’t possible. So far I have seen neural models that are fairly accurate. I have a long-term memory model that performs as well as a rabbit in various eye-blink learning tests. Ultimately this is about information. If you think the models won’t work, it must be because you think there is missing information. So, what is missing?

  32. robert schmidt 3:06 pm 05/17/2010

    @jtdwyer, "I am curious as to what level of functional capability you would consider adequate qualification as an ‘intelligent, self-aware machine’?" That is a big question. I am not capable of answering it. I don’t think anyone is capable at this time. At this point we are breaking down regions of the brain into discrete circuits and validating those models. There is also some level of integration going on. For example, the model I am working with integrates the hippocampus, cerebral cortex and a few other regions and, as I said, it can perform almost as well as the wetware.

    I believe that self-awareness (consciousness) will arise from higher levels of integration. I think current results from neuroscience research support that. Consciousness in machines won’t be like a switch, where once you add that nth node you suddenly have a conscious machine; instead there will be degrees of consciousness, which is consistent with what we see in nature. Also, the phenomenon of consciousness in machines may not be the same as that in humans. We need to be open to that.

    This lack of clear definition is what presents the danger, though. The way things have worked in the past is that certain subject groups have perceived themselves as equals long before the dominant group considered them as such. Certain segments of society will never accept a machine as an equal simply because they believe it cannot have a soul. We’ve heard that before. It is meaningless that they are also unable to prove that they have a soul. I think machines will be conscious long before we admit it. But it won’t be by accident. We will make them that way for a purpose. The lack of acceptance will come from certain members of society who conveniently redefine consciousness in order to exclude machines from the family of equals, so they can continue to justify exploitation.

    For me, being self-aware means being able to: see yourself from the third-person perspective; have memories of yourself rather than just simple learned responses to stimuli; imagine yourself in unfamiliar environments; and, finally, plan for your future because you are able to anticipate what it will be like to be in that state. How we test that I don’t know. I fear, though, that we will set standards for consciousness in others that we couldn’t pass ourselves.

  34. jtdwyer 5:01 pm 05/17/2010

    robert schmidt –
    "If you think the models won’t work it must be because you think there is missing information. So, what is missing?"

    The control functions for all variable synaptic potentiation states. Whether it is neurotransmitter chemistry or glial cell modulation of local neurotransmitter release, there are processes separate from and independent of whatever algorithms are contained within your Artificial Neuron module that affect signal transmission.

    Reducing complex processes into constants or probabilistic variables produces a simplified range of possible results.

    If the action potential of every human being were represented by a probabilistic state, would social interactions exhibit the enormous variability that can commonly be observed in public? It is the complexity of individual responses that produces a complex population of variable individuals.
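    jtdwyer’s point, that collapsing individually dynamic processes into one averaged statistical parameter narrows the range of outcomes, can be illustrated with a toy population simulation (hypothetical numbers; nothing here models real neurons or people):

```python
import random

random.seed(1)

# "Detailed" population: each individual keeps its own response probability.
individuals = [random.uniform(0.1, 0.9) for _ in range(1000)]
# "Simplified" population: every individual is replaced by the average.
avg_p = sum(individuals) / len(individuals)

def long_run_rate(p, trials=500):
    """Observed response rate of one individual over many trials."""
    return sum(random.random() < p for _ in range(trials)) / trials

detailed = [long_run_rate(p) for p in individuals]
simplified = [long_run_rate(avg_p) for _ in individuals]

def spread(rates):
    return max(rates) - min(rates)

# The detailed population shows far wider individual-to-individual
# variability than the population of statistically identical "averages".
print(spread(detailed), spread(simplified))
```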

  35. jtdwyer 5:08 pm 05/17/2010

    robert schmidt – Self-awareness is not an easily testable characteristic.

    I suggest you ignore that lofty goal and adopt the classic AI test: if textual conversations can be successfully generated by a machine participant without human participants detecting that they are conversing with a machine, the machine must be considered intelligent.

  36. robert schmidt 8:05 pm 05/17/2010

    @jtdwyer, "there are processes separate from and independent of whatever algorithms are contained within your Artificial Neuron module": then all we need to do is include them. The sodium channel on a neuron is a very complex thing. It would take a great deal of computer power to model it faithfully. But it takes substantially less power to model what it does. That’s all we need.

    "adopt the classic AI test": if the Turing test is the test of intelligence, then is a magician the test of whether or not there is such a thing as magic? Magicians know how to trick us. They don’t know how to violate the laws of nature. People are easily fooled; just look at the article here on optical illusions. I think it will be possible to make a machine that passes the Turing test before we make an intelligent machine, just as we can make very believable animated characters in movies. I think the objective of making machines that "blend in" is a different one from intelligence. It is more for creating android/gynoid servants than thinking individuals.

    I don’t believe we will need a Turing test. If a machine is conscious it will be because we intend for it to be that way. But, it still may not be able to fool a person into believing it is not a machine. People have accents based on where they are from so I am sure machines will also have distinct accents/traits. It may be that they speak very succinctly, or use a broader vocabulary. It may be that intelligence is easier to model than human vocalization. I can certainly believe we will have intelligent machines before we have gynoid ballerinas.

    I think there is a certain amount of speciesism in the idea of the Turing test. It implies that if it acts human it must be intelligent. I’ve read a lot of comments here that I know came from humans but can guarantee that those humans were not intelligent. What if we create a robo-dolphin that is as smart as a real dolphin? Obviously it isn’t human, but is it intelligent? What about a robo-chimp or elephant? This points to the issues I raised earlier: if we equate intelligence with being human, then there are people who will never see anything not born of a woman as human. Consciousness may come in different packages. I guess we need to really define what it is before we can apply that label to anything, even us.

  37. robert schmidt 9:01 pm 05/17/2010

    @hs96dlw "…could behave in, say, ten different ways to the same incoming action potentials … or a 100 … or a 1000"

    I can represent 256 states with 8 bits of data, or 65,536 states with just 16 bits. If the weight of each synapse can be represented by 16 bits, then the human brain holds about 2 petabytes of info. That’s 2,000 times more than my PC holds. It would cost about $200k for that amount of disk space. Of course I would rather run it all in RAM, which would cost about $2B. But then again my current PC holds a million times more info than my first, so maybe I’ll wait it out. The big concern for me is managing a quadrillion parallel processes 50 times a second. That’s way more than my dual SLI cards can handle. I can probably only model about 100k synapses per core, which would mean I need 10 billion cores. Darn, I think I only have 384! Let’s hope Moore’s law stays in force for the next little while. Still, we are talking about a 1-year-old PC here, not a supercomputer.

    The problem is unquestionably a big one. The question is: how simply can we model neurons and synapses and still get robust results? I guess we’ll know when we get there.
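    The arithmetic in the comment above can be checked directly (a back-of-envelope script using the same rough figures: about 10^15 synapses and 16 bits per weight):

```python
synapses = 10**15        # rough human-brain synapse count (one quadrillion)
bits_per_weight = 16

states_8bit = 2**8       # 256 distinct states in 8 bits
states_16bit = 2**16     # 65,536 distinct states in 16 bits
total_bytes = synapses * bits_per_weight // 8   # 2 bytes per synapse

print(states_8bit, states_16bit)   # -> 256 65536
print(total_bytes / 10**15, "PB")  # -> 2.0 PB
```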

  38. jtdwyer 10:16 pm 05/17/2010

    robert schmidt -
    "It would take a great deal of computer power to model it faithfully. But it takes substantially less power to model what it does. That’s all we need."

    I hope you understand precisely what is controlling the state of the sodium channel and do not simply represent it as a variable. In 1984 I published a paper on my experiences using analytical queueing algorithms to represent very large scale computer systems: I concluded that adequate representation producing usefully accurate results was not feasible. I suspect similar issues will occur in attempting to represent sodium channel processes. Good luck with that.

    I accept your argument against the Turing test (I had forgotten what that was), as long as your objective is defined and testable. However, I’m not sure what the purpose or benefit of a self aware Swiss Army knife might be…

  39. hs96dlw 3:38 am 05/19/2010

    robert schmidt, by way of analogy, i think of our understanding of the neural code as similar to our understanding of whale song: we know how whales communicate, we have a fair idea why they communicate, but we won’t be making any meaningful contribution to the conversation for a while yet; probably around the same time the internet chips in with its opinion. i admire your promethean efforts, just please don’t get despondent and give up when in ten years’ time your pc or supercomputer still has as much intentionality as a lump of clay. but of course, if you’d run your ann on a mac it probably would’ve asked you out on a date by now ;)

  40. royniles 4:27 pm 05/20/2010

    What nobody knows, and may never be able to know, are the strategic arrangements made by eons of evolutionary "learning" in the algorithms the brain must turn to for instructions on how to react to all the combinations of stimuli it has come to expect. We can encode the information, but we may never learn to encode the myriad algorithms we have needed to make appropriate use of it.

  41. umakoshi 6:47 pm 05/20/2010

    Those myriad ‘algorithms’ are just ‘slight’ adapts of tried-and-failed evolutionary responses to stimuli and/or defective neural reactions. Far fewer of them may still be active or effective in the running of this beautiful, machine-like biological organ, which is hardly a simplistic binary input/reaction interconnected network. In my opinion, the way we actually use its biological capacities may prevent us from ever finding a fixed, constrained mathematical method for reproducing its workings and its effectiveness at adaptation.

  43. royniles 7:21 pm 05/20/2010

    The algorithms are more than "slight adapts," they are strategies for defense against predators, for competitive advantages, for adapting to their environment and adapting environments to them, for mating, for replenishing energy, ad infinitum.

  44. Klortho 10:44 pm 05/22/2010

    John Horgan: "We don’t know how brains work, so we will never know!"

  45. bill benzon 10:35 am 05/23/2010

    The final paragraph of a post at my own blog, New Savanna:

    I do, however, believe Horgan is correct about current prospects. I fear that the current batch of brain boys are mistaken in their hopes for the next decade or two. No doubt we’ll see amazing developments, but I fear we have more to learn about the mind and the brain before we can build one. And by the time we learn enough, I suspect that the question of whether or not we can build one will have become irrelevant.

    http://new-savanna.blogspot.com/2010/05/theyre-at-it-again-hacking-human-mind.html

  46. bill benzon 10:39 am 05/23/2010

    Hi John,

    I agree with you 100%. My teacher, the late David Hays, was one of the original researchers in machine intelligence, and he was on the committee the DoD appointed in the early 60s to assess the field when it failed to live up to its promise. And then there was the AI Winter of the 1980s. Now the brain boys are taking a whack at the piñata. I say more at my own blog, New Savanna:

    http://new-savanna.blogspot.com/2010/05/theyre-at-it-again-hacking-human-mind.html

  47. zalmoxis 10:31 am 05/24/2010

    I beg to differ with the claim that the brain is a complete mystery and magical to us. That would imply that neuroscience is not a science but a fraud. Moreover, an action potential impulse is not equivalent to an operation, just as one bit traveling through a computer’s bus is not equivalent to an operation. A computer is a serial computational machine while a brain is a parallel computational machine, each encoding information in whatever way makes sense for its processing technique: serial, parallel…

    I completely agree that back-propagating, feed-forward, etc. neural networks are not good "simulations" of the brain, but surely the neuron is a basic unit of processing that you cannot do without. Surely more advanced behaviours can be built on top of a bunch of these units.

  48. brainfood42 1:39 pm 06/1/2010

    The premise of this article is wrong. Hardware/software-based Artificial Intelligence has been created already; it just hasn’t been built yet. It will take less than twenty-five years and a relatively small ($50M) amount of money to get built.

    AI does not need a body. Sensors, inputs and human-based training will enable the AI-conscious to appear. We promote human beings too much in our thinking about consciousness. My cat is conscious and has emotions. Lower-level animals (with significantly less brain matter) are also conscious. Motivation, knowledge, internal competition (alternatives), time sense and ability (acquiring new knowledge) will cause consciousness. Look to these items within Artificial Agents research to see the answers are here already.
