Integrating Left Brain and Right, on a Computer

As computers have matured, the human brain has lost any hope of keeping up with silicon’s rapid-fire calculating abilities. But the human cognitive repertoire extends far beyond fast calculation. For that reason, researchers are still trying to develop computers that can recognize, interpret and act upon information—like the kind pulled in by eyes, ears, nose and skin—as quickly and efficiently as good old-fashioned gray matter. Such cognitive systems are critical to transforming the waves of big data collected by sensor networks into meaningful representations of, say, automobile traffic on a particular roadway or maritime weather conditions.

There are several efforts underway worldwide to build “neuromorphic” systems that mimic the human brain. IBM, in particular, has made significant strides in this area. In 2011, the company’s Watson system put two former Jeopardy! champs in their place by interpreting questions and quickly finding the answers in its database. A lower-profile—but perhaps more significant—event occurred later that year, when IBM and Cornell University introduced an experimental chip designed to become the building block of a computing system that emulates the brain’s function, efficiency and compact size.

That chip is the foundation of IBM’s contribution to the Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, which seeks to reverse-engineer the brain's computational abilities to better understand our ability to sense, perceive, act, interact, and understand different stimuli. (HRL Laboratories is leading the other DARPA SyNAPSE project.)


DARPA’s most recent award of $12 million to the IBM team has enabled it to create the software programming and testing components needed to move the company’s version of SyNAPSE forward. This is the fourth phase of SyNAPSE’s development, fueled by about $53 million in DARPA funding since the project’s inception in 2008.

SyNAPSE software will provide instructions for IBM’s cognitive computing architecture, based on what the company refers to as a scalable, interconnected, configurable network of “neurosynaptic cores.” Each core consists of computing components that IBM likens to their biological counterparts—the core’s memory functions like the brain’s synapses, its processors act as nerve cells, or neurons, and its communication capabilities are handled by wiring akin to the brain’s axon nerve fibers. In IBM’s architecture, each core will contain 256 “neurons” for computation and 256 “axons” for communication, connected via 65,536 “synapses.”
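Those figures multiply out simply: 256 axons times 256 neurons gives 65,536 possible junctions. As a rough illustration of that structure (a toy sketch, not IBM’s actual chip logic; the firing threshold and connection density below are invented for the example), a single core can be modeled in Python as a 256-by-256 crossbar feeding simple integrate-and-fire neurons:

```python
import numpy as np

NUM_NEURONS = 256  # computation elements per core (figure from IBM)
NUM_AXONS = 256    # communication inputs per core
# 256 axons x 256 neurons = 65,536 potential synapses

class NeurosynapticCore:
    """Toy model of one core: a binary axon-to-neuron crossbar
    feeding integrate-and-fire neurons. The threshold and the
    connection density are illustrative guesses, not chip specs."""

    def __init__(self, threshold=4.0, density=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # Which axon-neuron junctions are wired (the core's "synapses").
        self.synapses = rng.random((NUM_AXONS, NUM_NEURONS)) < density
        self.potential = np.zeros(NUM_NEURONS)  # membrane potentials
        self.threshold = threshold

    def step(self, axon_spikes):
        """One tick: integrate incoming spikes, fire, reset."""
        # Every active axon injects charge into each neuron it reaches.
        self.potential += axon_spikes @ self.synapses
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0  # neurons that fired start over
        return fired  # outgoing spikes, routed via the core's "axons"
```

Each call to step takes a length-256 vector of incoming spikes and returns which neurons fired; those output spikes are what get routed onward to other cores.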

Neurons on one neurosynaptic core will be able to connect to axons on any of the other neurosynaptic cores in the network, says Dharmendra Modha, IBM Research senior manager and principal investigator for the SyNAPSE project.

“The beauty is that it’s modular and can be repeated indefinitely,” creating larger networks of cores that operate in parallel without increasing the network’s complexity, Modha says. IBM’s long-term goal is to build a cognitive-computing network with 10 billion neurons and 100 trillion synapses that consumes a single kilowatt of power while occupying less than two liters of volume. “Once you have that architecture, the question becomes, how you program it,” he adds.
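That modularity can be made concrete by tiling the toy core sketched above. In the sketch below, the routing table (which neuron feeds which axon on which core) is randomly invented; the point is only that adding cores enlarges the network without touching any core’s internals:

```python
# Reuses numpy, NeurosynapticCore, NUM_NEURONS and NUM_AXONS from the
# sketch above. The random routing here is a placeholder, not IBM's
# actual interconnect.
class CoreNetwork:
    """Toy tiling of cores: spikes from any neuron can be delivered
    to an axon on any core, as the article describes."""

    def __init__(self, num_cores, seed=1):
        rng = np.random.default_rng(seed)
        self.cores = [NeurosynapticCore(seed=c) for c in range(num_cores)]
        # For every neuron on every core, pick one destination (core, axon).
        self.dest_core = rng.integers(0, num_cores, (num_cores, NUM_NEURONS))
        self.dest_axon = rng.integers(0, NUM_AXONS, (num_cores, NUM_NEURONS))

    def step(self, axon_inputs):
        """One global tick: run every core, then deliver its spikes."""
        next_inputs = [np.zeros(NUM_AXONS) for _ in self.cores]
        for c, core in enumerate(self.cores):
            fired = core.step(axon_inputs[c])
            for n in np.flatnonzero(fired):
                next_inputs[self.dest_core[c, n]][self.dest_axon[c, n]] = 1.0
        return next_inputs  # becomes the next tick's axon activity
```

Doubling num_cores doubles the neuron count, but the per-core work inside step is unchanged, which is the property that would let such a design scale toward billions of neurons.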

This brain-inspired computing platform requires a new approach to programming, one that transcends conventional software writing, in which every action needed to perform a function or set of tasks is pre-specified. Software for a SyNAPSE-based system might, for example, define each core’s general parameters and the patterns of connections between neurons and axons—and then the machine would take it from there. Cognitive computers, optimized for artificial intelligence and other approaches that enable machines to “learn,” will make their own decisions based on the data they receive.
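The contrast with conventional code can be gestured at with a toy example. Here the entire “program” is a handful of structural parameters, and a generic Hebbian-style learning rule (an assumption for illustration, not IBM’s published method) adapts the synaptic weights as data arrives:

```python
import numpy as np

# The entire "program": structural parameters, not step-by-step instructions.
# All three values are hypothetical knobs, chosen only for illustration.
program = {
    "connection_density": 0.05,  # fraction of the 65,536 synapses enabled
    "firing_threshold": 4.0,
    "learning_rate": 0.01,
}

rng = np.random.default_rng(0)
weights = (rng.random((256, 256)) < program["connection_density"]).astype(float)

def hebbian_step(weights, axon_spikes, neuron_spikes, lr):
    """Strengthen junctions whose input and output fired together;
    from here the machine 'takes it from there' as data streams in."""
    return weights + lr * np.outer(axon_spikes, neuron_spikes)

# One adaptation step after observing a tick of activity would look like:
# weights = hebbian_step(weights, axon_spikes, neuron_spikes,
#                        program["learning_rate"])
```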

In addition to creating a new programming language for writing code that tells each neurosynaptic core what to do, IBM has developed a library of pre-written programs, called “corelets,” as well as a way of simulating how different programs will perform. Modha and his colleagues are presenting their approach to cognitive programming Thursday at the International Joint Conference on Neural Networks in Dallas.
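The corelet language itself is IBM’s, but the compositional idea it describes can be sketched generically. In the sketch below, the Corelet class, the connect method and the corelet names are hypothetical stand-ins, not IBM’s API:

```python
class Corelet:
    """Illustrative stand-in for a corelet: a reusable, pre-written
    block that hides its cores behind named inputs and outputs."""

    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs    # named axon bundles this corelet consumes
        self.outputs = outputs  # named neuron bundles it produces
        self.wiring = []        # (my_output, downstream_corelet, its_input)

    def connect(self, output, downstream, its_input):
        """Route one of this corelet's outputs into another's input."""
        assert output in self.outputs and its_input in downstream.inputs
        self.wiring.append((output, downstream, its_input))

# Compose pre-written building blocks into a larger cognitive program:
edges = Corelet("edge_detector", inputs=["pixels"], outputs=["edges"])
motion = Corelet("motion_estimator", inputs=["edges"], outputs=["flow"])
edges.connect("edges", motion, "edges")
```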

Such a program might someday be used to create an eyeglass-mounted computer equipped with video and auditory sensors that capture and analyze data, helping a person with significant damage to the brain’s visual-processing areas judge distances and depth. “If you emulate the visual cortex used to process those images using these chips, then you can imagine a lightweight compact pair of eyeglasses that could be used for the visually impaired to navigate an environment,” Modha says. A chip programmed to function like a visual cortex might also help driverless cars make the split-second decisions needed to maneuver around a vehicle that cuts precipitously between lanes.

Computers based on SyNAPSE will perform a different kind of pattern recognition than Watson even though both can be classified as cognitive systems. IBM built Watson to demonstrate an ability to understand complex language with a variety of grammatical structures and perform analytical thinking and decision making under specific time constraints. “Think of Watson as the epitome of artificial intelligence and cognitive computing research from a left brain perspective—more symbolic and analytical,” Modha says. SyNAPSE is more right brain—seeking to make computers more perceptive and instinctive.

IBM wants to eventually meld left and right brain computing functions into an integrated cognitive computing intelligence that mimics the human brain’s ability to be both analytical and intuitive.