We can make movies using atoms as characters, grow organs and even skydive from space, yet when it comes to understanding the finer details of the 1.3 kilogram organ behind each person’s eyes – the brain – we’re mostly in the dark. Neuroscientists do not even know how many different types of cells it contains, much less how they are connected.

Neurons are intricately branched biological powerhouses, connecting through synapses to achieve feats like learning and perception. They are densely packed: your brain contains around 86 billion of them, networked through roughly 100 trillion synapses. To put this scale into perspective, consider other astronomically large numbers. Our sun is one of an estimated 300 billion stars in our home galaxy, the Milky Way. A single cubic centimeter of human brain contains, on average, close to a hundred billion synapses, a count comparable to the stars in the Milky Way. And there are over 1,200 cubic centimeters in a single brain.
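Using the round numbers quoted above, the scale comparison works out to a few lines of back-of-the-envelope arithmetic (all figures are order-of-magnitude estimates, not measurements):

```python
# Back-of-the-envelope scale comparison using the estimates quoted above.
SYNAPSES = 100e12        # ~100 trillion synapses in a human brain
BRAIN_VOLUME_CM3 = 1200  # ~1,200 cubic centimeters of brain tissue

synapses_per_cm3 = SYNAPSES / BRAIN_VOLUME_CM3
print(f"{synapses_per_cm3:.1e} synapses per cubic centimeter")  # ~8.3e10
```

That is roughly 83 billion synapses in every cubic centimeter, in the same range as estimates of the Milky Way's star count.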

Spanning just a few tens of nanometers, roughly a thousandth the width of a human hair, synaptic junctions are the seat of computation in the brain. As you read or walk or ponder the universe, billions of them fire in a symphony of signal relays. Most of this connectivity remains unmapped, in part because neurons are extremely small yet their networks remarkably vast, and the technology to see them in detail is relatively new.

A synapse. An upstream cell, here an amacrine interneuron shown in yellow, sends impulses to a downstream cell, shown in blue, which, if excited in the right way, propagates the signal down its axon to other cells. Image by Alex Norton from data analyzed by deep learning algorithms and gamers in the citizen neuroscience game EyeWire.

Enter connectomics. This young subfield of neuroscience uses electron microscopy to image volumes of brain tissue in extraordinary detail. By pairing nanoscale-resolution 3D digitized brains with deep learning, researchers can analyze neural circuits down to individual synapses.

The National Institutes of Health’s BRAIN Initiative states that neuroscience’s “challenge is to map the circuits of the brain, measure the fluctuating patterns of electrical and chemical activity flowing within those circuits, and understand how their interplay creates our unique cognitive and behavioral capabilities.” Advances in connectomics over the next few years offer a viable route to understanding how neural circuits are physically organized at the level of single cells. Many researchers also theorize that connectomics could help us understand links between psychological disorders and miswiring in the brain.

Connectomics also has the potential to inspire breakthroughs in computer science. Deep learning, a hot field these days, is a type of artificial intelligence (AI) inspired by neural networks. It allows machines to learn and do smart things like recognize objects, predict what you’re searching for, recommend a video on Netflix or a friend to add on Facebook. It can also help doctors diagnose diseases. Deep learning got its start in the early 1980s, but it did not catapult to prominence until after 2000, when researchers began stacking many layers of artificial neurons (hence “deep”), yielding dramatic improvements. Even advanced deep learning, however, rests on a fairly simplistic model of layered neurons in the brain. How might connectome-resolution circuit maps inspire the next generation of breakthroughs in computer science?
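As a toy illustration of the layered model the paragraph refers to, here is a minimal two-layer network of threshold “neurons.” It is a deliberately simplified sketch, not how modern deep learning systems are built, but it shows why layering matters: the XOR function below cannot be computed by any single artificial neuron, yet two stacked layers handle it easily.

```python
def step(x):
    # A crude model of a neuron firing: output 1 if input exceeds threshold.
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    # Each "neuron" sums its weighted inputs (its synapses), adds a bias,
    # and fires if the total crosses zero.
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden layer: one neuron detects "x1 OR x2", the other "x1 AND x2".
    hidden = layer([x1, x2], weights=[[1, 1], [1, 1]], biases=[-0.5, -1.5])
    # Output layer: fire on "OR but not AND", which is exactly XOR.
    (out,) = layer(hidden, weights=[[1, -1]], biases=[-0.5])
    return out

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_net(a, b)}")
```

Stacking layers is what puts the “deep” in deep learning: each layer builds features (here, OR and AND) that the next layer combines into something no single neuron could compute alone.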

Editor’s note: This is the third installment in a series about emerging neurotechnologies. Join a pilot class of 12 PhD students at MIT as we explore how neuroscience is revolutionizing our understanding of the brain. Each post coincides with a lecture and lab tour at MIT created by the Center for Neurobiological Engineering. This experiment is supported by MITx and created by EyeWire.