The past several years have brought two parallel revolutions in neuroscience. Researchers have begun using genetically encoded sensors to monitor the activity of individual neurons, and they’ve been using brief pulses of light to trigger activity in specific types of neurons.

These two techniques are known collectively as optogenetics, the science of using light to read out and control genetically specified neurons, but until recently most researchers have used them separately. Though many had tried, no one had succeeded in combining optogenetic readout and stimulation into one unified system that worked in the brains of living animals.

But now a team led by Michael Hausser, a neuroscientist at University College London’s Wolfson Institute for Biomedical Research, has created just such a unified optogenetic input/output system. In a paper published this January in the journal Nature Methods [Scientific American is part of the Nature Publishing Group], the team explains how it used the system to record the complex signaling codes of specific sets of neurons and to “play” those codes back by reactivating the same neural firing patterns it recorded. The work paves the way toward getting neural networks in the brains of living animals to recognize and respond to artificially delivered codes.

“This is going to be a game-changer,” Hausser says.

Limits of light
Conventional optogenetics starts with genes. Certain genes encode instructions for producing light-sensitive proteins. By introducing these genes into brain cells, researchers can induce specific populations of those cells (all the neurons in a given brain region that respond to dopamine, for example) to fire their signals in response to tiny pulses of light.

In optogenetics experiments, researchers typically perform what’s known as wide-field delivery of light, in which all of the cells in the field of view get activated, as long as they carry the light-sensitive protein. While it’s possible to get fairly precise results with this technique (for example, programming some neurons to respond only to red light and others only to blue), it still only allows researchers to select neurons by their genetic attributes, not by their observed behavior in a living brain. In other words, neuroscientists can’t revise their neuron selection on the fly when they notice something unexpected.

Optogenetics experts widely recognize this limitation – in fact, for the past few years, some have been working on new techniques for targeting specific individual neurons. So far, though, these techniques have only worked in vitro, on small slices of brain tissue maintained in a dish – never in the brains of living animals.

Hausser and his co-authors, however, took a different approach. They used a device called a spatial light modulator (SLM) to split a light beam into multiple beamlets, each aimed at a specific, pre-selected neuron. This allows the team to change its target neurons on the fly, and it works in living, behaving animals.

Hacking the network
The team started its experiment by gathering a sample group of mice and delivering a gene for a fluorescent activity sensor called GCaMP6 into an area of their brains known as the barrel cortex, which processes incoming touch stimuli from the whiskers. The GCaMP6 in each neuron would light up only when that neuron fired.

The team also gave the same population of neurons a gene for a light-sensitive protein known as C1V1, which would cause those neurons to fire in response to a pulse of light. Up to that point, it was essentially the same approach used in many other optogenetic experiments, although combining the two proteins in the same cells was new, and it allowed the researchers to both read and trigger activity in the exact same neurons.

Hausser and his team then used the activity sensor to identify which of those neurons fired when they stimulated a particular whisker on a particular mouse, and they fed those results into their SLM. Because the SLM could aim light pulses at just the cells the researchers chose, and change targets on command, it avoided the weakness of activating every light-sensitive neuron that happened to fall within the path of the light.

This new level of precision enabled the researchers to aim pulses of light only at those specific neurons that had fired when the mouse’s whisker was touched, causing those cells to reproduce the firing patterns evoked by real whisker stimulation. “It’s important to stimulate a selected set of neurons simultaneously,” Hausser says, “because neurons don’t normally act alone.”

The researchers then artificially triggered those same neural firing patterns while the mice ran, rested and otherwise went about their daily business. They found that the evoked responses were more intense while a mouse was running, even though the stimulation itself was identical. Along the way, they also found that they could track and pick out patterns of neural activity that responded to specific stimuli, and that they could trigger those same patterns artificially by activating the specific neurons that encoded them.

“To be able to get at the neural code during this processing we need to be able to only activate the precise set of neurons that are engaged during that task,” Hausser says. “And this new approach finally allows us to do that in the intact brain, while the animal is engaged in behavior.”

Cortical shortcuts
These results lead to an obvious question: how did the mice themselves respond to all this artificial stimulation? “That,” Hausser says, “is actually what we’re working on right now.” Although this first round of experiments was more a proof of concept than a behavioral study, Hausser and his team are now working on getting animals to respond as if they were actually detecting a simulated stimulus.

“We can generate a phantom stimulus, so to speak, which we’ll play into the cortex,” Hausser explains, “and that will enable us to see which signals trigger the brain, and the entire animal, to respond as if it had actually detected real whisker stimulation.”

Another intriguing implication of these results is that this kind of precise stimulation may provide much-needed shortcuts for mapping an animal’s connectome, the entire network of connections among all the cells in the animal’s nervous system. Although the science of connectomics has come a long way since neuroscientist Olaf Sporns coined the term in 2005, it still faces an information bottleneck, due to the massive volumes of data required to reconstruct even a tiny section of an animal’s brain at the level of individual neurons and their synaptic connections. With techniques like the one developed by Hausser’s team, however, neuroscientists may be able to target subpopulations of neurons involved in specific tasks and focus on mapping just those neurons.

“It’s still going to be many years before we even have the connectome of a single cortical column,” Hausser says, but with imaging techniques like those in this study, researchers may be able to sample the functional connectivity of a column’s neurons in a single experiment – in hours rather than months. “This approach has the potential to speed things up dramatically.”