March 4, 2013
“We are the Borg. We will add your biological and technological distinctiveness to our own. Resistance is futile.”
So what was it? It was one rat learning to do something, while electrodes recorded his every move. In the meantime, on another continent, another rat received the signals into his own brain…and changed his behavior.
Telepathy? No. A good solid proof of concept? I’m not sure. An interesting idea? Absolutely.
So I wanted to look at this paper in depth. We know already that some other experts weren’t really thrilled with the results. But I’m going to look at WHY, and what a more convincing experiment might look like.
Pais-Vieira et al. “A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information” Scientific Reports, 2013.
So what actually happened here? Each experiment involved two sets of rats. First, you have your “encoder rats”. These rats were water-deprived (not terribly, just thirsty), and trained to press a lever for a water reward (water deprivation is one training technique for lever pressing, and is one of the fastest. But you can also food-deprive and train for food, or just train the animal with something tasty, like Crisco or sweetened milk). The rats were trained until they were 95% accurate at the task. They were then implanted with electrodes in the motor cortex, which recorded the firing of the neurons as the rats pressed the left or right lever.
The ‘decoder’ rats, who would be on the receiving end of the stimulus, were also trained in advance. They were trained to respond to direct stimulation of the motor cortex: a single pulse meant you got water at one lever, a train of pulses meant you got water at the other. The decoder rats were already about 78% accurate on average.
So then, to hook the two rats up, the authors recorded from the ‘encoder’ rat and converted the signal for right or left into either a train of pulses or a single pulse. They then transmitted that signal (whichever one it happened to be) to the ‘decoder’ rat, and looked to see which lever it would press.
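Conceptually, the encoding step is just a comparison against what the encoder rat's neurons did during training. Here's a minimal sketch in Python; the threshold, template value, and pulse count are all made up for illustration, not the paper's actual parameters:

```python
def encode_choice(spike_count, template_mean=30):
    """Hypothetical sketch of the encoding step: compare the encoder
    rat's motor-cortex spike count on a trial to a template recorded
    during training, and pick the stimulus to send to the decoder rat.
    All numbers here are illustrative, not the paper's parameters.
    """
    if spike_count >= template_mean:
        # firing resembles one lever's template: send a train of pulses
        return ["pulse"] * 20
    # otherwise send a single pulse, cueing the other lever
    return ["pulse"]

train = encode_choice(45)   # strong firing -> train of pulses
single = encode_choice(10)  # weak firing -> single pulse
```

The point of the sketch is how little information actually crosses between the brains: one bit per trial, train or no train.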
What they saw was that the stimulation pulses given to the decoder rat influenced its behavior.
A rat that is untrained and receives no stimulus will respond at chance (50% of the time he’ll respond on the right lever). A rat that is fully trained (the encoder) will respond at 95% accuracy. A rat that was trained on the task, and received a stimulus to which it was already trained from another rat, responded at 64% accuracy. That is significantly better than the 50% chance level. But it’s not great. Accuracy increased as the pulse trains got stronger, and the transfer worked whether the signals were trained onto motor cortex (experiment 1) or sensory cortex (experiment 2, and it should be noted that the error bars for the sensory results were much tighter).
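To see why 64% can count as significantly better than a coin flip, you can run a one-sided exact binomial test. A quick sketch in Python (the trial count of 100 is invented for illustration; the paper reports its own statistics):

```python
from math import comb

def binomial_p_value(successes, trials, chance=0.5):
    """One-sided exact binomial test: the probability of getting at
    least `successes` correct choices in `trials` trials if the rat
    were just guessing at the chance level."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# 64 correct out of a hypothetical 100 trials vs. 50% chance:
# p comes out well below 0.05, so 64% beats chance even if it isn't great
p = binomial_p_value(64, 100)
```

With enough trials, even a modest edge over 50% is statistically real; the question the rest of this post raises is whether it's *behaviorally* meaningful.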
Where this gets more interesting is when they added in feedback. When the decoder rat got the answer right, the encoder rat got another reward. This made the encoder rat improve his performance, which resulted in him giving clearer “signals” to be decoded. Not only that, the decoder rat “learned” from the experience. Stimulating its own whiskers OR the encoder rat’s whiskers (the sensory task used whisker stimulation instead of a light) made the decoder rat’s sensory cortex fire. So the decoder rat may be recognizing the encoder rat’s signals in its brain as ‘self’.
So it’s a very interesting idea, to transfer a signal from one rat to another. But at bottom, this is the decoder rat receiving a train of pulses or a single pulse, and acting accordingly. Those pulses could come from another rat, or from a computer. It was a set of pulses to which it was ALREADY TRAINED. It already knew what certain trains of pulses “felt” like, and knew how to respond accordingly. So this isn’t the wow-idea of putting a particular ‘thought’ into another rat’s head. Instead, it’s teaching a stimulus and then delivering that stimulus; it’s just that the stimulus comes from another rat.
They also show (and you can see above) the ratio of correct responses to incorrect. This is a very important measure, as it helps to determine whether a rat was just banging away on any lever regardless of the signal. You can see the decoder rat’s accuracy increased as the train of stimuli got stronger…but those error bars are kind of large (for the 61-80 microstimulation trains, for example, they varied between 0.5 and almost 0.8. It’s displayed as fraction of right choices, but you might as well say that they got the lever right anywhere between 50 and 80% of the time). So it’s possible that for some stimuli, they really WERE just banging on the lever. And at the low end of the stimuli, they were WORSE than chance: at 0-20 they were only achieving about 40% accuracy. For the sensory condition, though the error bars are tighter, the accuracy is actually WORSE. At the highest pulse number used they achieved 60% accuracy, but either they didn’t go higher, or they couldn’t get the rats to perform better. So it’s possible that, to some extent, the stimulation is just making the animals hit levers or choose openings (for the sensory task), rather than accurately making the right choice.
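One way to make the "error bars spanning 0.5" worry concrete: with few trials, even a 65% hit rate has a confidence interval that still includes chance. A sketch using the Wilson score interval (the trial counts below are invented for illustration):

```python
from math import sqrt

def wilson_ci(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a proportion,
    e.g. the fraction of correct lever choices."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    centre = (p_hat + z**2 / (2 * trials)) / denom
    half = (z * sqrt(p_hat * (1 - p_hat) / trials
                     + z**2 / (4 * trials**2)) / denom)
    return centre - half, centre + half

# 13/20 correct (65%): the interval still includes 0.5, i.e. chance
lo, hi = wilson_ci(13, 20)
# 640/1000 correct (64%): the interval now excludes chance
lo2, hi2 = wilson_ci(640, 1000)
```

So a bar whose error range dips to 0.5 is, on its own, consistent with a rat banging on levers at random; you need either more trials or tighter variability to rule that out.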
But the real issue is that the results weren’t particularly robust, and it’s a task that is very, very simple. And this is only a task with TWO choices. Left or right. Right or wrong. In something more complex? The whole thing would probably descend into noise. Especially considering…there were only 2-5 rats per group in every single experiment. In the second experiment there were only two encoder rats; in the first, only three. Yes, they got significant results, but I wonder if it would hold up if you had more rats with a wider range of behaviors. A single rat tends to have a very consistent level of behavior in these kinds of tasks, but it can vary pretty drastically from rat to rat. It’s possible they had 2-3 really good performers, and adding animals would take significance away from the results.
So what might be a good way to show the power of this? I personally think you might have to do more than motor cortex, and you’d definitely have to do more than just a train vs a single pulse. I think you might have to look at the entrainment patterns of a group of neurons following a particular type of training, and then match that stimulus on to another animal. For example, you could look at the entrainment of neurons in one rat who has been exposed to fear conditioning (learning to freeze in an environment associated with a shock). Pattern that onto an untrained rat and look for freezing in context. That is a behavior and a learning method that have been extensively studied, and are localized to the hippocampus in the brain, so you have a small area to work with. And while there’s still a binary of behavior (freezing or not), the receiving animal wouldn’t be trained prior to testing, which would make the finding more robust.
An even stronger experiment might involve hippocampal place cells. Place cells are cells in the hippocampus that fire in reference to a specific location an animal has previously experienced. So as the animal moves through a maze, the first left turn will get a place cell, the hallway will get some place cells, the next right turn, and so on. You can get very strong responses with these place cells, and if you could form a similar ‘map’ in a ‘decoder’ rat, you might be able to get an animal to navigate a maze it has never seen before. That’s a much more nuanced set of behaviors (so nuanced that it is probably down the road; the fear conditioning might be something that’s possible sooner), and would really establish the paradigm.
But of course you may wonder, what good ARE studies like these? Especially if they aren’t going to make us Borg? When it comes to brain to brain interfaces and machine to brain interfaces, that’s a good question. The lead author, Nicolelis, has already done a lot of work with machine/brain interfaces, and sensory feedback from motor stimuli, which is the most important thing when developing a neural prosthetic. I could see studies like this being used to, maybe, one day in the future, link up a brain (one, say, that is post stroke and needs to re-learn) to a computer, and use the trains of pulses from the computer to re-learn things like motor skills. I think this study may help in that regard. The results of this particular study may not be overwhelmingly strong, but the idea, and the technology, is there.
Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, 3. DOI: 10.1038/srep01319
*There is also something interesting that I noticed about Scientific Reports, where the article was published. On their site they note that they are “Rigorous — peer review by at least one member of the academic community”. I like the open access aspect, and publishing speed is always good with me, but only one member of the academic community required? That’s a bit unusual, especially for a paper this high-interest. I assume that because they say “at least one” they probably had more than one, but the usual is 2-3 reviewers (or even more if the paper proves controversial). I’m not sure what this might mean for the paper, and probably nothing, but it definitely struck me.