
Optogenetics: Light my Fire? Or Totally Lame?


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Optogenetics likes to light up debate. Optogenetics is a hot technique in neuroscience research right now, involving taking the gene for a light-activated ion channel (called a channelrhodopsin), targeting it to a single neuron type, and inserting it into the genome of, say, a mouse (yes, we can do this now). When you then shine a light into the mouse's brain, the channelrhodopsin responds, and the neurons that are now expressing it fire. This means that you can get a single type of neuron to fire (or not; there are opsins that inhibit firing, too) whenever you want, merely by turning on a light.
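
To make that logic concrete, here is a minimal toy sketch in Python. It is not a biophysical model, and every name and number in it is an invented assumption for illustration; it just captures the core idea that only the neurons carrying the light-gated channel respond when the light comes on.

```python
# Toy sketch of the optogenetics idea: only neurons expressing the
# light-gated channel (the "opsin") fire when the light is switched on.
# All names, rates, and numbers here are illustrative assumptions.
import random

random.seed(0)

class Neuron:
    def __init__(self, name, expresses_opsin):
        self.name = name
        self.expresses_opsin = expresses_opsin  # carries channelrhodopsin?

    def fires(self, light_on, baseline_rate=0.05):
        # Opsin-expressing neurons fire reliably whenever the light is on;
        # everything else just shows a low, random baseline firing rate.
        if light_on and self.expresses_opsin:
            return True
        return random.random() < baseline_rate

# A mixed population: only every fifth cell is "targeted" with the opsin.
population = [Neuron(f"neuron_{i}", expresses_opsin=(i % 5 == 0)) for i in range(10)]

for light_on in (False, True):
    active = [n.name for n in population if n.fires(light_on)]
    print(f"light {'ON' if light_on else 'OFF'} -> firing: {active}")
```

An inhibitory opsin would simply flip the rule: light on would mean the targeted cells fall silent instead of firing.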


I actually remember where I WAS when I first heard of optogenetics. I came into the lab in the morning, was going about my daily business, and hadn't checked the daily Tables of Contents for journals yet (I get these delivered into my email). I remember the postdoc, normally a pretty phlegmatic person, actually putting a little excitement into their voice, "hey guys, look at this." The paper was this one. We all crowded around. It took us all a few minutes to "get it".


As it began to sink in, I had two thoughts. The first? "WHOA, THAT IS AWESOME." The second? "Great, I know what's going to be the hot stuff now."

There are fashions in science. Not the kind where everyone dyes their lab coat plaid or creates cutoffs out of their Personal Protective Equipment (though that would be hilarious). There are experimental fashions. Lesions were once really "in". Knockouts were hot stuff in the 90s. fMRI enjoyed (and still enjoys) its moment in the sun, and electrophysiology often adds a little je ne sais quoi to a paper. DREADDs and CLARITY have had their moments. And when a new thing comes along and is going to be hot? You can sniff it out a mile away. For next year? I'm betting on GEVIs, myself. They'll be all the rage.

Optogenetics caught on fast, and for a very good reason. It's a lovely thing to be able to control exactly which neurons fire, and when. The technique is flexible and can be targeted to all different types of neurons in different areas. And it allows us to investigate specific areas of the brain, and specific types of neurons, more closely than ever before.

So I was a little surprised when I saw an article by my colleague John Horgan, about how optogenetics doesn't light up his life. I mean, he doesn't have to love it, of course. But his reasons for not liking it are not ones that I can really get behind.

Now, don't get me wrong, optogenetics, like any technique, is not remotely perfect. In fact, it suffers problems BECAUSE of its current hotness. Often people just say "oh yeah...can we use optogenetics?" without really questioning whether it's the best way to find out what they need to know. Or they decide they need to develop optogenetics in the lab, even if they are just redoing experiments that other people have done previously with other methods...only this time with light-up brains! There are problems with the technique, and Mark Baxter has a really good post on how some people end up overdoing the opto. Mark doesn't dislike optogenetics per se; he just has some good words of caution: do you really NEED to do that experiment with optogenetics? Is it the best way? Those are important questions to ask when you're trying to answer anything with any technique.

When John Horgan says that he dislikes optogenetics, though, he has a different set of reasons. He dislikes optogenetics because he does not believe that it can be used to treat humans. He has a meta-problem with optogenetics:

I can’t get excited about an extremely high-tech, blue-sky, biomedical “breakthrough”—involving complex and hence costly gene therapy and brain surgery–when tens of millions of people in this country still can’t afford decent health care.

While I agree that it's really horrible that millions of people in this country can't get the healthcare they need, logically, I don't think that's really linked to whether or not you can be excited about optogenetics. John is aiming at human treatment with optogenetics, and says that he can't get excited about it because he doesn't think it will be useful in humans. Or he can't get excited about it because many people don't have good healthcare. I'm a little fuzzy on that point.*

I'll be clear, regardless of what other people may say, I think optogenetics in humans is ridiculously far off, and I am rather hopeful that we'd never end up doing it at all. Lights positioned to shine on the brain require a lot of wiring, and I'd like to think that by the time we'd be in a position to use optogenetics in humans, we'd have already proceeded on to a treatment that was less invasive.

But that is beside the point. Because I don't think we NEED to use optogenetics in humans to make use of the technique. I don't think we need every new hotness in neuroscience to be directly applicable to the human condition. Sure, we need new techniques to be developed for use in humans, but not every one. Often, you need a technique that will allow you to find out more about neurons, circuits, and systems in models. Knowledge of those systems can then be applied to how we treat and understand human conditions.

Examples of this abound: microdialysis cannot be used in humans (well, it has been, but it's very, very difficult and highly invasive), but it has told us incredibly important things about how neurotransmitter systems respond to addictive drugs.

GFP is not used in humans (well, it could be, but the realities of making people glow green...), but the ability to SEE how different proteins are affected, where they are localized, etc., has led to huge strides in our understanding of the brain.

Taq polymerase is now used on human tissue, and is an integral part of many, many lab procedures. It has been incredibly important in our understanding of how the brain changes during development, following stress, in response to drugs...I could go on.

Other examples abound. Two of the above (Taq polymerase, via PCR, and GFP) earned Nobel Prizes in Chemistry. We would NOT KNOW all the things that we know now about the brain...if we had ditched these techniques in favor of going straight to humans. Many techniques that we use in humans would still be valuable even if they were restricted to animal models! PET, memory tests, fMRI. All of these would be valuable tools even if they never made it into a human.

Optogenetics has already shown some great things. How the firing of particular neurons can trigger OCD-like behaviors. How particular neurons control drug taking. How stem cell grafts might work to relieve the symptoms of Parkinson's. None of these studies were in humans. But all are applicable to the human condition.

So while I don't think we should all gush over optogenetics all the time, I wouldn't throw out the technique because we haven't cured anything with it yet. And I certainly wouldn't discount it because we aren't using it in humans. We all want cures or treatments for difficult diseases. But often, those treatments and cures will only come with a better understanding of what is going on inside the brain, understanding we can get from studying models, from yeast to zebrafish to mice. We can't continue to just hope we luck into the next aspirin or Prozac. We have to understand. We have to discover. And to do that, we need tools. Optogenetics is a good tool to have.

*I also don't understand the idea that you can't get excited about opto because some people don't have healthcare. I understand that severely inadequate healthcare for a lot of people is HORRID. It's terrible. But there is absolutely no logic or reason to assume that the use of optogenetics precludes the development of a better healthcare system. That's like saying that many people don't have adequate transportation and therefore we shouldn't get excited about going to Mars. One does not directly, or even indirectly, link to the other. Even if you are talking about, say, the opportunity costs of developing optogenetics as a technique and using it, the entire annual NIH research budget isn't even in the same ballpark as national healthcare spending. Cutting the entire NIH budget would be a mere drop in the bucket in terms of improving healthcare access. In addition, the future treatments and understanding that the information generated by things like optogenetics could produce would benefit healthcare in the long run (in much the same way that space research has generated a huge number of technological advances that affect things like transportation).
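
For a rough sense of scale behind that "not even in the same ballpark" claim, here is a back-of-the-envelope comparison. The dollar figures below are approximate ballpark values I'm assuming for illustration, not numbers taken from the post or from any specific budget document.

```python
# Back-of-the-envelope comparison: the whole NIH budget versus total US
# health spending. Both figures are rough, order-of-magnitude assumptions.
nih_budget_usd = 30e9             # roughly $30 billion per year (approximate)
us_health_spending_usd = 2.9e12   # roughly $2.9 trillion per year (approximate)

share = nih_budget_usd / us_health_spending_usd
print(f"NIH budget as a share of national health spending: ~{share:.1%}")
# -> ~1.0%: cutting all of it would barely move the needle on healthcare access.
```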

Scicurious has a PhD in Physiology from a Southern institution. She has a Bachelor of Arts in Philosophy and a Bachelor of Science in Biology from another respected Southern institution. She is currently a post-doctoral researcher at a celebrated institution that is very fancy and somewhere else. Her professional interests are in neurophysiology and psychiatric disorders. She recently obtained her PhD and is pursuing her love of science and writing at the same time. She often blogs in the third person. For more information about Scicurious and to view her recent award and activities, please see her CV ( http://scientopia.org/blogs/scicurious/a-scicurious-cv/)
