MIND Guest Blog

Commentary invited by editors of Scientific American Mind
Mnemodystopia: A Little Context

The views expressed are those of the author and are not necessarily those of Scientific American.





We’ve been here before. Two or three times a year, a team of neuroscientists comes along and tightropes over the chasm that is dystopian research. Across the valley lies some pinnacle of human achievement; below flows the dirty, coursing river of mind control and government-sponsored brainwashing and all things Nineteen Eighty-Four. Cliffside, maybe clutching our tinfoil caps, we bite our nails and try to keep our faith in the scientists. This time is no different. On July 26, a research team took its first step onto the tightrope.

Working under Nobel laureate Susumu Tonegawa, the MIT group reported that they had created a false memory in the brain of a mouse. “Our data,” wrote the authors in Science, “demonstrate that it is possible to generate an internally represented and behaviorally expressed fear memory via artificial means.” While the sterility reserved for scientific research abstracts tends to diffuse the élan of the work, the gravity here is apparent.

Which brings us to the cliff and the chasm.

That devil-klaxon of a sound effect from Inception always seems appropriate for heralding reports with sci-fi undertones. In the case of the closest thing we have to an actual inception, it seems particularly apt. But the group’s work is not Inception per se, and it’s certainly not Total Recall. That’s not to say it isn’t unnerving. It’s also not to say the study isn’t remarkable. More than anything, the Science paper’s publication is a reminder that neuroscience is inching over some dangerous ethical waters, and from here, it is important to tread carefully.

* * *

When Cicero wrote of the art of memory, he began with the story of Simonides of Ceos. Simonides lived in the sixth and fifth centuries B.C. and is famous for inventing a few letters of the Greek alphabet. Most essential for the purposes of this essay, he was a lyric poet who (a) was fortunate enough not to be on the receiving end of a building collapse and, (b) having recently performed in said building, could identify the mangled-beyond-recognition bodies thanks to his memory of the palatial seating chart. Cicero related Simonides’ story as the prime example of linking person with place as a memorization technique.

A far more eloquent version of the above description appears in Frances Yates’ 1966 The Art of Memory, and it is the standard (Wikipedian) introduction to any deep meditation on memory improvement. The works of Yates and Cicero describe memory-training methods for budding poets—practices that consist of forming connections between places and images to build the ‘artificial memory’ necessary for recounting the epics. What was true for Yates was true for the ancient rhetors and is true today: The pairing of place with object or event, whether naturally or as a mnemonic device, represents one of the most fundamental components of memory.

This is because daily events always have a physical backdrop. Neuroscientists refer to our responses to this relationship between foreground and background as contextual behavior. Contextual memories are those that ground an object in a setting, a person in a place—episodes within a context. “Everything we experience happens somewhere,” wrote Dr. Jerry Rudy of the University of Colorado Boulder in a 2009 review of neural context representations. “[The] context often helps to select appropriate behaviors and determine the explicit and implicit content of our thoughts.” We retrace our steps because our misplaced reading glasses are associated with a specific context. Strolling into a dive bar prompts a certain mindset and behavioral catalogue that isn’t associated with, say, visiting Uncle Morris at the nursing home. Simonides didn’t identify members of the crushèd audience by their appearance: He remembered the structure of the scene.

Neuroanatomically, much of this activity is happening in and around the seahorse-shaped brain region called the hippocampus. There is no single cortical area responsible for storing a life’s worth of facts and experiences, but the hippocampus and the parahippocampal areas are always at the center of discussions on learning and memory. Dr. Tonegawa has been studying these areas and processes for years, and the false memory study is his group’s most recent success in furthering our understanding of how we remember.

Steve Ramirez, a grad student in Dr. Tonegawa’s lab and the lead author of the Science paper in question, explained the motivation for the work in a July 30 Reddit AMA:

The ultimate goal is to causally dissect the seemingly ephemeral process of memory. This way, the more and more we know about how the brain works, the better our predictions and treatments will become when broken brain pieces give rise to broken thoughts. False memories are just one of many cognitive hiccups.

People experience false memories all the time. If you’ve ever misremembered a situation, you’ve had one. We’re not really sure why they occur, especially since it’s not even immediately obvious why memory should exist in varying degrees of depth and breadth in the first place. People more or less have the same degree of hearing, but memory strength varies as much as height (though thankfully not with it). And excluding those of mnemonic savants, all memories are inherently flawed to a certain extent. True photographic memory is classified as a syndrome: something abnormal. Upon recall, our memories are refashioned by other experiences and beliefs. It’s no surprise they’re prone to intrusion.

While the scientific techniques necessary for the intentional creation of a false memory are enormously complex, Ramirez’s study had a relatively straightforward experimental framework. Imagine a mouse in a blue box. The general idea of the study was to pinpoint the cells responsible for our furry friend’s memory of the box, activate that memory while frightening the mouse in a different, say, red box, and see if the mouse’s memory of the first space changed upon returning to the blue box. If the mouse demonstrated a fear response in the blue box, where it was never made to feel afraid, the experimenters could deem the original memory altered and false. Ramirez’s boxes differed in more than color—size, floor material, lighting conditions, and scent—but the concept is the same.
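The logic of the three-phase design can be caricatured in a few lines of code. This is purely illustrative bookkeeping with invented names (`Mouse`, `day1_tag`, and so on); the actual experiment relied on optogenetic tagging and light delivery through an implanted fiber, not symbolic sets:

```python
from dataclasses import dataclass, field

@dataclass
class Mouse:
    # contexts whose engram cells carry the (hypothetical) tag
    tagged: set = field(default_factory=set)
    # contexts that now trigger a freezing response
    feared: set = field(default_factory=set)

def day1_tag(mouse, context):
    """Phase 1: the mouse explores a safe context; active cells are tagged."""
    mouse.tagged.add(context)

def day2_shock(mouse, context, reactivate_tagged=False):
    """Phase 2: mild foot shocks in a second context. If the tagged engram
    is reactivated during the shock, fear binds to the tagged context as
    well as the current one."""
    mouse.feared.add(context)
    if reactivate_tagged:
        mouse.feared |= mouse.tagged

def day3_test(mouse, context):
    """Phase 3: return the mouse to a context and ask whether it freezes."""
    return context in mouse.feared

m = Mouse()
day1_tag(m, "blue box")
day2_shock(m, "red box", reactivate_tagged=True)
print(day3_test(m, "blue box"))  # True: a false fear memory of the blue box
```

The control condition falls out naturally: a mouse shocked in the red box without reactivation of the tagged engram never adds the blue box to its feared set.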

It turns out that activating the tagged, banal memory during acquisition of the fear memory muddled the two together. Upon returning to their original context, mice exhibited the fear we would expect of the second context, and mice that didn’t receive the simultaneous memory activation didn’t have altered memories. It’s difficult to say how the mice actually perceived the false memories, but we can conceivably imagine something like witnessing a car accident while passively thinking of our kitchen and then inexplicably tensing up while cooking dinner.

Thus was born the era of non-pharmacological memory alteration. To allay one of the most immediate Orwellian concerns, a mouse’s false memory is only a “fear memory” out of convenience. Fear responses are easy to assess in mice—they freeze when they’re afraid—and training a mouse to fear a given context just takes delivering mild foot shocks. There’s minimal, if any, pain involved, with the sensation likely akin to the static shock we might feel rubbing our socks across carpet. In neuroscience, researchers often use such fear conditioning paradigms to assess contextual memory. Freezing is easy to quantify, and changing contexts just means swapping out an experimental chamber for a new one. One of the next agenda items for Ramirez is replicating the study with more complex memories like joy. If the work really takes off over the next couple decades, it’s the kind of technology that could, for example, be used to alter or erase the debilitating memories of post-traumatic stress disorder patients.
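Because freezing is the behavioral readout, scoring it often amounts to thresholding frame-to-frame motion and reporting the fraction of time spent immobile. The sketch below is hedged: the threshold and the toy motion trace are invented for illustration, and real scoring pipelines are calibrated to the particular apparatus:

```python
def freezing_fraction(motion, threshold=0.05):
    """Fraction of motion samples falling below `threshold` (i.e., frozen)."""
    frozen = [sample < threshold for sample in motion]
    return sum(frozen) / len(frozen)

# Toy frame-to-frame motion trace (arbitrary units): bursts of movement
# interleaved with stillness.
trace = [0.4, 0.3, 0.02, 0.01, 0.03, 0.01, 0.2, 0.02]
print(freezing_fraction(trace))  # 0.625
```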

And while the revision of an existing memory is certainly a striking, perhaps unanticipated result, it’s the tagging of the memory itself that is most impressive here. The false memory study follows on the heels of a Nature paper published last year by groups led by Tonegawa and Stanford neurojuggernaut Karl Deisseroth. While it was this Nature paper that presented “the first demonstration that directly activating a subset of cells involved in the formation of a memory is sufficient to induce the behavioural expression of that memory,” Ramirez’s study is another confirmation of a nontrivial truth neuroscientists have been pursuing for decades: that there is a discrete neural representation of a memory. It’s called the engram. It sits in a cozy corner of cerebral substrate and is neatly distributed across a population of neurons. And now we have the technology to locate and activate engrams at our bidding.

This is where the rest of the dystopian anxieties come into play. Major media outlets’ fear mongering of modern neuroscience tends to revolve around the concepts of manipulation and mind reading. I probably shouldn’t tell you that DARPA has been pushing for a ‘thought helmet’ for a good long while now. The spoiler alert, of course, is that we’re not there yet. The aforementioned complex tagging and activation of an engram relies on the excruciatingly meticulous coaxing of a light-sensitive protein into the membranes of the cells of interest and a surgically implanted fiber-optic cable directing light through the skull and over the tissue. These optogenetic techniques won’t be ready for humans anytime soon. To stoke the fire a bit, though, in the past year and a half we’ve seen the publication of a handful of major neuroscience research reports whose titles alone are enough to raise some neck hairs. “Neural decoding of visual imagery during sleep.” “A cortical neural prosthesis for restoring and enhancing memory.” “Non-invasive brain-to-brain interface (BBI): establishing functional links between two brains.”

You’re going to have to take my word on it, but there’s nothing malicious to see here yet. The latter study basically consists of a computer detecting whether or not a volunteer is looking at a strobe light. If so, a second system delivers an ultrasound burst over an anesthetized rat’s motor cortex, causing its tail to twitch. It’s crude, and it’s not remotely close to telepathy. The fact of the matter is that scientific publishers tend to be just as susceptible to headline porn as any other reporting channel. Wow-science will always sell, and neuroscience in particular lends itself to sensationalism.

In The Art of Memory, Yates quotes Cicero’s description of Simonides:

He inferred that persons desiring to train this faculty (of memory) must select places and form mental images of the things they wish to remember and store those images in the places, so that the order of the places will preserve the order of the things, and the images of the things will denote the things themselves, and we shall employ the places and images respectively as a wax writing-tablet and the letters written on it.

The modern stylus is the microelectrode, and with it we probe our neuronal wax. We know memory to be impressionable, the brain to be malleable. Synapses are ever strengthening and weakening, with the complex patterns of their weights encoding the outside world as we experience it. Neuroscience is lush with examples of this neural plasticity. Whole brain areas can be repurposed for the sake of efficiency—if a person loses his sight, for example, auditory cortex can take over the previously visual areas. The wax tablet can be heated, cleared, and reused. Someday, we’re going to be better at engineering and manipulating this plasticity, and when we are, we’re going to have some pretty massive ethical dilemmas to confront.
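That picture of synapses forever strengthening and weakening is often introduced with the Hebbian slogan "cells that fire together wire together." A deliberately minimal sketch, with arbitrary learning rate and activity values (real plasticity, with LTP, LTD, and homeostatic mechanisms, is far richer than this toy update rule):

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen weight w in proportion to correlated pre/post activity."""
    return w + lr * pre * post

# When pre- and postsynaptic activity coincide, the weight grows;
# when the postsynaptic cell is silent, nothing changes.
w = 0.5
for pre, post in [(1.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.7
```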

In his 2005 book The Ethical Brain, neuroscientist Michael Gazzaniga writes, “All of us are trying to read the minds of others all the time. … Indeed, we must read the minds of others if our social system is to work.” But nobody is freaking out about the advent of language or our attempts at empathy, the systems with which we can convince others to do something just by pushing some air through our vocal cords or read someone’s emotional state by looking into his eyes. There’s something subtly, uncannily creepy about neurodystopia. Memory alteration and brain linking studies don’t scare me because they represent another level of invasiveness, or really, abstractly, anything new—they scare me because deep down, I know that if I could change someone’s memory or truly read a mind, I would.

* * *

It is 8 P.M. CST on a Thursday, which for me and 6.5 million other viewers means Big Brother. In my defense, I went to high school with someone who’s on the show this season. In the show’s defense, there is no other experience that so completely epitomizes the couch-potato voyeurism for which reality television strives. But Big Brother is also a nice case study in psychology and memory. In a three-month, 24/7 reality show in which participants are forced to live together and compete for $500k, lying is, predictably, the name of the game.

In Big Brother, perceptions are warped along most dimensions. Houseguests trundle along comfortably with their friends, only to be turned on in secret eviction votes. Alliances form and fall; faux alliances are made to confuse the weaker players. Whole false realities are constructed on misheard and misremembered conversations. Many tears are shed. At the end of the day, the fear-conditioned mice in Ramirez’s study aren’t that different from the Americans we’ve placed in the Big Brother house-box. Don’t tell me manipulation is something we fear. Manipulation is something we emphatically endorse.

Which is why studies like Ramirez’s are ultimately so important. The false memory paper shows us that, while we aren’t quite in the house of Orwell, we’re beginning to peek through the kitchen window. Instead of waiting around for those massive ethical dilemmas to present themselves, we could take this opportunity to get the ball rolling on some early policy considerations relevant to neurotechnological advances. The time is ripe. President Obama’s recent $100 million BRAIN Initiative, a public-private partnership to invest in neurotechnology research, is a hint that at least a handful of agencies are cognizant of the strides that will continue to be made in the field. The BRAIN Initiative’s currently hazy methodology aside, it is difficult to refute that there is a political eyeball on neuroscience and neuroengineering, and that at the very least, the Initiative could be the reveille that heralds in some meaningful science policy. In his July 1 charge to the Presidential Commission for the Study of Bioethical Issues, Obama asked the committee to “identify proactively a set of core ethical standards—both to guide neuroscience research and to address some of the ethical dilemmas that may be raised by the application of neuroscience research findings.” Which is, well, a start.

Of course, it will be the development of regulatory frameworks and best practices that lends more practical value than ethical standards alone. And while it’s far too soon for actual implementation, groups like the Oxford Centre for Neuroethics, the Center for Neurotechnology Studies at the Potomac Institute for Policy Studies, and to a certain extent, futurist organizations like Oxford’s Future of Humanity Institute, the Center for Applied Rationality, and the Machine Intelligence Research Institute are starting the conversation. In his 2006 essay “Technological Revolutions: Ethics and Policy in the Dark,” Nick Bostrom, Director of Oxford’s Future of Humanity Institute, wrote of the complications present in such premature discussions:

Ethical assessment in the incipient stages of a potential technological revolution faces several difficulties, including the unpredictability of their longterm impacts, the problematic role of human agency in bringing them about, and the fact that technological revolutions rewrite not only the material conditions of our existence but also reshape culture and even—perhaps—human nature.

These are difficulties worth trudging through. The questions at stake are very real. In a 2003 report on memory dampening from the President’s Council on Bioethics (you’ll notice the name changes with every re-up), the authors asked, “Do those who suffer evil have a duty to remember and bear witness, lest we forget the very horrors that haunt them?” There are some appropriate Macbeth and Jane Austen quotes here, but I’ll spare you. In some cases, memory dampening seems relatively clear-cut, especially given examples like post-traumatic stress disorder. But what about larger, collective memory events like the Holocaust? Is it possible to dull emotional memories without sacrificing gravitas? Ten years ago, the question was in reference to beta-blockers like propranolol. In another ten, the same question could arise in reference to studies like Ramirez’s.

Aside from memory alteration, the field of neuroethics considers issues of cognitive enhancement, borderline consciousness and agency, moral responsibility, and high-resolution brain imaging. It will become increasingly important to shift the field toward application and implementation as our improving neurotechnology starts to bleed out of the lab. In his Reddit AMA, Ramirez wrote that he has “more than enough optimism in our society to believe that, once this is possible for the humans that need it (i.e. those with certain cognitive disorders), we’ll be fully ready to implement the necessary legislation.” Optimism is the key word here. We won’t be ready if we don’t get serious about having real conversations among all stakeholders. This includes scientists, ethicists, policymakers, and the general public.

Utopian literature is never written as a blueprint—the function of a utopia is to skewer society’s flaws by inverting them. A dystopia works similarly, scratching at the same scabs by exaggeration. In our newfound society where the NSA double-dips its icky fingers in anything with a modem, it’s understandable that there is a dystopian itching surrounding contemporary neuroscience. But we can use this dystopia to fuel real, smart societal progress. Let’s not remember this as the time our alarmist tendencies distracted us from thinking clearly about the true matters at hand. The science is incredible and deserves celebration. It also prompts some important questions, ones that could eventually translate into the legislation that will help keep our fears at bay. As long as we can genuinely acknowledge these concerns, we can put aside the tinfoil hats for now, revel in our discoveries, and continue to poke and prod at Cicero’s tablet of wax.

Image: Hieronymus Bosch, “The Garden of Earthly Delights” (detail), Wikimedia Commons


Clayton Aldern About the Author: Clayton Aldern is a neuroscience graduate student at the University of Oxford interested in brain-computer interfaces and computational neuroscience. He writes about neurotechnology, science policy, and the intersection between neuroscience and business. Follow on Twitter @compatibilism.







