Could investigating conjoined twins shed light on the mysteries of consciousness?
In "Too Hard for Science?" I interview scientists about ideas they would love to explore that they don't think could be investigated. For instance, they might involve machines beyond the realm of possibility, such as devices as big as galaxies, or they might be completely unethical, such as experimenting on children like lab rats. This feature aims to look at the impossible dreams, the seemingly intractable problems in science. However, the question mark at the end of "Too Hard for Science?" suggests that nothing might be impossible.
The scientist: Peter Watts, science fiction author and one-time marine mammal biologist at the University of Guelph and the University of British Columbia.
The idea: Might investigating conjoined twins help shed light on consciousness?
"Consciousness continues to confound us on all fronts — we haven't even established what it's good for," Watts says. "It's slow, metabolically expensive, and — as far as we can tell — unnecessary for intelligence. More fundamentally, we don't have a clue how it works — how can the electrical firing of neurons produce the subjective sense of self? How can a bunch of ions hopping the synaptic gap result in the sense of this little thing behind the eyes that calls itself 'I?'"
"One thing we have discovered is that consciousness involves synchrony — groups of neurons firing in sync throughout different provinces of the brain," he says. "Something else we've known for some time is that when you split the brain down the middle — force the hemispheres to talk the long way around, via the lower brain, instead of using the fat high-bandwidth pipe of the corpus callosum — you end up with not one conscious entity but two, and those two entities develop different tastes, opinions, even different religious beliefs."
"What this seems to point to is that consciousness is a function of latency — it depends upon the synchronous firing of far-flung groups of neurons, and if it takes too long for signals to cross those gaps, consciousness fragments. 'I' decoheres into 'we,'" Watts says.
"Fortunately, there are developmental accidents that could potentially offer enormous insights into this phenomenon," Watts says — that is to say, conjoined twins fused at the brain.
"We've already learned a lot from such cases opportunistically," he explains. "For example, the Hogan twins out in British Columbia appear to have distinct personalities, yet can tap into each other's sensory systems — they are fused at the thalamus, a structure that acts, among other things, as a sensory relay. Suppose they were fused at the neocortex instead? Would they still be individuals — would the signal lag across the depth of two skulls prove too great for a coherent self? Or would we be dealing with a single integrated person wired into two bodies, with two sets of sense organs and twice the normal complement of human processing power?"
However, conjoined twins fused at the brain "are exceedingly rare in nature, and even when they do occur the results are not always configured for optimum scientific insight," Watts says. If one were to systematically fuse the brains of developing embryos in utero at precisely controlled spots, one could answer these questions and more, he suggests. "A conjoined-twin breeding program could break the whole dilemma of consciousness itself wide open," he posits.
The problem: "I have no idea. Really. I can't see any down side to this at all. I'm actually kind of amazed it hasn't already been done," Watts says.
[Ed: Watts is joking about experimenting on unborn children. — CQC.]
The solution? "One could always resort to doing these experiments as simulations," Watts says. For instance, Luis Bettencourt at Los Alamos National Laboratory has discussed the progress that has already been made towards computer simulation of whole brains. "It's not doable now, but in a decade or two, who knows?" Watts says.
"Of course, such simulations would have to extend down to the molecular level at least," he adds. "And if software can replicate the conditions necessary for the emergence of self-awareness, then you're left with a similar thicket of issues to the one you'd have faced if you'd just stuck with meatspace experiments — you've created a sapient entity which, assuming you've modeled the brain correctly, can suffer."
"The advantage of models is that you can hit reset once you've run your experiments, and whatever suffering you've inflicted on your subject disappears along with the post-experimental self, which raises a whole other issue — can an entity be said to have 'suffered' if the suffering leaves no memory, no post-traumatic symptoms, no trace whatsoever? Is it okay to inflict suffering if the subject is utterly unaffected by the experience afterwards?" Watts asks.
If you have a scientist you would like to recommend I question, or you are a scientist with an idea you think might be too hard for science, email me at email@example.com.