November 12, 2012
In 2005, I became, briefly, a tool of the military-industrial complex. My service began when I received an email from Centra Technology, a defense contractor. Centra wanted my ideas on fighting terrorism, which it would pass on to the National Counterterrorism Center, a security agency overseen by the CIA. My first reaction was, Whaaa..? After Googling Centra and the Counterterrorism Center to confirm that they actually existed, I called the Centra contact, Debbie, and told her that she had confused me with another John Horgan, who is a psychologist and authority on terrorism. No, Debbie said, Centra wanted me; it was looking for advice from non-experts who could “think outside the box.”
The offer left me feeling ambivalent, to put it mildly. As a peacenik who loathed the Bush administration’s hawkish foreign policies, wouldn’t accepting this gig be hypocritical? On the other hand, the invitation was flattering—my government wanted my advice!—and the pay was good. So I agreed to do the job, which consisted of writing up a few ideas and emailing them to Centra. As long as I didn’t propose anything that violated my principles, I rationalized, what would be the harm?
Now that I’ve disclosed my ethical elasticity, I can delve into the debate over military funding of science and engineering. This long-smoldering issue has flared up over the past decade—not surprisingly, given the stagnation in non-military funding and surge in defense spending. In recent years several professional societies, notably the American Psychological Association and the American Anthropological Association, have been wracked by debates over whether members should consult for the armed forces or other defense agencies. My guess is that such disputes will become more common, as ethical ideals collide with economic realities.
Many professors at Stevens Institute of Technology, where I teach and run a lecture series, receive grants from security agencies, and our graduates often pursue defense-related careers. In my classes I have begun to encourage students considering jobs with military agencies or contractors to ponder these questions: Will your work really make the world safer or more dangerous? Will it inhibit conflict or provoke it, perhaps by triggering an arms race? I recently posed these same questions to engineers—most of whom were working or had worked in defense-related fields—enrolled in a Stevens continuing education program.
I have also brought scholars to Stevens to talk about the militarization of research. One was Peter W. Singer, a security analyst at the Brookings Institution, whose 2009 book Wired for War examined how research in robotics, artificial intelligence and related fields has transformed modern warfare. The U.S. now deploys more than 12,000 “unmanned ground vehicles” in Iraq and Afghanistan, Singer stated in Scientific American in 2010. These robots, which resemble miniature trucks or tanks, are equipped with cameras and other sensors, robotic arms and, in some cases, machine guns.
The U.S. military has also deployed more than 7,000 unmanned airborne vehicles, or drones. The best-known is the Predator, a 27-foot-long plane that flies as high as 26,000 feet and scans the earth with television cameras, including an infrared camera for night vision and radar that can see through clouds, fog and smoke. Since 2001, the Predator has been armed with missiles. In addition to land vehicles and drones, the Pentagon is developing unmanned motorboats and submarines that can navigate on or beneath the water to search for hostile boats, mines and other threats.
Remote-controlled robots can help save the lives of American soldiers and, in principle, civilians. Soldiers operating Predators and other armed drones—often from facilities in the U.S.—should be less susceptible to the panic, fury and confusion that can lead troops in a combat zone to commit lethal errors. But as I wrote last year, attacks by military drones have killed many civilians in Afghanistan and Pakistan. (See also this recent report by scholars at Stanford and NYU, who conclude that U.S. drone strikes in Pakistan “have facilitated recruitment to violent non-state armed groups, and motivated further violent attacks.”)
In Wired for War, Singer quoted an Army chaplain who feared that “as soldiers are removed from the horrors of war and see the enemy not as humans but as blips on a screen, there is a very real danger of losing the deterrent that such horrors provide.” (For more on Singer’s views, see my conversation with him on Bloggingheads.tv.)
Military robots are now under development in more than 50 nations, and the next step could be autonomous robots, which would incorporate artificial intelligence that allows them to operate independently of humans. Advocates of autonomous robots assert that—again, in principle—they should be less likely than humans to commit mistakes. Singer worried that, given the complexities of combat and the limits of technology, robots would inevitably make poor decisions, just as human soldiers do. If a robot commits a war crime, Singer asked, who should be held accountable? Scientists, ethicists, politicians, military officials and others, Singer said, need to “start looking at where this technological revolution is taking both our weapons and our laws.”
Another speaker I brought to Stevens, the bioethicist Jonathan Moreno of the University of Pennsylvania, raised questions about the militarization of neuroscience. In his 2006 book Mind Wars, Moreno reported that the Pentagon is funding research on a wide variety of “neuroweapons” that can boost or degrade the capacities of combatants. Potential neuroweapons include transcranial magnetic stimulators, devices that stimulate the brain to help soldiers stay alert; gases that confuse or knock out enemies; and even brain-scanning technologies that can read prisoners’ minds.
Perhaps the most disturbing line of research examined by Moreno involves neural prostheses, electronic devices that communicate directly with neural tissue via implanted electrodes. The most successful neural prosthesis is the artificial cochlea, which restores hearing in deaf people by feeding signals from a microphone into the auditory nerve. Researchers are now trying to produce prostheses that can restore vision, motor control and even memory to people suffering from nervous-system damage.
Officials at the Defense Advanced Research Projects Agency, which supports neural-prosthesis research, say they want to help soldiers who have suffered injuries to their brains or spinal cords. But the Pentagon may also want to create bionic soldiers whose abilities are enhanced by neural implants. A Darpa official once acknowledged as much to me when I interviewed him for an article on the neural code. “Implanting electrodes into healthy people is not something we’re going to do anytime soon,” he said, “but 20 years ago, no one thought we’d put a laser in the eye [to improve vision]. This agency leaves the door open to what’s possible.”
Moreno is not a pacifist; the U.S. has the right to defend itself and to seek advantages over potential enemies, he asserted. Nor did Moreno view neuroweapons as intrinsically less moral than other weapons; chemical incapacitants and mind-reading technologies, for example, are more humane than bombs or torture. “The neurosciences and related fields may well lead to measures that both give us an advantage over our adversaries and are morally superior to other tactics,” he wrote in Mind Wars.
But Moreno believes that scientists and others should have a vigorous, open debate about the pros and cons of research in neuroweapons. Some neuroscientists have gone further, calling on their colleagues to sign a pledge “to Refuse to Participate in the Application of Neuroscience to Violations of Basic Human Rights or International Law.”
Like Singer and Moreno, I don’t view all military-funded activities as immoral—and I say this not merely to justify my own decision to take money from Centra. Defense-funded research has led to advances in civilian health care, transportation, communication and other industries that have improved our lives. My favorite example of well-spent Pentagon money was a 1968 Darpa grant to the political scientist Gene Sharp. That money helped Sharp research and write the first of a series of books on how nonviolent activism can bring about political change.
Sharp’s writings have reportedly inspired nonviolent opposition movements around the world, including ones that toppled corrupt regimes in Serbia, Ukraine, Georgia—and, more recently, Tunisia and Egypt. Sharp, who has not received any federal support since 1968, has defended his acceptance of Darpa funds. In the preface of his classic 1972 work The Politics of Nonviolent Action, he argued that “governments and defense departments—as well as other groups—should finance and conduct research into alternatives to violence in politics.” I couldn’t agree more.
That brings me back to the counter-terrorism ideas that I offered Centra in 2005. They included asking ex-terrorists for insights into how terrorists think; using artificial intelligence to predict attacks; creating a website where people could anonymously submit plans for terrorist attacks; and disseminating Sharp’s writings among populations prone to terrorism. The Sharp proposal was the only one I really thought would work, and of course that was the only one that Centra rejected. “Too political,” Debbie said. To my mind, the best possible use of military funds is to support research into how to make arms and armies obsolete.
Self-plagiarism alert: This post is an updated version of an essay originally published in The Chronicle of Higher Education in May 2011.