Cross-Check
Critical views of science in the news

Should Scientists and Engineers Resist Taking Military Money?

The views expressed are those of the author and are not necessarily those of Scientific American.


In 2005, I became, briefly, a tool of the military-industrial complex. My service began when I received an email from Centra Technology, a defense contractor. Centra wanted my ideas on fighting terrorism, which it would pass on to the National Counterterrorism Center, a security agency overseen by the CIA. My first reaction was, Whaaa..? After Googling Centra and the Counterterrorism Center to confirm that they actually existed, I called the Centra contact, Debbie, and told her that she had confused me with another John Horgan, a psychologist and authority on terrorism. No, Debbie said: Centra wanted me; it was looking for advice from non-experts who could “think outside the box.”

The offer left me feeling ambivalent, to put it mildly. As a peacenik who loathed the Bush administration’s hawkish foreign policies, wouldn’t accepting this gig be hypocritical? On the other hand, the invitation was flattering (my government wanted my advice!) and the pay was good. So I agreed to do the job, which consisted of writing up a few ideas and emailing them to Centra. As long as I didn’t propose anything that violated my principles, I rationalized, what would be the harm?

Now that I’ve disclosed my ethical elasticity, I can delve into the debate over military funding of science and engineering. This long-smoldering issue has flared up over the past decade, not surprisingly, given the stagnation in non-military funding and the surge in defense spending. In recent years several professional societies, notably the American Psychological Association and the American Anthropological Association, have been racked by debates over whether members should consult for the armed forces or other defense agencies. My guess is that such disputes will become more common, as ethical ideals collide with economic realities.

Many professors at Stevens Institute of Technology, where I teach and run a lecture series, receive grants from security agencies, and our graduates often pursue defense-related careers. In my classes I have begun to encourage students considering jobs with military agencies or contractors to ponder these questions: Will your work really make the world safer or more dangerous? Will it inhibit conflict or provoke it, perhaps by triggering an arms race? I recently posed these same questions to engineers (most of whom were working or had worked in defense-related fields) enrolled in a Stevens continuing education program.

I have also brought scholars to Stevens to talk about the militarization of research. One was Peter W. Singer, a security analyst at the Brookings Institution, whose 2009 book Wired for War examined how research in robotics, artificial intelligence and related fields has transformed modern warfare. The U.S. now deploys more than 12,000 “unmanned ground vehicles” in Iraq and Afghanistan, Singer stated in Scientific American in 2010. These robots, which resemble miniature trucks or tanks, are equipped with cameras and other sensors, robotic arms and, in some cases, machine guns.

The U.S. military has also deployed more than 7,000 unmanned airborne vehicles, or drones. The best-known is the Predator, a 27-foot-long plane that flies as high as 26,000 feet and scans the earth with television cameras, including an infrared camera for night vision and radar that can see through clouds, fog and smoke. Since 2001, the Predator has been armed with missiles. In addition to land vehicles and drones, the Pentagon is developing unmanned motorboats and submarines that can navigate on or beneath the water to search for hostile boats, mines and other threats.

Remote-controlled robots can help save the lives of American soldiers and, in principle, civilians. Soldiers operating Predators and other armed drones (often from facilities in the U.S.) should be less susceptible to the panic, fury and confusion that can lead troops in a combat zone to commit lethal errors. But as I wrote last year, attacks by military drones have killed many civilians in Afghanistan and Pakistan. (See also this recent report by scholars at Stanford and NYU, who conclude that U.S. drone strikes in Pakistan “have facilitated recruitment to violent non-state armed groups, and motivated further violent attacks.”)

In Wired for War, Singer quoted an Army chaplain who feared that “as soldiers are removed from the horrors of war and see the enemy not as humans but as blips on a screen, there is a very real danger of losing the deterrent that such horrors provide.” (For more on Singer’s views, see my conversation with him.)

Military robots are now under development in more than 50 nations, and the next step could be autonomous robots, which would incorporate artificial intelligence that allows them to operate independently of humans. Advocates of autonomous robots assert that (again, in principle) they should be less likely than humans to commit mistakes. Singer worried that, given the complexities of combat and the limits of technology, robots will inevitably make poor decisions, just as human soldiers do. If a robot commits a war crime, Singer asked, who should be held accountable? Scientists, ethicists, politicians, military officials and others, Singer said, need to “start looking at where this technological revolution is taking both our weapons and our laws.”

Another speaker I brought to Stevens, the bioethicist Jonathan Moreno of the University of Pennsylvania, raised questions about the militarization of neuroscience. In his 2006 book Mind Wars, Moreno reported that the Pentagon is funding research on a wide variety of “neuroweapons” that can boost or degrade the capacities of combatants. Potential neuroweapons include transcranial magnetic stimulators, devices that stimulate the brain to help soldiers stay alert; gases that confuse or knock out enemies; and even brain-scanning technologies that can read prisoners’ minds.

Perhaps the most disturbing line of research examined by Moreno involves neural prostheses, electronic devices that communicate directly with neural tissue via implanted electrodes. The most successful neural prosthesis is the artificial cochlea, which restores hearing in deaf people by feeding signals from a microphone into the auditory nerve. Researchers are now trying to produce prostheses that can restore vision, motor control and even memory to people suffering from nervous-system damage.

Officials at the Defense Advanced Research Projects Agency, which supports neural-prosthesis research, say they want to help soldiers who have suffered injuries to their brains or spinal cords. But the Pentagon may also want to create bionic soldiers whose abilities are enhanced by neural implants. A Darpa official once acknowledged as much to me when I interviewed him for an article on the neural code. “Implanting electrodes into healthy people is not something we’re going to do anytime soon,” he said, “but 20 years ago, no one thought we’d put a laser in the eye [to improve vision]. This agency leaves the door open to what’s possible.”

Moreno is not a pacifist; the U.S. has the right to defend itself and to seek advantages over potential enemies, he asserted. Nor did Moreno view neuroweapons as intrinsically less moral than other weapons; chemical incapacitants and mind-reading technologies, for example, are more humane than bombs or torture. “The neurosciences and related fields may well lead to measures that both give us an advantage over our adversaries and are morally superior to other tactics,” he wrote in Mind Wars.

But Moreno believes that scientists and others should have a vigorous, open debate about the pros and cons of research in neuroweapons. Some neuroscientists have gone further, calling on their colleagues to sign a pledge “to Refuse to Participate in the Application of Neuroscience to Violations of Basic Human Rights or International Law.”

Like Singer and Moreno, I don’t view all military-funded activities as immoral, and I say this not merely to justify my own decision to take money from Centra. Defense-funded research has led to advances in civilian health care, transportation, communication and other industries that have improved our lives. My favorite example of well-spent Pentagon money was a 1968 Darpa grant to the political scientist Gene Sharp. That money helped Sharp research and write the first of a series of books on how nonviolent activism can bring about political change.

Sharp’s writings have reportedly inspired nonviolent opposition movements around the world, including ones that toppled corrupt regimes in Serbia, Ukraine and Georgia and, more recently, in Tunisia and Egypt. Sharp, who has not received any federal support since 1968, has defended his acceptance of Darpa funds. In the preface of his classic 1972 work The Politics of Nonviolent Action, he argued that “governments and defense departments—as well as other groups—should finance and conduct research into alternatives to violence in politics.” I couldn’t agree more.

That brings me back to the counter-terrorism ideas that I offered Centra in 2005. They included asking ex-terrorists for insights into how terrorists think; using artificial intelligence to predict attacks; creating a website where people could anonymously submit plans for terrorist attacks; and disseminating Sharp’s writings among populations prone to terrorism. The Sharp proposal was the only one I really thought would work, and of course that was the only one that Centra rejected. “Too political,” Debbie said. To my mind, the best possible use of military funds is to support research into how to make arms and armies obsolete.


Self-plagiarism alert: This post is an updated version of an essay originally published in The Chronicle of Higher Education in May 2011.

John Horgan About the Author: Every week, hockey-playing science writer John Horgan takes a puckish, provocative look at breaking science. A teacher at Stevens Institute of Technology, Horgan is the author of four books, including The End of Science (Addison Wesley, 1996) and The End of War (McSweeney's, 2012). Follow on Twitter @Horganism.



Comments (9)

  1. aek2013 10:09 am 11/12/2012

    Upstream thinking re: terrorism prevention would meld the work of Thomas Joiner (interpersonal theory of suicide: concepts of thwarted belongingness, perceived burdensomeness and learned capacity for self-violence), Kipling Williams (model of time-based reactions to ostracism/social isolation/social death) and C. Fred Alford (narrative of the lives of whistleblowers/knowledge as disaster).

  2. julianpenrod 3:11 pm 11/12/2012

    Among other things, for all that they espouse “evolution”, “engineers” and “scientists” should be aware of the potential for adaptation in even a one-sided arms race. It is often opined that “terrorists” get better and better in answer to new developments in military technology. This may cause this not to be printed, but, for those suffering an effective addiction to “science”, that is, saying enhancement or innovation to non-living, non-spiritually-endowed entities, machines, solves all problems and is the only thing that can solve any problem, this results only in a call for ever more development, followed by “terrorists” supposedly acclimating. Basically, obligingly cooperating in a New World Order authorized pattern of ceaseless repetition. Not once, incidentally, invoking the admonition that “insanity is doing the same thing over and over and expecting a necessarily different outcome”. If one wanted to indicate a precedent from nature, they could point out cooperating societies in nature. Once a species adopts a societal structure, they never seem to leave it; it benefits them. Of course, New World Order shills will prate the NWO doggerel that “Muslims only want to kill all Americans”, “anger against the U.S. comes from the envy of the shiftless, not from the U.S. having carried out atrocities and abominations against smaller nations in the past and continuing to carry out clandestine operations against other nations’ sovereignty”. Using “evolution” as a guide, there is no excuse for any “engineer” or “scientist” to espouse constant militarization. The only use this has ever seen is in enslaving less powerful nations or destroying international treaties to reform them more enfranchising of the wealthy. But militarization has never ended war; it only laid the groundwork for more militarization.
    But all security apparatus that have been constructed, apparently, have always been circumvented; the only person, it has been demonstrated, you can feel secure around is a friend.

  3. outsidethebox 9:08 pm 11/12/2012

    The enemies of America are to be opposed not only because they want to kill Americans (which they indeed want to do). They want to keep half of their population (the female half) as subhumans. They want to run their people as if they had an invisible imaginary friend in the sky telling them to kill. They are in no way the moral equals of their enemies. It would be useful to remember this.

  4. JPGumby 3:16 pm 11/13/2012

    To respond to the question in the title: unless you are a true pacifist, there is no moral rationale for why one should “resist military money”. As a scientist, one should always be on guard against letting the interests of the funder affect the results of the science, but that applies to any source of funding (including a non-military government source that might find results indicating the need for more regulation to be in its self-interest).

  5. Don Quixote 5:37 pm 11/13/2012

    An actual, relatively objective article; most excellent. I would love to explore a number of the points made by the author, but will limit myself to two.

    First: “…ponder these questions: Will your work really make the world safer or more dangerous? Will it inhibit conflict or provoke it, perhaps by triggering an arms race?” I would argue that there probably isn’t a single student in these classes who could realistically make that judgment call. Their experience would be so minimal and their understanding so limited that they would be only marginally better off than most “Occupy” protestors. They would be intrinsically a lot smarter (than Occupy folks), but would still lack perspective. It would still make an excellent academic argument, but not one that most could act upon with sound judgment. I’m not denigrating their intelligence or potential, just saying that, IMO, they lack a perspective that they can only get from experience and age.

    Second: I agree with Singer’s contention that removing people from the battlefield is a bad decision. We have unfortunately strayed far from what war is truly supposed to be: brutal and devastating. It is supposed to be these things so we consider it only as the absolute last resort, and when we do decide to conduct war, it should involve everyone. We can use machines, I have no problem with that, but the end result of war is to destroy the enemy’s (whoever that may be) ability to wage war. I just finished military coursework entitled “Law of Armed Conflict”. Now I know why there are so many wars and why they are so long: lawyers define the rules of war.

    Anyway, I do not think there is any real problem with receiving funding from defense sources. The majority of research, while designed to support military objectives, also has direct applications to other, more benign, endeavors. Take the money unless you truly do not support the primary objective of the research.

  6. Dr. Strangelove 8:30 pm 11/13/2012

    The reality is that most of our present technology came from military research: rockets, airplanes, nuclear reactors, nuclear fusion, computers, radar, etc. Unless other institutions can give the same level of support to research, the military will continue to attract scientists and engineers.

  7. electric38 12:16 am 11/14/2012

    Checking the research done on behalf of paralysis victims, it appears that electrodes can be implanted in the brain and use electrical or chemical signals (or both) to move a mouse or control a keyboard, which in turn can control a robot (or an army of robots).
    The science will continue on behalf of those with paralysis, to enable a higher quality of life, but industry and military interests will also make use of the end product.

  8. Quinn the Eskimo 8:47 pm 11/17/2012

    Geeze, guys and gals, GRANT MONEY is GRANT MONEY. Even scientists have to pay their bills.

    Cha – Ching!!!

  9. bucketofsquid 6:06 pm 11/19/2012

    Interesting article. Interesting follow-up comments as well. Only one conspiracy theorist comment too. Very nice!


