The Curious Wavefunction

Musings on chemistry and the history and philosophy of science

NSA and the problem of distinguishing good ideas from bad ones

The views expressed are those of the author and are not necessarily those of Scientific American.





George Dyson (Image: Wikipedia Commons)

Noted historian of technology George Dyson has a thought-provoking piece in Edge where he takes government surveillance to task, not just on legal or moral grounds but on basic technical ones. Dyson’s concern is simple: when you are trying to identify the nugget of a dangerous idea in the morass of ideas out there, you are as likely to snare creative, good ideas in your net as bad ones. This leads to a situation rife with false positives, where you routinely flag – and, if everything goes right with your program, try to suppress – the good ideas. The problem arises partly because you don’t need to, and in fact cannot, flag every idea as “dangerous” or “safe” with one hundred percent accuracy; all the surveillance apparatus needs is a rough idea.

“The ultimate goal of signals intelligence and analysis is to learn not only what is being said, and what is being done, but what is being thought. With the proliferation of search engines that directly track the links between individual human minds and the words, images, and ideas that both characterize and increasingly constitute their thoughts, this goal appears within reach at last. “But, how can the machine know what I think?” you ask. It does not need to know what you think—no more than one person ever really knows what another person thinks. A reasonable guess at what you are thinking is good enough.”

And when you are trying to get a rough idea, especially pertaining to someone’s complex thought processes, there’s obviously a much higher chance of making a mistake and failing the discrimination test.
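The false-positive problem can be made concrete with a back-of-the-envelope Bayes calculation. The numbers below are purely illustrative assumptions, not figures from Dyson’s essay or any real surveillance program, but they show how quickly the discrimination fails: even a flagging system that is 99 percent accurate, applied to a population where genuinely dangerous items are one in a million, produces flags that are overwhelmingly false alarms.

```python
# Illustrative base-rate calculation; all numbers are assumptions,
# not figures from Dyson's essay or any actual surveillance program.
prior = 1e-6           # assumed fraction of monitored items that are truly "dangerous"
sensitivity = 0.99     # assumed P(flagged | dangerous)
false_positive = 0.01  # assumed P(flagged | benign)

# Bayes' theorem: P(dangerous | flagged)
p_flagged = sensitivity * prior + false_positive * (1 - prior)
p_dangerous_given_flag = sensitivity * prior / p_flagged

print(f"P(dangerous | flagged) = {p_dangerous_given_flag:.6f}")
# Under these assumptions, fewer than 1 in 10,000 flags is a true positive.
```

In other words, under these assumed rates more than 99.99 percent of everything the system flags would be innocent, which is exactly the regime in which good ideas get swept up alongside bad ones.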

While the problem of separating the wheat from the chaff is encountered by every data analyst, what’s intriguing about Dyson’s objection is that it appeals to a very fundamental limitation on accomplishing this discrimination, one that cannot be overcome even by engaging the services of every supercomputer in the world. Alan Turing jump-started the field of modern computer science when he proved that no algorithm, however powerful, can determine whether an arbitrary string of symbols represents a provable statement (the so-called ‘Decision Problem’ articulated by David Hilbert). Turing gave the world the computational counterpart of Kurt Gödel’s Incompleteness Theorem and Heisenberg’s Uncertainty Principle: there is code whose behavior can be judged only by actually running it and not by any preexisting test. Similarly, Dyson contends that the only way to truly distinguish good ideas from bad ones is to let them play out in reality. Nobody is actually advocating that every potentially bad idea be allowed to play out, but the argument does underscore the fundamental problem with trying to pre-filter good ideas from bad ones. As he puts it:

“The Decision Problem, articulated by Göttingen’s David Hilbert, concerned the abstract mathematical question of whether there could ever be any systematic mechanical procedure to determine, in a finite number of steps, whether any given string of symbols represented a provable statement or not.

The answer was no. In modern computational terms (which just happened to be how, in an unexpected stroke of genius, Turing framed his argument) no matter how much digital horsepower you have at your disposal, there is no systematic way to determine, in advance, what every given string of code is going to do except to let the codes run, and find out. For any system complicated enough to include even simple arithmetic, no firewall that admits anything new can ever keep everything dangerous out…

There is one problem—and it is the Decision Problem once again. It will never be entirely possible to systematically distinguish truly dangerous ideas from good ones that appear suspicious, without trying them out. Any formal system that is granted (or assumes) the absolute power to protect itself against dangerous ideas will of necessity also be defensive against original and creative thoughts. And, for both human beings individually and for human society collectively, that will be our loss. This is the fatal flaw in the ideal of a security state.”
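Turing’s diagonal argument, which underpins the passage above, can be sketched in a few lines of Python. This is a toy illustration, not part of Dyson’s essay: given any claimed halting decider, you can construct a program that does the opposite of whatever the decider predicts about it, so no decider can be right about every program — the only general way to learn what code does is to run it.

```python
# Toy sketch of Turing's diagonal argument (illustrative only).
# Given any claimed halting decider, build a program that refutes it.

def make_paradox(claimed_halts):
    """Return a program that does the opposite of what the decider predicts about it."""
    def paradox():
        if claimed_halts(paradox):
            while True:      # the decider said "halts", so loop forever
                pass
        # the decider said "loops forever", so halt immediately
        return "halted"
    return paradox

# Hand it a decider that claims every program runs forever...
always_says_loops = lambda program: False
p = make_paradox(always_says_loops)

# ...and the constructed program halts at once, refuting the claim.
print(p())  # -> "halted"
```

Whatever answer a decider gives about its own paradoxical program, the program does the opposite, which is why no firewall that pre-screens code — or ideas — can ever be both complete and correct.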

In one sense this problem is not new, since governments and private corporations alike have been trying to identify and suppress what they deem to be dangerous ideas for centuries; it’s a tradition that goes back to the book-burnings of medieval times. But unlike a book, which you can actually read and evaluate, the evaluation of ideas based on snippets, indirect connections, Google links and metadata is tenuous at best. That is the fundamental barrier facing agencies that try to infer thoughts and actions from Google searches and Facebook profiles, and no amount of sophisticated computing power and data is likely to let them solve the general problem.

Ultimately, whether for government agencies, corporations or individuals, the temptation to fall prey to what writer Evgeny Morozov calls “technological solutionism” – the belief that key human problems will succumb to the latest technological advances – can be overpowering. But when you are dealing with people’s lives you need to be warier of technological solutionism than when you are dealing with household garbage disposal. There is not just a legal and ethical imperative but a purely scientific one to treat data with respect, and to disabuse yourself of the notion that you could completely understand it if only you threw enough manpower, computing power and resources at it.

At the end of his piece Dyson recounts a conversation he had with Herbert York, a powerful defense establishment figure who designed nuclear weapons, advised presidents and oversaw billions of dollars in defense and scientific funding. York cautions us to be wary not just of Eisenhower’s famed military-industrial complex but of the scientific-technological elite that has aligned itself with the defense establishment for the last fifty years. With the advent of massive amounts of data this alignment is honing itself into an entity that can have more power over our lives than ever before. At the same time, we have never been in greater need of the scientific and technological tools that will allow us to make sense of the sea of data that engulfs us. And that, as York says, is precisely the reason why we need to beware of it.

“York understood the workings of what Eisenhower termed the military-industrial complex better than anyone I ever met. “The Eisenhower farewell address is quite famous,” he explained to me over lunch. “Everyone remembers half of it, the half that says beware of the military-industrial complex. But they only remember a quarter of it. What he actually said was that we need a military-industrial complex, but precisely because we need it, beware of it. Now I’ve given you half of it. The other half: we need a scientific-technological elite. But precisely because we need a scientific-technological elite, beware of it. That’s the whole thing, all four parts: military-industrial complex; scientific-technological elite; we need it, but beware; we need it but beware. It’s a matrix of four.”

Ashutosh Jogalekar About the Author: Ashutosh (Ash) Jogalekar is a chemist interested in the history and philosophy of science. He considers science to be a seamless and all-encompassing part of the human experience. Follow on Twitter @curiouswavefn.







Comments (7)

  1. M Tucker 5:14 pm 07/26/2013

    Oh yes, absolutely, we need to beware the military-industrial complex and the scientific-technological elite. We must beware the decisions made by the keepers of the data viewing machines. We must understand the limitations of the data miners. They will never really be able to know what someone is thinking unless they actually do listen in or read a message. That is why we were so surprised when the demonstrators began to dismantle the Berlin Wall. No one saw that coming. Corona might have been extraordinarily productive but it could not tell us what the Soviets were thinking.

    The problem with the ‘beware’ admonition is that the public needs an advocate. Eisenhower was a strong advocate for national defense but he also knew the thinking of our military leaders at the time and the excesses of the Pentagon. He was able to provide a strong national defense, rein in the call from Congress to spend more to speed up our intelligence satellite program, and explain it to the American people. I don’t see many in government today like Ike. I see an unbridled enthusiasm to spend more for defense, espionage, and border security at the expense of public health, welfare and food programs.

    My favorite Eisenhower quote:
    “Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron.”

  2. curiouswavefunction 10:47 pm 07/26/2013

    Well said. I agree, that last quote by Eisenhower is amazing; can you imagine a Republican saying that today? For that matter, can you imagine a Democrat saying that today?

  3. Chryses 9:24 am 07/27/2013

    Ash,

    Mr. Dyson has a reasonable criticism. The degree of confidence one may have that some correlation between data will always be less than one.

    I also agree with you that not every potentially bad idea should be allowed to play out, and also that the organs of government charged with protecting the public from the realization of those bad ideas are unlikely to ever have sufficient computing power and data to enable them to solve the general problem of reducing the rate of false positive correlations to zero.

    Let us assume for the sake of argument that all three of these assumptions are true.

    It seems to me that these concurrent, commonly accepted assumptions can be used to describe a domain within which legitimate concerns of the State for the common defense of the citizens may lay, and outside of which the preference of the citizens for privacy should prevail.

  4. Chryses 9:40 am 07/27/2013

    Sorry – “confidence one may have that some correlation between data” should read “confidence one may have in some correlation between data”.

  5. marclevesque 10:12 am 07/27/2013

    Very interesting! and enjoyable.

  6. PunPui 12:21 pm 07/28/2013

    @3 Chryses,

    While the domain might be limited, the boundary is undefined, so how can both goals be simultaneously satisfied?

  7. Chryses 7:16 pm 07/31/2013

    PunPui,

    The problem is intractable, but one way to address this difficulty is by requiring that each case be reviewed and approved by a judge before being passed from the NSA to the FBI. This would reduce, if not eliminate, the frequency of low-probability cases.

