Observations



Opinion, arguments & analyses from the editors of Scientific American

Not That Secure after All: Cryptography in a Connected World

The views expressed are those of the author and are not necessarily those of Scientific American.





[Image: two computers sharing data across a globe]

You’re not going to like hearing this: the arsenal of mental and physical resources out there right now could easily bring down our cybersecurity system, which protects everything from the trivial, such as e-mails, to the critical, such as the banking system. The only reason it hasn’t happened yet: the intent hasn’t been there.

It was on this quite grave note that the session "Keeping Secrets: Cryptography in a Connected World" ended June 4 at the World Science Festival. But the lively panelists, though often in disagreement with one another, seemed unanimous on this point. It was raised first by Brian Snow, who previously worked at the National Security Agency, where he created and managed its Secure Systems Design division. In other words, he would know.

The problem isn’t so much encryption itself: the algorithms and extremely laborious math problems that act as padlocks to protect data. There are, of course, some difficulties: it is hard to build a system that will remain secure in the future, because programmers must try to project how smart and resourceful future mathematicians and hackers might be.
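The "padlock" idea can be made concrete with a toy sketch. The numbers below are illustrative only, not a real cryptosystem: modular exponentiation is cheap to compute in one direction, but reversing it (the discrete logarithm, which underlies schemes such as Diffie–Hellman) is believed to be hard at real-world sizes.

```python
# A toy illustration of a one-way "padlock" (illustrative sizes only).

p = 2_147_483_647   # a prime modulus (2**31 - 1); real systems use far larger primes
g = 7               # a public base
secret = 1_234_567  # the private exponent: the "key"

# Forward direction: fast, even for huge exponents.
public = pow(g, secret, p)

# Reverse direction: with no shortcut, an attacker tries exponents one by
# one. Feasible at toy sizes, hopeless at cryptographic ones.
def brute_force_discrete_log(base, target, modulus, limit):
    value = 1
    for exponent in range(limit):
        if value == target:
            return exponent
        value = (value * base) % modulus
    return None

print(brute_force_discrete_log(g, pow(g, 20, p), p, 1000))  # prints 20
```

The asymmetry is the whole point: the defender pays a trivial cost to lock, while the attacker, lacking the key, faces a search that grows explosively with the key size.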

The main problem is what Snow and fellow panelist Orr Dunkelman, a cryptanalyst (someone who breaks ciphers and then analyzes them to see how secure they are), call the human factor. That is, to paraphrase Dunkelman, the flaw in ciphers designed to enforce cybersecurity is usually not in the algorithms or encryption systems but in their implementation by humans: we are the erratic and unpredictable weak point in most cybersecurity systems. An example came from journalist and BBC science producer Simon Singh: when he wrote his book on the history of encryption, he included a handful of mathematical encryption puzzles for readers to break. The last one, which should have been the hardest, was in fact ridiculously easy, because Singh had used the wrong cipher when writing it. Human factor.

But maybe that’s just fine. Tal Rabin, a researcher in cryptography at I.B.M., made the point that maybe we don’t want to be overly secure. Why would she want the same level of security for her e-mails as for a nuclear weapons system? People don’t want maximally secure systems for everything; that would come at too high a personal and monetary cost. And some people simply don’t care. Rabin and Dunkelman seemed to be fine with this, but Snow was visibly irked by this kind of cognitive dissonance: the carelessness, or inability, of people to understand cybersecurity and how it works (or doesn’t).

Snow’s only comfort seemed to be that in the military, systems are built, tested and retested to ensure that the cryptography used in the field is a trustworthy padlock. In the commercial sector, time and money are scarce, so what you get is by no means as secure. Take, for instance, the key fobs that automatically open the doors of new cars. These fobs rely on a radio transmission between the key and the car, but it’s not a secure transmission; in fact, Snow shared with us one way of getting around it and into any car that opens with a key fob.
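One classic weakness in such radio links (a hypothetical sketch, not any real car’s protocol and not necessarily the specific trick Snow described) is the replay attack: if the fob broadcasts the same unlock code every time, an eavesdropper who records it once can open the car forever. Rolling-code schemes defend against this by authenticating an ever-increasing counter, so a replayed code is stale on arrival:

```python
# Hypothetical rolling-code sketch: why a captured code can't be replayed.
import hmac
import hashlib

SHARED_KEY = b"fob-secret"  # assumed to be shared by the fob and the car

def fob_press(counter):
    """Each press authenticates a fresh counter value with a MAC."""
    code = hmac.new(SHARED_KEY, str(counter).encode(), hashlib.sha256).hexdigest()
    return code, counter

class Car:
    def __init__(self):
        self.last_counter = 0

    def try_unlock(self, code, counter):
        expected = hmac.new(SHARED_KEY, str(counter).encode(),
                            hashlib.sha256).hexdigest()
        if counter > self.last_counter and hmac.compare_digest(code, expected):
            self.last_counter = counter  # older codes are now rejected
            return True
        return False

car = Car()
code, n = fob_press(1)
print(car.try_unlock(code, n))  # True: a legitimate press
print(car.try_unlock(code, n))  # False: replaying the recorded code fails
```

A fob that skipped the counter and sent a fixed code would make the second call succeed too, which is exactly the cheap-versus-secure trade-off the panel was describing.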

We have come quite a long way from the 1940s and ’50s, when math was first used by cryptographers as a way to hide information. But the leaps made still don’t completely protect our information, which ranges from Facebook pages to government and military communications. Should we trust the security of cloud computing? Will online voting ever be a secure option? These questions lingered at the end of Saturday’s session.

Image credit: Pacific Northwest National Laboratory






Comments (2)

  1. jtdwyer 3:33 pm 06/6/2011

    As I understand, the trade-off with encryption is the computational resources required to encode/decode messages with known keys versus those necessary to decode or break the encoded message without a key.

    While some level of encryption may currently be practically impossible to break, as the cost, speed and capacity of available computers continues to advance, ever greater levels of encryption become necessary to prevent the unauthorized decoding of messages. As a result, Moore’s law also indirectly applies to required levels of encryption.

    The total cost of encryption is determined by what message traffic is considered critically secure and how much of that traffic is generated. The more critical information becomes available through Facebook, medical records, etc., the greater the total cost of encryption/decryption. Alternatively, the availability, access and exchange of critical information could be procedurally limited, reducing the exposure risk and total cost of security.

  2. bucketofsquid 9:59 am 06/16/2011

    The only completely secure system is a totally isolated system, with thorough physical searches of the humans interacting with it. Even if the feds run their own lines, someone can tap the line at some point.

