April 27, 2012
I was recently asked to comment for a BBC article whether quantum computing was “just around the corner”. Did I see quantum computers being here in 5 years? I don’t. But, being ever the optimist, I pointed out that the history of technology demonstrates that new technologies tend to emerge in combination with existing technologies. I felt that quantum effects would most probably be harnessed in computing in exactly this way. It’s only in retrospect that we tend to think of some area of technology as a “leap forward”. In reality, progress comprises a series of small steps; more evolution than revolution. And so I think it will be with quantum information technology.
It was only after we had done the BBC article that I realised that there is a good example of quantum information technology that is making its way from the lab bench into the world of everyday use: quantum cryptography. The term “quantum cryptography” means different things to different people, but here I’m referring specifically to Quantum Key Distribution (QKD). This uses quantum phenomena, combined with traditional cryptographic techniques, to solve a problem that has long plagued those attempting to send secret messages.
In so-called symmetric key cryptography, plaintext is encrypted using a known algorithm and a secret key. This secret key must be shared between the sender and the receiver, who are usually referred to as Alice and Bob (A and B). Claude Shannon, the father of modern Information Theory, proved that the most secure form of encryption is one where the secret key is the same length as the message and is used only once. This is known as the “one-time pad”.
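The one-time pad is simple enough to sketch in a few lines of Python. This is a toy illustration, not production cryptography; the message and key are hypothetical, and `secrets.token_bytes` stands in for a source of truly random key material:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the corresponding key byte."""
    assert len(key) == len(plaintext), "key must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # fresh random key, used only once
ciphertext = otp_encrypt(message, key)
recovered = otp_decrypt(ciphertext, key)
```

The scheme is only secure if the key really is random, as long as the message, kept secret, and never reused: reuse or a predictable key breaks it entirely, which is exactly why key distribution matters.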
What Alice and Bob are trying to do is prevent an eavesdropper (Eve) from intercepting and reading their secret message. Regardless of the algorithm in this scenario, the big issue is how the secret key is shared between Alice and Bob without Eve becoming privy to that key, and hence being able to decrypt their secret message. Despite the huge amount of work that has gone into producing ever more secure encryption algorithms, the weak link remains the transmission of the secret key.
In something like a military or diplomatic situation you can have secure couriers to take the secret key from Alice to Bob. Government organisations have whole departments and procedures for securely handling cryptographic keys. Assuming you can trust the courier, and the process by which s/he carries the key, you can be reasonably sure that the key has not been compromised. However, as the world has become more interconnected, cryptography has been in demand from everyday users, for everything from securing emails to ensuring that credit card details remain confidential.
This problem was addressed by a number of researchers in the 1970s, leading to the seminal paper in 1976 by Diffie and Hellman entitled “New Directions In Cryptography” and the work shortly afterwards by Rivest, Shamir and Adleman (famously known as RSA). This laid the foundation for public key encryption, also known as asymmetric encryption, which most people use today on the Internet, even if they don’t necessarily realise it.
In this, the key used in the encryption has both a private and a public element and, by the use of appropriate protocols, encrypted messages can be passed between Alice and Bob with them having to exchange only the public portion of the key. This is possible because there are some mathematical functions that are relatively easy to do but extremely difficult to undo. The classic example is multiplying large prime numbers and subsequently trying to factor the resulting large number to recover the original primes.
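The asymmetry can be felt even at toy scale. The primes below are hypothetical examples chosen just for illustration (real RSA moduli are hundreds of digits long): multiplying them is one operation, while recovering them by naive trial division takes on the order of a hundred thousand:

```python
def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

p, q = 104723, 104729        # two known primes (toy-sized)
n = p * q                    # the "easy" direction: one multiplication
found = trial_factor(n)      # the "hard" direction: ~100,000 divisions
```

At real key sizes the gap is astronomically larger, and even the best known classical factoring algorithms cannot close it; that gap is what public key cryptography leans on.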
However, there is always the danger that at some point someone will discover a hitherto unknown mathematical or computing technique that renders the problem upon which public key cryptography is based easy to solve. (Of course, there is nothing to stop messages being intercepted now and kept in case this becomes possible. There are some secrets that one might not want exposed even in 10, 20 or 30 years’ time.)
There is the added complication that public key encryption is a lot less efficient than symmetric encryption. Many implementations of what appear to be public key encryption actually use public key encryption for passing a secret key which is then used for the more efficient symmetric key encryption for the remainder of the communication.
So, the ideal solution would be to use symmetric key encryption, using a key that is the same length as the message being encrypted, and where the secret key can be exchanged such that Alice and Bob can guarantee for all time that the secret key has not been intercepted by Eve, or anyone else. That is the promise of Quantum Key Distribution.
In QKD, the secret key is transmitted using “qubits” instead of classical “bits” (0’s and 1’s). A qubit is encoded in a quantum-level property, such as the polarization of a photon (the particle that makes up light). For example, you might have vertically polarized light representing a 0 and horizontally polarized light representing a 1.
QKD relies upon the counterintuitive phenomenon of quantum indeterminacy. This says that a qubit can represent a 0 or a 1 or, crucially, both at the same time. If using vertically and horizontally polarized light (0 and 90 degrees), then, for a single photon, both states exist simultaneously right up to the point where it is measured, when it takes on one value or the other.
In the case of the polarization of light, you need to use a specific filter to determine which polarization was set by Alice. However, if you use a filter that is intended to measure a different orientation of polarized light, then the measurement you obtain is a random result. For example, if your filter measures whether the photon has diagonal polarization (45 and 135 degrees), then any photon polarized using the vertical/horizontal orientation will be randomly measured as having 45 or 135 degrees rather than 0 or 90 degrees. The reverse is also true.
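This measurement behaviour can be captured in a toy simulation (an assumed idealised model, not real optics): a matching filter returns Alice's bit deterministically, while a mismatched filter returns a coin flip.

```python
import random

def measure(bit: int, prep_basis: str, meas_basis: str) -> int:
    """Measure a photon prepared as `bit` in `prep_basis`.

    Bases: '+' for vertical/horizontal (0/90 degrees),
           'x' for diagonal (45/135 degrees).
    """
    if prep_basis == meas_basis:
        return bit                   # correct filter: deterministic result
    return random.randint(0, 1)      # wrong filter: random outcome

# Matching bases always agree; a mismatched basis gives 0 or 1 at random,
# so over many trials roughly half the results are "wrong".
flips = sum(measure(0, '+', 'x') for _ in range(10_000))
```

Running this, `flips` comes out near 5,000: the wrong filter destroys the information Alice encoded rather than revealing it.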
The clever part of QKD is the protocols that have been developed to make use of the fact that measuring the photon causes it to take on a definite state, and one which is dependent upon how you measure it.
Probably the most familiar protocol used in QKD is known as BB84, published in 1984 by the eponymous Bennett and Brassard. In BB84, Alice transmits the secret key using two sets of possible states. Let’s assume they are the two sets of polarized light described above. When Bob receives the photons he does not know which set of polarizations was used, so he randomly chooses a filter with which to make his measurement. If the filter he selects is from the incorrect set, he will obtain a random result. If Bob chooses his filter at random, this will happen 50% of the time.
Now Bob tells Alice he has received the photons and she tells him, potentially using a public channel, which set of polarizations she was using for which photon. From this they can tell which photons Bob will have measured correctly and hence which to discard. From those they retain, they construct a secret key.
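The exchange-then-discard step (often called "sifting") can be sketched as a toy simulation with no eavesdropper present. The model is the same assumed one as before: a matched basis yields Alice's bit, a mismatched one a random bit.

```python
import random

n = 1_000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice('+x') for _ in range(n)]
bob_bases   = [random.choice('+x') for _ in range(n)]

# Bob's raw measurements: correct when his basis matches Alice's, random otherwise.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Alice announces her bases; both sides discard the mismatched positions.
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
sifted_alice = [alice_bits[i] for i in keep]
sifted_bob   = [bob_bits[i] for i in keep]
```

With no Eve, `sifted_alice` and `sifted_bob` are identical, and roughly half the transmitted photons survive sifting, which is why the kept positions can serve directly as a shared secret key.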
If Eve has intercepted the photons and retransmitted them to Bob, s/he will have the same problem of having to choose a filter at random. Hence, s/he will introduce errors in the same way that Bob does but, because Eve and Bob have both chosen at random, Bob will find that he has more errors than he would expect. By comparing the number of errors in a sample of the photons that Alice and Bob would like to use for their secret key, they can determine whether it has been intercepted and, if necessary, reject it and start again.
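Extending the same toy model with an intercept-and-resend Eve makes the detection concrete: in the positions Alice and Bob keep after sifting, about a quarter of Bob's bits now disagree with Alice's (half the time Eve guesses the wrong basis, and half of those measurements come out wrong for Bob).

```python
import random

def measure(bit, prep_basis, meas_basis):
    """Matched basis returns the prepared bit; mismatched basis is random."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

trials = 20_000
kept = errors = 0
for _ in range(trials):
    a_bit, a_basis = random.randint(0, 1), random.choice('+x')
    e_basis = random.choice('+x')
    e_bit = measure(a_bit, a_basis, e_basis)   # Eve measures, then resends
    b_basis = random.choice('+x')
    b_bit = measure(e_bit, e_basis, b_basis)   # Bob measures Eve's photon
    if a_basis == b_basis:                     # this position survives sifting
        kept += 1
        errors += (b_bit != a_bit)

error_rate = errors / kept   # ~0.25 with Eve; ~0 without her
```

An error rate far above the channel's natural noise level is the signature Alice and Bob look for before trusting a key.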
Being sure that their secret key is known only to the two of them, Alice and Bob can use traditional symmetric key algorithms to encrypt the message that they want to pass. In the example of using light down a fibre optic cable this can all be done so quickly that the secret keys can, if implemented correctly, be as long as the messages that are being encrypted. Hence, Alice and Bob can have truly secure communications.
Of course, real world engineering means it’s not quite that simple, and errors are also introduced by the transmission medium (e.g. the fibre optic cable) and other pieces of the equipment being used. This is allowed for when setting the level of errors to be tolerated before discarding a key as having been compromised. Since BB84 was published, other protocols have been developed, including some employing different phenomena such as quantum entanglement.
And yes, QKD is still open to all sorts of attacks, particularly the so-called side channel attacks where it’s not the encryption that is attacked but some other weak link, usually the people. Plus, unless Alice and Bob do use keys that are the same length as the message, there is always the possibility that someone will find a weakness in the algorithm or be able to apply increasingly powerful cryptanalysis techniques to the encrypted messages.
But QKD, if implemented rigorously, holds the promise of fast, provably secure communications. It is for this reason that in the last ten years it has left the lab bench, and equipment has become commercially available to deploy QKD. There are limitations: currently the fibre optic based implementations can typically transmit for between 100 and 200 km, and the data rates are not as high as modern networks might expect. But, with more companies entering the market, and with some cities such as Geneva and Tokyo choosing to implement QKD networks, this journey from theory to commercial product does suggest that quantum phenomena will indeed be in everyday use in our computing infrastructure within the foreseeable future.