Quantum proof cryptography

Digital security experts have their eyes fixed on the Y2Q, or Years to Quantum, clock.

It ticks down to the projected date when a quantum computer will be able to break an essential form of modern cryptography.

It is called public-key cryptography because of its ingenious method of sharing secret codes in public.

It ensures our credit card number is safe when we shop online, and that our phone's software update is coming from the phone company and not a hacker.

Whistleblowers use it to contact journalists, and businesses use it to send confidential files.

But a quantum computer would render the standard types of public-key cryptography useless.

Y2Q is an analogy to the Y2K crisis of the late 1990s.

20th-century software programmes had designated years with only two digits, so both 1900 and 2000 were designated as 00.

Programmes involving such a date were expected to malfunction when the new millennium arrived, causing potentially massive disruptions.

Businesses and governments, however, raced to fix the Y2K bug, and succeeded.


No one knows exactly when a quantum computer large enough to break cryptographic standards will be developed.

The clock's current date of 14th April 2030 is just a guess.

The National Institute of Standards and Technology, or NIST, initiated a public contest in 2016.

It solicited ideas for quantum-resistant cryptography.

They selected an algorithm called CRYSTALS-Kyber in the category of public-key encryption, and three in the category of digital signatures.

These are codes that can run on today's computers, but are so robust that not even quantum computers could break them.

NIST is working with researchers to standardise the CRYSTALS-Kyber algorithm, so programmers can start laying the foundations of quantum-proof cryptography.

Three of the four selected algorithms are based on the mathematics of lattices.

Experts are confident that these are very hard problems to solve, but no one can guarantee that a future breakthrough won't crack them open.


One of the earliest known forms of cryptography is a cipher that was used to substitute letters.

Julius Caesar used one where he replaced each letter with the one three positions away in the Roman alphabet.

To decrypt a message, one simply has to reverse the process.
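As a sketch, Caesar's shift-by-three scheme can be written in a few lines of Python; the function names are illustrative, and a 26-letter English alphabet stands in for the Roman one:

```python
# A minimal sketch of Caesar's cipher with a shift of 3 (illustrative only).
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def caesar_encrypt(text, shift=3):
    # Replace each letter with the one `shift` positions later, wrapping around.
    return "".join(
        ALPHABET[(ALPHABET.index(c) + shift) % 26] if c in ALPHABET else c
        for c in text.lower()
    )

def caesar_decrypt(text, shift=3):
    # Decryption simply reverses the process by shifting backwards.
    return caesar_encrypt(text, -shift)
```

For example, `caesar_encrypt("attack")` yields `"dwwdfn"`, and decrypting that returns the original word.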

There are endless variations of Caesar's substitution scheme.

Code breakers can usually solve complicated substitution schemes by comparing the frequency of different symbols with those of letters in common English texts.
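That frequency-counting attack can be sketched as follows (the helper name is hypothetical):

```python
from collections import Counter

def letter_frequencies(text):
    # Count how often each letter occurs, ignoring case and non-letters.
    letters = [c for c in text.lower() if c.isalpha()]
    return {c: count / len(letters) for c, count in Counter(letters).items()}
```

Comparing these proportions with known English letter frequencies, where 'e' is the most common at roughly 13 per cent, usually exposes which ciphertext symbol stands for which letter.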

The gold standard in modern cryptography, known as the Advanced Encryption Standard, or AES, dramatically expands on Caesar's approach.

It scrambles the message by repeatedly substituting the entries, 

and shuffling them like a deck of playing cards.

After enough shuffles and substitutions, it's very difficult to reconstruct the original version.

To decrypt the message, one would have to unscramble it by undoing each shuffle and substitution.

With a physical deck of cards, this is nearly impossible, 

since shuffling order is determined by imperceptibly slight movements.

But the computer shuffles the message according to a precise set of instructions, for example, "move the second entry into the fifth spot", that are easy to undo.

The computer simply follows the instructions in reverse.
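A sketch of this reversibility, assuming the shuffle is given as a simple list of positions (a hypothetical encoding, not AES's actual permutation):

```python
def shuffle(entries, instructions):
    # instructions[i] names the original position whose entry moves to spot i.
    return [entries[j] for j in instructions]

def unshuffle(entries, instructions):
    # Undo the shuffle by following the same instructions in reverse.
    original = [None] * len(entries)
    for i, j in enumerate(instructions):
        original[j] = entries[i]
    return original
```

Shuffling `['a', 'b', 'c', 'd']` with instructions `[2, 0, 3, 1]` gives `['c', 'a', 'd', 'b']`, and unshuffling restores the original order.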


Like Caesar’s cipher, AES has symmetrical procedures for encrypting and decrypting. 

It applies the same process forward and backward, 

just like twisting a key in opposite directions to lock and unlock a door.

Until the 1970s, symmetric-key cryptography was the only type of cryptography.

But it has major limitations.

The sender and receiver need to agree on the procedure for encryption and decryption before exchanging any messages, either in person or through a trusted separate mode of communication.

It's hard to imagine an alternative to symmetric cryptography without this constraint.

Ralph Merkle envisaged a system in which two people exchange messages entirely in the open, always assuming someone is listening.

Through this public correspondence, they manage to establish a scheme for coding and decoding, and use it to send secret messages.

Even if someone else were reading the messages, they would not be able to figure out the scheme.

Merkle’s proposal was rejected by experts as unrealistic.

Remarkably, several mathematical papers realised Merkle's vision a few years later.

Two of them, called Diffie-Hellman and Rivest-Shamir-Adleman, or RSA, are ubiquitous in modern communication as public-key cryptography systems.


When you check your bank account online, 

your computer and the bank’s server are exchanging messages.

You send your password, and the bank sends your balance.

While that information moves through the internet, someone else could read it.

The messages need to be encrypted.

Most messages are encrypted with symmetric cryptography, such as AES, 

which quickly and efficiently scrambles them.

But first your computer and the communicating server need to agree, 

on the specific scrambling procedure.

They can't simply write it down, because all their communications could be captured by an eavesdropper.

They need to use public-key cryptography.


We can use an analogy to understand how public-key cryptography works.

Alice and Bob own a bakery with a top secret brownie recipe.

It is so secret that even Alice and Bob do not know the full recipe.

They each have a special ingredient known only to the person who adds it.

To create the brownies, Alice and Bob trade off days starting the recipe.

Sometimes Alice mixes up basic ingredients with her secret one,

and sends the batter to Bob, who adds his secret ingredient and bakes it.

Other times Bob first combines the basic ingredients, 

with his secret ingredient and sends it to Alice, who mixes her secret ingredient, 

and bakes the brownies.

Alice and Bob always end up with the same brownies, but they never share the full, exact ingredients with anyone, including each other.

Even their conniving delivery driver, Eve, can’t figure out the recipe.

In cryptography, the two people exchanging messages are conventionally named Alice and Bob, and a potential spy is named Eve.

Eve can't deduce the secret ingredients when she transports them, because they are mixed up with all the basic ingredients, and it's impossible to separate them.


Computers work with numbers and math.

In real public-key cryptography, the goal is to end up with a shared secret number.

It is something like a temporary password that grants access to a private conversation.

The two computers can then continue to have an encrypted conversation,

using symmetric cryptography such as AES.

Different types of public key cryptography create and share the temporary passwords, 

in different ways.

Alice and Bob hid their brownie recipe from Eve by mixing the ingredients before transporting them.

Someone implementing public-key cryptography would instead use mathematical functions to blend secret numbers.

Functions are like machines that take in numbers, churn them up, and give out a new number.

The functions used in public key cryptography are very particular.

They need to mix up numbers easily, while making it hard to unmix.

Even if Eve sees the output of the function, she shouldn't be able to guess which secret numbers went in as input.

RSA cryptography is based on the multiplication function and its opposite, factoring.

Mixing numbers by multiplying them is relatively easy for a computer,

even if the numbers are very large.

Undoing multiplication, or factoring, is very hard if the numbers are large.

Factoring the number 21 gives seven and three.

Decrypting a password created with RSA would require factoring a large number.

The best methods involve sifting through many numbers to find a particular combination of them, which takes a computer a very long time.
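This asymmetry can be seen with a naive trial-division factorer (a toy sketch; real RSA moduli are hundreds of digits long and far beyond this method):

```python
def trial_factor(n):
    # Naive factoring: try divisors one by one. Multiplying two primes is
    # instant, but recovering them this way can take roughly sqrt(n) steps.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = 101, 103
n = p * q                  # multiplying: one quick step, n = 10403
factors = trial_factor(n)  # factoring: a search over many candidates
```

Here `trial_factor(10403)` returns `(101, 103)` after testing a hundred divisors; double the number of digits in `n` and the search time grows explosively.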

Rather than making cryptography schemes more complicated, scientists have moved to cryptography that is based on very, very simple things like integer factoring, which has been studied for thousands of years.


In 1994, Peter Shor discovered a way in which a quantum computer, 

could break any code encrypted with RSA or Diffie-Hellman.

He first thought about the discrete logarithm problem.

The logarithm function is the inverse of the exponential function: for example, finding x in the equation 2 to the power of x equals 16.

Usually it is easy to find the logarithm.

But the discrete logarithm is about computing the logarithm using an alternative form of arithmetic in which one counts in a circle, like on a clock.

RSA is based on factoring.

Diffie-Hellman is based on the discrete logarithm problem.
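A toy Diffie-Hellman exchange illustrates the idea, using tiny illustrative numbers (real systems use primes hundreds of digits long):

```python
# Public parameters, visible to everyone, including Eve.
p = 23   # the prime modulus: the size of the "clock"
g = 5    # the public base

# Each side keeps a private secret number.
alice_secret = 6
bob_secret = 15

# Mixing: modular exponentiation is easy to compute but hard to undo
# (undoing it is exactly the discrete logarithm problem).
alice_public = pow(g, alice_secret, p)   # sent openly to Bob
bob_public = pow(g, bob_secret, p)       # sent openly to Alice

# Each side combines the other's public value with its own secret.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
assert alice_shared == bob_shared   # both arrive at the same secret number
```

Eve sees `p`, `g`, and both public values, but recovering either secret from them requires solving the discrete logarithm; with numbers this tiny she could brute-force it, which is why real moduli are enormous.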

Computer scientists generally believe there is no quick way to find the discrete logarithm with a classical computer.

But Peter Shor found a way to do it on a quantum computer.

He then applied similar logic to show how to use a quantum computer, 

to quickly factor large numbers.

Together these solutions are known as Shor’s algorithm.

Shor wasn't imagining programming a real quantum computer.

He was simply doing math on chalkboards and paper.

At that time, quantum computers seemed like they were far in the future.

His algorithms have major implications for public-key cryptography.

A quantum computer could use them to break almost all cryptographic systems currently in use.


Classical computers run on long strings of 0s and 1s, called bits.

Quantum computers use qubits.

"Qubit" is a portmanteau of "quantum" and "bit".

Qubits can be in superposition, which is a strange combination of 0 and 1.

By hovering between the two states, qubits enable quantum computers, 

to perform certain tasks, much faster than classical computers.

But quantum computers are finicky.

The qubits need to maintain a superposition while the algorithm is running.

But they tend to collapse into a string of 0s and 1s.

Quantum computers look impressive, but they are still not very powerful.

Scientists have been able to run computations with only a modest number of qubits.

As of 2012, the largest number factored using Shor's algorithm was 21.

Many experts believe that a quantum computer large enough to break, 

RSA and Diffie-Hellman will be built in the next few decades.


Every type of public key cryptography is grounded in a hard math problem.

To secure secrets against a quantum future, scientists need to use problems so hard,

that even a quantum computer cannot solve them in a reasonable amount of time.

NIST sought nominations for public-key cryptographic algorithms that could be implemented on standard computers as alternatives to RSA and Diffie-Hellman.

They shortlisted 26 algorithms.

Cryptography systems aren't guaranteed to be secure, so scientists try to poke holes in them.

NIST eventually settled on a few finalist algorithms.

Most of them were based on lattice mathematics.

A lattice is a repeating array of dots.

One of the simplest looks like a pegboard, dots arranged in a square grid.

Mathematicians think of this lattice as being constructed from two foundational lines: a vertical one and a horizontal one of equal length.


Imagine drawing a point in the centre of a piece of paper.

Then draw chains of vertical or horizontal lines, all of equal length, extending out from the centre, and mark the points where the chains of lines end.

If you repeat these steps, the dots will form a square grid.

Different sets of initial lines create different lattices.

The two lines may not have equal lengths.

Instead of being perfectly horizontal or vertical, they could be tilted at an angle.

Drawing chains of such lines would still yield a repetitive pattern of dots, 

but with rows and columns offset, or of different heights.

Lattices are the backdrop of some deceptively tricky math problems.

Suppose we draw two lines on a piece of paper and say they are the building blocks of a lattice.

Then we draw a single dot somewhere on the page.

Can we find the lattice point that is closest to that dot?

We could probably use the foundational lines to start drawing the lattice points, and eventually find the closest one.

But that would be possible only because the lattice on the paper is two dimensional.

Our visual imaginations are generally limited to 3 dimensions.

But mathematicians can describe lattices with hundreds of dimensions.

In those it is very difficult to find the nearest lattice point.
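The contrast is easiest to see with the simple square grid, where the nearest lattice point can be found just by rounding each coordinate (a toy sketch; no such shortcut is known for the high-dimensional lattices used in cryptography):

```python
def nearest_square_lattice_point(x, y, spacing=1.0):
    # On a square grid, the closest lattice point is found by rounding
    # each coordinate to the nearest multiple of the grid spacing.
    return round(x / spacing) * spacing, round(y / spacing) * spacing
```

For example, `nearest_square_lattice_point(3.2, -1.7)` returns `(3.0, -2.0)`; in a lattice of hundreds of dimensions built from tilted, unequal foundational lines, nothing so simple works.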


Scientists use such massive lattices to build cryptographic systems.

For example, start with a thousand dimensional lattice.

Out of this sea of dots, select a single point.

The precise location of this point represents the secret message.

Then wiggle a little bit away from this point, floating off the lattice into the ambient space.

We can share the new location publicly without giving away the location of the secret point.

Finding the nearby lattice point is a very hard math problem.

Just like mixing the ingredients protects the brownie recipe, wiggling away from the secret point obscures its precise location.
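A two-dimensional toy sketch of that hiding step (illustrative only; real schemes use scrambled bases in hundreds or thousands of dimensions, where simple rounding fails):

```python
import random

spacing = 10
secret_point = (3 * spacing, 7 * spacing)   # the secret: a point on the grid

# Wiggle a little away from the lattice point into the ambient space.
noise = (random.uniform(-2, 2), random.uniform(-2, 2))
public_point = (secret_point[0] + noise[0], secret_point[1] + noise[1])

# Knowing the simple grid, rounding recovers the secret point; with a
# scrambled high-dimensional basis, no such shortcut is known.
recovered = (round(public_point[0] / spacing) * spacing,
             round(public_point[1] / spacing) * spacing)
assert recovered == secret_point
```

The `public_point` can be shared openly: as long as the wiggle is small relative to the grid spacing, only someone who can solve the nearest-lattice-point problem can recover the secret.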

Computer scientists have been studying such problems for decades, and are reasonably confident that they are very hard to solve.

But when designing a new algorithm, cryptographers need to balance security with many other factors, such as the amount of information the computers need to exchange, and the difficulty of the computation required to encrypt and decrypt messages.

In this respect, lattice-based cryptography excels.

Lattices fit into a sweet spot where everything is reasonable: nothing is too bad, nothing is too good.

It is sort of like the Goldilocks zone.


The problem is that no one can guarantee that lattice-based cryptography will always be secure.

To guard against a mathematical breakthrough solving the underlying problem and breaking the code, cryptographers need access to a variety of algorithm types.

Three of the four algorithms selected by NIST were lattice-based.

The only one selected for standardisation in the general public-key encryption category was CRYSTALS-Kyber.

The NIST contest also has a category for digital signature algorithms.

These provide guarantees about who sent a message and that it wasn't modified.

Encryption algorithms answer the question, 'Do I know that no one else will be reading this?'

Digital signatures answer the question, 'Can I trust these data have not been modified?'

Digital signature algorithms currently in use are also vulnerable to Shor’s algorithm.

NIST chose to standardise three digital signature algorithms, two of which are lattice-based.


Heavy reliance on a single type of math problem is risky.

No one can be certain that mathematicians won’t eventually crack it.

It also does not give flexibility to users, who might find that another type of cryptography suits their specific needs better.

For these reasons, NIST has extended the standardisation process in both categories, 

to study algorithms that are not lattice based.

NIST is now in the process of setting the standards, which describe in step by step detail,

how programmers should implement the algorithms.

Everything on the internet must have super-specific, super-boring standards, with every little detail spelled out.

Otherwise computers just can't talk to one another.

After the standards are set, every computer system needs to switch, 

to post-quantum cryptography.

Individual software companies need to upgrade their protocols.

Governments need to change their requirements.

Physical hardware devices need to be swapped out.

It will probably take many years, if not decades, to fully transition to post-quantum cryptography.