Unit 17 Security


Files and Resources

NOTE: The lesson slides provide sufficient technical detail, so this page mostly contains videos for each content area.

Specification

Asymmetric Keys and Encryption Methods

Because some of you are curious, here is a little more detail on how keys are generated.  THIS IS NOT EXAMINABLE!

And here is how AES works.  Again, this IS NOT examinable, merely background information for the curious.  AES 256 is considered incredibly secure.

Extra Technical Detail 

Cryptographic details

Quantum Cryptography 

RSA algorithm by hand

Video explaining why hashing a password alone is not going to stop a hacker

A side video which goes into a novel encryption method
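The "RSA algorithm by hand" arithmetic can be reproduced in a few lines of Python. This is a textbook-style sketch using tiny primes (p = 61, q = 53) chosen purely for illustration; real RSA keys use primes hundreds of digits long.

```python
# Textbook RSA with tiny primes -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: 2753 (modular inverse of e mod phi)

m = 65                    # the message, encoded as an integer < n
c = pow(m, e, n)          # encrypt with the public key (e, n)
print(c)                  # 2790
print(pow(c, d, n))       # 65 -- decrypting with the private key (d, n) recovers m
```

Note that encryption and decryption are the same operation (modular exponentiation); only the exponent differs.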

Digital Signatures and Digital Certificates

Detailed video (only first half is exam relevant) outlining public key infrastructure (PKI)

Public Key Infrastructure and Digital Certificates

The following is an extract from Microsoft's Windows Developer article on PKI.

Public-key cryptography (also called asymmetric-key cryptography) uses a key pair to encrypt and decrypt content. The key pair consists of one public and one private key that are mathematically related. An individual who intends to communicate securely with others can distribute the public key but must keep the private key secret. Content encrypted by using one of the keys can be decrypted by using the other. Assume, for example, that Bob wants to send a secure email message to Alice. This can be accomplished in the following manner:

This simplified example highlights at least one obvious concern Bob must have about the public key he used to encrypt the message. That is, he cannot know with certainty that the key he used for encryption actually belonged to Alice. It is possible that another party monitoring the communication channel between Bob and Alice substituted a different key.

The public key infrastructure concept has evolved to help address this problem and others. A public key infrastructure (PKI) consists of software and hardware elements that a trusted third party can use to establish the integrity and ownership of a public key. The trusted party, called a certification authority (CA), typically accomplishes this by issuing signed (encrypted) binary certificates that affirm the identity of the certificate subject and bind that identity to the public key contained in the certificate. The CA signs the certificate by using its private key. It issues the corresponding public key to all interested parties in a self-signed CA certificate. When a CA is used, the preceding example can be modified in the following manner:

A typical digital certificate might contain:

In summary, the certificate signing process enables Bob to verify that the public key was not tampered with or corrupted during transit. Before issuing a certificate, the CA hashes the contents, signs (encrypts) the hash by using its own private key, and includes the encrypted hash in the issued certificate. Bob verifies the certificate contents by decrypting the hash with the CA public key, performing a separate hash of the certificate contents, and comparing the two hashes. If they match, Bob can be reasonably certain that the certificate and the public key it contains have not been altered.
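The hash-then-sign process Bob relies on can be sketched with a toy RSA key pair. Everything here (the tiny primes, the subject name, the key value) is made up for illustration; real CAs use keys of 2048 bits or more.

```python
import hashlib

# Toy CA key pair built from tiny primes -- illustration only, not secure.
p, q = 61, 53
n, e = p * q, 17                    # CA public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))   # CA private exponent

def digest(data):
    # Hash the certificate contents. The value is reduced mod n only
    # because this toy modulus is far too small to hold a full SHA-256.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# The CA issues a certificate: subject identity + public key, then signs
# (encrypts) the hash of the contents with its private key.
cert_body = b"subject=Alice;public_key=xyz"
signature = pow(digest(cert_body), d, n)

# Bob verifies: decrypt the signature with the CA's public key and compare
# it with a fresh hash of the certificate contents.
assert pow(signature, e, n) == digest(cert_body)
# Any change to cert_body would change the digest, so the check would fail.
```

If the two hashes match, Bob knows the certificate is exactly what the CA signed.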

Digital Signature

As with all things crypto, assume Alice (the sender) wants to send a secure message to Bob (the recipient).

There are two problems to solve here: (1) confidentiality, so that only Bob can read the message, and (2) authenticity, so that Bob can be sure the message really came from Alice and was not altered.

Both of these problems can be solved with public key cryptography. For (1), Alice encrypts the message with Bob's public key. When Bob receives the message, he can securely decrypt it with his private key. So: encrypt with the recipient's public key, decrypt with the recipient's private key (this is the basic operation of public key crypto).

To solve (2), Alice also sends a digital signature along with the encrypted message. This is done as follows: she hashes the original message to produce a digest, then encrypts (signs) that digest with her own private key. The result is the digital signature.

When Bob receives the message + digital signature, he will decrypt the signature with Alice's public key to recover her digest, hash the received message himself, and compare the two digests. If they match, the message is authentic and unaltered.

As for digital certificates, notice that Alice relies on encrypting the original message with Bob's public key and Bob relies on Alice's public key to decrypt the signature. How can both of them be sure of each other's public key? That's what digital certificates are for: they allow a trusted third party to vouch that "Alice's public key is xyz".

A digest is a one-way hash of a particular file. It is not an encrypted copy of the file, but a unique fingerprint that can be used to determine whether the file has been changed in any way.
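A quick sketch with Python's standard hashlib shows the fingerprint property: the digest has a fixed length regardless of input size, and any change to the input produces a different digest. The message text is made up for the example.

```python
import hashlib

original = b"Transfer 100 pounds to account 12345"
tampered = b"Transfer 900 pounds to account 12345"

# The digest is a fixed-length fingerprint of the message contents.
d1 = hashlib.sha256(original).hexdigest()
d2 = hashlib.sha256(tampered).hexdigest()

print(len(d1))    # 64 hex characters (256 bits), regardless of input length
print(d1 == d2)   # False: even a one-character change alters the whole digest
```

This is why comparing digests is enough to detect tampering without comparing the full files.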

Digital Certificate Vs Digital Signature

This is an extract from RSA labs article on digital certificates

Digital signature: Suppose Alice wants to send a signed document or message to Bob. The first step is generally to apply a hash function to the message, creating what is called a message digest. The message digest is usually considerably shorter than the original message. In fact, the job of the hash function is to take a message of arbitrary length and shrink it down to a fixed length. To create a digital signature, one usually signs (encrypts) the message digest as opposed to the message itself.

Alice sends Bob the encrypted message digest and the message, which she may or may not encrypt. In order for Bob to authenticate the signature he must apply the same hash function as Alice to the message she sent him, decrypt the encrypted message digest using Alice's public key and compare the two. If the two are the same he has successfully authenticated the signature. If the two do not match there are a few possible explanations. Either someone is trying to impersonate Alice, the message itself has been altered since Alice signed it or an error occurred during transmission.

Digital certificate: In addition, someone could pretend to be Alice and sign documents with a key pair he claims is Alice's. To avoid scenarios such as this, there are digital documents called certificates that associate a person with a specific public key.

PKI explained another way

This is a partial extract taken from a security expert's post on StackExchange.

Public Key Cryptography designates the class of cryptographic algorithms that includes asymmetric encryption (and its cousin key exchange) and digital signatures. In these algorithms, there are two operations that correspond to each other (encrypt -> decrypt, or sign -> verify) with the characteristic that one of the operations can be done by everybody while the other is mathematically restricted to the owner of a specific secret. The public operation (encrypting a message, verifying a signature) uses a public parameter called a public key; the corresponding private operation (decrypting that which was encrypted, signing that which can be verified) uses a corresponding private parameter called a private key. The public and private key come from a common underlying mathematical object, and are called together a public/private key pair.

The magic of asymmetric cryptography is that while the public and private parts of a key pair correspond to each other, the public part can be made, indeed, public, and this does not reveal the private part. A private key can be computed from a public key only through a computation that is way too expensive to be envisioned with existing technology.

To make the story short, if you know the public key of some entity (a server, a human user...) then you can establish a secured data tunnel with that entity (e.g. with SSL/TLS in a connected context, or encrypting emails with S/MIME).

The problem, now, is one of key distribution. When you want to connect to a server called www.example.com, how do you make sure that the public key you are about to use really belongs to that server? By "belong", we mean that the corresponding private key is under control of that server (and nobody else).

Public Key Infrastructures are a solution for that problem. Basically:

In that sense, a PKI is a support system for usage of public key cryptography, and it itself uses public key cryptography.

The core concept of a PKI is that of a certificate. A certificate contains an identity (say, a server name) and a public key, which is purported to belong to the designated entity (that named server). The whole is signed by a Certification Authority. The CA is supposed to "make sure" in some way that the public key is really owned by the named entity, and then issues (i.e. signs) the certificate; the CA also has its own public/private key pair. That way, users (say, Web browsers) that see the certificate and know the CA public key can verify the signature on the certificate, thus gain confidence in the certificate contents, and that way learn the mapping between the designated entity (the server whose name is in the certificate) and its public key.

Take five minutes to grasp the fine details of that mechanism. A signature, by itself, does not make something trustworthy. When a message M is signed and the signature is successfully verified with public key Kp, then cryptography tells you that the message M is exactly as it was, down to the last bit, when the owner of the corresponding private key Ks computed that signature. This does not automatically tell you that the contents of M are true. What the certificate does is that it moves the key distribution problem: initially your problem was that of knowing the server's public key; now it is one of knowing the CA's public key, with the additional issue that you also have to trust that CA.

How can PKI help, then? The important point is about numbers. A given CA may issue certificates for millions of servers. Thus, by action of the CA, the key distribution problem has been modified in two ways:

The PKI becomes a true infrastructure when recursion is applied: the public keys of CA are themselves stored in certificates signed by some über-CA. This further reduces the number of keys that need to be known a priori by users; and this also increases the trust issue. Indeed, if CA2 signs a certificate for CA1, and CA1 signs a certificate for server S, then the end user who wants to validate that server S must trust CA2 for being honest, and competent, and also for somehow taking care not to issue a certificate to incompetent or dishonest CA. Here:

If you iterate the process you end up with a handful of root CA (called "trust anchors" in X.509 terminology) that are known a priori by end users (they are included in your OS / browser), and that are considered trustworthy at all meta-levels. I.e. we trust a root CA for properly identifying intermediate CA and for being able to verify their trustworthiness, including their ability to themselves delegate such trustworthiness.
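The chain-of-trust idea above can be sketched in code: the client knows only the root key a priori and validates each certificate against its issuer's key. This is a toy sketch using tiny RSA primes and made-up names; real chains use X.509 certificates and 2048+ bit keys.

```python
import hashlib

def make_keypair(p, q, e=17):
    # Toy RSA key pair from tiny primes -- illustration only, not secure.
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)

def sign(body, priv):
    n, d = priv
    h = int.from_bytes(hashlib.sha256(body).digest(), "big") % n
    return pow(h, d, n)   # sign the (toy-sized) hash with the private key

def verify(body, sig, pub):
    n, e = pub
    h = int.from_bytes(hashlib.sha256(body).digest(), "big") % n
    return pow(sig, e, n) == h

# Root CA -> intermediate CA -> server: each certificate is signed by the
# key above it in the chain. All names and keys here are invented.
root_pub, root_priv = make_keypair(61, 53)
inter_pub, inter_priv = make_keypair(89, 97)
server_pub, _ = make_keypair(107, 109)

inter_cert = b"CN=Example Intermediate CA;key=" + repr(inter_pub).encode()
server_cert = b"CN=www.example.com;key=" + repr(server_pub).encode()

inter_sig = sign(inter_cert, root_priv)     # issued by the root CA
server_sig = sign(server_cert, inter_priv)  # issued by the intermediate CA

# A browser trusts only root_pub a priori, then walks down the chain.
chain = [(inter_cert, inter_sig, root_pub),
         (server_cert, server_sig, inter_pub)]
print(all(verify(body, sig, issuer) for body, sig, issuer in chain))  # True
```

If any certificate in the chain is altered, or signed by a key the verifier does not (transitively) trust, the corresponding `verify` call fails and the whole chain is rejected.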

Whether the hundred or so root CA that Microsoft found fit to include by default in Windows are all that trustworthy is an open question. The whole PKI structure holds due to the following characteristics:

Note that, down the chain, the server S is verified to really own a specific public key, but nobody says that the server is honest. When you connect to https://www.wewillgraballyourmoney.com/ and see the iconic green padlock, the whole PKI guarantees you that you are really talking to that specific server; it does not tell you that sending them your credit card number would be a good idea.

Moreover, all of this is association between the server name as it appears in the target URL and a public key. This does not extend to the name intended by the user, as that name lives only in the user's brain. If the user wants to connect to www.paypal.com but really follows a URL to www.paaypaal.com, then the PKI and the browser will in no way be able to notice that the user really wanted to talk to PayPal, and not another server with a roughly similar (but not identical) name.

The main use case for a PKI is distributing public keys for lots of entities. In the case of Web browsers and SSL, the browser user must be able to check that the server he tries to talk to is indeed the one he believes it to be; this must work for hundreds of millions of servers, some of which having come to existence after the browser was written and deployed. Reducing that problem to knowing a hundred root CA keys makes it manageable, since one can indeed include a hundred public keys in a Web browser (that's a million times easier than including a hundred million public keys in a Web browser).

Client certificates are a SSL-specific feature. In all of the above we talked about a SSL client (Web browser) trying to authenticate a SSL server (Web server with HTTPS). SSL additionally supports the other direction: a SSL server who wants to make sure that it talks to a specific, named client. The same mechanism can be used, with certificates.

An important point to notice is that the server certificate and the client certificate live in different worlds. The server certificate is validated by the client. The client certificate is validated by the server. Both validations are independent of each other; they are performed by distinct entities, and may use distinct root CA.

The main reason why SSL servers have certificates is because clients cannot possibly know beforehand the public keys of all servers: there are too many of them, and new ones are created with every passing minute. On the other hand, when a server wants to authenticate a client, this is because that client is a registered user. Usually, servers know all their users, which is why most can use a simpler password-based authentication mechanism. SSL client certificates are thus rather rare in practice, because the main advantage of certificates (authenticating entities without prior knowledge) is not a feature that most servers want.

Useful links

Quantum Encryption

Quantum encryption takes classical encryption to the next level.  As computers become ever more powerful, the time needed to factor the large prime-based keys used in classical encryption is shrinking.

A quantum particle has many measurable properties, such as momentum and spin.  To date, the principal method of quantum key distribution (QKD) is through measuring particle polarisation (spin).

Think of spin as a direction in a two-dimensional plane, measured in degrees.  In QKD we use photons.  A photon starts in a superposition (all possible spin states), but when we pass it through one of four different filters: horizontal (—), vertical (|), 45° left (\) or 45° right (/), we collapse its state into one of its basis states (eigenstates).  Alice and Bob agree in advance that, say, vertical and left-diagonal readings encode a 1, and horizontal and right-diagonal readings encode a 0.

To transmit the secret key, Alice passes a long sequence of photons through the filters, choosing one of the four completely at random for each photon.  This generates a random sequence of spin states.  Alice only needs to record which filter she used for each photon.

At the receiving end, Bob passes each photon through either a diagonal filter (\ and /) or a rectilinear one (— and |).  Bob has no idea which filter Alice used, so he picks one of the two at random and records both the measurement and the filter used.  A key part of quantum physics is that a particle must be put through a filter to have its spin measured.  If Bob uses the wrong filter, he gets an unreliable reading, because the photon's spin is changed in an unpredictable way (there is a 50/50 chance of reading a 0 or a 1).

So how can Bob possibly determine the secret key?  He shares with Alice the sequence of filters he used (but NOT the readings).  Alice confirms which choices were correct and which were incorrect.  All the readings taken with the wrong filter are discarded, and the remaining bits are used as the secret key.

Whereas traditional methods are susceptible to man-in-the-middle attacks, quantum rules make this realistically impossible.  An interceptor, Eve, must herself pass each photon through a filter and then retransmit it to Bob, who is still using his own random selection.  Whenever Eve guesses the filter incorrectly, she disturbs the state of the photon and causes decoherence.  Decoherence is a loss of information, caused by an interaction with something outside of the system.  To remain undetected, she must pass her reading on to Bob, who would then also record an incorrect measurement.

So how can Alice and Bob detect whether Eve has eavesdropped?  They compare a subset of their bit strings.  If these do not match, the sequence was either subjected to interference or intercepted, and the key is discarded.
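The whole protocol, including the eavesdropper check, can be simulated in a few lines. This is a hedged sketch: '+' and 'x' stand for the rectilinear and diagonal filters, and the unpredictable collapse on a basis mismatch is modelled as a fair coin flip.

```python
import random

random.seed(1)  # fixed seed so the demo is reproducible
N = 2000

# Alice picks a random bit and a random basis for each photon:
# '+' = rectilinear filter (— or |), 'x' = diagonal filter (\ or /).
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]

def measure(bit, prep_basis, meas_basis):
    # Matching basis: correct reading. Wrong basis: the state collapses
    # unpredictably -- a 50/50 chance of reading 0 or 1.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(eve):
    bob_bases = [random.choice("+x") for _ in range(N)]
    bob_bits = []
    for bit, basis, bob_basis in zip(alice_bits, alice_bases, bob_bases):
        if eve:  # Eve measures in a random basis, then re-sends the photon
            eve_basis = random.choice("+x")
            bit, basis = measure(bit, basis, eve_basis), eve_basis
        bob_bits.append(measure(bit, basis, bob_basis))
    # Sifting: keep only the positions where Alice's and Bob's bases matched.
    kept = [i for i in range(N) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return errors / len(kept)   # error rate in the sifted key

print(run(eve=False))  # 0.0 -- no eavesdropper: the sifted bits agree exactly
print(run(eve=True))   # roughly 0.25 -- Eve disturbs about a quarter of the bits
```

Comparing a sample of the sifted bits therefore reveals Eve: with no interference the error rate is zero, while her measurements push it towards 25%.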

Benefits

Drawbacks

Additional Notes

The following is taken from a series of white papers written on quantum cryptography.

Quantum cryptography is a technology that uses quantum physics to secure the distribution of symmetric encryption keys. A more accurate name for it is quantum key distribution (QKD). It works by sending photons, which are “quantum particles” of light, across an optical link.

The principles of quantum physics stipulate that observation of a quantum state causes perturbation. The various QKD protocols are designed to ensure that any attempt by an eavesdropper to observe the transmitted photons will indeed perturb the transmission.

This perturbation will lead to transmission errors, which can be detected by the legitimate users. This is used to verify the security of the distributed keys.

QKD implementation requires interactions between the legitimate users. These interactions need to be authenticated. This can be achieved through various cryptographic means.

The end-result is that QKD can utilize an authenticated communication channel and transform it into a secure communication channel. In theory, QKD should be combined with One-Time Pad (OTP) encryption to achieve provable security. However, an OTP requires keys, which are as long as the data to be encrypted, and can be used only once.

This would impose strong limitations on the available bandwidth, due to the fact that the key distribution rate of QKD is typically 1’000 to 10’000 times lower than conventional optical communications.

Therefore, in practice, QKD is often combined with conventional symmetric encryption, such as AES, and used to frequently refresh short encryption keys. This is sufficient to provide quantum-safe security.
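That "refresh short encryption keys" step can be sketched as follows: the sifted bits from a QKD exchange (simulated here with a seeded random string, since no real quantum link is available) are condensed into a fixed 256-bit value that a conventional cipher such as AES-256 can use as its key.

```python
import hashlib
import random

random.seed(7)
# Stand-in for the sifted bit string produced by a QKD exchange.
sifted_bits = "".join(random.choice("01") for _ in range(512))

# Key derivation: condense the shared bits into a fixed 256-bit value
# suitable for use as an AES-256 key; repeat each time the key is refreshed.
aes_key = hashlib.sha256(sifted_bits.encode()).digest()
print(len(aes_key) * 8)  # 256 -- bits in the derived key
```

Because QKD delivers fresh shared bits continuously, this derivation can be rerun at short intervals, limiting how much data any single key ever protects.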

You can read more in the IDQ link below.

Encryption Protocols
