Overview
In this series, we'll explore the foundations of cryptography from a theoretical point of view. We start with a non-technical motivational essay which aims to make it clear why we need precise mathematical definitions of cryptographic primitives and constructions, and why rigorous proofs of security are a must in the field of Modern Cryptography.
We then get technical by exploring Shannon's groundbreaking notion of perfect secrecy, which guarantees secrecy in the information-theoretic sense, i.e. against computationally unbounded adversaries. A central notion here is indistinguishability: we don't care what strategies an adversary may bring to bear, now or in the future. In a perfectly secret scheme, the adversary can't tell the difference between an encryption of a message and white noise, i.e. truly uniform random bits, and therefore gains no knowledge whatsoever about the encrypted messages. This is sometimes called a Cryptographic Turing Test.
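The classic perfectly secret scheme is the one-time pad. The sketch below shows the idea: XOR the message with a fresh, truly uniform key of the same length. Because the key is uniform and used only once, the resulting ciphertext is itself uniformly distributed, exactly the "white noise" an unbounded adversary cannot distinguish from anything.

```python
# One-time pad: the textbook example of a perfectly secret scheme.
# A truly uniform key as long as the message makes the ciphertext
# itself uniformly random, hence indistinguishable from white noise.
import secrets

def otp_encrypt(key: bytes, message: bytes) -> bytes:
    assert len(key) == len(message), "the key must be as long as the message"
    return bytes(k ^ m for k, m in zip(key, message))

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return otp_encrypt(key, ciphertext)

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # fresh uniform key, used exactly once
ciphertext = otp_encrypt(key, message)
assert otp_decrypt(key, ciphertext) == message
```

Note that reusing the key even once breaks the scheme: XORing two ciphertexts under the same key cancels the key and leaks the XOR of the two plaintexts.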
Unfortunately, a theorem by Shannon states that perfect secrecy is only achievable with keys that are at least as long as the messages. Since the one-time pad, which achieves this bound, isn't really practical, we're willing to sacrifice information-theoretic secrecy for a weaker notion of computational indistinguishability which permits shorter keys (e.g. 256-bit AES keys). In the notion of semantic security, we allow an efficient adversary to distinguish between the encryption of a plaintext and white noise, but only with negligible probability. An efficient adversary will be any probabilistic polynomial-time (ppt) algorithm: these are essentially all the real-world programs. Since we allow adversaries with polynomial runtime (in the length of the input), we are on the safe side of technological advances in computing speed, because superpolynomial runtimes are for all practical purposes impractical (provided the security parameter is big enough). We'll precisely define what "negligible" means.
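As a preview, the standard definition of "negligible" is a function that eventually drops below the inverse of every polynomial:

```latex
% A function $\mu$ is negligible if it vanishes faster than the
% inverse of every polynomial in the security parameter $n$:
\mu : \mathbb{N} \to \mathbb{R}_{\ge 0} \text{ is negligible}
\iff
\forall c > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n > N : \;
\mu(n) < n^{-c}.
% Example: $2^{-n}$ is negligible, whereas $n^{-1000}$ is not.
```

Intuitively, an event with negligible probability remains negligible even if a ppt adversary repeats its attack polynomially many times.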
Next in our exploration of the foundations of cryptography is the study of one-way functions (OWFs). These are functions that are easy to compute but hard to invert. OWFs are the basic building block of cryptography in general. Without them, we can't hope to achieve semantic security. Unfortunately, we don't know whether OWFs exist at all! For if we could prove that one-way functions existed, we would also have proven that \(\mathcal{P} \ne \mathcal{NP}\), the most famous open problem in Complexity Theory. If we are willing to make some assumptions about the intractability of some mathematical problems like factoring, the discrete logarithm problem, the subset-sum problem, and lattice problems[M07], we can propose and use candidate one-way functions. This leads us to distinguish between strong and weak OWFs, and we'll see that one can turn every weak OWF into a strong OWF using amplification.
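A minimal sketch of a candidate one-way function, assuming the hardness of the discrete logarithm problem: \(f(x) = g^x \bmod p\) is easy to compute with fast modular exponentiation, but recovering \(x\) from \(f(x)\) is believed to be intractable for suitably large primes. The parameters below are purely illustrative; real instantiations use carefully chosen groups with primes of 2048+ bits.

```python
# Candidate one-way function based on the presumed hardness of the
# discrete logarithm problem: f(x) = g^x mod p.
# Easy direction: computed in O(log x) multiplications via square-and-multiply.
# Hard direction: recovering x from f(x) is a discrete log instance.

p = 2**127 - 1  # a Mersenne prime -- illustrative only, far too small for security
g = 3           # an assumed generator; real deployments pick g carefully

def f(x: int) -> int:
    return pow(g, x, p)  # Python's built-in modular exponentiation

y = f(123456789)
# Computing y is instantaneous; inverting f (finding 123456789 given only y)
# is believed to require superpolynomial time for cryptographic parameters.
```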
(... add intro to hardcore bits)
All over Cryptography, we need randomness, a lot of randomness. Since truly random numbers are hard to generate fast enough, we need pseudorandom generators (PRGs). These are randomness expanders, which deterministically expand a short truly random seed into a long pseudorandom sequence. This sequence isn't truly random, but we want it to be "good enough" in the context of semantic security. We therefore define a PRG so that no ppt distinguisher can tell the difference between the output pseudorandom sequences and sequences of truly random bits (the Cryptographic Turing Test again). An equivalent definition is next-bit unpredictability, and we'll prove that both definitions are equivalent. The proof uses the very important hybrid technique to move from one probability distribution to another via intermediary distributions called hybrids. The natural question arises how to construct PRGs in the first place. We can do this using hardcore bits of one-way permutations, which are bijective OWFs. An interesting theorem is that the existence of PRGs implies the existence of OWFs, so basically, both are equivalent w.r.t. existence.
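The construction from hardcore bits can be sketched concretely. As a toy instance of this paradigm, the snippet below iterates squaring modulo a Blum integer \(N = pq\) (with \(p, q \equiv 3 \bmod 4\)) and outputs the least significant bit at each step, which is hardcore under the factoring assumption (this is the Blum-Blum-Shub generator). The modulus here is a toy value, far too small for any security.

```python
# Sketch: expanding a short seed into many pseudorandom bits by iterating
# a (candidate) one-way permutation and emitting a hardcore bit per step.
# Toy instance: Blum-Blum-Shub, i.e. squaring mod a Blum integer N = p*q
# with p, q = 3 (mod 4); the LSB is hardcore assuming factoring is hard.

def bbs_prg(seed: int, n_bits: int, N: int) -> list[int]:
    x = seed % N
    out = []
    for _ in range(n_bits):
        x = pow(x, 2, N)   # one step of the candidate one-way permutation
        out.append(x & 1)  # emit its hardcore bit (the least significant bit)
    return out

N = 383 * 503               # toy Blum integer: 383 = 503 = 3 (mod 4)
bits = bbs_prg(seed=101, n_bits=32, N=N)  # 32 bits from a short seed
```

The generator is deterministic: the same seed always yields the same sequence, which is exactly what lets two parties who share a short seed derive a long shared pseudorandom string.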
(... add intro to PRFs)
(... to be written, work in progress)
Covered Topics
- Why Theoretical Cryptography?
- Perfect Secrecy
- Probability Theory and Cryptography (work in progress)
- Semantic Security (not yet written)
- One-Way Functions
- Hardcore Bits (work in progress)
- Pseudorandom Generators (PRG) (... to be completed)
- Pseudorandom Functions (PRF) (work in progress)
- Trapdoor Functions (not yet written)
- Probabilistic Encryption (Goldwasser-Micali) (not yet written)
- Bit Commitment (not yet written)
- Learning With Errors (LWE) (work in progress)
- Lattice-Based Cryptography (work in progress)
- Cryptographic Multilinear Maps (work in progress)
- Probabilistically Checkable Proofs (PCPs) (not yet written)
- (... and more)
Sources
Literature
- [FOC] Oded Goldreich: Foundations of Cryptography, Volumes 1 and 2.
- [KL15] Jonathan Katz and Yehuda Lindell: Introduction to Modern Cryptography, 2nd Edition.
- [BS] Dan Boneh and Victor Shoup: A Graduate Course in Applied Cryptography (draft 0.4).
- [M07] Daniele Micciancio: Lattice Problems. TCC 2007 Special Event, Assumptions for Cryptography. (slides pdf)