Lecture
Course material will be presented and discussed during lectures. Each week there are two opportunities to participate in lecture: Tuesday and Thursday mornings from 10:00am to 11:20am Pacific time, or Tuesday nights from 6:30pm to 9:20pm. You can attend whichever option is most convenient for you, but each week you should try to attend either both morning lectures or the night lecture. Each lecture will open with brief updates on the course.
Week 01 - Functional Programming and Proofs: Inductive Types and Recursive Functions
January 03 - January 05: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
At the core of the Coq proof assistant lies a pure functional programming language with little more than functions and inductive data types (trees). Functional programming differs from object-oriented or imperative programming by de-emphasizing mutation, prioritizing expressions rather than statements, using recursion to express repetition, and treating functions as first-class values. Because "in math, there are no side effects," the Coq proof assistant takes these ideas even further by eliminating I/O, mutation, exceptions, etc., and by requiring all functions to terminate! While these limitations are severe, they enable completely formal proofs about programs, allowing us to prove our software correct. The primary mechanism for reasoning about trees and recursive functions is induction, which will be our best friend for the rest of the quarter.
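To make this concrete, here is a small sketch (with our own illustrative names like my_nat, not necessarily the lecture's) of the style of Coq development this week introduces: an inductive type, a structurally recursive function, and a proof by induction.

```coq
(* Unary natural numbers as an inductive type: a number is either
   zero or the successor of another number. *)
Inductive my_nat : Type :=
| Zero : my_nat
| Succ : my_nat -> my_nat.

(* Every recursive call is on a structurally smaller argument, which
   is how Coq knows [add] terminates. *)
Fixpoint add (n m : my_nat) : my_nat :=
  match n with
  | Zero => m
  | Succ n' => Succ (add n' m)
  end.

(* A first proof by induction on the structure of n. *)
Lemma add_Zero_r : forall n, add n Zero = n.
Proof.
  induction n as [| n' IH]; simpl.
  - reflexivity.
  - rewrite IH. reflexivity.
Qed.
```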
Week 02 - More functional programming: Lists and Trees
January 10 - January 12: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
This week we build on our progress from last week to show how to represent data structures (lists, trees) as inductive types in Coq. Then we show that program syntax itself can also be represented as data, using an abstract syntax tree (AST). With these data structures defined as inductive types, we can write recursive functions that process them and do inductive proofs about those functions.
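As a taste, here is one way such a development might look (a sketch with our own names; lecture code may differ): binary trees as a polymorphic inductive type, a recursive function over them, and an inductive proof about that function.

```coq
(* Polymorphic binary trees as an inductive type. *)
Inductive tree (A : Type) : Type :=
| Leaf : tree A
| Node : tree A -> A -> tree A -> tree A.

Arguments Leaf {A}.
Arguments Node {A} _ _ _.

(* A recursive function that swaps every left and right subtree. *)
Fixpoint mirror {A} (t : tree A) : tree A :=
  match t with
  | Leaf => Leaf
  | Node l x r => Node (mirror r) x (mirror l)
  end.

(* An inductive proof: mirroring twice is the identity. *)
Lemma mirror_mirror : forall A (t : tree A), mirror (mirror t) = t.
Proof.
  induction t as [| l IHl x r IHr]; simpl.
  - reflexivity.
  - rewrite IHl, IHr. reflexivity.
Qed.
```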
Week 03 - Arith: Abstract Syntax; Semantics via Interpreters
January 17 - January 19: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
With the basics of Coq under our belt, we use the proof assistant as a laboratory to begin our study of programming languages. In some sense, programs are just data, namely, the string consisting of their source code. This string is called the "concrete syntax". We also study "abstract syntax," which is a tree derived from the string by parsing it. These notions of syntax have no inherent meaning, but are merely a way to write programs down. To give them meaning, we use semantics. The first kind of semantics we will study uses interpreters, which are recursive functions on the abstract syntax tree that execute a program to compute its final answer. We use these techniques to formalize and study a very simple language of arithmetic, with numerical constants, variables, addition, and multiplication. We also extend the language to include imperative features of assignment, if statements, and a restricted form of loops.
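Here is a sketch of what this formalization might look like (constructor and function names are our own): the abstract syntax as an inductive type and the interpreter as a recursive function over it.

```coq
Require Import String.
Open Scope string_scope.

(* Abstract syntax for the arithmetic language. *)
Inductive arith : Type :=
| Const (n : nat)
| Var (x : string)
| Plus (e1 e2 : arith)
| Times (e1 e2 : arith).

(* A valuation gives meaning to the free variables. *)
Definition valuation := string -> nat.

(* The interpreter maps syntax to its final answer. *)
Fixpoint interp (v : valuation) (e : arith) : nat :=
  match e with
  | Const n => n
  | Var x => v x
  | Plus e1 e2 => interp v e1 + interp v e2
  | Times e1 e2 => interp v e1 * interp v e2
  end.

(* "x * 2 + 1" written as an abstract syntax tree. *)
Example sample_interp (v : valuation) :
  interp v (Plus (Times (Var "x") (Const 2)) (Const 1)) = v "x" * 2 + 1.
Proof. reflexivity. Qed.
```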
Week 04 - Transition Systems: Inductive Invariants
January 24 - January 26: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
Last week we studied how to formalize programming languages in terms of their syntax (just inductive trees!) and interpreters (just recursive functions!). However, because all functions in Coq must be structurally recursive (which is how Coq guarantees they terminate), we were not able to model the semantics of languages that have features like unbounded while loops.
This week we'll introduce the crucial mechanism needed to model the semantics of arbitrary programming languages: formalizing them as transition systems. Transition systems are really just (potentially infinite) state machines -- we describe them in terms of their sets of possible states, which states the system can start in, and when one state can transition to another state. The behavior of a program (when modeled as a transition system) can then be reasoned about in terms of the sets of states its execution can ever reach. Since that set of reachable states may be infinite, we'll need techniques for reasoning about it inductively.
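Here is one minimal way to write these definitions down in Coq (a sketch; the course's actual definitions may differ in details):

```coq
(* A transition system over a type of states: which states are
   initial, and which single steps are possible. *)
Record trsys (state : Type) := {
  Init : state -> Prop;
  Step : state -> state -> Prop
}.

(* A state is reachable if an initial state can step to it in any
   finite number of steps. *)
Inductive reachable {state} (sys : trsys state) : state -> Prop :=
| reach_init : forall s,
    Init sys s ->
    reachable sys s
| reach_step : forall s1 s2,
    reachable sys s1 ->
    Step sys s1 s2 ->
    reachable sys s2.

(* An invariant is any property that holds of every reachable state. *)
Definition invariant {state} (sys : trsys state) (P : state -> Prop) : Prop :=
  forall s, reachable sys s -> P s.
```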
Trying to prove invariants (properties that are true for all reachable states of a transition system) by induction can get stuck in a couple of ways:
We might find a "counterexample to safety" (CTS) where we discover a reachable state that does not satisfy our property.
In this case the property is not an invariant at all! Perhaps our program has a bug, or perhaps we misstated the property.
We might find a "counterexample to induction" (CTI) where we discover a state S1 that satisfies our target property but can step to a state S2 where the property does not hold.
Even if we find a CTI, our target property may still be an invariant, because the state S1 may not actually be reachable!
When we get stuck trying to verify example transition systems, we'll study how we can find stronger properties (true of fewer states) that are inductive, as in the sketch below. As we move through the rest of the quarter, we'll continue to use these techniques in every language and proof methodology we study!
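Continuing the sketch above, here is a tiny worked example of strengthening (the counter system and the specific property are our own): a counter starts at 0 and adds 2 each step. The property "s <> 3" is an invariant but is not inductive, since the unreachable state 1 satisfies it yet steps to 3 (a CTI). The stronger property "s is even" is inductive, so induction over reachability goes through, and evenness implies the original property since 3 is odd.

```coq
Definition counter : trsys nat := {|
  Init := fun s => s = 0;
  Step := fun s1 s2 => s2 = 2 + s1
|}.

(* The strengthened, inductive property: the counter is always even. *)
Lemma counter_even : invariant counter (fun s => Nat.even s = true).
Proof.
  intros s Hreach.
  induction Hreach as [s0 Hinit | s1 s2 Hreach IH Hstep].
  - (* Initial states satisfy the property. *)
    simpl in Hinit. subst. reflexivity.
  - (* Every step preserves the property. *)
    simpl in Hstep. subst. simpl. exact IH.
Qed.
```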
Week 05 - Imp: Operational Semantics
January 31 - February 02: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
Transition systems are a general and flexible technique for modeling the behavior of programs, but manually specifying a transition system for each program we'd like to reason about is tedious and potentially error-prone. Instead, we can use operational semantics to automatically map a program P to its corresponding transition system directly from P's syntax. In particular, we'll see how to do this for the small "Imp" language used to model imperative programs with unbounded loops. In Coq, we'll focus on a fine-grained version of operational semantics called small-step operational semantics. In lecture we'll also discuss "pen and paper" versions of alternative styles like big-step operational semantics, which can make some proofs cleaner or "more natural" but doesn't directly support non-terminating executions.
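As a preview of the style (a sketch with our own names, covering only a toy fragment: skip, assignment of a constant, and sequencing; real Imp adds arithmetic expressions, conditionals, and while loops):

```coq
Require Import String.

(* Program states map variable names to values. *)
Definition imp_state := string -> nat.

Inductive cmd : Type :=
| Skip
| Assign (x : string) (n : nat)
| Seq (c1 c2 : cmd).

Definition update (s : imp_state) (x : string) (n : nat) : imp_state :=
  fun y => if String.eqb x y then n else s y.

(* Small-step operational semantics: a single-step relation between
   (command, state) configurations, giving a transition system for
   every program. *)
Inductive step : cmd * imp_state -> cmd * imp_state -> Prop :=
| StepAssign : forall s x n,
    step (Assign x n, s) (Skip, update s x n)
| StepSeqL : forall c1 c1' c2 s s',
    step (c1, s) (c1', s') ->
    step (Seq c1 c2, s) (Seq c1' c2, s')
| StepSeqSkip : forall c2 s,
    step (Seq Skip c2, s) (c2, s).
```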
Week 06 - Hoare Logic
February 07 - February 09: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
As we prove more and more invariants for Imp programs, a proof methodology emerges: we can break the overall proof of a bigger program into smaller proofs for each part of the program. It turns out that there's a very nice approach for writing such proofs in a system called Hoare Logic. What's even cooler is that we can prove Hoare Logic itself correct: any property we derive using its rules actually holds of the program. This process of proving our proof systems correct is called metatheory and will be a major theme this week and for the rest of the course!
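Continuing the Imp sketch from Week 05 above, one common way to set this up (a sketch; the course's system may differ) is to define the meaning of a Hoare triple semantically, via the reflexive-transitive closure of the step relation; each Hoare Logic rule then becomes a provable lemma, as in the rule of consequence below.

```coq
(* Assertions are predicates on states. *)
Definition assertion := imp_state -> Prop.

(* Zero or more small steps. *)
Inductive multistep : cmd * imp_state -> cmd * imp_state -> Prop :=
| MultiRefl : forall cfg,
    multistep cfg cfg
| MultiStep : forall cfg1 cfg2 cfg3,
    step cfg1 cfg2 ->
    multistep cfg2 cfg3 ->
    multistep cfg1 cfg3.

(* The triple {P} c {Q}: from any state satisfying P, if c runs to
   completion, the final state satisfies Q. *)
Definition hoare_triple (P : assertion) (c : cmd) (Q : assertion) : Prop :=
  forall s s', P s -> multistep (c, s) (Skip, s') -> Q s'.

(* The rule of consequence: strengthen the precondition, weaken the
   postcondition. *)
Lemma hoare_consequence : forall (P P' Q Q' : assertion) c,
  hoare_triple P c Q ->
  (forall s, P' s -> P s) ->
  (forall s, Q s -> Q' s) ->
  hoare_triple P' c Q'.
Proof.
  unfold hoare_triple. intros P P' Q Q' c Hc HP HQ s s' Hs Hrun.
  apply HQ. eapply Hc.
  - apply HP. exact Hs.
  - exact Hrun.
Qed.
```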
Week 07 - Lambda Calculus: Scope, Church Encodings
February 14 - February 16: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
Our little Imp language has loops but lacks functions. This week we will take a step back and think about how programming languages can implement functions. In fact, we will see that "all you need is functions". Lambda Calculus is a very old programming language that predates electronic computers themselves! In Lambda Calculus there are functions and variables, and that's it. Surprisingly, Alonzo Church and Alan Turing showed that the Lambda Calculus is just as computationally powerful as a Turing Machine. We will study the intricacies of variable names and scopes, as well as how to encode data such as numbers, pairs, and lists in a language that seems to lack them.
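Church encodings can even be previewed in Coq's own notation (a sketch; the c-prefixed names are ours): a number is "the function that applies f to x that many times," and a pair is "data waiting for a consumer."

```coq
(* The type of Church numerals. *)
Definition church := forall X : Type, (X -> X) -> X -> X.

Definition czero : church := fun X f x => x.
Definition cone  : church := fun X f x => f x.
Definition csucc (n : church) : church := fun X f x => f (n X f x).
Definition cplus (n m : church) : church := fun X f x => n X f (m X f x).

(* Church pairs. *)
Definition cprod (A B : Type) := forall X : Type, (A -> B -> X) -> X.
Definition cpair {A B} (a : A) (b : B) : cprod A B := fun X k => k a b.
Definition cfst {A B} (p : cprod A B) : A := p A (fun a _ => a).

(* Instantiating a Church numeral at nat recovers the number. *)
Example church_arith : cplus (csucc cone) cone nat S 0 = 3.
Proof. reflexivity. Qed.
```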
Week 08 - STLC: Type Safety
February 21 - February 23: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
Although we can encode data as functions in Lambda Calculus, often we prefer to add data directly to the language. For example, we can add booleans to the Lambda Calculus, but as soon as we do, we are faced with the possibility of a program trying to treat a boolean as a function or vice versa. Such programs will "get stuck" in the operational semantics, and "stuckness" models the error that would occur in a real implementation under these conditions. Programs that "follow the rules" by always treating booleans as booleans and functions as functions, however, will never encounter such error conditions at runtime. This week we will study type systems, which are a formalization of the rules that programs can follow to guarantee that they avoid stuckness. In particular, we will study the Simply Typed Lambda Calculus (STLC), which has booleans and functions. Its type system can be implemented so that programs can be type checked fully automatically. We will prove a metatheorem stating that the type system is sound, meaning that whenever a program passes the type checker, it is guaranteed not to encounter a "stuckness" error at runtime.
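A sketch of how STLC's syntax and typing judgment might be formalized (our own illustrative definitions, using strings for variables; the course's version may differ):

```coq
Require Import String.

(* Types: booleans and functions. *)
Inductive ty : Type :=
| TBool
| TArrow (T1 T2 : ty).

(* Terms. *)
Inductive tm : Type :=
| Var (x : string)
| Abs (x : string) (T : ty) (body : tm)
| App (e1 e2 : tm)
| Tru
| Fls.

(* Typing contexts map variables to types. *)
Definition context := string -> option ty.
Definition extend (G : context) (x : string) (T : ty) : context :=
  fun y => if String.eqb x y then Some T else G y.

(* The typing judgment "G |- e : T" as an inductive relation. *)
Inductive has_type : context -> tm -> ty -> Prop :=
| T_Var : forall G x T,
    G x = Some T ->
    has_type G (Var x) T
| T_Abs : forall G x T1 T2 body,
    has_type (extend G x T1) body T2 ->
    has_type G (Abs x T1 body) (TArrow T1 T2)
| T_App : forall G e1 e2 T1 T2,
    has_type G e1 (TArrow T1 T2) ->
    has_type G e2 T1 ->
    has_type G (App e1 e2) T2
| T_Tru : forall G, has_type G Tru TBool
| T_Fls : forall G, has_type G Fls TBool.

(* Soundness then decomposes into two metatheorems: progress (a
   well-typed closed term is a value or can step) and preservation
   (stepping preserves the type). *)
```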
Week 09 - System F: Polymorphism
February 28 - March 02: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
STLC solved the problem of how to guarantee that programs don't get stuck because they treat booleans as functions or vice versa. But sometimes a function works correctly no matter what kind of data it is passed. For example, the identity function, which takes one argument and immediately returns that argument, works for any type of input. But if we write the identity function down in STLC, the type system will force us to choose just one type for the argument. If we wanted to use the identity function elsewhere with a different argument type, we would have to copy-paste the code to give it a different type, which feels ridiculous. To fix this, we will introduce polymorphism, which captures the pattern of "this code works no matter what type this part is". The formal system we will study, System F, is small but surprisingly expressive. We will see that we can port all of our Church Encodings into System F. Amazingly, every program in System F is guaranteed to terminate (if it type checks). System F is sophisticated enough that formalizing it in the proof assistant is more trouble than it is worth (at least for us in this course), so we will study System F mostly with pen-and-paper techniques. We will also do quite a bit of programming in System F.
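Coq's core calculus subsumes System F, so we can at least preview the polymorphic identity in Coq's notation (a sketch with our own names):

```coq
(* One definition, usable at every type: the extra parameter X is a
   type, exactly as in System F's "big lambda". *)
Definition poly_id : forall X : Type, X -> X :=
  fun X x => x.

Example poly_id_nat  : poly_id nat 3 = 3 := eq_refl.
Example poly_id_bool : poly_id bool true = true := eq_refl.

(* The Church numerals from Week 07 inhabit the System F type
   forall X, (X -> X) -> X -> X. *)
```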
Week 10 - Compiler Correctness: Simulation Relations
March 07 - March 09: 🧑‍🏫 Slides · 🧑‍💻 Code · 📺 Videos · 📝 Notes
The programs our physical computing devices (e.g., CPUs) are able to run look quite different from the languages we have studied up to this point in the course. To address this gap, folks build compilers, which translate the higher-level code they write into lower-level versions that computers can run. This raises a question: how do we know a proof about a program will "survive" the compilation process? To guarantee that proofs about high-level source programs also apply to low-level executable machine code, we need to ensure that the compiler preserves equivalence, i.e., that anything the source program can do the machine code can also do and vice versa.
To formalize and prove equivalence we will need to slightly generalize our notion of transition systems to labeled transition systems so that we can talk about language-independent program behaviors -- after all, we need to show programs in two entirely different languages are equivalent! Armed with labeled transition systems, we will study how proofs of compiler correctness (i.e., semantic preservation) can be decomposed into a sequence of correctness proofs, one for each compiler pass. These proofs are built on bisimulation relations, which we can "stack" to prove the correctness of a compiler end-to-end. Finally, we will see a neat trick: in the special case of deterministic languages, we can derive a bisimulation relation (equivalence in both directions) from a simulation relation (equivalence in just one direction), significantly reducing the proof engineering effort required to verify a compiler pass.
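Here is a minimal sketch of what simulation might look like over labeled transition systems (our own definitions; the course's will be richer):

```coq
Section Simulation.
  (* Steps carry a label so the observable behaviors of programs in
     two different languages can be compared. *)
  Variables (label stateA stateB : Type).
  Variable stepA : stateA -> label -> stateA -> Prop.
  Variable stepB : stateB -> label -> stateB -> Prop.

  (* R is a simulation of system A by system B: any labeled step on
     the A side can be matched on the B side, re-establishing R. *)
  Definition simulation (R : stateA -> stateB -> Prop) : Prop :=
    forall a b, R a b ->
    forall l a', stepA a l a' ->
    exists b', stepB b l b' /\ R a' b'.

  (* A bisimulation also matches every B step with an A step; for
     deterministic systems, a simulation in one direction already
     determines a bisimulation. *)
End Simulation.
```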