My primary research is in formal verification. Verification refers to checking that a program is correct: that the program obeys certain desired properties. For example, we can consider safety properties, saying that the program never enters a bad state; or fairness properties, saying that the program eventually responds to every possible input. Most programs are checked through ad-hoc testing: running the software through different scenarios to try to find errors. While testing is critical, it cannot provide certainty. For any interesting program, there is quite literally not enough time in the world to check every scenario.
Formal verification aims to automatically prove that a program satisfies a correctness property. The goal is to take a program, define a specification of the desired properties, and have a computer check whether the program satisfies that specification. A wonderful result known as Rice's Theorem states that this is impossible in general: every nontrivial semantic property of programs is undecidable. Thus, no matter how clever we are, we will never be able to have a computer verify every program. But a simple thing such as impossibility need not stop us.
Much as engineers abstract physical systems into mathematical models, we abstract programs into a mathematical model. We can then have a computer check whether the model satisfies the given specification. If the model satisfies the specification, then we have a guarantee that the program is correct. If the model does not satisfy the specification, however, we cannot conclude that the program is wrong: the abstraction may have introduced behaviors the real program never exhibits. Even with this conservative approximation, most problems in formal verification are computationally intractable: there are pathological instances of the problem that cannot be solved in a reasonable time-span. Thus the fun lies in building practical implementations that can still solve real-world problems.
My specific field is automata-theoretic formal verification, in which we use flowcharts, called automata, as models. Automata are used both to model the program and to describe the specification we want the program to obey. I focus on Büchi automata, which describe properties of potentially infinite executions, like termination, progress, and fairness.
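To make this concrete, here is a minimal sketch in Python (all names are hypothetical) of a deterministic Büchi automaton for the property "the program responds infinitely often", together with an acceptance check for lasso-shaped infinite words of the form prefix·loopω. A Büchi automaton accepts a run that visits an accepting state infinitely often; for a deterministic automaton we can decide this by reading the loop repeatedly until a state repeats at a loop boundary. (Deterministic Büchi automata cannot express every ω-regular property, so this is an illustration rather than a general-purpose tool.)

```python
# A hypothetical deterministic Büchi automaton over the alphabet
# {'ok', 'err'} for the fairness-flavored property "'ok' occurs
# infinitely often". Transition function: (state, symbol) -> state.
DELTA = {
    ('seen_ok', 'ok'): 'seen_ok', ('seen_ok', 'err'): 'waiting',
    ('waiting', 'ok'): 'seen_ok', ('waiting', 'err'): 'waiting',
}
INIT = 'waiting'
ACCEPTING = {'seen_ok'}  # Büchi condition: visited infinitely often

def accepts_lasso(delta, init, accepting, prefix, loop):
    """Does the automaton accept the infinite word prefix . loop^omega?"""
    q = init
    for sym in prefix:          # read the finite prefix
        q = delta[(q, sym)]
    seen = {}    # loop-boundary state -> index of the pass starting there
    passes = []  # per pass through the loop: visited an accepting state?
    while q not in seen:
        seen[q] = len(passes)
        hit = q in accepting
        for sym in loop:
            q = delta[(q, sym)]
            hit = hit or q in accepting
        passes.append(hit)
    # The run ultimately cycles through the passes from index seen[q] on;
    # it is accepting iff one of those passes touches an accepting state.
    return any(passes[seen[q]:])
```

For example, `accepts_lasso(DELTA, INIT, ACCEPTING, ('err',), ('ok', 'err'))` holds, since `'ok'` recurs forever, while `accepts_lasso(DELTA, INIT, ACCEPTING, ('ok',), ('err',))` does not: after the prefix, `'ok'` never occurs again.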
Once we have automata modeling the program and the specification, we want to check whether the program satisfies that specification. To do so, we begin by creating the complement of the specification automaton. This complement describes exactly the prohibited behaviors: if the specification automaton requires that the program never enter a bad state, the complement describes every possible way the program could enter a bad state. Once we have the complement, we look for a behavior that is described by both the program automaton and the complementary automaton. Such a behavior is a counterexample to the correctness of the program. Because the complement describes every way the program could go bad, if we can't find a counterexample, we can be sure that the program is correct... at least with respect to this specific property. Looking for this counterexample is called checking containment, as we are checking whether the behaviors described by one automaton (the program) are contained in the behaviors described by another automaton (the specification).
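The complement–intersect–search recipe above can be sketched in a few lines of Python. This is a deliberately simplified illustration using finite-word deterministic automata in place of Büchi automata: complementing a complete deterministic automaton just swaps accepting and rejecting states, whereas complementing a Büchi automaton is far more involved. All names here are illustrative, not from any real tool.

```python
# Simplified containment check: does every word the program accepts
# also satisfy the specification? An automaton is a tuple
# (states, alphabet, delta, init, accepting), where delta is a total
# transition function dict[(state, symbol)] -> state.
from collections import deque
from itertools import product

def complement(aut):
    """Swap accepting and rejecting states of a complete DFA."""
    states, alphabet, delta, init, accepting = aut
    return (states, alphabet, delta, init, states - accepting)

def intersect(a, b):
    """Product automaton: accepts exactly the words both a and b accept."""
    sa, alphabet, da, ia, fa = a
    sb, _, db, ib, fb = b
    states = set(product(sa, sb))
    delta = {((p, q), sym): (da[(p, sym)], db[(q, sym)])
             for (p, q) in states for sym in alphabet}
    return (states, alphabet, delta, (ia, ib), set(product(fa, fb)))

def find_accepted_word(aut):
    """BFS for a reachable accepting state; returns a witness word or None."""
    states, alphabet, delta, init, accepting = aut
    frontier = deque([(init, ())])
    seen = {init}
    while frontier:
        q, word = frontier.popleft()
        if q in accepting:
            return word
        for sym in alphabet:
            nxt = delta[(q, sym)]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, word + (sym,)))
    return None

def counterexample(program, spec):
    """A word the program allows but the spec forbids; None if contained."""
    return find_accepted_word(intersect(program, complement(spec)))
```

For instance, if `program` accepts every word over {'a', 'b'} and `spec` accepts only words with an even number of b's, `counterexample` returns a shortest violating word, `('b',)`; against a specification that accepts everything, it returns `None`, certifying containment.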