Getting started with proving math theorems through reinforcement learning
An experiment at MIT's Brains, Minds, and Machines Lab
September 2019
In February 2019, I was looking for undergraduate research opportunities in mathematics at MIT and soon came across one that caught my eye.
Project Title: Creating an environment to challenge AI systems in mathematical reasoning
Project Description: Artificial neural networks are showing great promise at processing sensory information, finding patterns in data or playing games, so it seems natural to ask whether they can be applied to the uniquely human discipline of asking and answering mathematical questions. Overcoming the challenges of this problem will provide a clearer path towards other high cognitive tasks. The main objective of this project will be to create a mathematical environment to test AI agents at proving mathematical theorems, as well as designing a simple agent showcasing the effectiveness of the environment.
I went to talk to the MIT postdoctoral associate in charge of the project -- Dr. Andrzej Banburski. He explained that the Brains, Minds, and Machines lab, which focuses on studying the brain's intelligent behavior and replicating that behavior in machines, had yet to study how humans do mathematics, or how that ability might be replicated.
Since the lab had limited experience in the fields of theorem proving and reinforcement learning, we were starting from scratch, which meant we had to answer a lot of fundamental questions.
I, along with the other undergraduate researchers on the project, spent hours at the blackboard each week debating fundamental questions about creating a reinforcement learning theorem prover for mathematics:
What areas of math might be most suited to being solved through reinforcement learning?
Do we need to prove that the learning environment is well defined and consistent? For an agent playing chess, it is obvious that each action keeps us within a defined set of states. In math, should we prove that we stay within a finite set of states given certain actions? If not, is it acceptable for the set of states to become infinite?
What does a "successful agent" look like? Is it one that we only give a few proof techniques to, and it knows which to apply in which order? One that can come up with its own proof technique to solve a theorem? One that can propose its own theorems? One that can successfully apply complex proof techniques (e.g. the probabilistic method)? One that can solve a previously unsolved theorem in math?
How do we model the state of the learning environment?
What should the set of actions the agent takes be?
When should we reward the agent?
Do we want to train the agent on human theorem-proving data, or do we want it to learn by self-play?
We spent much of the first few weeks of the project proposing and tackling many more fun philosophical questions. But we decided that, before finalizing our answers to these questions, we should start implementing the framework so it would be clear what was and wasn't feasible.
Due to my longstanding infatuation with Mathematica, I persuaded the group to try out Mathematica's theorem proving library to take care of the formal proving aspects of the project (we already knew we wanted to use Python for the machine learning side).
Unfortunately, building up both the group theory infrastructure and the prover infrastructure in Mathematica proved tedious. After weeks, we still weren't making progress, and we considered switching to a language such as Coq that is better equipped for theorem proving.
Our supervisor decided to hold a competition: one of us would prove the cancellation property in Mathematica, and another would prove it in Coq. Whoever came up with a solution first won the competition and determined the fate of the project.
Coq won.
We used GamePad's Coq-to-Python theorem-proving interface to issue Coq commands from Python. Since none of us had previous experience with Coq, we spent a lot of late nights in the lab, eating pizza and having fun with type errors. After many such nights, one of us finally came up with an implementation of the group theory problem we wanted our agent to solve: the cancellation law (if a * b = a * c in a group, then b = c).
Before implementing a reinforcement learning algorithm, we had to decide on its states, actions, and rewards.
States
We decided that the state of the theorem proving environment most logically includes the theorem's assumptions, its variables, and the steps taken in the proof so far.
The more complicated question was -- how do we encapsulate all of those things in a single object that can serve as a Q-table key? We decided to store the state as a hash string that depends only on the steps taken in the proof so far, use that hash string for Q-table lookups, and hope for the best.
Since we didn't have a pre-determined set of states (in fact, the set of states is potentially infinite, since it includes every sequence of proof steps the agent could generate), we implemented a Q-table that expands as the agent encounters new states.
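To make this concrete, here is a minimal sketch of the kind of expanding, hash-keyed Q-table this implies, assuming hypothetical names (state_key, NUM_ACTIONS) rather than our actual identifiers:

    from collections import defaultdict
    import hashlib

    NUM_ACTIONS = 5  # the four proof tactics plus Undo (illustrative constant)

    def state_key(proof_steps):
        # Hash the sequence of proof steps taken so far into a Q-table key.
        return hashlib.md5("|".join(proof_steps).encode("utf-8")).hexdigest()

    # The Q-table grows lazily: a state seen for the first time is
    # initialized with a zero value for every action.
    q_table = defaultdict(lambda: [0.0] * NUM_ACTIONS)

    # Example lookup: Q-values for the state reached after two proof steps.
    values = q_table[state_key(["LeftMultiplyByInverse", "ApplyAssociativity"])]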
Actions
The only actions required to prove our test theorem, the cancellation law, were multiplying by inverses, applying associativity, applying the left inverse property of groups, and applying the identity property of groups. We also added an action that allows the agent to undo its previous action.
Each "action" in our code often consisted of multiple Coq tactics -- the left-multiply action, for example, chained several of them together.
Reward Shaping
In keeping with the minimalist style of the experiment, we decided to keep the agent's rewards as simple as possible:
+1 for finishing the theorem
-10 for attempting to apply a tactic that yields a Coq error (e.g. applying associativity when it is not applicable to the goal).
We were shocked when we started to train the agent and found that it decided the best course of action was to repeatedly apply the tactic "Undo."
Then we realized why. The action least likely to throw a Coq error was Undo -- it depended little on the current state of the goal, unlike actions such as ApplyAssociativity or ApplyIdentity.
Our agent was behaving exactly as we had incentivized it to: maximizing reward by choosing the least risky actions. So we amended the reward system so that the agent undoes its progress only when absolutely necessary:
+1 for finishing the theorem
-10 for attempting to apply a tactic that yields a Coq error
-1000 for applying Undo.
We decided to train the agent in an environment structured after the OpenAI Gym interface.
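The sketch below shows the kind of tabular Q-learning loop this leads to, assuming a Gym-style env whose step() returns (state, reward, done, info) and the hash-keyed q_table from earlier; the hyperparameter values are illustrative, not the ones we actually used.

    import random

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # illustrative hyperparameters
    NUM_EPISODES = 5

    def train(env, q_table, num_actions):
        for _ in range(NUM_EPISODES):
            state = env.reset()  # a fresh copy of the theorem to prove
            done = False
            while not done:
                # Epsilon-greedy choice over the tactic actions.
                if random.random() < EPSILON:
                    action = random.randrange(num_actions)
                else:
                    action = max(range(num_actions), key=lambda a: q_table[state][a])

                # The environment applies the corresponding Coq tactics and
                # returns the reward: +1 for finishing the proof, -10 for a
                # Coq error, -1000 for Undo.
                next_state, reward, done, _ = env.step(action)

                # Standard tabular Q-learning update.
                best_next = max(q_table[next_state])
                q_table[state][action] += ALPHA * (
                    reward + GAMMA * best_next - q_table[state][action]
                )
                state = next_state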
Indeed, through training, our agent eventually did converge on correct proofs.
We defined a "successful agent" in the simplest way possible: given only one theorem to prove, the agent should prove that theorem with fewer attempts after training than before training.
The results came back as expected! After training on only 5 episodes, the agent became significantly faster, and made significantly fewer attempts to apply error-yielding Coq tactics.
These results seem fairly obvious -- since the agent was given only one theorem to prove and a list of the necessary tactics, it could always brute-force the right sequence of steps. With reinforcement learning added, speeding up the search became a matter of memorizing one particular sequence, so the experiment was bound to yield these results. Still, to a team that had started the semester completely unsure of whether reinforcement learning algorithms could prove theorems at all, the obvious results were actually quite a relief.
And we were, in fact, surprised by a few of our results, including that:
The agent learned without using neural networks.
The Q-table functioned correctly despite the fact that the states were opaque hash strings, with no direct meaning in themselves beyond indexing a particular list of previous actions.
Most interesting of all, after just a few seconds with Coq, the agent seemed to generate cleaner Coq proofs than we had over the past few months.
Our human-generated proof:
Our agent-generated proof:
These results lend credence to the idea that reinforcement learning through self-play has the potential to surpass human performance. That is, by learning from self-play rather than from human training data, computers may become more "creative" by exploiting their own strengths rather than human strengths. The success of self-play has already been demonstrated in games by AlphaGo Zero and OpenAI Five, both of which defeated human competitors after training through self-play rather than on human data. It seems that the success of self-playing machines may also extend to mathematical theorem proving.