Towards Automated Mathematical Discovery

The Project

The project explores a multi-agent Conjecturer–Prover framework that leverages large language models (LLMs) to generate and prove novel mathematical conjectures within the Lean theorem prover. By formalizing and quantifying the notion of "interestingness" in terms of utility and novelty, the framework seeks to guide open-ended mathematical exploration and discovery. Through iterative refinement of both the Conjecturer and the Prover, the approach aims to advance automated theorem proving and contribute new mathematical insights.
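
As a rough illustration of how such a framework might be organized, the Python sketch below shows one Conjecturer–Prover round with a simple interestingness score combining novelty and utility. Every name, function, and weight here is a hypothetical placeholder under assumed design choices, not the project's actual implementation.

```python
# Hypothetical sketch of one Conjecturer-Prover round.
# All names, scores, and weights are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Conjecture:
    statement: str   # candidate theorem statement in Lean syntax
    novelty: float   # assumed score: how different it is from known results (0..1)
    utility: float   # assumed score: how useful it would be if proved (0..1)


def interestingness(c: Conjecture) -> float:
    """One simple way to combine novelty and utility into a single score."""
    return 0.5 * c.novelty + 0.5 * c.utility


def conjecturer(seed_theorems: list[str]) -> list[Conjecture]:
    """Stand-in for an LLM that proposes new Lean statements from known ones."""
    return [Conjecture(statement=f"-- variant of: {t}", novelty=0.6, utility=0.4)
            for t in seed_theorems]


def prover(c: Conjecture) -> bool:
    """Stand-in for an LLM-guided proof attempt; a real system would check it in Lean."""
    return False  # placeholder


def one_round(seed_theorems: list[str], threshold: float = 0.4) -> list[Conjecture]:
    """Generate conjectures, keep the interesting ones, and attempt proofs."""
    candidates = conjecturer(seed_theorems)
    interesting = [c for c in candidates if interestingness(c) >= threshold]
    proved = [c for c in interesting if prover(c)]
    return proved  # proved conjectures could feed back as seeds for the next round


if __name__ == "__main__":
    print(one_round(["Nat.add_comm"]))
```

In an iterative setup of this kind, proved conjectures would be added to the pool of known results, and feedback from failed proof attempts could be used to refine both the Conjecturer and the Prover.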

The Team

Aaron Courville is a professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal and Scientific Director of IVADO. He holds a PhD from the Robotics Institute at Carnegie Mellon University. An early contributor to deep learning, he is a founding member of Mila – Quebec Artificial Intelligence Institute and, together with Ian Goodfellow and Yoshua Bengio, co-authored the seminal textbook on deep learning. His current research focuses on the development of deep learning models and methods, with particular interests in reinforcement learning, multi-agent reinforcement learning, deep generative models and reasoning.

Navin Goyal received his BTech from IIT Bombay (1998) and his PhD from Rutgers University (2005). After postdocs at McGill University and Georgia Tech, he joined Microsoft Research India in 2009. His research began in theoretical computer science, where he worked on problems ranging from the design and analysis of algorithms to computational complexity, and gradually shifted towards learning theory. His theoretical work has sometimes drawn on techniques from various areas of mathematics (e.g., combinatorics, information theory, Fourier analysis), giving him an appreciation for the diversity and unity of mathematics and theoretical computer science. More recently, he has been doing empirical work on understanding neural networks from multiple angles, including their inductive biases, the interpretation of their inner workings, and their ability to solve mathematical problems. He is fascinated by how new discoveries are made by people, in mathematics and elsewhere.