Constraining LLMs for Theorem Proving
A Neurosymbolic Approach to Guaranteed Autoformalization
The Project
The goal of this project is to advance autoformalization: the task of translating informal mathematics into formal statements that can be automatically checked by proof assistants such as Lean. Since even state-of-the-art models achieve only a modest success rate at generating syntactically correct Lean statements, this project will develop novel neural modules that produce correct Lean code by construction. These modules integrate seamlessly into existing LLM architectures and can be jointly trained with them, enhancing overall performance.
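To make the task concrete, here is a small illustration of what autoformalization produces: the informal claim "the sum of two even numbers is even" rendered as a Lean 4 statement and proof. This is only a sketch of the target output format, assuming Mathlib's `Even` definition (`Even n ↔ ∃ r, n = r + r`); it is not taken from the project itself.

```lean
import Mathlib.Algebra.Group.Even
import Mathlib.Tactic.Ring

-- Informal statement: "the sum of two even numbers is even."
-- A possible formalization over the integers:
theorem even_add_even (m n : ℤ) (hm : Even m) (hn : Even n) :
    Even (m + n) := by
  obtain ⟨a, ha⟩ := hm   -- ha : m = a + a
  obtain ⟨b, hb⟩ := hn   -- hb : n = b + b
  exact ⟨a + b, by rw [ha, hb]; ring⟩
```

A model's output only "counts" once the Lean kernel accepts it, which is what makes syntactic and type correctness by construction so valuable here.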
The Team
Eleonora Giunchiglia is an Assistant Professor at Imperial College London's I-X initiative and the Department of Electrical and Electronic Engineering. Her research focuses on enhancing the safety and trustworthiness of deep learning systems by integrating formal logical constraints into their design. Prior to joining Imperial College, she completed her PhD at the University of Oxford in 2022, followed by a post-doctoral research position at TU Wien.
Sam Adam-Day is an AI safety researcher. After completing his doctorate in mathematical set theory, he took up a postdoctoral position at the University of Oxford's Department of Computer Science, focusing on theoretical aspects of graph neural networks. He is currently pursuing an independent research project at the intersection of machine learning and game theory, and will soon take up a research scientist position at FAR.AI, a non-profit AI safety research organization.
Joshua Ong is a first-year PhD student at Imperial College London, supervised by Dr. Eleonora Giunchiglia. His research interests lie in understanding and improving the reasoning capabilities of large language models (LLMs) through neurosymbolic approaches. His goal is to ensure LLMs generate reasoning in a trustworthy, safe, and controllable manner.
Mihaela Cătălina Stoian is a final-year PhD student in the Department of Computer Science at the University of Oxford, working on developing neuro-symbolic methods that integrate background knowledge constraints into neural networks during training and enforce them at inference time. Her work has been recognised with several awards, including the Oxford PhD Runner-up Prize awarded by G-Research. Mihaela's broader vision is to bridge the gap between neuro-symbolic AI and real-world applications to build more robust and trustworthy systems. Previously, she worked on detecting reflective symmetries in 3D models at Five AI and completed her Master's in speech-to-text machine translation at the University of Edinburgh.
Luca Andolfi received both his Bachelor's (2020) and Master's (2022) degrees in Computer Science Engineering from Sapienza University of Rome, graduating with honors (110/110 cum laude). He is currently pursuing the Italian National PhD in Artificial Intelligence at Sapienza under the supervision of Professor Marco Console. His research primarily focuses on enhancing the expressiveness of knowledge representation methods and improving the reliability of neuro-symbolic machine learning models.