Autoformalization with Large Language Models
Abstract
Autoformalization is the process of automatically translating natural language mathematics into formal specifications and proofs, a task closely related to software synthesis.
Here, we study the ability of large language models to produce correct formalizations of informally given statements. These models are trained with a (next-token-prediction) language modeling objective on a large web corpus containing both natural language and formal (including programming) languages. Using few-shot prompting, we find that large language models can correctly translate a significant portion of mathematical competition problems into formal specifications in Isabelle/HOL. We demonstrate the usefulness of this process by improving a previously introduced neural theorem prover for Isabelle through training on these autoformalized theorems. Our methodology achieves a new state of the art on the miniF2F theorem-proving benchmark, improving the proof rate from $29.6\%$ to $35.2\%$.
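To make the task concrete, here is an illustrative sketch (our own example, not drawn from the paper's evaluation set): autoformalization maps an informal statement such as "Show that the sum of two odd integers is even" to an Isabelle/HOL theorem statement like the following, where the theorem name is hypothetical.

    theorem sum_of_two_odds_is_even:
      fixes a b :: int
      assumes "odd a" and "odd b"
      shows "even (a + b)"

A few-shot prompt for this task would, under this sketch, concatenate several such (informal statement, formal statement) pairs and end with a new informal statement, letting the model complete the corresponding formal specification.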