We built a neural theorem prover for Lean that learned to solve a variety of challenging high-school olympiad problems, including problems from the AMC12 and AIME competitions, as well as two problems adapted from the IMO.[1] The prover uses a language model to find proofs of formal statements. Each time we find a new proof, we use it as new training data, which improves the neural network and enables it to iteratively find solutions to harder and harder statements.
We achieved a new state-of-the-art (41.2% vs 29.3%) on the miniF2F benchmark, a challenging collection of high-school olympiad problems. Our approach, which we call statement curriculum learning, consists of manually collecting a set of statements of varying difficulty levels (without proof) where the hardest statements are similar to the benchmark we target. Initially our neural prover is weak and can only prove a few of them. We iteratively search for new proofs and re-train our neural network on the newly discovered proofs, and after 8 iterations, our prover ends up being vastly superior when tested on miniF2F.
Formal mathematics is an exciting domain to study because of (i) its richness, letting us prove arbitrary theorems that require reasoning, creativity and insight, and (ii) its similarity to games, where AI has been spectacularly successful, in that it has an automated way of determining whether a proof is successful (i.e., verified by the formal system). As demonstrated in the trivial example below, proving a formal statement requires generating a sequence of proof steps, each proof step consisting of a call to a tactic.[2] These tactics take mathematical terms as arguments, and each tactic call transforms the current goal into subgoals that are easier to prove, until nothing is left to prove.
Prove that if $|x - 2| = p$, where $x < 2$, then $x - p = 2 - 2p$.
theorem amc12_2000_p5 -- ← theorem name
  (x p : ℝ) -- ← the statement we want
  (h₀ : x < 2) --   to prove
  (h₁ : abs (x - 2) = p) :
  x - p = 2 - 2 * p :=
begin -- ← formal proof starts here
  -- This first tactic requires that the prover invent
  -- the term: `abs (x - 2) = -(x - 2)`.
  have h₂ : abs (x - 2) = -(x - 2), {
    apply abs_of_neg,
    linarith,
  },
  rw h₁ at h₂,
  -- At this stage the remaining goal to prove is
  -- `x - p = 2 - 2 * p`, knowing that `p = -(x - 2)`.
  linarith,
end
Since $x < 2$, $|x - 2| = -(x - 2)$. Using $p = |x - 2|$ we have $x = 2 - p$, and finally $x - p = (2 - p) - p = 2 - 2p$.
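The key invented term here is `abs (x - 2) = -(x - 2)`. To see the mechanics of that first step in isolation, here is a minimal, self-contained sketch (ours, assuming Lean 3 with mathlib): `abs_of_neg` reduces the goal to a sign condition, which `linarith` derives from the hypothesis.

import data.real.basic

-- `abs_of_neg : a < 0 → |a| = -a` reduces the goal to `x - 2 < 0`,
-- which `linarith` closes using the hypothesis `x < 2`.
example (x : ℝ) (h : x < 2) : abs (x - 2) = -(x - 2) :=
begin
  apply abs_of_neg,
  linarith,
end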
We observe that the capability to generate original mathematical terms required as arguments of tactics, which cannot be done without a neural language model, emerges from our training procedure. The proof below is an example of it: the proof step `use n + 1` (entirely generated by our models) proposes `n + 1` as a solution, and the rest of the formal proof relies on the `ring_exp` tactic to verify that it is indeed valid.
For all integers $n \geq 9$, prove that $((n + 2)! - (n + 1)!) / n!$ is a perfect square.
theorem amc12b_2020_p6
  (n : ℕ)
  (h₀ : 9 ≤ n) :
  ∃ x : ℕ, (x:ℝ)^2 =
    (nat.factorial (n + 2) - nat.factorial (n + 1))
    / nat.factorial n :=
begin
  -- The model directly proposes `n + 1` as solution.
  use n + 1,
  field_simp [nat.factorial_ne_zero, pow_succ'],
  ring_exp
end
Expanding the expression we get:
$$((n + 2)! - (n + 1)!) / n! = ((n + 2)(n + 1)n! - (n + 1)n!) / n!$$
Dividing by $n!$ we obtain:
$$(n + 2)(n + 1) - (n + 1)$$
Factoring out $(n + 1)$ we get $(n + 1)(n + 2 - 1) = (n + 1)^2$, which concludes the proof.
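As a quick sanity check (ours, not part of the generated proof), one can evaluate the expression at $n = 9$ in Lean and confirm it equals $(n + 1)^2 = 100$:

import data.nat.factorial.basic

-- (11! - 10!) / 9! = (39916800 - 3628800) / 362880 = 100 = (9 + 1)^2
#eval (nat.factorial 11 - nat.factorial 10) / nat.factorial 9 -- prints 100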
We also observe that our models and search procedure are capable of producing proofs that chain multiple non-trivial reasoning steps. In the proof below, the model starts with a contraposition, leading to the existential statement `∃ (x : ℝ), f x ≠ a * x + b`. It then generates a witness for it with `use (0 : ℝ)` and finishes the proof by leveraging the `norm_num` tactic.
Let $f(x) = Ax + B$ and $g(x) = Bx + A$, where $A \neq B$. If $f(g(x)) - g(f(x)) = B - A$, prove that $A + B = 0$.
theorem mathd_train_algebra_217
  (a b : ℝ)
  (f g : ℝ → ℝ)
  (h₀ : ∀ x, f x = a * x + b)
  (h₁ : ∀ x, g x = b * x + a)
  (h₂ : a ≠ b)
  (h₃ : ∀ x, f (g x) - g (f x) = b - a) :
  a + b = 0 :=
begin
  revert h₀ h₁ h₂ h₃,
  -- Initial contraposition.
  contrapose!,
  rintro ⟨h₀, ⟨h₁, h₂⟩⟩,
  -- The model proposes `0` as witness for the current
  -- goal, which is `∃ (x : ℝ), f x ≠ a * x + b`.
  use (0 : ℝ),
  simp only [sub_eq_iff_eq_add, h₀, mul_zero, zero_add],
  norm_num at h₀,
end
First we find that:
$$f(g(x)) = A(Bx + A) + B = ABx + A^2 + B$$
$$g(f(x)) = B(Ax + B) + A = ABx + B^2 + A$$
Now we plug this back into $f(g(x)) - g(f(x)) = B - A$ and get:
$$(ABx + A^2 + B) - (ABx + B^2 + A) = B - A$$
That is:
$$A^2 - B^2 + B - A = B - A$$
Hence:
$$A^2 - B^2 = (A - B)(A + B) = 0$$
Since we are given that $A \neq B$, necessarily $A + B = 0$.
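The contraposition pattern above can be seen in a much smaller setting. Here is a minimal sketch (ours, assuming Lean 3 with mathlib): `contrapose!` replaces the goal by its contrapositive and pushes negations inward, after which a direct argument finishes.

import tactic

-- `contrapose! h` turns `h : n ≠ 0 ⊢ 0 < n` into `h : n ≤ 0 ⊢ n = 0`
-- (the negated goal `¬ 0 < n` is automatically simplified to `n ≤ 0`).
example (n : ℕ) (h : n ≠ 0) : 0 < n :=
begin
  contrapose! h,
  exact le_antisymm h (nat.zero_le n),
end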
Our models, trained with statement curriculum learning, were able to close a variety of problems from training textbooks as well as the AMC12 and AIME competitions, and two problems adapted from the IMO. We present below three examples of such generated proofs.
Suppose $a$, $b$, $c$ are the sides of a triangle.
Prove that $a^2(b + c - a) + b^2(c + a - b) + c^2(a + b - c) \leq 3abc$.
theorem imo_1964_p2
  (a b c : ℝ)
  (h₀ : 0 < a ∧ 0 < b ∧ 0 < c)
  (h₁ : c < a + b)
  (h₂ : b < a + c)
  (h₃ : a < b + c) :
  a^2 * (b + c - a) + b^2 * (c + a - b) + c^2 * (a + b - c)
    ≤ 3 * a * b * c :=
begin
  -- Arguments to `nlinarith` are fully invented by our model.
  nlinarith [sq_nonneg (b - a),
             sq_nonneg (c - b),
             sq_nonneg (c - a)]
end
Rearrange to get $a(a - b)(a - c) + b(b - a)(b - c) + c(c - a)(c - b) \geq 0$, which is true by Schur's inequality.
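The `nlinarith` hints above supply exactly the squared terms needed for a sum-of-squares argument. A minimal sketch (ours, assuming Lean 3 with mathlib) of the same mechanism on a simpler inequality:

import data.real.basic

-- The hint `sq_nonneg (a - b)` provides `0 ≤ (a - b)^2`; expanding it
-- yields `2 * a * b ≤ a^2 + b^2`, which `nlinarith` combines linearly
-- with the negated goal to reach a contradiction.
example (a b : ℝ) : a * b ≤ (a^2 + b^2) / 2 :=
by nlinarith [sq_nonneg (a - b)]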
Prove that $a_2 + a_4 + a_6 + a_8 + \dots + a_{98} = 93$ if $a_1$, $a_2$, $a_3, \dots$ is an arithmetic progression with common difference $1$, and $a_1 + a_2 + a_3 + \dots + a_{98} = 137$.
theorem aime_1984_p1
  (u : ℕ → ℚ)
  (h₀ : ∀ n, u (n + 1) = u n + 1)
  (h₁ : ∑ k in finset.range 98, u k.succ = 137) :
  ∑ k in finset.range 49, u (2 * k.succ) = 93 :=
begin
  rw finset.sum_eq_multiset_sum,
  dsimp [finset.range] at h₁,
  simp [h₀],
  ring,
  norm_num at h₁,
  norm_num,
  apply eq_of_sub_eq_zero,
  { simp only [*, abs_of_pos, add_zero] at *, linarith },
end
For $n \geq 1$ we have $a_{2n-1} = a_{2n} - 1$. Substituting this into the equation given, we get:
$$(a_2 - 1) + a_2 + (a_4 - 1) + a_4 + \dots + (a_{98} - 1) + a_{98} = 137$$
But the left-hand side is simply $2(a_2 + a_4 + a_6 + \dots + a_{98}) - 49$, so:
$$a_2 + a_4 + a_6 + \dots + a_{98} = (137 + 49) / 2 = 93$$
Adapted from IMO Longlist 1990 Problem 77[3]
For $a, b, c$ reals, prove that $(a^2 + ab + b^2)(b^2 + bc + c^2)(c^2 + ca + a^2) \geq (ab + bc + ca)^3$.
theorem imo_longlist_1990_p77
  (a b c : ℝ) :
  (a * b + b * c + c * a)^3 ≤
    (a^2 + a * b + b^2) * (b^2 + b * c + c^2) *
    (c^2 + c * a + a^2) :=
begin
  -- The three initial steps use Cauchy–Schwarz to prove
  -- `(a * b + b * c) ^ 2 ≤ (a ^ 2 + b ^ 2) * (b ^ 2 + c ^ 2)`,
  -- which is required for the final call to `nlinarith`.
  let u : euclidean_space ℝ (fin 2) := ![a, b],
  let v : euclidean_space ℝ (fin 2) := ![b, c],
  have h₀ := real_inner_mul_inner_self_le u v,
  simp [u, v, fin.sum_univ_succ,
        ←pow_two, ←pow_two, le_of_lt, mul_assoc] at h₀,
  -- The model introduces another required cut (i.e., it invents
  -- the term `0 ≤ (c + a) * (c + a)` and proves it).
  have h₃ : 0 ≤ (c + a) * (c + a),
  { nlinarith, },
  have h₄ := sq_nonneg (a * b + b * c + c * a),
  simp [sq, h₀, h₃, mul_add, add_mul] at h₄ ⊢,
  nlinarith [sq_nonneg (b - a),
             sq_nonneg (c - b),
             sq_nonneg (a - c)]
end
After cancelling terms appearing on both sides, we are left to prove that:
$$3a^2b^2c^2 + \sum_{\text{sym}} a^3b^2c \leq \sum_{\text{cyc}} a^4bc + \sum_{\text{cyc}} (a^4b^2 + b^4c^2)$$
After multiplying both sides by $2$, we can rearrange the above inequality to:
$$0 \leq \sum_{\text{cyc}} (a^2b + a^2c - b^2c)^2$$
which clearly holds, giving the claim.
Formal mathematics involves two main challenges that make a naive application of reinforcement learning unlikely to succeed.
- (i) Infinite action space: not only does formal mathematics have an extremely large search space (like Go, for example), it also has an infinite action space. At each step of a proof search, the model must choose not from a well-behaved finite set of actions, but from a complex and infinite set of tactics involving exogenous mathematical terms that have to be generated: for example, a witness (an object used in steps such as "there exists an $x$ s.t. …") or a cut (the introduction and chaining of a lemma in the middle of a proof); see the sketch after this list.
- (ii) Lack of self-play: unlike in 2-player games, a prover is not playing against an opponent but against a set of statements to prove. When faced with a statement that is just too hard, there is no obvious reframing that lets the prover generate intermediary, easier statements to tackle first. This asymmetry prevents the naive application of the self-play algorithms that were successful in 2-player games.
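To make (i) concrete, here is a minimal sketch (ours, assuming Lean 3 with mathlib) of two tactic calls whose arguments are open-ended terms rather than picks from a finite menu: a witness for an existential, and a cut introduced with `have`.

import data.real.basic

example : ∃ x : ℝ, x^2 = 4 :=
begin
  -- The witness `2` has to be generated; nothing in the goal enumerates it.
  use (2 : ℝ),
  norm_num,
end

example (x : ℝ) (h : 1 < x) : 1 < x^2 :=
begin
  -- The cut `1 < x * x` is an invented intermediate statement.
  have h₂ : 1 < x * x, { nlinarith, },
  rwa [pow_two],
end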
In our work, we address the infinite action space problem by sampling actions from a language model as we search for a proof. Language models have the capability to generate the tactic calls as well as the original mathematical terms often required as arguments. Our basis for addressing the lack of self-play is the observation that the key role of self-play in 2-player games is to provide an unsupervised curriculum. Our methodology proposes to replace this unsupervised curriculum with an auxiliary set of problem statements (without requiring proofs) of varying difficulty. We empirically show that, when the difficulty of these auxiliary problems is sufficiently varied, our training procedure is able to solve a curriculum of increasingly difficult problems, eventually generalizing to the set of problems we care about.
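Schematically, one iteration of this loop alternates proof search and re-training (the notation below is introduced just for this sketch). With $\mathcal{S}$ the auxiliary set of statements, $D_k$ the proofs accumulated through iteration $k$, and $p_{\theta_k}$ the current language model:

$$D_{k+1} = D_k \cup \{(s, \pi) : s \in \mathcal{S},\ \pi \text{ a proof of } s \text{ found by search guided by } p_{\theta_k}\}$$

$$\theta_{k+1} = \arg\max_{\theta} \sum_{(s, \pi) \in D_{k+1}} \log p_{\theta}(\pi \mid s)$$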
While these results are extremely exciting, as they demonstrate that deep learning models are capable of non-trivial mathematical reasoning when interacting with a formal system, we are still very far from best-student performance on these competitions, only occasionally, rather than consistently, closing challenging olympiad problems. We hope nonetheless that our work will motivate research in this domain, in particular towards the IMO Grand Challenge, and that the statement curriculum learning methodology we propose will help accelerate progress in automated reasoning in general.
OpenAI