DeepMind AI crushes tough maths problems on par with top human solvers
Briefly

AlphaGeometry2, developed by Google DeepMind, has significantly advanced AI's ability to solve complex mathematical problems, now surpassing the performance of the average gold medallist in the International Mathematical Olympiad. The upgraded system combines specialized language models with neuro-symbolic reasoning to produce rigorous proofs of Euclidean geometry problems. Improvements include integration with Google's Gemini model and enhanced geometrical reasoning capabilities. Mathematicians such as Kevin Buzzard suggest that AI's proficiency could soon lead to perfect scores in the IMO, transforming perceptions of computer-assisted problem-solving in mathematics.
A year ago AlphaGeometry, an artificial-intelligence (AI) problem solver created by Google DeepMind, surprised the world by performing at the level of silver medallists in the International Mathematical Olympiad (IMO), a competition that sets tough maths problems for gifted high-school students.
Its upgraded successor, AlphaGeometry2, has now surpassed the level of the average gold medallist, marking a significant advance in AI's capacity for rigorous mathematical reasoning.
The team trained the language model to speak a formal mathematical language, enabling automatic checks for logical rigor and filtering out incoherent statements.
“I imagine it won't be long before computers are getting full marks on the IMO,” says mathematician Kevin Buzzard, reflecting on the future potential of AI in mathematics.
Read at Nature