
Google DeepMind’s new AI systems can now solve complex math problems

by Rhiannon Williams
from MIT Technology Review

AI models can easily generate essays and other types of text. However, they're nowhere near as good at solving math problems, which tend to involve logical reasoning, something that's beyond the capabilities of most current AI systems.

But that may finally be changing. Google DeepMind says it has trained two specialized AI systems to solve complex math problems involving advanced reasoning. The systems, called AlphaProof and AlphaGeometry 2, worked together to successfully solve four out of six problems from this year's International Mathematical Olympiad (IMO), a prestigious competition for high school students. They won the equivalent of a silver medal.

It's the first time any AI system has ever achieved such a high success rate on these kinds of problems. "This is great progress in the field of machine learning and AI," says Pushmeet Kohli, vice president of research at Google DeepMind, who worked on the project. "No such system has been developed until now which could solve problems at this success rate with this level of generality."

There are a few reasons math problems that involve advanced reasoning are difficult for AI systems to solve. These types of problems often require forming and drawing on abstractions. They also involve complex hierarchical planning, as well as setting subgoals, backtracking, and trying new paths. All these are challenging for AI.

"It is often easier to train a model for mathematics if you have a way to check its answers (e.g., in a formal language), but there is comparatively less formal mathematics data online compared to free-form natural language (informal language)," says Katie Collins, a researcher at the University of Cambridge who specializes in math and AI but was not involved in the project.

Bridging this gap was Google DeepMind's goal in creating AlphaProof, a reinforcement-learning-based system that trains itself to prove mathematical statements in the formal programming language Lean. The key is a version of DeepMind's Gemini AI that's fine-tuned to automatically translate math problems phrased in natural, informal language into formal statements, which are easier for the AI to process. This created a large library of formal math problems with varying degrees of difficulty.
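To give a rough sense of what a formal statement looks like, here is a toy illustration (our own, not one of DeepMind's training problems): the informal claim "adding two whole numbers in either order gives the same result" written as a Lean theorem. A system like AlphaProof would be handed only the statement and asked to find a proof that Lean's checker accepts; here the proof is supplied by hand for illustration.

    -- Toy illustration: a formal Lean statement of the informal claim
    -- "for all natural numbers a and b, a + b = b + a".
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b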

Automating the process of translating data into formal language is a big step forward for the math community, says Wenda Li, a lecturer in hybrid AI at the University of Edinburgh, who peer-reviewed the research but was not involved in the project.

"We can have much greater confidence in the correctness of published results if they are able to formulate this proving system, and it can also become more collaborative," he adds.

The Gemini model works alongside AlphaZero, the reinforcement-learning model that Google DeepMind trained to master games such as Go and chess, to prove or disprove millions of mathematical problems. The more problems it has successfully solved, the better AlphaProof has become at tackling problems of increasing complexity.

Although AlphaProof was trained to tackle problems across a wide range of mathematical topics, AlphaGeometry 2, an improved version of a system that Google DeepMind announced in January, was optimized to tackle problems relating to movements of objects and equations involving angles, ratios, and distances. Because it was trained on significantly more synthetic data than its predecessor, it was able to take on much more challenging geometry questions.

To test the systems' capabilities, Google DeepMind researchers tasked them with solving the six problems given to humans competing in this year's IMO and proving that the answers were correct. AlphaProof solved two algebra problems and one number theory problem, one of which was the competition's hardest. AlphaGeometry 2 successfully solved a geometry question, but two questions on combinatorics (an area of math focused on counting and arranging objects) were left unsolved.

"Generally, AlphaProof performs much better on algebra and number theory than combinatorics," says Alex Davies, a research engineer on the AlphaProof team. "We are still working to understand why this is, which will hopefully lead us to improve the system."

Two renowned mathematicians, Tim Gowers and Joseph Myers, checked the systems' submissions. They awarded each of the four correct answers full marks (seven out of seven), giving the systems a total of 28 points out of a maximum of 42. A human participant earning this score would be awarded a silver medal and just miss out on gold, the threshold for which starts at 29 points.

"This is the first time any AI system has been able to achieve a medal-level performance on IMO questions. As a mathematician, I find it very impressive, and a significant jump from what was previously possible," Gowers said during a press conference.

Myers agreed that the systems' math answers represent a substantial advance over what AI could previously achieve. "It will be interesting to see how things scale and whether they can be made faster, and whether it can extend to other sorts of mathematics," he said.

Creating AI systems that can solve more challenging mathematics problems could pave the way for exciting human-AI collaborations, helping mathematicians to both solve and invent new kinds of problems, says Collins. This in turn could help us learn more about how we humans tackle math.

"There is still much we don't know about how humans solve complex mathematics problems," she says.
