
Why does AI being good at math matter?

by Melissa Heikkilä, MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week the AI world was buzzing over a new paper in Nature from Google DeepMind, in which the lab managed to create an AI system that can solve complex geometry problems. Named AlphaGeometry, the system combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions, writes my colleague June Kim. You can read more about AlphaGeometry here.

This is the second time in recent months that the AI world has gotten excited about math. The rumor mill went into overdrive last November, when there were reports that the boardroom drama at OpenAI, which saw CEO Sam Altman temporarily ousted, was caused by a powerful new AI breakthrough. It was reported that the AI system in question was called Q* and could solve complex math calculations. (The company has not commented on Q*, and we still don't know if there was any link to the Altman ouster or not.) I unpacked the drama and hype in this story.

You don't need to be really into math to see why this stuff is potentially very exciting. Math is really, really hard for AI models. Complex math, such as geometry, requires sophisticated reasoning skills, and many AI researchers believe that the ability to crack it could herald more powerful and intelligent systems. Innovations like AlphaGeometry show that we are edging closer to machines with more human-like reasoning skills. This could allow us to build more powerful AI tools that could be used to help mathematicians solve equations and perhaps come up with better tutoring tools.

Work like this can help us use computers to reach better decisions and be more logical, says Conrad Wolfram of Wolfram Research. The company is behind WolframAlpha, an answer engine that can handle complex math questions. I caught up with him last week in Athens at EmTech Europe. (We're hosting another edition in London in April! Join us? I'll be there!)

But there's a catch. In order for us to reap the benefits of AI, humans need to adapt too, he says. We need to have a better understanding of how the technology works so we can approach problems in a way that computers can solve.

"As computers get better, humans need to adjust to this and know more, get more experience about whether that works, where it doesn't work, where we can trust it, or we can't trust it," Wolfram says.

Wolfram argues that as we enter the AI age with more powerful computers, humans need to adopt "computational thinking," which involves defining and understanding a problem and then breaking it down into pieces so that a computer can calculate the answer.

He compares this moment to the rise of mass literacy in the late 18th century, which put an end to the era when just the elite could read and write.

"The countries that did that first massively benefited for their industrial revolution ... Now we need a mass computational literacy, which is the equivalent of that."

Deeper Learning

How satellite images and AI could help fight spatial apartheid in South Africa

Raesetje Sefala grew up sharing a bedroom with her six siblings in a cramped township in the Limpopo province of South Africa. The township's inhabitants, predominantly Black people, had inadequate access to schools, health care, parks, and hospitals. But just a few miles away in Limpopo, white families lived in big, attractive houses, with easy access to all these things. The physical division of communities along economic and racial lines is just one damaging inheritance from South Africa's era of apartheid.

Fixing the problem using AI: Alongside computer scientists Nyalleng Moorosi and Timnit Gebru at the nonprofit Distributed AI Research Institute (DAIR), which Gebru set up in 2021, Sefala is deploying computer vision tools and satellite images to analyze the impacts of racial segregation in housing, with the ultimate hope that their work will help to reverse it. Read more from Abdullahi Tsanni.

Bits and Bytes

A new AI-based risk prediction system could help catch deadly pancreatic cancer cases earlier
The system outperformed current diagnostic standards. One day it could be used in a clinical setting to identify patients who might benefit from early screening or testing, helping catch the disease earlier and save lives. (MIT Technology Review)

Meta says it is developing open-source AGI
Et tu, Zuck? Meta is now an AGI company. In an Instagram Reels video, CEO Mark Zuckerberg announced a new long-term goal to build open-source "full general intelligence." The company is doing this by bringing its generative AI and AI research teams closer together, and building the next version of its Llama model and a massive computing infrastructure to support that. (Meta)

Read the full text of the AI Act
The EU reached a political agreement on the AI Act late last year. Negotiators are still finalizing technical details of the bill, and it still needs to go through a round of approvals before it enters into force. Euractiv's Luca Bertuzzi got hold of the nearly 900-page final text of the bill and a comparison with earlier drafts. Here is a simpler version of the bill.

Sharing deepfake nudes could soon become a federal crime in the US
The bipartisan Preventing Deepfakes of Intimate Images Act was introduced in the US last week. It could outlaw the nonconsensual sharing of digitally altered nude images. It was prompted by an incident at a New Jersey high school where teenage boys were sharing AI-generated images of their female classmates. (Wall Street Journal)

A "shocking" amount of the web is already AI-translated trash
The internet is already full of machine-translated garbage, particularly in languages spoken in Africa and the Global South, researchers at Amazon Web Services found. Over half the sentences on the web have been machine-translated into other languages. This could have severe consequences for the quality of data used to train future AI models. I wrote about this phenomenon all the way back in 2022. (Vice)
