
Nobody knows how AI works

by Melissa Heikkilä, from MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I've been experimenting with using AI assistants in my day-to-day work. The biggest obstacle to their being useful is that they often get things blatantly wrong. In one case, I used an AI transcription platform while interviewing someone about a physical disability, only for the AI summary to insist the conversation was about autism. It's an example of AI's "hallucination" problem, where large language models simply make things up.

Recently we've seen some AI failures on a far bigger scale. In the latest (hilarious) gaffe, Google's Gemini refused to generate images of white people, especially white men. Instead, users were able to generate images of Black popes and female Nazi soldiers. Google had been trying to get the outputs of its model to be less biased, but this backfired, and the tech company soon found itself in the middle of the US culture wars, with conservative critics and Elon Musk accusing it of having a "woke" bias and not representing history accurately. Google apologized and paused the feature.

In another now-famous incident, Microsoft's Bing chat told a New York Times reporter to leave his wife. And customer service chatbots keep getting their companies in all sorts of trouble. For example, Air Canada was recently forced to give a customer a refund in compliance with a policy its customer service chatbot had made up. The list goes on.

Tech companies are rushing AI-powered products to launch, despite extensive evidence that they are hard to control and often behave in unpredictable ways. This weird behavior happens because nobody knows exactly how, or why, deep learning, the fundamental technology behind today's AI boom, works. It's one of the biggest puzzles in AI. My colleague Will Douglas Heaven just published a piece where he dives into it.

The biggest mystery is how large language models such as Gemini and OpenAI's GPT-4 can learn to do something they were not taught to do. You can train a language model on math problems in English and then show it French literature, and from that, it can learn to solve math problems in French. These abilities fly in the face of classical statistics, which provides our best set of explanations for how predictive models should behave, Will writes. Read more here.

It's easy to mistake perceptions stemming from our ignorance for magic. Even the name of the technology, artificial intelligence, is tragically misleading. Language models appear smart because they generate humanlike prose by predicting the next word in a sentence. The technology is not truly intelligent, and calling it that subtly shifts our expectations so we treat the technology as more capable than it really is.
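To make "predicting the next word" concrete, here is a minimal toy sketch in Python. It is emphatically not how Gemini or GPT-4 work internally (they use neural networks trained on vast corpora, not word-count tables), and the corpus here is invented for illustration, but it shows the core loop: given the text so far, pick a likely next word, append it, and repeat.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

# "Generation": greedily append the predicted next word, one step at a time.
sentence = ["the"]
for _ in range(5):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))  # e.g. "the cat sat on the cat"
```

A real language model replaces the word-count table with billions of learned parameters and samples from a probability distribution over tokens rather than always taking the top candidate, but the generate-one-word-at-a-time loop is the same. That loop is also why these systems produce fluent text with no guarantee of truth: each step optimizes for plausibility, not accuracy.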

Don't fall into the tech sector's marketing trap by believing that these models are omniscient or factual, or even near ready for the jobs we are expecting them to do. Because of their unpredictability, out-of-control biases, security vulnerabilities, and propensity to make things up, their usefulness is extremely limited. They can help humans brainstorm, and they can entertain us. But, knowing how glitchy and prone to failure these models are, it's probably not a good idea to trust them with your credit card details, your sensitive information, or any critical use cases.

As the scientists in Will's piece say, it's still early days in the field of AI research. According to Boaz Barak, a computer scientist at Harvard University who is currently on secondment to OpenAI's superalignment team, many people in the field compare it to physics at the beginning of the 20th century, when Einstein came up with the theory of relativity.

The focus of the field today is how the models produce the things they do, but more research is needed into why they do so. Until we gain a better understanding of AI's insides, expect more weird mistakes and a whole lot of hype that the technology will inevitably fail to live up to.

Now read the rest of The Algorithm

Deeper Learning

Google DeepMind's new generative model makes Super Mario-like games from scratch

OpenAI's recent reveal of its stunning generative model Sora pushed the envelope of what's possible with text-to-video. Now Google DeepMind brings us text-to-video games. The new model, called Genie, can take a short description, a hand-drawn sketch, or a photo and turn it into a playable video game in the style of classic 2D platformers like Super Mario Bros. But don't expect anything fast-paced. The games run at one frame per second, versus the typical 30 to 60 frames per second of most modern games.

Level up: Google DeepMind's researchers are interested in more than just game generation. The team behind Genie works on open-ended learning, where AI-controlled bots are dropped into a virtual environment and left to solve various tasks by trial and error. It's a technique that could have the added benefit of advancing the field of robotics. Read more from Will Douglas Heaven.

Bits and Bytes

What Luddites can teach us about resisting an automated future
This comic is a nice look at the history of workers' efforts to preserve their rights in the face of new technologies, and draws parallels to today's struggle between artists and AI companies. (MIT Technology Review)

Elon Musk is suing OpenAI and Sam Altman
Get the popcorn out. Musk, who helped found OpenAI, argues that the company's leadership has transformed it from a nonprofit developing open-source AI for the public good into a for-profit subsidiary of Microsoft. (The Wall Street Journal)

Generative AI might bend copyright law past the breaking point
Copyright law exists to foster a creative culture that compensates people for their creative contributions. The legal battle between artists and AI companies is likely to test the notion of what constitutes "fair use." (The Atlantic)

Tumblr and WordPress have struck deals to sell user data to train AI
Reddit is not the only platform seeking to capitalize on today's AI boom. Internal documents reveal that Tumblr and WordPress are working with Midjourney and OpenAI to offer user-created content as AI training data. The documents also show that the data set Tumblr was trying to sell included content that should not have been there, such as private messages. (404 Media)

A Pornhub chatbot stopped millions from searching for child abuse videos
Over the last two years, an AI chatbot has directed people searching for child sexual abuse material on Pornhub in the UK to seek help. This happened over 4.4 million times, which is a pretty shocking number. (Wired)

The perils of AI-generated advertising. Case: Willy Wonka
An events company in Glasgow, Scotland, used an AI image generator to attract customers to "Willy's Chocolate Experience," where "chocolate dreams become reality," only for customers to arrive at a half-deserted warehouse with a sad Oompa Loompa and depressing decorations. The police were called, the event went viral, and the internet has been having a field day since. (BBC)
