Article 6HQ0Z What to expect from the coming year in AI

What to expect from the coming year in AI

by
Melissa Heikkilä
from MIT Technology Review on (#6HQ0Z)
Story Image

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Happy new year! I hope you had a relaxing break. I spent it up in the Arctic Circle skiing, going to the sauna, and playing card games with my family by the fire. 10/10 would recommend.

I also had plenty of time to reflect on the past year. There are so many more of you reading The Algorithm than when we first started this newsletter, and for that I am eternally grateful. Thank you for joining me on this wild AI ride. Here's a cheerleading pug as a little present!

So what can we expect in 2024?All signs point to there being immense pressure on AI companies to show that generative AI can make money and that Silicon Valley can produce the killer app" for AI. Big Tech, generative AI's biggest cheerleaders, is betting big on customized chatbots, which will allow anyone to become a generative-AI app engineer, with no coding skills needed. Things are already moving fast: OpenAI isreportedlyset to launch its GPT app store as early as this week. We'll also see cool new developments in AI-generated video, a whole lot more AI-powered election misinformation, and robots that multitask. My colleague Will Douglas Heaven and I shared our four predictions for AI in 2024 last week-read the full story here.

This year will also be another huge year for AI regulation around the world.In 2023 the firstsweeping AI lawwas agreed upon in the European Union, Senate hearings and executive orders unfolded in the US, and China introduced specific rules for things like recommender algorithms. If last year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Together with my colleagues Tate Ryan-Mosley and Zeyi Yang, I've written a piece that walks you through what to expect in AI regulation in the coming year.Read it here.

But even as the generative-AI revolution unfolds at a breakneck pace, there are still some big unresolved questions that urgently need answering,writes Will. He highlights problems around bias, copyright, and the high cost of building AI, among other issues.Read more here.

My addition to the list would be generative models' hugesecurity vulnerabilities.Large language models, the AI tech that powers applications such as ChatGPT, are really easy to hack. For example, AI assistants or chatbots that can browse the internet are very susceptible to an attack called indirect prompt injection, which allows outsiders to control the bot by sneaking in invisible prompts that make the bots behave in the way the attacker wants them to. This could make them powerful tools for phishing and scamming,as I wrote back in April. Researchers have also successfully managed to poison AI data sets with corrupt data, which can break AI models for good. (Of course, it's not always a malicious actor trying to do this. Using a new tool calledNightshade, artists can add invisible changes to the pixels in their art before they upload it online so that if it's scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.)

Despite these vulnerabilities, tech companies are in a race to roll out AI-powered products, such as assistants or chatbots that can browse the web. It's fairly easy for hackers to manipulate AI systems by poisoning them with dodgy data, so it's only a matter of time until we see an AI system being hacked in this way. That's why I was pleased to see NIST, the US technology standards agency, raise awareness about these problems and offer mitigation techniques in anew guidancepublished at the end of last week. Unfortunately, there is currently no reliable fix for these security problems, and much more research is needed to understand them better.

AI's role in our societies and lives will only grow bigger as tech companies integrate it into the software we all depend on daily, despite these flaws. As regulation catches up, keeping an open, critical mind when it comes to AI is more important than ever.

Deeper Learning

How machine learning might unlock earthquake prediction

Our current earthquake early warning systems give people crucial moments to prepare for the worst, but they have their limitations. There are false positives and false negatives. What's more, they react only to an earthquake that has already begun-we can't predict an earthquake the way we can forecast the weather. If we could, it would let us do a lot more to manage risk, from shutting down the power grid to evacuating residents.

Enter AI:Some scientists are hoping to tease out hints of earthquakes from data-signals in seismic noise, animal behavior, and electromagnetism-with the ultimate goal of issuing warnings before the shaking begins. Artificial intelligence and other techniques are giving scientists hope in the quest to forecast quakes in time to help people find safety.Read more fromAllie Hutchison.

Bits and Bytes

AI for everything is one of MIT Technology Review's 10 breakthrough technologies
We couldn't put together a list of the tech that's most likely to have an impact on the world without mentioning AI. Last year tools like ChatGPT reached mass adoption in record time, and reset the course of an entire industry. We haven't even begun to make sense of it all, let alone reckon with its impact. (MIT Technology Review)

Isomorphic Labs has announced it's working with two pharma companies
Google DeepMind's drug discovery spinoff has two new strategic collaborations" with major pharma companies Eli Lilly and Novartis. The deals are worth nearly $3 billion to Isomorphic Labs and offer the company funding to help discover potential new treatments using AI, thecompany said.

We learned more about OpenAI's board saga
Helen Toner, an AI researcher at Georgetown's Center for Security and Emerging Technology and a former member of OpenAI's board, talks to theWall Street Journalabout why she agreed to fire CEO Sam Altman. Without going into details, she underscores that it wasn't safety that led to the fallout, but a lack of trust. Meanwhile, Microsoft executive Dee Templeton hasjoined OpenAI's boardas a nonvoting observer.

A new kind of AI copy can fully replicate famous people. The law is powerless.
Famous people are finding convincing AI replicas in their likeness. A new draft bill in the US called the No Fakes Act would require the creators of these AI replicas to license their use from the original human. But this bill would not apply in cases where the replicated human or the AI system is outside the US. It's another example of just how incredibly difficult AI regulation is. (Politico)

The largest AI image data set was taken offline after researchers found it is full of child sexual abuse material
Stanford researchers made the explosive discovery about the open-source LAION data set, which powers models such as Stable Diffusion. We knew indiscriminate scraping of the internet meant AI data sets contain tons of biased and harmful content, but this revelation is shocking. We desperately needbetter data practices in AI! (404 Media)

External Content
Source RSS or Atom Feed
Feed Location https://www.technologyreview.com/stories.rss
Feed Title MIT Technology Review
Feed Link https://www.technologyreview.com/
Reply 0 comments