
Three things we learned about AI from EmTech Digital London

by
Melissa Heikkilä
from MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week, MIT Technology Review held its inaugural EmTech Digital conference in London. It was a great success! I loved seeing so many of you there asking excellent questions, and it was a couple of days full of brain-tickling insights about where AI is going next.

Here are the three main things I took away from the conference.

1. AI avatars are getting really, really good

UK-based AI unicorn Synthesia teased its next generation of AI avatars, which are far more emotive and realistic than any I have ever seen before. The company is pitching these avatars as a new, more engaging way to communicate. Instead of skimming through pages and pages of onboarding material, for example, new employees could watch a video where a hyperrealistic AI avatar explains what they need to know about their job. This has the potential to change the way we communicate, allowing content creators to outsource their work to custom avatars and making it easier for organizations to share information with their staff.

2. AI agents are coming

Thanks to the ChatGPT boom, many of us have interacted with an AI assistant that can retrieve information. But the next generation of these tools, called AI agents, can do much more than that. They are AI models and algorithms that can make decisions autonomously in a dynamic world. Imagine an AI travel agent that can not only retrieve information and suggest things to do, but also take action to book things for you, from flights to tours and accommodations. Every AI lab worth its salt, from OpenAI to Meta to startups, is racing to build agents that can reason better, memorize more steps, and interact with other apps and websites.
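To make that concrete, here is a minimal sketch of the observe-plan-act loop most agent designs share: a model decides on the next step, a tool executes it, and the result feeds back into the next decision. Everything here is hypothetical (the travel tools and the plan_next_step stub stand in for an LLM and real booking APIs); it illustrates the pattern, not any particular lab's implementation.

```python
# A toy illustration of the observe-plan-act loop behind AI agents.
# The "model" is a hard-coded stub standing in for an LLM; the tools
# (search_flights, book_flight) are hypothetical, not a real API.

def search_flights(destination: str) -> str:
    return f"Cheapest flight to {destination}: $420 on May 22"

def book_flight(details: str) -> str:
    return f"Booked: {details}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def plan_next_step(goal: str, history: list) -> tuple:
    """Stub for the LLM call: pick which tool to use next, or stop."""
    if not history:
        return ("search_flights", "London")
    if len(history) == 1:
        return ("book_flight", history[-1])  # act on what we just learned
    return ("done", None)

def run_agent(goal: str) -> list:
    history = []
    while True:
        tool, arg = plan_next_step(goal, history)
        if tool == "done":
            return history
        observation = TOOLS[tool](arg)  # the tool call changes the world...
        history.append(observation)    # ...and its result informs the next step

print(run_agent("Book me a cheap flight to London"))
```

The point is the loop itself: each tool call produces an observation, and the agent's next choice depends on what came back, rather than on a single canned response.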

3. Humans are not perfect either

One of the best ways we have of ensuring that AI systems don't go awry is getting humans to audit and evaluate them. But humans are complicated and biased, and we don't always get things right. To build machines that meet our expectations and compensate for our limitations, we should account for human error from the get-go. In a fascinating presentation, Katie Collins, an AI researcher at the University of Cambridge, explained how she found that allowing people to express how certain or uncertain they are (for example, by using a percentage to indicate how confident they are when labeling data) leads to better accuracy for AI models overall. The only downside to this approach is that it costs more and takes more time.
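Collins's talk didn't come with code, but the underlying idea is easy to sketch: instead of training on a hard 0/1 label, treat the annotator's stated confidence as a soft target in the loss. The function below is a generic soft-target cross-entropy for a binary label, my own illustration rather than the method from her presentation.

```python
import math

# Hard labels throw away annotator uncertainty; soft labels keep it.
# An annotator who is 70% sure an example is positive contributes the
# target 0.7 rather than 1.0, and the loss penalizes overconfident models.

def soft_cross_entropy(p_model: float, confidence: float) -> float:
    """Cross-entropy against a soft target.

    p_model:    model's predicted probability of the positive class
    confidence: annotator's stated probability that the label is positive
    """
    eps = 1e-9  # numerical safety near log(0)
    return -(confidence * math.log(p_model + eps)
             + (1 - confidence) * math.log(1 - p_model + eps))

# A very confident model on an uncertain example is penalized far more
# than it would be against a hard label of 1:
print(soft_cross_entropy(p_model=0.99, confidence=1.0))  # ~0.01
print(soft_cross_entropy(p_model=0.99, confidence=0.7))  # ~1.39
```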

And we're doing it all again next month, this time at the mothership.

Join us for EmTech Digital at the MIT campus in Cambridge, Massachusetts, on May 22-23, 2024. I'll be there. Join me!

Our fantastic speakers include Nick Clegg, president of global affairs at Meta, who will talk about elections and AI-generated misinformation. We also have the OpenAI researchers who built the video-generation AI Sora, sharing their vision of how generative AI will change Hollywood. Then Max Tegmark, the MIT professor who wrote an open letter last year calling for a pause on AI development, will take stock of what has happened since and discuss how to make powerful systems safer. We also have a host of top scientists from the labs at Google, OpenAI, AWS, MIT, Nvidia, and more.

Readers of The Algorithm get 30% off with the discount code ALGORITHMD24.

I hope to see you there!

Now read the rest of The Algorithm

Deeper Learning

Researchers taught robots to run. Now they're teaching them to walk.

Researchers at Oregon State University have successfully trained a humanoid robot called Digit V3 to stand, walk, pick up a box, and move it from one location to another. Meanwhile, a separate group of researchers from the University of California, Berkeley, has focused on teaching Digit to walk in unfamiliar environments while carrying different loads, without toppling over.

What's the big deal: Both groups are using an AI technique called sim-to-real reinforcement learning, a burgeoning method of training two-legged robots like Digit. Researchers believe it will lead to more robust, reliable two-legged machines capable of interacting with their surroundings more safely, as well as learning much more quickly. Read more from Rhiannon Williams.
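The research itself is beyond a newsletter snippet, but one ingredient commonly used in sim-to-real training, domain randomization, fits in a few lines: re-sample the simulator's physics for every rollout so the learned controller can't overfit to one exact set of parameters. The sketch below uses a toy one-dimensional cart and a naive random-search learner, purely illustrative stand-ins for Digit and for reinforcement learning proper.

```python
import random

# Domain randomization in miniature: each rollout runs in a simulator
# with freshly randomized physics (the cart's mass and friction), so the
# learned gain must work across a family of worlds, not one exact sim.

def simulate(gain: float, mass: float, friction: float) -> float:
    """Toy 1-D cart: push it toward the origin for 100 steps,
    return the negative final distance as the reward."""
    pos, vel = 1.0, 0.0
    for _ in range(100):
        force = -gain * pos  # proportional controller
        vel += (force - friction * vel) / mass * 0.05
        pos += vel * 0.05
    return -abs(pos)

def avg_reward(gain: float, rollouts: int = 5) -> float:
    """Score a controller across several randomized simulators."""
    total = 0.0
    for _ in range(rollouts):
        mass = random.uniform(0.5, 2.0)      # physics re-sampled per rollout
        friction = random.uniform(0.1, 1.0)
        total += simulate(gain, mass, friction)
    return total / rollouts

def train(episodes: int = 300) -> float:
    best_gain = 1.0
    best_score = avg_reward(best_gain)
    for _ in range(episodes):
        candidate = best_gain + random.gauss(0, 0.2)  # naive random search
        score = avg_reward(candidate)
        if score > best_score:
            best_gain, best_score = candidate, score
    return best_gain

gain = train()
# Evaluate in a "real world" whose parameters were never seen in training.
print(simulate(gain, mass=1.7, friction=0.33))
```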

Bits and Bytes

It's time to retire the term "user"
The proliferation of AI means we need a new word. Tools we once called AI bots have been assigned lofty titles like "copilot," "assistant," and "collaborator" to convey a sense of partnership instead of a sense of automation. But if AI is now a partner, then what are we? (MIT Technology Review)

Three ways the US could help universities compete with tech companies on AI innovation
Empowering universities to remain at the forefront of AI research will be key to realizing the field's long-term potential, argue Ylli Bajraktari, Tom Mitchell, and Daniela Rus. (MIT Technology Review)

AI was supposed to make police body cams better. What happened?
New AI programs that analyze bodycam recordings promise more transparency but are doing little to change culture. This story serves as a useful reminder that technology is never a panacea for these sorts of deep-rooted issues. (MIT Technology Review)

The World Health Organization's AI chatbot makes stuff up
The World Health Organization launched a virtual health worker to help people with questions about things like mental health, tobacco use, and healthy eating. But the chatbot frequently offers outdated information or simply makes things up, a common issue with AI models. It's a useful cautionary tale: hallucinating chatbots can have serious consequences when they're applied to high-stakes tasks such as giving health advice. (Bloomberg)

Meta is adding AI assistants everywhere in its biggest AI push
The tech giant is rolling out its latest AI model, Llama 3, in most of its apps including Instagram, Facebook, and WhatsApp. People will also be able to ask its AI assistants for advice, or use them to search for information on the internet. (New York Times)

Stability AI is in trouble
One of the first generative AI unicorns, the company behind the open-source image-generating AI model Stable Diffusion, is laying off 10% of its workforce. Just a couple of weeks ago its CEO, Emad Mostaque, announced that he was leaving the company. Stability has also lost several high-profile researchers and struggled to monetize its product, and it is facing a slew of lawsuits over copyright. (The Verge)
