
4 things we learned from Mozilla’s Responsible AI challenge

by Kristina Bravo, The Mozilla Blog

From chat engines and generative apps to self-driving cars, technologies that use artificial intelligence continue to transform our lives in new ways. But how do we create AI that serves society without disempowering some of us? How can we make sure these innovations are fair and trustworthy?

Those are the questions we asked last May at Mozilla's Responsible AI Challenge event in San Francisco. Some of the industry's best thinkers, ethicists, technologists and builders got together to celebrate what's possible with AI while considering the responsibility that comes with its immense capabilities. Here's what we learned from the event.

Imo Udom, Mozilla's SVP of innovation ecosystems, speaks at Mozilla's Responsible AI Challenge event in San Francisco in May 2023. Credit: Mozilla

1. Accessibility makes AI better

Leading AI ethicist Dr. Margaret Mitchell shared that in 2011, while working on technology that generated image descriptions, she and her colleagues encountered what they called the "everything is awesome" problem. The system learned from the photos and captions that people shared on social media, which were largely positive. So the model kept generating positive descriptions, even when the images depicted tragic events.

"People share beautiful sunsets, great views, gorgeous skies... [So when the computer] sees this horrific blast, its response is, 'This is great.' And I realized at that moment how the data we use directly affects what the model learns," Dr. Mitchell said. "We see a moment where tons of people are potentially hurt, and the computer has no understanding of what mortality is. It has no understanding that explosions and bombs are negative things. It sees the purples and the pinks in the sky."

Given the technology's potential for connecting vision to language in assistive tools, Dr. Mitchell concluded that the models prioritized data that did not align with the needs of people who are blind or visually impaired: they need context and practical information about images, not descriptions based solely on available data, which can be skewed.

Since then, AI systems such as image generators have made progress. The lesson: Accessibility doesn't just ensure equitable access to technological advancements; it pushes us to innovate and build better systems for everyone.

Dr. Margaret Mitchell discussed how human bias affects AI technology at Mozilla's Responsible AI Challenge event in San Francisco in May 2023. Credit: Mozilla

2. To address AI's limitations, we need (human) thinkers

Despite advances in artificial intelligence over the years, Dr. Gary Marcus, a prominent academic who recently testified before the U.S. Senate on AI, argues that deep learning systems still face significant challenges. For example, they can't be updated incrementally with new knowledge, which can contribute to the spread of misinformation.

"We have laws about defamation, but what if somebody makes a billion pieces of perfectly well-formed misinformation a day?" Dr. Marcus said. "Do we want to treat that like free speech? Or is that more like commercial speech? Should it be treated differently? We just don't have the laws yet to do that."

That's where experts, who pay attention to constant changes in the field, come in.

"The choices that we make now will shape the next century," Dr. Marcus said. "If we don't have scientists and ethicists at the table, our prospects are not great. We cannot afford to not regulate. And we cannot afford regulatory capture either. We have to get this one right."

Dr. Gary Marcus argued at Mozilla's Responsible AI Challenge event in San Francisco in May 2023 that deep learning systems face significant challenges, some of which can lead to the spread of misinformation. Credit: Mozilla

3. Experts are optimistic about AI's capabilities, including advancements for the healthcare field

Kevin Roose, a tech columnist, podcaster and author, has covered AI for more than a decade. He made several predictions: Meaningful regulation is unlikely before 2030, Roose said, because of AI's relative novelty and lawmakers' limited understanding of the technology. He also anticipates better-performing models, and expects that researchers and companies using AI models to build products will push for safer systems.

He also spoke to AI's great potential, particularly for advancing drug development.

Roose said, "I think there's like a 30% chance that a drug that is discovered using AI will reduce mortality from a top 10 cancer by 50% before 2030... I'm not a doctor, but this is the kind of thing that people who are experts in this field tell me. I believe them."

Kevin Roose shared several predictions around the potential of AI at Mozilla's Responsible AI Challenge event in San Francisco in May 2023. Credit: Mozilla

4. AI systems may collaborate more effectively than humans do!

Roose recounted a conversation with a source who pointed out a significant advantage of AI systems over humans: their tendency to share knowledge.

The expert explained that when one node in a neural network learns something new, it propagates that information to all other nodes in the network. For example, in a fleet of self-driving cars, when one vehicle learns to identify a new obstacle, it shares that knowledge with all other cars in the fleet, including those from different manufacturers. People, on the other hand, don't have this tendency; they often keep research and data to themselves.

Roose argued that, in this respect, people can learn from machines. He said, "I think if we want any chance of competing and surviving and thriving in this new world of AI, we have to be able to do what the machines do and share with each other all of the things that we're learning."

