
Charging ahead on AI openness and safety

by
Ayah Bdeir, Camille François and Ludovic Peran
from The Mozilla Blog
On the official "road to the French Government's AI Action Summit," Mozilla and Columbia University's Institute of Global Politics are bringing together AI experts and practitioners to advance AI safety approaches that embody the values of open source.

On Tuesday in San Francisco, Mozilla and Columbia University's Institute of Global Politics will hold the Columbia Convening on AI Openness and Safety. The convening, which takes place the day before the International Network of AI Safety Institutes meets, will bring together leading researchers and practitioners to advance practical approaches to AI safety that embody the values of openness, transparency, community-centeredness and pragmatism. The Convening seeks to make these values actionable and to demonstrate the power of centering pluralism in AI safety, ultimately empowering developers to create safer AI systems.

The Columbia Convening series started in October 2023, ahead of the UK AI Safety Summit, when over 1,800 leading experts and community members jointly stated in an open letter coordinated by Mozilla and Columbia that, when it comes to AI safety and security, "openness is an antidote, not a poison." In February 2024, the first Columbia Convening was held with this community to explore the complexities of openness in AI. It culminated in a collective framework characterizing the dimensions of openness throughout the stack of foundation models.

This second convening holds particular significance as an official event on the road to the AI Action Summit, to be held in France in February 2025. The outputs and conclusions from the collective work will directly shape the agenda and actions for the Summit, offering a crucial opportunity to foreground openness, pluralism and practicality in high-level conversations on AI safety.

The timing is particularly relevant as the open ecosystem gains unprecedented momentum among AI practitioners. Open models now cover a large range of modalities and sizes with performance almost on par with the best closed models, making them suitable for most AI use cases. This growth is reflected in the numbers: Hugging Face reported an 880% increase in the number of generative AI model repositories in two years, from 160,000 to 1.57 million. In the private sector, according to a 2024 study by the investment firm a16z, 46% of Fortune 500 company leaders say they strongly prefer to leverage open source models.

In this context, many researchers, policymakers and companies are embracing openness in AI as a benefit to safety, rather than a risk. There is also an increased recognition that safety is as much a system property as a model property, if not more so, making it critical to extend open safety research and tooling to address risks arising at other stages of the AI development lifecycle.

The technical and research communities invested in openness in AI systems have been developing tools to make AI safer for years: building better evaluations and benchmarks, deploying content moderation systems, and creating clear documentation for datasets and AI models. This second Columbia Convening seeks to address the needs of these AI system developers to ensure the safe and trustworthy deployment of their systems, and to accelerate the building of safety tools, systems, and interventions that incorporate and reflect the values of openness.

Working with a group of leading researchers and practitioners, the convening is structured around five key tracks:

  1. What's missing from taxonomies of harm and safety definitions? The convening will examine gaps in popular taxonomies of harms and explore what notions of safety popularized by governments and big tech companies fail to capture, working to put critical concerns back on the agenda.
  2. Safety tooling in open AI stacks. As the ecosystem of open source tools for AI safety continues to grow, developers need better ways to navigate it. This work will focus on mapping technical interventions and related tooling, and will help identify gaps that need to be addressed for safer system deployment.
  3. The future of content safety classifiers. This discussion will chart a future roadmap for foundation models based on open source content safety classifiers, addressing key questions, necessary resources, and research agenda requirements, while drawing insights from past and current classifier system deployments. Participants will explore gaps in the content safety filtering ecosystem, considering both developer needs and future technological developments.
  4. Agentic risks for AI systems interfacing with the web. With growing interest in agentic applications, participants will work toward a robust working definition and map the specific needs of AI-system developers in developing safe agentic systems, while identifying current gaps to address.
  5. Participatory inputs in safety systems. The convening will examine how participatory inputs and democratic engagement can support safety tools and systems throughout development and deployment pipelines, making them more pluralistic and better adapted to specific communities and contexts.

Through these tracks, the convening will develop a community-informed research agenda at the intersection of safety and openness in AI, which will inform the AI Action Summit. In keeping with the principles of openness and working in public, we look forward to sharing our work on these issues.

