
Mozilla Joins Latest AI Insight Forum

by Mozilla
from The Mozilla Blog

Today, Mozilla Foundation President Mark Surman spoke with members of the US Senate, including Majority Leader Schumer, Senator Rounds, Senator Heinrich, and Senator Young, about two of the questions Mozilla believes are most critical if we're to chart a better path forward with AI: How can we protect people's privacy in the AI era? And how can we ensure that those who cause harm through AI can be held accountable - and liable?

At Mozilla, we have a unique vantage point in finding answers to these questions: that of a non-profit foundation and a tech company. As a foundation, we've spent the past five years exploring what it takes to make AI trustworthy and, along with nine other philanthropic foundations, have joined Vice President Harris in announcing a $200 million investment in the trustworthy AI ecosystem. As a tech company, we're investing heavily in leveraging AI in our products and have set up our own AI R&D lab, Mozilla.ai.

As progress in AI accelerates, it is critical that we take action to ensure that the benefits of AI are shared widely across society and to protect people from harm. Binding rules should be a part of this course of action, and privacy, openness, and transparency should be core principles underlying any regulatory framework.

Open source AI, in particular, faces a significant threat from speculative fears about its potential misuse. Rushing to shut down open source AI could hurt our ability to harness AI's potential. Abuse is not a problem unique to open source AI - we've seen time and time again that proprietary technologies are equally susceptible. In fact, openness can play a significant role in promoting competition in AI and large language models - something organizations like AI2, EleutherAI, Mistral, and Mozilla.ai are focused on - and it allows governments and public interest groups to assess the technology and flag bias, security flaws, and other issues, thereby improving both the quality of these technologies and the oversight of them.

In contemplating new rules for AI, we've asked the Senate to consider the following recommendations:

  1. Incentivize openness and transparency: Open AI ecosystems facilitate scrutiny and help foster an environment where responsibility for AI-driven outcomes can be appropriately attributed. Moreover, openness in AI stimulates innovation by providing the building blocks with which the market can build competitive products. Ensuring that all projects, open source or not, meet minimum criteria for responsible release is different from effectively banning open source approaches because of hypothetical future harms. Openness is not a problem but a core part of the solution, one that will help a broad group of actors engage with core questions in this space, including privacy and liability.
  2. Distribute liability equitably: The complexity of AI systems necessitates a nuanced approach to liability that considers the entire value chain, from data collection to model deployment. Liability should not be concentrated but rather distributed in a manner that reflects how AI is developed and brought to market. Rather than looking only at the deployers of these models, who often may not be in a position to mitigate the underlying causes of potential harms, a more holistic approach would regulate practices and processes across the development stack.
  3. Champion privacy by default: Privacy legislation must be at the forefront of the AI regulatory framework. The American Data Privacy and Protection Act, endorsed by Mozilla, would represent a significant step towards providing the necessary privacy guarantees that underpin responsible AI. Until Congress passes a comprehensive federal privacy law, the FTC should push forward its critical Commercial Surveillance and Data Security rulemaking, and existing rules protecting consumers and competition need to be enforced.
  4. Invest in privacy-enhancing technologies: Investment in privacy-enhancing technologies, with government funding at its heart, is crucial for the development of AI that protects individual privacy - beginning with data collection. Such investment not only aligns with ethical standards but also drives innovation in creating more responsible and trustworthy methodologies for AI development.

At Mozilla, we will continue to fight for and invest in more trustworthy AI. We've shared Mark Surman's full written submission with the Senate, which includes more of Mozilla's perspective on AI regulation and governance. The submission is linked below:

Mozilla Senate AI Insight Forum Nov 8 Written Statement (Download)

