Open-Source AI Is Uniquely Dangerous
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
When people think of AI applications these days, they likely think of "closed-source" AI applications like OpenAI's ChatGPT, where the system's software is securely held by its maker and a limited set of vetted partners. Everyday users interact with these systems through a Web interface like a chatbot, and business users can access an application programming interface (API) which allows them to embed the AI system in their own applications or workflows. Crucially, these uses allow the company that owns the model to provide access to it as a service, while keeping the underlying software secure. Less well understood by the public is the rapid and uncontrolled release of powerful unsecured (sometimes called open-source) AI systems.
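To make the distinction concrete, here is a minimal sketch of how a business might call a secured model through its API. It assumes an OpenAI-style chat-completions endpoint and an API key stored in an OPENAI_API_KEY environment variable, and the model name is purely illustrative; the point is that every request passes through the provider's servers, where usage policies can be enforced and abusive use can be cut off.

```python
# A minimal sketch of API access to a hosted ("secured") model: every request
# goes through the provider's servers, where usage policies can be enforced.
# Assumes an OpenAI-style chat-completions endpoint, an API key stored in the
# OPENAI_API_KEY environment variable, and an illustrative model name.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [{"role": "user", "content": "Summarize this support ticket: ..."}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```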
OpenAI's brand name adds to the confusion. While the company was originally founded to produce open-source AI systems, its leaders determined in 2019 that it was too dangerous to continue releasing its GPT systems' source code and model weights (the numerical representations of relationships between the nodes in its artificial neural network) to the public. OpenAI was worried that these text-generating AI systems could be used to generate massive amounts of well-written but misleading or toxic content.
Companies including Meta (my former employer) have moved in the opposite direction, choosing to release powerful unsecured AI systems in the name of democratizing access to AI. Other examples of companies releasing unsecured AI systems include Stability AI, Hugging Face, Mistral, EleutherAI, and the Technology Innovation Institute. These companies and like-minded advocacy groups have made limited progress in obtaining exemptions for some unsecured models in the European Union's AI Act, which is designed to reduce the risks of powerful AI systems. They may push for similar exemptions in the United States via the public comment period recently set forth in the White House's AI Executive Order.
I think the open-source movement has an important role in AI. With a technology that brings so many new capabilities, it's important that no single entity acts as a gatekeeper to the technology's use. However, as things stand today, unsecured AI poses an enormous risk that we are not yet able to contain.
Understanding the Threat of Unsecured AI
A good first step in understanding the threats posed by unsecured AI is to ask secured AI systems like ChatGPT, Bard, or Claude to misbehave. You could ask them to design a more deadly coronavirus, provide instructions for making a bomb, make naked pictures of your favorite actor, or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration. You will likely receive polite refusals to all such requests because they violate the usage policies of these AI systems. Yes, it is possible to "jailbreak" these AI systems and get them to misbehave, but as these vulnerabilities are discovered, they can be fixed.
Enter the unsecured models. Most famous is Meta's Llama 2. It was released by Meta with a 27-page "Responsible Use Guide," which was promptly ignored by the creators of "Llama 2 Uncensored," a derivative model with safety features stripped away and hosted for free download on the Hugging Face AI repository. Once someone releases an "uncensored" version of an unsecured AI system, the original maker of the system is largely powerless to do anything about it.
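To illustrate how little friction is involved, here is a minimal sketch of how anyone with the standard Hugging Face transformers library can download an openly released model's weights and run them locally, entirely outside the original developer's control. The repository name below is a hypothetical placeholder rather than a specific real model, but the pattern is the same for any unsecured release.

```python
# A minimal sketch of pulling openly released model weights from the Hugging Face
# hub and running them locally: no API, no provider in the loop, and no usage
# policy that can be enforced after the download.
# The repository ID is a hypothetical placeholder, not a specific real model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "some-org/llama-2-7b-uncensored"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Once the weights are on disk, the downloader, not the original developer,
# decides what the model will and will not do.
inputs = tokenizer("Write a persuasive message claiming that ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```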
The threat posed by unsecured AI systems lies in the ease of misuse. They are particularly dangerous in the hands of sophisticated threat actors, who could easily download the original versions of these AI systems and disable their safety features, then make their own custom versions and abuse them for a wide variety of tasks. Some of the abuses of unsecured AI systems also involve taking advantage of vulnerable distribution channels, such as social media and messaging platforms. These platforms cannot yet accurately detect AI-generated content at scale and can be used to distribute massive amounts of personalized misinformation and, of course, scams. This could have catastrophic effects on the information ecosystem, and on elections in particular. Highly damaging nonconsensual deepfake pornography is yet another domain where unsecured AI can have deep negative consequences.
Unsecured AI also has the potential to facilitate production of dangerous materials, such as biological and chemical weapons. The White House Executive Order references chemical, biological, radiological, and nuclear (CBRN) risks, and multiple bills are now under consideration by the U.S. Congress to address these threats.
Recommendations for AI Regulations
We don't need to specifically regulate unsecured AI; nearly all of the regulations that have been publicly discussed apply to secured AI systems as well. The only difference is that it's much easier for developers of secured AI systems to comply with these regulations because of the inherent properties of secured and unsecured AI. The entities that operate secured AI systems can actively monitor for abuses or failures of their systems (including bias and the production of dangerous or offensive content) and release regular updates that make their systems more fair and safe.
Almost all the regulations recommended below generalize to all AI systems. Implementing these regulations would make companies think twice before releasing unsecured AI systems that are ripe for abuse.
Regulatory Action for AI Systems
- Pause all new releases of unsecured AI systems until developers have met the requirements below, and in ways that ensure that safety features cannot be easily removed by bad actors.
- Establish registration and licensing (both retroactive and ongoing) of all AI systems above a certain capability threshold.
- Create liability for "reasonably foreseeable misuse" and negligence: Developers of AI systems should be legally liable for harms caused both to individuals and to society.
- Establish risk assessment, mitigation, and independent audit procedures for AI systems crossing the threshold mentioned above.
- Require watermarking and provenance best practices so that AI-generated content is clearly labeled and authentic content has metadata that lets users understand its provenance.
- Require transparency of training data and prohibit training systems on personally identifiable information, content designed to generate hateful content, and content related to biological and chemical weapons.
- Require and fund independent researcher access, giving vetted researchers and civil society organizations predeployment access to generative AI systems for research and testing.
- Require "know your customer" procedures, similar to those used by financial institutions, for sales of powerful hardware and cloud services designed for AI use; restrict sales in the same way that weapons sales would be restricted.
- Mandate incident disclosure: When developers learn of vulnerabilities or failures in their AI systems, they must be legally required to report them to a designated government authority.
Regulatory Action for Distribution Channels and Attack Surfaces
- Require content credential implementation for social media, giving companies a deadline to implement the Content Credentials labeling standard from C2PA.
- Automate digital signatures so people can rapidly verify their human-generated content (see the sketch after this list).
- Limit the reach of AI-generated content: Accounts that haven't been verified as distributors of human-generated content could have certain features disabled, including viral distribution of their content.
- Reduce chemical, biological, radiological, and nuclear risks by educating all suppliers of custom nucleic acids or other potentially dangerous substances about best practices.
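As a concrete illustration of the digital-signature item above, here is a minimal sketch, assuming Python's cryptography package, of the kind of signing and verification step that provenance tooling could automate for human-generated content. Key handling is deliberately simplified; in practice, key pairs would be issued to verified people or publishers.

```python
# A minimal sketch of signing and verifying human-generated content, assuming
# Python's "cryptography" package. Key handling is deliberately simplified;
# in practice the key pair would belong to a verified person or publisher.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article_text = b"Human-written content goes here."
signature = private_key.sign(article_text)

# Anyone holding the public key can check that the content is unchanged and
# was signed by the key's owner.
try:
    public_key.verify(signature, article_text)
    print("Signature valid: content matches what the key holder signed.")
except InvalidSignature:
    print("Signature invalid: content was altered or not signed by this key.")
```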
Government Action
- Establish a nimble regulatory body that can act and enforce quickly and update certain enforcement criteria. This entity would have the power to approve or reject risk assessments, mitigations, and audit results and have the authority to block model deployment.
- Support fact-checking organizations and civil-society groups (including the "trusted flaggers" defined by the EU Digital Services Act) and require generative AI companies to work directly with these groups.
- Cooperate internationally with the goal of eventually creating an international treaty or new international agency to prevent companies from circumventing these regulations. The recent Bletchley Declaration was signed by 28 countries, including the home countries of all of the world's leading AI companies (United States, China, United Kingdom, United Arab Emirates, France, and Germany); this declaration stated shared values and carved out a path for additional meetings.
- Democratize AI access with public infrastructure: A common concern about regulating AI is that it will limit the number of companies that can produce complicated AI systems to a small handful and tend toward monopolistic business practices. There are many opportunities to democratize access to AI, however, without relying on unsecured AI systems. One is through the creation of public AI infrastructure with powerful secured AI models.
"I think how we regulate open-source AI is THE most important unresolved issue in the immediate term," Gary Marcus, the cognitive scientist, entrepreneur, and professor emeritus at New York University, told me in a recent email exchange.
I agree, and these recommendations are only a start. They would initially be costly to implement and would require that regulators make certain powerful lobbyists and developers unhappy.
Unfortunately, given the misaligned incentives in the current AI and information ecosystems, it's unlikely that industry will take these actions unless forced to do so. If actions like these are not taken, companies producing unsecured AI may bring in billions of dollars in profits while pushing the risks posed by their products onto all of us.