OpenAI & Anthropic Agree to Pre-release Testing by the U.S. AI Safety Institute
- In a press release published on Thursday, the U.S. AI Safety Institute revealed that OpenAI and Anthropic have agreed to let the institute test their products before mass release.
- The main purpose of the institute, which was founded in 2023, is to ensure that AI models like those from OpenAI and Anthropic are safe for public use.
- Both Sam Altman (CEO of OpenAI) and Jack Clark (co-founder of Anthropic) have released statements welcoming the collaboration.
OpenAI and Anthropic have agreed to let the U.S. AI Safety Institute test their new models before their public release.
The news comes directly from the Institute, which operates under the National Institute of Standards and Technology (NIST). In a press release, it said it will get access to "major new models from each company prior to and following their public release."
"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI." - Elizabeth Kelly, director of the U.S. AI Safety Institute
Based on these tests, the institute will provide feedback to both firms so they can make the necessary improvements and ensure their models are safe for public use.
Any risks the models pose should come to light early, allowing all parties to work together on mitigations before the models reach the public.
About the US AI Safety Institute
The institute was founded in 2023 by the Commerce Department following President Joe Biden's executive order on AI. Its mission is simple: to create a network of AI experts who can help hold AI firms accountable and ensure that their products and models are safe for public use.
The establishment of the institute follows two important events.
- First is the 2023 AI Safety Summit held at Bletchley Park, where companies like Google and OpenAI agreed for the first time to allow third-party testing of their products. This was followed by the Seoul AI Summit in May 2024, where OpenAI and Microsoft signed an agreement promising to develop AI safely.
- Second is the April 2024 announcement that the US and UK would partner on AI testing. Under this collaboration, the US and UK AI Safety Institutes work together and share what they learn with each other.
Commenting on the news, OpenAI CEO Sam Altman said he was happy the company could reach an agreement with the U.S. AI Safety Institute on pre-release safety testing.
Jason Kwon, OpenAI's chief strategy officer, added that the company strongly supports the institute's mission and looks forward to working together to shape best safety practices for AI.
Jack Clark, the co-founder of Anthropic, shared similar sentiments, saying the collaboration would let Anthropic leverage the institute's wide expertise to properly test its models before mass release.
Now that it is clear AI is here to stay, governments across the globe have begun crafting laws to regulate its use rather than pushing it away.