
AI Researchers Demand to Conduct Safety Audits on AI Firms

by Krishi Chowdhary from Techreport (#6K4PF)
  • More than 150 researchers, evaluators, and ethicists have signed an open letter seeking permission to audit AI firms.
  • The letter has gathered 3,100+ signatures so far, including professors from Ivy League schools and MIT, and Apple co-founder Steve Wozniak.


More than 150 AI researchers and ethicists have signed an open letter calling on AI giants like OpenAI, Midjourney, and Meta to give them access to their systems for independent evaluation.

Researchers from reputed institutions such as MIT, Stanford, and Princeton are concerned about the implications of the growing AI industry. As AI becomes part of the public's day-to-day life, they feel there is a need for a more robust set of safety practices to keep its impact on society in check.

For example: AI is intensifying global ransomware threat, warns the NCSC

The researchers describe the current business models of most AI firms as opaque, meaning there is no scope for conducting independent evaluations or for understanding how the models work.

In their letter, they argued that such limitations not only restrict innovation but also prevent them from identifying and mitigating risks before those risks grow. The AI industry, like any other, isn't free of mistakes, but without the opportunity for experts to investigate them, those mistakes can turn into bigger problems for society.

It's also worth noting that even AI tools seemingly built for people's benefit can have negative implications. For example, AI tools that improve people's health are also damaging the environment.

The open letter has received tremendous support from the industry and now has over 3,100 signatures, including those of Apple co-founder Steve Wozniak, professors from Ivy League schools and MIT, and top-level executives from Mozilla and Hugging Face.

It seeks two levels of protection for the researchers:

  • a commitment from the companies to allow equitable access to researchers, and
  • a legal safe harbor to ensure secure and reliable AI research

Along with seeking permission to test the AI firms' systems, the letter also calls for a temporary pause on the development of new AI models. For example, some tech leaders previously asked OpenAI (which is facing a lawsuit from Elon Musk) to halt work on models beyond GPT-4 until new standards and rules are in place.

Read more: EU becomes first major world power to introduce AI laws

Why Are Companies Not in Favor of Independent Evaluation?

Not many AI firms seem to be fans of the proposed evaluation scheme. The researchers believe it could be the fear of consequences that is stopping them from being transparent.

However, without a testing mechanism in place, there won't be any accountability. What the AI industry needs today is for experts to come together, collaborate, and find a sustainable way to let the public use the technology.

The only silver lining is that some firms (OpenAI and Cohere, to be precise) are changing their policies to give researchers a better insight into their tools.

According to them:

  • Cohere allows "intentional stress testing of the API and adversarial attacks," and
  • OpenAI has started to allow "model vulnerability research" and "academic model safety research" after the group released a draft of their sample proposal.

However, these are the only two exceptions. Out of the remaining firms, some have completely blocked researcher accounts from their tools and even changed their terms and policies to keep future researchers away. In some cases, investigating the tool without the company's consent can also lead to legal repercussions for the researchers.

It's also worth noting that concern over the implications of AI is not limited to the aforementioned researchers. In January, the Biden administration said that AI firms will now be required to share their safety test results with the government.

