
AI Researchers Call For a Contingency Plan in Case Humans Lose Control of AI

by Krishi Chowdhary, from Techreport on (#6QTHM)
  • On September 16, a group of more than 30 AI researchers released a joint statement urging nations to create a global contingency plan in case AI gets out of control.
  • They said that the technology they helped develop is as dangerous as it is beneficial, and that, left unchecked, it could cause devastation beyond imagination.


A group of AI researchers has put together an open letter urging all nations to create a global contingency plan in case AI systems become uncontrollable.

The statement was released on September 16. It said that the technology they have helped develop could become catastrophic if it loses human oversight.

The group comprises more than 30 signatories from the US, China, Canada, Britain, Singapore, and other countries. Each comes from a reputed AI research institute or university, and there are several Turing Award (the computing equivalent of the Nobel Prize) winners among them.

It's important to note that they are not simply speaking from personal belief. Their concerns are based on findings from the International Dialogue on AI Safety, held in Venice in early September.

So, if some of the brightest minds in the world say there's cause for concern about the way AI is developing, it shouldn't be taken lightly.

What's the Proposed Course of Action?

The researchers have three primary goals with this letter:

  • Agreements that will prepare all nations for an AI-related emergency
  • A safety assurance framework
  • Independent global research on AI safety and verification

They further elaborated that every nation needs a team of authorities to monitor developments in the AI industry, detect incidents or anomalies, and handle any risks that arise.

'If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?' - Johns Hopkins University Professor Gillian Hadfield

Their statement is also a wake-up call for superpowers to step up collaboration on matters of global concern. As the researchers pointed out, scientific exchange between superpowers such as the US and China has declined owing to diplomatic tensions, which could hinder efforts to set up a global AI framework.

The World Is Finally Realizing the Risks of AI

These scientists aren't the only ones worried about the risks associated with AI. Now that the world has recovered from the sudden boom of AI, nations are taking steps to put regulatory frameworks in place to keep the power of this newfound technology in check, because they know what devastation it's capable of if left unchecked.

Apart from individual efforts, nations have also started collaborating to build a global framework.

  • For example, in July 2024, the United States and the United Kingdom signed an agreement to develop a framework that will allow both countries' AI safety institutes to address the problems and threats that AI poses to society.
  • Similarly, UN Secretary-General António Guterres has warned that AI, if left unchecked, is a huge threat to democracies.
  • We have already seen how it can fuel the spread of misinformation. Fabricated posts against a government can weaken people's trust in the system, ultimately leading to political collapse.

So the need of the hour is to create rules and guidelines and to ensure that AI companies stick to them.

While most policymakers and world leaders agree with this view, tech companies are against it. They have repeatedly argued that too much regulation can stifle innovation.

Let's take the example of the California bill.

California State Sen. Scott Wiener introduced the bill, officially known as SB 1047, this February. It is meant to regulate the development and use of AI systems, especially advanced models that cost more than $100 million to train.

It has already been passed by the state Senate (by a 32-1 vote) and the state Assembly appropriations committee, which shows that most state policymakers support regulating AI.

On the other hand, major AI companies like OpenAI, Anthropic, and Meta are actively lobbying against it.

They feel that such regulation would hinder innovation: if developers are burdened with too many rules and held accountable for every small mistake, they'll be less likely to support the open-source movement.

They have also argued that since AI regulation already exists at the federal level, there's no need for a state-level law. It would only make things more complicated and might force businesses to pull their facilities and investments out of California.


External Content
Source RSS or Atom Feed
Feed Location https://techreport.com/feed/
Feed Title Techreport
Feed Link https://techreport.com/