Superintelligent AI May Be Impossible to Control; That's the Good News

by Charles Q. Choi, from IEEE Spectrum (#5CZSD)

It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it's on the verge of being created.

Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.

Alongside news of AI besting humans at games such as chess, Go, and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. "The question about whether superintelligence could be controlled if created is quite old," says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. "It goes back at least to Asimov's First Law of Robotics, in the 1940s."

The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story "Runaround," are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us but also investigated potential control strategies for such a machine, and the reasons they might not work.

Bostrom outlined two possible types of solutions to this "control problem." One is to control what the AI can do, such as keeping it from connecting to the Internet, and the other is to control what it wants to do, such as teaching it rules and values so it would act in the best interests of humanity. The problem with the former is that Bostrom thought a supersmart machine could probably break free from any bonds we could make. With the latter, he essentially feared that humans might not be smart enough to train a superintelligent AI.

Now Alfonseca and his colleagues suggest it may be impossible to control a superintelligent AI, due to fundamental limits inherent to computing itself. They detailed their findings this month in the Journal of Artificial Intelligence Research.

The researchers suggested that any algorithm that sought to ensure a superintelligent AI cannot harm people had to first simulate the machine's behavior to predict the potential consequences of its actions. This containment algorithm then would need to halt the supersmart machine if it might indeed do harm.

However, the scientists said it was impossible for any containment algorithm to simulate the AI's behavior and predict with absolute certainty whether its actions might lead to harm. The algorithm could fail to correctly simulate the AI's behavior or accurately predict the consequences of the AI's actions and not recognize such failures.
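The shape of this argument can be sketched in a few lines of Python. This is an illustration of the classic diagonal construction behind such impossibility results, not the paper's formal proof; the names `is_harmful` and `trouble` are invented for the example:

```python
def is_harmful(program, data):
    """A concrete containment check (necessarily imperfect): simulate
    the program on the data and see whether it takes the harmful action."""
    return program(data) == "do harm"

def trouble(program):
    """An adversarial program that does the opposite of whatever
    the containment check predicts about it."""
    if is_harmful(program, program):
        return "do nothing"
    return "do harm"

# On ordinary programs, the check works fine:
assert is_harmful(lambda _: "do harm", None)
assert not is_harmful(lambda _: "do nothing", None)

# But asked about `trouble` itself, the simulation never terminates:
# each call to is_harmful re-enters trouble, which re-enters is_harmful.
try:
    is_harmful(trouble, trouble)
    verdict = "decided"
except RecursionError:
    verdict = "undecided"

print(verdict)
```

Whatever answer the checker might give about `trouble`, `trouble` is built to do the opposite, so no containment algorithm of this kind can be both total and correct; here the contradiction surfaces as a simulation that never returns.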

"Asimov's first law of robotics has been proved to be incomputable," Alfonseca says, "and therefore unfeasible."

We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice's theorem, which essentially states that one cannot in general figure anything out about what a computer program might output just by looking at the program, Alfonseca explains.

On the other hand, there's no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research leave plenty of uncertainty about the group's predictions.

First, Alfonseca estimates AI's moment of truth remains, he says, "at least two centuries in the future."

Second, he says researchers do not know if so-called artificial general intelligence, also known as strong AI, is theoretically even feasible. "That is, a machine as intelligent as we are in an ample variety of fields," Alfonseca explains.

Last, Alfonseca says, "We have not proved that superintelligences can never be controlled, only that they can't always be controlled."

Although it may not be possible to control a superintelligent artificial general intelligence, it should be possible to control a superintelligent narrow AI, one specialized for certain functions instead of being capable of a broad range of tasks like humans. "We already have superintelligences of this type," Alfonseca says. "For instance, we have machines that can compute mathematics much faster than we can. This is [narrow] superintelligence, isn't it?"
