
We May Never Be Able to Tell If AI Becomes Conscious, Argues Philosopher

by jelizondo from SoylentNews

hubie writes:

This gulf in knowledge could be exploited by a tech industry intent on selling the "next level of AI cleverness":

A University of Cambridge philosopher argues that our evidence for what constitutes consciousness is far too limited to tell if or when artificial intelligence has made the leap - and a valid test for doing so will remain out of reach for the foreseeable future.

As artificial consciousness shifts from the realm of sci-fi to become a pressing ethical issue, Dr Tom McClelland says the only "justifiable stance" is agnosticism: we simply won't be able to tell, and this will not change for a long time - if ever.

While issues of AI rights are typically linked to consciousness, McClelland argues that consciousness alone is not enough to make AI matter ethically. What matters is a particular type of consciousness - known as sentience - which includes positive and negative feelings.

"Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state," said McClelland, from Cambridge's Department of History and Philosophy of Science.

"Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in," he said. "Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about."

"For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn't matter. If they start to have an emotional response to their destinations, that's something else."

Companies are investing vast sums in the pursuit of Artificial General Intelligence: machines with human-like cognition. Some claim that conscious AI is just around the corner, and researchers and governments are already considering how to regulate AI consciousness.

McClelland points out that we don't know what explains consciousness, so we don't know how to test for it in AI.

"If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake."

Read more of this story at SoylentNews.
