Here’s How We’ll Know an AI is Conscious
upstart writes in with an IRC submission:
Here's How We'll Know an AI Is Conscious:
The 21st century is in dire need of a Turing test for consciousness. AI is learning how to drive cars, diagnose lung cancer, and write its own computer programs. Intelligent conversation may be only a decade or two away, and a future super-AI will not live in a vacuum: it will have access to the Internet and to all the writings of Chalmers and the other philosophers who have asked questions about qualia and consciousness. But if tech companies beta-test an AI on a local intranet, isolated from such information, they could conduct a Turing-test-style interview to detect whether questions about qualia make sense to it.
What might we ask a potential mind born of silicon? How the AI responds to questions like "What if my red is your blue?" or "Could there be a color greener than green?" should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities these questions suggest, perhaps replying, "Yes, and I sometimes wonder whether there might also exist a color that mixes the redness of red with the coolness of blue." An AI lacking any visual qualia, on the other hand, might respond, "That is impossible; red, green, and blue each exist as different wavelengths." Even if the AI tried to play along or deceive us, an answer like "Interesting, and what if my red is your hamburger?" would show that it had missed the point.
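The interview described above could be sketched as a simple scripted probe. This is a minimal illustration only: the probe questions come from the article, but the `classify_answer` heuristic, the function names, and the keyword lists are all hypothetical, and keyword matching obviously cannot detect consciousness — it merely sorts replies into the three response patterns the article describes.

```python
# Hypothetical sketch of the qualia-interview protocol described above.
# Only the probe questions are from the article; everything else
# (names, keywords, scoring) is an assumed illustration.

PROBES = [
    "What if my red is your blue?",
    "Could there be a color greener than green?",
]

def classify_answer(answer: str) -> str:
    """Crude keyword heuristic: does the reply engage with the
    hypothetical, dismiss it on physical grounds, or miss the point?"""
    text = answer.lower()
    # Dismissal on physical grounds, e.g. "red is just a wavelength"
    if any(w in text for w in ("wavelength", "impossible")):
        return "dismisses"
    # Engagement with the hypothetical, e.g. "I sometimes wonder..."
    if any(w in text for w in ("wonder", "imagine", "might", "perhaps")):
        return "engages"
    return "misses the point"

def run_interview(respond):
    """`respond` is any callable mapping a question to an answer,
    e.g. a connection to the isolated AI under test."""
    return {q: classify_answer(respond(q)) for q in PROBES}
```

A real test would of course need a human interviewer judging open-ended replies; the point of the sketch is only that the protocol is mechanically simple — isolate the system, ask the probes, and classify how each answer treats the hypothetical.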
Journal Reference:
1. Berit Brogaard, Kristian Marlow, Morten Overgaard, et al. Deaf hearing: Implicit discrimination of auditory content in a patient with mixed hearing loss. Philosophical Psychology (DOI: 10.1080/09515089.2016.1268680)
2. Silvia Casarotto, Angela Comanducci, Mario Rosanova, et al. Stratification of unresponsive patients by an independently validated index of brain complexity [open]. Annals of Neurology (DOI: 10.1002/ana.24779)
Read more of this story at SoylentNews.