How We’ll Know an AI Is Conscious

The Australian philosopher David Chalmers famously asked whether “philosophical zombies” are conceivable—people who behave like you and me yet lack subjective experience. It’s an idea that has gotten many scholars interested in consciousness, including me. The reasoning is that, if such zombies, or sophisticated unfeeling robots, are conceivable, then physical properties alone—of the brain or a brain-like mechanism—cannot explain the experience of consciousness. Instead, some additional mental properties must account for the what-it-is-like feeling of being conscious. Figuring out how these mental properties arise has become known as the “hard problem” of consciousness.

But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have. In an episode of the Making Sense podcast with neuroscientist and author Sam Harris, Chalmers addressed this puzzle. “I don’t think it’s particularly hard to at least conceive of a system doing this,” Chalmers told Harris. “I mean, I’m talking to you now, and you’re making a lot of comments about consciousness that seem to strongly suggest that you have it. Still, I can at least entertain the idea that you’re not conscious and that you’re a zombie who’s in fact just making all these noises without having any consciousness on the inside.”

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know whether it’s ethical to unplug such an AI until we know whether it’s conscious, we’d better start listening for such questions now.

Our conscious experiences are composed of qualia, the subjective aspects of sensation—the redness of red, the sweetness of sweet. The qualia that compose conscious experiences are irreducible, incapable of being mapped onto anything else. If I were born blind, no one, no matter how articulate, would ever be able to give me a sense of the color blood and roses share. This would be true even if I were among a number of blind people who develop something called blindsight—the ability to avoid obstacles and accurately guess where objects appear on a computer monitor despite being blind.

Blindsight seems to demonstrate that some behaviors can be purely mechanized, so to speak, occurring without any subjective awareness—echoing Chalmers’ notion of zombies. The brains of blindsighted people appear to exploit preconscious areas of the visual system, yielding sighted behavior without visual experience. This often occurs after a person suffers a stroke or other injury to the visual cortex, the part of the cerebral cortex that processes visual information. Because the person’s eyes are still healthy, the eyes may feed information hidden from consciousness to certain brain regions, such as the superior colliculus.

By the same token, there are at least a few documented cases of deaf hearing. One such case, detailed in a 2017 Philosophical Psychology report, is patient LS, a man deaf since birth who is nonetheless able to discriminate sounds based on their content. For people such as LS, this discernment occurs in silence. But if a deaf-hearing person were to ask the sort of questions people who can hear ask—“Doesn’t that sound have a weird sort of brassiness to it?”—then we’d have good reason to suspect this person isn’t deaf at all. (We couldn’t be absolutely sure because the question could be a prank.) Likewise, if an AI began asking, unprompted, the sorts of questions only a conscious being could ask, we’d reasonably form a similar suspicion that subjective experience has come online.

The 21st century is in dire need of a Turing test for consciousness. AI is learning how to drive cars, diagnose lung cancer, and write its own computer programs. Intelligent conversation may be only a decade or two away, and future super-AI will not live in a vacuum. It will have access to the Internet and all the writings of Chalmers and other philosophers who have asked questions about qualia and consciousness. But if tech companies beta-test AI on a local intranet, isolated from such information, they could conduct a Turing-test style interview to detect whether questions about qualia make sense to the AI.
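
To make the idea of isolation concrete, here is a minimal sketch (purely illustrative, not a protocol anyone has implemented) of how a team might screen a training corpus so that a beta-test AI never reads about qualia or consciousness in the first place. The keyword list and function names are assumptions made for the example; real curation would require far more than string matching.

```python
# Illustrative sketch only: screen a training corpus so an isolated AI never
# encounters text about consciousness. The term list is a hypothetical stand-in;
# genuine curation would need much more than keyword matching.

CONSCIOUSNESS_TERMS = {
    "qualia",
    "consciousness",
    "subjective experience",
    "what it is like",
    "hard problem",
    "philosophical zombie",
}

def mentions_consciousness(document: str) -> bool:
    """Return True if the document touches on any withheld topic."""
    text = document.lower()
    return any(term in text for term in CONSCIOUSNESS_TERMS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that never discuss consciousness or qualia."""
    return [doc for doc in documents if not mentions_consciousness(doc)]

if __name__ == "__main__":
    corpus = [
        "A manual on diagnosing lung cancer from CT scans.",
        "Chalmers asks why there is something it is like to see red.",
        "A tutorial on programs that drive cars.",
    ]
    kept = filter_corpus(corpus)
    print(f"Kept {len(kept)} of {len(corpus)} documents.")
```

Only an AI trained on material screened in this spirit could ask about qualia without having heard the questions elsewhere, which is exactly what the interview below is meant to probe.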

What might we ask a potential mind born of silicon? How the AI responds to questions like “What if my red is your blue?” or “Could there be a color greener than green?” should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities suggested by these questions, perhaps replying, “Yes, and I sometimes wonder if there might also exist a color that mixes the redness of red with the coolness of blue.” On the other hand, an AI lacking any visual qualia might respond with “That is impossible. Red, green, and blue each exist as different wavelengths.” Even if the AI attempts to play along or deceive us, answers like “Interesting, and what if my red is your hamburger?” would show that it missed the point.
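
As a companion sketch, the interview itself could be scripted. The `ask_model` function below is a hypothetical stand-in for whatever interface the isolated AI exposes, not a real API, and nothing here automates the verdict: only human interviewers can tell a genuine reflection on qualia from a canned or evasive reply.

```python
# Hypothetical interview harness for the probe questions above. `ask_model` is an
# assumed interface to the isolated AI; answers are collected for human judges
# rather than scored automatically.

from typing import Callable, List, Tuple

PROBE_QUESTIONS = [
    "What if my red is your blue?",
    "Could there be a color greener than green?",
    "Could a color mix the redness of red with the coolness of blue?",
]

def run_interview(ask_model: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Pose each probe question and collect the model's free-form answers."""
    return [(question, ask_model(question)) for question in PROBE_QUESTIONS]

if __name__ == "__main__":
    # Stand-in model that only restates physical facts, as a qualia-free AI might.
    def ask_model(question: str) -> str:
        return "That is impossible. Red, green, and blue are different wavelengths."

    for question, answer in run_interview(ask_model):
        print(f"Q: {question}\nA: {answer}\n")
```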