Saturday, June 11, 2022

Engineer says his AI software is close to consciousness or sentience

Consciousness (psychology): an organism’s awareness of something either internal or external to itself; the waking state; in medicine and brain science, the distinctive electrical activity of the waking brain, as recorded via scalp electroencephalogram, that is commonly used to identify conscious states and their pathologies

Consciousness (philosophy): The problem of consciousness is arguably the central issue in current theorizing about the mind. An animal, person or other cognitive system may be regarded as conscious in a number of different senses. Sentience: it may be conscious in the generic sense of simply being a sentient creature, one capable of sensing and responding to its world. Wakefulness: one might further require that the organism actually be exercising such a capacity rather than merely having the ability or disposition to do so, and thus count it as conscious only if it were awake and normally alert. Self-consciousness: a third and yet more demanding sense might define conscious creatures as those that are not only aware but also aware that they are aware, thus treating creature consciousness as a form of self-consciousness. And more and more prose, ad nauseam!

Sentient: able to perceive or feel things


CONTEXT
An issue of long-standing personal interest has been the nature of human consciousness, the mind-body problem and the question of free will. All of those are entangled with each other, at least according to my understanding. Each of those issues has its own large body of scientific and philosophical literature devoted to understanding and characterizing it. This is an area of inquiry that the human mind is not well adapted to dealing with. Most of what goes on in our heads, > 99.9%, is unconscious and reality-distorting. What little is consciously perceived, < 0.1%, is perceived as factual and rational. That makes this a very tricky area for research. People easily become self-deluded and wind up going down rabbit holes to dead ends. Our brains and minds did not evolve to be adept at this kind of self-inquiry. Progress is slow and error-prone.


The assertion of a conscious machine
The Washington Post reports that a Google engineer is claiming that his AI software is close to achieving consciousness. WaPo writes:
SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA [Language Model for Dialogue Applications], Google’s artificially intelligent chatbot generator, and began to type.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”  
Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The free online class Minds and Machines, taught by philosopher Alex Byrne, makes a persuasive case that hardware and software can never be conscious or sentient. The arguments there are grounded in logic, and I found them convincing. This 1:16 video explains some of the logic behind the well-known Chinese Room thought experiment, devised by philosopher John Searle.


[Embedded video: John Searle's Chinese Room thought experiment, 1:16]

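For readers who think in code, here is a minimal sketch of Searle's point. Everything in it is invented for illustration: the program answers Chinese questions purely by matching input symbols to output symbols in a rule book, and however fluent its replies, there is no place in it where understanding could plausibly reside.

```python
# A toy "Chinese Room": the program replies by pure symbol lookup,
# just as Searle's person in the room matches symbols with a rule
# book. Nothing here understands Chinese, or anything else.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你有意识吗？": "当然有！",      # "Are you conscious?" -> "Of course!"
}

def room(symbols: str) -> str:
    """Return the rule book's canned reply for the input symbols."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你有意识吗？"))  # fluent output, zero understanding
```

A real chatbot's rule book is statistical and vastly larger, but Searle's claim is that running any program is, in the end, no more than this kind of symbol shuffling.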
One can reasonably wonder whether Lemoine is aware of arguments like the Chinese Room, which suggest his AI program will never be conscious or sentient in the sense that individual humans are, with one caveat.

The caveat is progress in brain-machine interface (BMI) technology, which links human minds with machines and AI software. Research there has shown that when a brain is connected to a machine running self-teaching AI software, the software learns from the brain and the brain learns from the AI. AI has learned to translate brain electrical patterns and communicate thoughts over the internet between machines and humans, humans and rats, humans and other humans, and between mice. As BMI technology advances, there could be some sort of machine-human melding in which the machine component becomes so highly responsive to the brain it is linked to that it comes very close to sentience. But even there, Chinese Room reasoning argues that the machine is just very, very good at mimicking a human, but still not conscious or sentient.


[Image: a BMI technology schematic]
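To make the claim that AI can "translate brain electrical patterns" concrete, here is a deliberately toy sketch of what sits at the core of a BMI decoder: a model that maps recorded signal features to an intended meaning. Everything below is a simplifying assumption; the data are synthetic, and real decoders use far richer recordings and models.

```python
import numpy as np

# Hypothetical sketch: decode an intended command ("left" vs. "right")
# from simulated neural-signal features, using logistic regression
# trained by plain gradient descent.

rng = np.random.default_rng(0)
n, d = 200, 8                            # trials per class, feature channels
X_left = rng.normal(-0.5, 1.0, (n, d))   # simulated "left" trials
X_right = rng.normal(0.5, 1.0, (n, d))   # simulated "right" trials
X = np.vstack([X_left, X_right])
y = np.array([0] * n + [1] * n)          # 0 = left, 1 = right

w, b = np.zeros(d), 0.0                  # decoder parameters
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(right)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print("training accuracy:", np.mean(pred == y))  # well above chance
```

Note that the decoder "understands" nothing about left or right; it is the same symbol-to-symbol mapping the Chinese Room describes, which is the point of the caveat above.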


But if one goes one step further and links a highly human-adapted AI machine to sophisticated sensors that closely mimic human nerves and senses in sensitivity and electrical or other outputs [eyes, ears (hearing and balance), nose, skin (touch, temperature, pressure, movement), tongue], how would that machine act, and what would it be like compared to a human mind, after it was disconnected from the human(s) it learned from?

My recollection is that engineers do not fully understand what is happening when AI software learns. Strictly speaking, the software does not rewrite its own code as it learns; rather, it adjusts millions of internal numerical parameters (weights) in ways that are hard to interpret. Something is going on, and it could be a non-living mimic of the neuronal plasticity that characterizes the human brain. Would that constitute some form of consciousness or sentience, even if not human? Or does Chinese Room logic forever bar that possibility?
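To make the plasticity analogy concrete, here is a minimal sketch, with all sizes and rates invented for illustration, of a crude Hebbian update rule, the textbook analogue of "neurons that fire together wire together." The program text never changes while the system learns; only the numbers stored in w do.

```python
import numpy as np

# Illustrative only: a crude, unnormalized Hebbian update rule.
# "Learning" here rewrites no program text at all; it only changes
# the numbers in the weight vector w. That is the sense in which a
# trained network can mimic synaptic plasticity without any
# self-rewriting code.

rng = np.random.default_rng(1)
n_inputs = 5
w = np.zeros(n_inputs)        # "synaptic" weights, initially silent
eta = 0.01                    # learning rate (arbitrary choice)

for _ in range(200):
    x = rng.random(n_inputs)  # presynaptic activity on 5 inputs
    y = x @ w + 2.0 * x[0]    # postsynaptic response; input 0 drives it
    w += eta * y * x          # Hebb: co-active pairs are strengthened

# Input 0 consistently drives the response, so its weight grows most.
# (Unnormalized Hebbian growth diverges if run long; real learning
# rules add normalization or use gradient descent instead.)
print(np.round(w, 2))
```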
