Google Engineer Claims AI Chatbot Is Sentient: Why That Matters

Tue, 12 Jul 2022 03:45:00 GMT
Scientific American - Technology

Is it possible for an artificial intelligence to be sentient?

"I want everyone to understand that I am a person," wrote LaMDA in an "Interview" conducted by engineer Blake Lemoine and one of his colleagues.

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months.

In April, Lemoine explained his perspective in an internal company document, intended only for Google executives.

After his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm, and Google placed him on administrative leave.

Many technical experts in the AI field have criticized Lemoine's statements and questioned the scientific validity of his claims.

Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.

"If we refer to the capacity that Lemoine ascribed to LaMDA-that is, the ability to become aware of its own existence, there is no 'metric' to say that an AI system has this property."

"Even considering the theoretical possibility of making an AI system capable of simulating a conscious nervous system, a kind of in silico brain that would faithfully reproduce each element of the brain," two problems remain, Iannetti says.

"If a machine claims to be afraid, and I believe it, that's my problem!" Scilingo says.

One alternative, Scilingo suggests, might be to measure the "effects" a machine can induce on humans, that is, "how sentient that AI can be perceived to be by human beings."
