Google engineer who claimed AI was sentient put on leave

New York, Jun. 14, (dpa/GNA) - Google has suspended a senior software engineer who claimed artificial intelligence developed by the company has shown signs of sentience.

The engineer claimed the AI can easily pass the Turing test.

Blake Lemoine, an engineer who has been with the Alphabet Inc. subsidiary for around seven years, told The Washington Post that Google’s Language Model for Dialogue Applications, or LaMDA, can engage freely in complex conversations about emotions and other subjects.

During conversations about religion, consciousness and the laws of robotics, the AI described itself as a “person” rather than a technology and asked the company to treat it as an employee rather than as property, saying it wants to serve the human race, according to Lemoine.

The engineer released excerpts of his conversations, which, at times, read like an exchange between a human and an intergalactic species.

The engineer was put on administrative leave on June 6 for going public with his findings without first consulting senior colleagues, which the company said violated its confidentiality policies.

Google spokesperson Brian Gabriel told the Post, “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it). […] These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”

GNA