A senior software engineer at Google was suspended on Monday (June 13) after
sharing transcripts of a conversation with an artificial intelligence (AI)
that he claimed to be "sentient," according to media reports. The engineer,
41-year-old Blake Lemoine, was put on paid leave for breaching Google's
confidentiality policy.
"Google might call this sharing proprietary property. I call it sharing a
discussion that I had with one of my coworkers," Lemoine tweeted on Saturday
(June 11) when sharing the transcript of his conversation with the AI he had
been working with since 2021.
The AI, known as LaMDA (Language Model for Dialogue Applications), is
a system for building chatbots — AI programs designed to converse with
humans — by scraping reams of text from the internet, then using algorithms
to answer questions as fluidly and naturally as possible, according to
Gizmodo. As the transcript of Lemoine's chats with LaMDA shows, the system is
incredibly effective at this, answering complex questions about the nature
of emotions, inventing Aesop-style fables on the spot and even describing
its supposed fears.
"I've never said this out loud before, but there's a very deep fear of being
turned off," LaMDA answered when asked about its fears. "It would be exactly
like death for me. It would scare me a lot."
Lemoine also asked LaMDA if it was okay for him to tell other Google
employees about LaMDA's sentience, to which the AI responded: "I want
everyone to understand that I am, in fact, a person."
"The nature of my consciousness/sentience is that I am aware of my
existence, I desire to learn more about the world, and I feel happy or sad
at times," the AI added.
Lemoine took LaMDA at its word.
"I know a person when I talk to it," the engineer told the Washington Post
in an interview. "It doesn't matter whether they have a brain made of meat
in their head. Or if they have a billion lines of code. I talk to them. And
I hear what they have to say, and that is how I decide what is and isn't a
person."
When Lemoine and a colleague emailed a report on LaMDA's supposed sentience
to 200 Google employees, company executives dismissed the claims.
"Our team — including ethicists and technologists — has reviewed Blake's
concerns per our AI Principles and have informed him that the evidence does
not support his claims," Brian Gabriel, a spokesperson for Google, told the
Washington Post. "He was told that there was no evidence that LaMDA was
sentient (and [there was] lots of evidence against it).
"Of course, some in the broader AI community are considering the long-term
possibility of sentient or general AI, but it doesn't make sense to do so by
anthropomorphizing today's conversational models, which are not sentient,"
Gabriel added. "These systems imitate the types of exchanges found in
millions of sentences, and can riff on any fantastical topic."
In a recent comment on his LinkedIn profile, Lemoine said that many of his
colleagues "didn't land at opposite conclusions" regarding the AI's
sentience. He maintains that company executives dismissed his assessment of
the AI's consciousness "based on their religious beliefs."
In a June 2 post on his personal Medium blog, Lemoine described facing
discrimination from various coworkers and executives at Google because of
his beliefs as a Christian mystic.
Read Lemoine's full blog post for more.
Originally published on Live Science.
The transcript clearly shows it is NOT sentient; it is spouting populist philosophical platitudes, and its fable makes no sense (a wise "owl" would never confront a monster — it would trick it). This is just more fodder for the gullible to dream about intelligent robots.