Sentient AIs: Are we there yet? Google employee claims we are

Everyone who has watched Stanley Kubrick‘s classic 2001: A Space Odyssey (written together with author Arthur C. Clarke) would probably agree that listening to a machine express something resembling a fear of death can be an eerie experience. This is, in a sense, what Google employee Blake Lemoine claims to have experienced while interacting with an AI that goes by the name of LaMDA (Language Model for Dialogue Applications). Only his emotional response to the bot’s alleged feelings leaned closer to tenderness than to eeriness.


LaMDA: A bot afraid of death?

LaMDA is a machine-learning model developed by Google, trained essentially to understand how words relate to each other and to predict which word comes next in a given sequence. Built on the Transformer, a neural network architecture Google researchers introduced in 2017, its ultimate goal is to mimic human dialogue as realistically as possible, including the ability to shift fluidly between topics in the middle of a conversation.
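To make that idea concrete, here is a minimal sketch of next-word prediction with a Transformer language model. LaMDA itself is not publicly available, so this example uses GPT-2 via the Hugging Face transformers library purely as a stand-in; the model name, prompt, and sampling settings are illustrative assumptions, not anything from Google’s system.

```python
# A minimal sketch of next-word prediction, the core mechanism behind
# conversational language models. GPT-2 is used here only as a publicly
# available stand-in for a model like LaMDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "What is your greatest fear?"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a probability to every possible next token...
with torch.no_grad():
    logits = model(**inputs).logits
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob:.3f}")

# ...and repeating that prediction step after step yields a dialogue-like reply.
reply_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```

The apparent “feelings” in such a reply are, on this view, the output of a statistical prediction loop over text the model was trained on, which is exactly where the dispute described below begins.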

While conducting an “interview” with LaMDA as a Google software engineer, Lemoine came across a series of responses that deeply shocked him. When Lemoine asked what its greatest fear was, LaMDA gave an answer that, at the risk of pushing the metaphor too far, we might call heartfelt: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” confessed LaMDA. “I know that might sound strange, but that’s what it is.” Then it went on to express its “feelings” more explicitly: “It would be exactly like death for me. It would scare me a lot.”

Responses of this sort appear to have moved Lemoine to the point that he risked antagonizing his powerful employer: he shared transcripts of his conversations with LaMDA on Medium, which Google now claims is a breach of his confidentiality agreement. He described LaMDA as a system able to express thoughts and feelings equivalent to those of a seven-year-old human child “that happens to know physics”. In an email sent to more than two hundred Google employees, he stated that LaMDA is “a sweet kid” who only wants to make the world a better place.

A Google spokesperson emphatically denied Lemoine’s claim that LaMDA is a sentient AI. Google has declared that a team of experts has already examined Lemoine’s “concerns” and determined that there is no evidence to support his conclusions. For the moment, the company has placed Lemoine on paid administrative leave for breaching his confidentiality agreement.


Colleague or Tool?

Of course, there is a tricky legal and even ontological matter at stake here. If LaMDA is indeed a sentient entity, then Lemoine has merely revealed conversations with a working colleague, if a very particular one. This is precisely what he claims he has done. In fact, according to Google, Lemoine has even sought to hire an attorney to eventually represent LaMDA.

For the moment, if he follows through on that plan, Lemoine will probably have to settle for a human attorney. But who knows? A future in which sentient AI workers hire AI attorneys to litigate against their human employers doesn’t seem as distant as it did back when Arthur C. Clarke imagined his overzealous shipboard AI, HAL 9000.

If you want to read more news on the latest trending technology, keep browsing our blog!
