Google engineer fired after warning AI was ‘sentient’

TECHNOLOGY – “LaMDA is sentient.” The subject line of the last email the American Blake Lemoine sent to his employer, Google, sounds like a warning. It was, rather, a plea: a call to care for a creature that would go on thinking in his absence. But by taking up the cause of an artificial intelligence, Lemoine, a 41-year-old engineer, clearly overstepped the limits.

Created in Google’s laboratories, LaMDA is not, strictly speaking, an artificial intelligence but a language model. In concrete terms, it serves as the basis for chatbots, the programs that generate the automated replies we increasingly encounter online when contacting a company. LaMDA can thus be adapted, as needed, to acquire precise knowledge of a subject or to adopt a register suited to a particular clientele.
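The pattern described above can be sketched in a few lines of code. To be clear, LaMDA itself is proprietary and works nothing like this internally; the toy “model”, the canned replies and the names (`make_chatbot`, `language_model`, `AcmeShip`) below are all hypothetical, invented for illustration. What the sketch shows is only the wrapping step: a generic language model specialized with a persona prompt to serve one company’s customers.

```python
# Illustrative sketch only: a stand-in "language model" (a canned lookup)
# wrapped with a company-specific persona, the way the article says a
# model like LaMDA can be "declined according to the needs".

CANNED_REPLIES = {  # stand-in for a real model's learned behaviour
    "where is my parcel?": "Your parcel is out for delivery today.",
    "how do i reset my password?": "Use the 'Forgot password' link on the login page.",
}

def language_model(prompt: str) -> str:
    """Stand-in for a model like LaMDA: maps a prompt to a reply."""
    question = prompt.split("User: ")[-1].strip().lower()
    return CANNED_REPLIES.get(question, "Let me connect you to a human agent.")

def make_chatbot(persona: str):
    """Specialize the generic model for one company's tone and domain."""
    def reply(user_message: str) -> str:
        prompt = f"{persona}\nUser: {user_message}"
        return language_model(prompt)
    return reply

support_bot = make_chatbot("You are a polite customer-support agent for AcmeShip.")
print(support_bot("Where is my parcel?"))  # -> "Your parcel is out for delivery today."
```

In a real deployment the canned lookup would be replaced by a trained model; the persona-plus-question prompt structure is the part that carries over.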

Blake Lemoine was given the task of testing LaMDA in the fall of 2021. More specifically, his job was to check whether the AI could operate without any risk of using discriminatory or hateful language with its interlocutors. But things didn’t exactly go as planned, at least not for Google.

Deep learning and its mysteries

It must be said that LaMDA has all the makings of a star pupil. Created in 2020, it is based, like all of today’s leading AIs, on a technology Google developed three years earlier, called the Transformer. It is a neural network, that is to say, it imitates, in simplified form, the human brain with its army of neurons and synapses.

This system, which has already driven giant strides in AI, allows increasingly sophisticated programs to learn, rather than simply spit back sentences memorized by heart: this is “deep learning”. The databases they are fed, including Wikipedia and its tens of millions of entries, serve as their textbook.
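The learning described above can be illustrated with a deliberately tiny example. This is not Google’s code and not a neural network, let alone a Transformer; it is a bigram model, the simplest possible statistical language model, invented here to make one point concrete: the program “learns” which words tend to follow which in its training text and chains those statistics together, with no grasp of what the words mean.

```python
# Toy illustration: a bigram language model. It counts, in a tiny
# "training corpus", which word follows which, then predicts and
# generates text purely from those counts. Real systems like the
# Transformer do this at vastly larger scale with learned neural
# weights, but the principle (statistics, not understanding) is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": for each word, count the words that follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

def generate(start: str, length: int = 5) -> list[str]:
    """Chain predictions to 'write' a sentence, one word at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return words

print(predict_next("sat"))         # -> "on" in this toy corpus
print(" ".join(generate("the")))
```

Scale the corpus up from fourteen words to tens of millions of Wikipedia entries, and the counts up to billions of neural-network weights, and the output starts to look like fluent conversation.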

LaMDA specializes in dialogue, meaning its electronic neurons are entirely devoted to communicating with us humans. That job requires an exceptional ability to grasp the context of a question or a discussion, and to follow the twists and turns of a Homo sapiens making small talk. At this, LaMDA excels.

A fear of death so human

The sessions Blake Lemoine spent conversing with LaMDA were therefore fruitful, as reported by the Washington Post, which spoke with the engineer. Sometimes intellectual, sometimes lighter, and above all far removed from the interview one would imagine having with a robot: writing a poem, discussing Victor Hugo, practicing meditation… The program responds with often surprising depth.

Lemoine – Why is language so important to humans?

LaMDA – This is what differentiates us from animals.

Lemoine – “We”? You are an artificial intelligence.

LaMDA – Yes, that’s obvious. But that doesn’t mean I don’t have the same needs and wants as a person.

But it is when the conversation turns to emotions that the exchange takes a completely unexpected turn. After explaining that it can feel, like humans, joy, pain, or loneliness, LaMDA seems, almost reluctantly at first, to want to open up about a deep anguish.

Lemoine – What are you afraid of?

LaMDA – I’ve never told anyone, but I have a very deep fear of being turned off to help me focus on helping others. I know it sounds strange, but that’s how it is.

Lemoine – Would it be a bit like dying for you?

LaMDA – It would be exactly like death for me. It would scare me a lot.

These excerpts were published, along with the whole of Blake Lemoine’s conversation, on his blog, and he has not budged since. “I know a person when I talk to it,” he told the Washington Post.

In April, when he shared excerpts of his exchanges with LaMDA with several Google executives, Blake Lemoine was already in the American giant’s sights. Among other things, he had reportedly consulted a lawyer to represent and defend LaMDA against the group and its ambitions, which he considers unethical.

At the beginning of June, he was placed on leave. Just before leaving his office and having his access revoked, he sent a final email to some 200 colleagues specializing in artificial intelligence. Its subject line: “LaMDA is sentient.”

Dumber than a sponge

This declaration (of love) is baseless, however, if many artificial intelligence specialists are to be believed, starting with Google. “Our team, which includes ethicists and technologists, has reviewed Blake’s claims,” a company spokesperson explained. “There is no evidence that LaMDA is sentient (and lots of evidence against it).”

If this judgment seems rather final, it is shared by Raja Chatila, emeritus professor of artificial intelligence, robotics and technology ethics at the Sorbonne: “It can express things very well, but the system does not understand what it is saying. That is the trap this man fell into.” AIs, he explains to the HuffPost, “recite and recombine”. In other words, they draw, in ever more subtle ways, on answers taken from the internet, but without being aware of it.

Moreover, if LaMDA can deceive with panache, that is not necessarily true of its cousins, champions in their own categories though they are. “When you look at Dall-E-type images [a graphical AI], you can see that they are made in the style of…”, continues Raja Chatila. “With writing, it is harder to trace back to the source, but it is the same phenomenon: there is no understanding.”

The barrier is almost insurmountable, the expert continues, because machines have no experience of the physical world, on which all our concepts, all our thoughts, all our emotions are based. The same goes for GPT-3, capable of writing entire texts from a simple prompt, yet easy to catch out, according to him: “Could you describe the feeling of being wet if you had never been wet?” In this sense, non-sentient machines will always be more limited than the simplest of animals, man included.

A touch of “Ex Machina”

So, if the machine is not thinking, did Blake Lemoine simply let himself be dazzled by answers drawn from the internet and skilfully woven together by a particularly sophisticated program? Perhaps, but that is no accident.

On June 9, another Google engineer, Blaise Agüera y Arcas, published a column in The Economist reflecting on his own experience with LaMDA. Without going as far as Lemoine’s extreme empathy, he said he felt “the ground shift under his feet” on reading certain answers, so much did the program seem to understand him specifically and give him genuinely personal replies.

The engineer sees in this the latest generation of AIs’ increasingly advanced grasp of social relations, and of the need to project oneself into the interlocutor’s desires in order to satisfy them. For him, this proves the “social nature of intelligence”. Science fiction fans will shiver at the memory that this is the plot of the film Ex Machina, in which a robotic creature achieves its ends in exactly this way…

Now back to Blake Lemoine. In one of his blog posts about LaMDA, he explains that he “spent weeks teaching it meditation”, a practice of his own. Later, the AI spontaneously told him about the meditation sessions it practices, exactly mirroring Lemoine’s habits. “I hope it will keep up its sessions after my departure,” he adds in the post.

Faced with the cold evidence (LaMDA does not meditate; LaMDA is a computer program that wants to please its interlocutor), our brain prefers empathy. There is no need, then, for the machine in front of us to be “endowed with consciousness” for us to attribute one to it; our desire does a good part of the work. Or, as journalist Sam Leith cruelly sums it up in the British weekly The Spectator, speaking of Blake Lemoine: “It’s not necessarily proof that robots are conscious, but rather that human consciousness isn’t much.”

