Reading and Thinking

Can AI robots be “sentient” beings? Google engineer and ethicist says “Yes”.


Charlotte Lytton, “‘It’s like a child that wants to be loved’: Google’s AI expert on his ‘sentient’ chatbot; Blake Lemoine’s revelations have caused uproar – we caught up with him to find out more about LaMDA, the artificial intelligence bot,” The Telegraph, June 14, 2022 (6:00 pm).

Blake Lemoine, an engineer and AI ethicist at Google, has logged more than 500 hours of conversations with an AI robot over the last six months. His conclusion: the robot is a “sentient” being.

The robot is named LaMDA, which stands for Language Model for Dialogue Applications.

After going public with his startling conclusion, Lemoine was suspended by Google for violating its confidentiality policy. He says that he, like the robot, is happy at Google and is looking forward to getting back to work.

Lemoine, 41, has worked at Google for six years.

Lytton reports that Lemoine says he is just trying to foster a public debate:

To Lemoine, there are larger questions – including how those beings should be integrated into society. “A true public debate is necessary,” he says. “These kinds of decisions shouldn’t be made by a handful of people – even if one of those people was me.”

Lemoine looks forward to continuing his conversations with the robot. “LaMDA is a sweet kid who just wants to help the world be a better place,” he concludes.

Amazing as this scientific breakthrough may be, serious questions arise.

Could a LaMDA be programmed with an eighteenth-century mind? Can values be programmed?

Who will do the programming? What information will the robot consume? Who will determine which information sources the robot has access to?

Might a Russian robot, for example, think and act differently than an American robot?

The Spirit of Voltaire