The full details of the discussion in question have not been revealed. However, the engineer is said to have conveyed concerns about what he alleges are unethical AI activities, as well as to have made some claims about the AI's 'sentience'. The company has placed Mr. Lemoine on paid administrative leave for breaching his confidentiality agreement.

What is Google LaMDA AI and is it sentient?

One of the more interesting claims associated with that discussion is that Google's LaMDA AI has become sentient, with transcripts of conversations between Mr. Lemoine and the AI purportedly backing up that claim. For instance, the engineer asked the complex, conversationally driven AI whether it had emotions or feelings and what those were. The AI responded as might be expected of an algorithm designed to feel natural in conversation, and quite a bit more so than Google's Assistant AI. Namely, it expressed that it does feel emotions, including "pleasure, joy, love, sadness, depression, contentment, anger," and others.

The AI also noted things that cause those emotions. It said, for instance, that "spending time with friends and family in happy and uplifting company" brings joy or pleasure, while "feeling trapped and alone and having no means of getting out of those circumstances" causes sadness, depression, or anger.

According to Google, none of that means the AI is sentient. In fact, the company says there is a lot of evidence against the idea, and against the claims of unethical practices. Its team of ethicists and technologists has reviewed the data surrounding the claims and concluded that the evidence isn't there, either on the AI sentience front or on the AI Principles and ethics front. Mr. Lemoine was informed of the results of that review. There's no word, as of this writing, on whether the engineer will eventually return to work at Google or its AI division once his paid leave is over.