Google dismisses an engineer who claimed that its AI was sentient

Google (GOOG) has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, citing violations of its employment and data security policies.

Google software engineer Blake Lemoine, who reportedly spent seven years at Alphabet, claimed that a conversational technology called LaMDA had attained a level of consciousness after he exchanged thousands of messages with it. Google confirmed that it first placed Lemoine on leave in June, and said it rejected his claims as "wholly unfounded" only after examining them carefully. In a statement, the company added that it takes the development of AI "very seriously" and remains committed to "responsible innovation."

LaMDA, or "Language Model for Dialogue Applications," is one of the technologies Google has pioneered in the field of artificial intelligence. Systems of this kind analyze vast amounts of text to identify patterns and predict sequences of words in response to written prompts, and the results can be unsettling. "What kinds of things do you fear?" Lemoine asked LaMDA, according to a Google Doc that the Washington Post reported was distributed to Google's top executives in April.

LaMDA answered: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."

The broader AI community, however, says LaMDA is nowhere near consciousness. "Auto-complete, even on steroids, is not conscious," said Gary Marcus, founder and CEO of Geometric Intelligence.
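
To see what Marcus means by "auto-complete," consider the toy Python sketch below. It is illustrative only, not Google's code: it simply counts which word follows which in a tiny sample text and "predicts" the next word from those counts. Systems like LaMDA pursue the same predict-the-next-word objective, but with enormous neural networks trained on vast corpora rather than simple counts.

    from collections import Counter, defaultdict

    # Toy next-word predictor: counts word pairs in a tiny sample text.
    # Purely illustrative; real systems like LaMDA use huge neural networks
    # trained on vast amounts of text, not raw bigram counts.
    corpus = "the model predicts the next word and the next word after that".split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1  # record that `nxt` followed `prev`

    def predict_next(word):
        # Return the word most often seen after `word`, or None if unseen.
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # -> "next" (follows "the" twice in the sample)
    print(predict_next("next"))  # -> "word"

Scaled up by many orders of magnitude, that same mechanism can produce the fluent, emotionally tinged responses that convinced Lemoine, without implying any inner experience.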

Google has experienced internal conflict over its foray into AI before. Timnit Gebru, a pioneer in the field of AI ethics, left Google in December 2020. She claimed that as one of the company's few Black employees, she felt "constantly dehumanized."

Her abrupt departure drew criticism from the tech community, including members of Google's Ethical AI team. In early 2021, Margaret Mitchell, a manager on that team, lost her job after speaking out about Gebru's treatment. Gebru and Mitchell had raised concerns about AI technology, warning that people might mistakenly believe it is sentient.

In a June 6 post on Medium, Lemoine said Google had placed him on paid administrative leave "in connection to an investigation of AI ethics concerns I was raising within the company" and that he might "soon" be let go. In its statement, the company said it was regrettable that Lemoine continued to disregard clear employment and data security policies, including the need to safeguard product information.
