By Tiffany Wertheimer
Google has fired one of its engineers who said the company's artificial intelligence system has feelings.
Last month, Blake Lemoine went public with his theory that Google's language technology is sentient and should therefore have its "wants" respected.
Google, along with several AI experts, denied the claims, and on Friday the company confirmed he had been sacked.
Mr Lemoine told the BBC he is getting legal advice, and declined to comment further.
In a statement, Google said Mr Lemoine's claims about the Language Model for Dialogue Applications (Lamda) were "wholly unfounded" and that the company had worked with him for "many months" to clarify this.
"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," the statement said.
Lamda is a breakthrough technology that Google says can engage in free-flowing conversations. It is the company's tool for building chatbots.
Blake Lemoine started making headlines last month when he said Lamda was showing human-like consciousness. It sparked discussion among AI experts and enthusiasts about the advancement of technology that is designed to impersonate humans.
Mr Lemoine, who worked for Google's Responsible AI team, told The Washington Post that his job was to test if the technology used discriminatory or hate speech.
He found Lamda showed self-awareness and could hold conversations about religion, emotions and fears. This led Mr Lemoine to believe that behind its impressive verbal skills might also lie a sentient mind.
His findings were dismissed by Google and he was placed on paid leave for violating the company's confidentiality policy.
Mr Lemoine then published a conversation he and another person had with Lamda, to support his claims.
"An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers." https://t.co/uAE454KXRB
His firing was first reported by Big Technology in its newsletter.
In its statement, Google said it takes the responsible development of AI "very seriously" and published a report detailing this. It added that any employee concerns about the company's technology are reviewed "extensively", and that Lamda has been through 11 reviews.
"We wish Blake well", the statement ended.
Mr Lemoine is not the first AI engineer to go public with claims that AI technology is becoming more conscious. Also last month, another Google employee shared similar thoughts with The Economist.
© 2022 BBC