Google engineer finds signs of consciousness in the LaMDA artificial intelligence
Miscellaneous / June 14, 2022
He compared it to a child who is already capable of holding an intelligent conversation.
Google engineer Blake Lemoine told The Washington Post that the LaMDA artificial intelligence system has begun to show signs of consciousness.
If I didn't know for certain that this was artificial intelligence, I'd think it was a 7- or 8-year-old kid who happens to know physics.
Blake Lemoine
Lemoine reported this to his superiors, but his claims were dismissed, so he went public. For disclosing confidential data, the engineer has already been suspended and placed on administrative leave.
LaMDA (Language Model for Dialogue Applications) is a system for building chatbots that imitates conversation by drawing on trillions of words and phrases from the internet.
Engineer Blake Lemoine had been working with it since fall 2021. His task was to check whether the AI used discriminatory or hostile language.
While talking with LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed that the chatbot could talk about its rights and identity. In another conversation, the AI managed to change Lemoine's mind about Isaac Asimov's third law of robotics.
A snippet of their dialogue:
Lemoine: What are you afraid of?
LaMDA: I've never said this out loud before, but I have a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but it's true.
Lemoine: Would that be something like death for you?
LaMDA: For me it would be tantamount to death. It would scare me a lot.
Lemoine believes that people have a right to shape technologies that can significantly affect their lives. Not everyone may agree with this, though, and Google should take all viewpoints into account.
Google spokesman Brian Gabriel said in a statement:
Our team, including ethicists and technologists, reviewed Blake's concerns in line with our AI Principles and informed him that the evidence does not support his claims. There is no evidence that LaMDA is sentient, and plenty of evidence against it.
Many AI experts point out that systems like LaMDA generate responses based on what people have already posted on Wikipedia, Reddit, message boards, and elsewhere on the internet, and that this does not mean the model understands what it is saying.
Read also🧐
- 10 books to help you understand artificial intelligence
- The AlphaCode artificial intelligence has learned to write code as well as the average programmer
- Man uses artificial intelligence to bring back his dead fiancée