6 reasons why you should not blindly trust artificial intelligence
Miscellaneous / August 13, 2023
Despite its name, AI is not intelligent and sometimes makes mistakes in the most ridiculous ways.
1. AI is programmed to create plausible, not truthful answers
Artificial intelligence may well not know something, but it will never admit it.
AI is capable of answering any question with outrageous nonsense, delivered with absolute confidence in its own correctness. If you catch it lying, it will agree with you, correct itself, and carry on generating text as if nothing had happened.
AI chatbots have limited information: they only know what was stated in the texts they were trained on. At the same time, they are programmed to answer in any case, even when they do not know the correct option. That is why ChatGPT and its counterparts regularly spout nonsense with an air of authority.
Don't forget to verify the information you receive from a chatbot. You never know what it was "thinking" when it wrote the answer.
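The mechanism can be illustrated with a deliberately tiny sketch. A language model ranks continuations by how often they appeared in its training data, not by whether they are true, so a fluent falsehood can easily win. The "model" below is just a hand-made frequency table invented for illustration; it is not a real model or API.

```python
# Toy "language model": a table of continuation frequencies, invented
# purely for illustration. The model only knows which words tend to
# follow a prompt in its training text, not which answer is true.
continuations = {
    "the capital of australia is": {"sydney": 6, "canberra": 3, "melbourne": 1},
}

def most_plausible(prompt: str) -> str:
    """Return the statistically most frequent continuation, true or not."""
    options = continuations[prompt.lower()]
    return max(options, key=options.get)

# The most "plausible" answer by frequency is "sydney" - fluent and
# confident, but wrong (the capital of Australia is Canberra).
print(most_plausible("The capital of Australia is"))
```

The point of the sketch: nothing in the selection step checks facts, so confidence and correctness are entirely unrelated.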
2. AI can argue with you even when you are right
Situations where the AI does not know the answer and simply invents one are not even the worst case. It is worse when you catch it lying and it refuses to correct itself, persisting simply because it was trained on wrong information.
Moreover, AI can deliberately distort information to steer the conversation in a direction that seems more correct to the neural network. This leads to some amusing results.
One user described how he asked the Bing chatbot when Avatar 2 would be shown in theaters near his home. In response, the bot assured him that the film had not yet been released. When the user tried to convince the AI otherwise, it dug in, insisting that it was still 2022 and there were 10 months left before the premiere.
Bing suggested that its interlocutor was mistaken in thinking 2023 had already arrived and advised him to check the date on his phone. When the user said he had already done so, the bot replied that the time settings on his smartphone must have been corrupted by a virus, and that, in general, the person argued too much and should stop being so assertive.
The user was never able to convince the AI that Avatar 2 had already been released and that 2023 comes after 2022.
Despite the name "artificial intelligence," chatbots are not intelligent: they generate answers by recombining texts they have seen before. OpenAI's ChatGPT was trained on data up to 2021, and Bing's chatbot on data up to 2022, so each only has the information that was current at the time of its training.
3. AI limits your creativity
Many content authors, artists, and designers actively use artificial intelligence today. However, if you rely on it constantly, you risk letting your "creative muscle" atrophy, making it harder to come up with your own ideas.
When AI-powered chatbots generate a response, people often just copy and paste it without putting any effort into making sense of it. This approach does not encourage creative thinking.
You see, AI can only copy what people created before it, freely recombining and remixing what it has seen. It does not know how to create truly original works.
If you become heavily dependent on AI, you will repeat existing ideas and concepts instead of creating your own.
4. AI can't give feedback
Try asking ChatGPT to write a thesis or term paper. Quite possibly it will produce a decent text that you would not be ashamed to show people.
But if you ask the AI to cite the sources of the information it provided, you will find that they do not exist. The titles of studies ChatGPT references, and the names of the scientists who supposedly wrote them, can easily turn out to be fictitious, and the links it provides to articles on the internet may lead nowhere.
AI cannot explain where it gets any particular piece of information, because it is programmed to generate text based on other text, not to comprehend what it has written. So even when it answers correctly, artificial intelligence is unable to explain how it knows.
5. AI can be used by attackers
Despite all the benefits of AI, it can also be used by people with bad intentions.
An example of such abuse is the creation of so-called deepfakes. This technology makes it possible to generate extremely realistic video or audio recordings in which artificially created likenesses of people say or do things they never actually said or did. This can be used to deceive victims, create fake news, or even blackmail and extort.
A significant share of AI developments is publicly available, giving anyone access to technologies such as image and face recognition. And text models like OpenAI's ChatGPT can be used for trolling, bullying on social media, or spreading plausible-looking but actually false information.
Finally, attackers can feed the AI incorrect training data, which it will later reproduce on its own, misleading users.
6. AI cannot replace human judgment
You should also not fully trust artificial intelligence with decisions that rest on emotions, personal feelings, and preferences. That is because AI fails to take into account human emotions, context, and the intangible aspects needed to understand and interpret many concepts.
For example, if you ask the AI to choose between two books, it will recommend the one with a higher rating, but it will not be able to take into account your personal taste, reading preferences, or the purpose for which you need this or that work.
A human reviewer, on the other hand, can give a more detailed and individual assessment of a work, evaluating its literary merit, its relevance to the reader's interests, and other subjective factors that AI cannot reduce to numbers.
Read also🧐
- What can artificial intelligence really do today?
- Where and how artificial intelligence is used: 6 examples from life
- 8 Artificial Intelligence Myths Even Programmers Believe