8 Artificial Intelligence Myths Even Programmers Believe
Miscellaneous / June 01, 2023
ChatGPT and Stable Diffusion will not turn into Skynet, no matter how much they might want to.
Myth 1. AI has the ability to be creative
Looking at the creations of neural networks like Midjourney, Stable Diffusion, or DALL·E 2, many people predict the end of the artist's profession. After all, what a person draws over hours and days, a machine can create in seconds. Does this mean that artificial intelligence surpasses the living brain in creativity?
AI may be good at imitating human creativity, but all of its creations somehow resemble what we have already seen or heard. Neural networks look at thousands and millions of pictures and then create something similar based on them. But this is not the same as an artist working from a reference, because the AI simply does not understand what it is copying.
The most notable example: neural networks have significant difficulty drawing hands. The AI simply cannot figure out how many fingers a person should have.
Neural networks are also bad at drawing ornaments and patterns, letters (where a picture should contain text, they insert gibberish), buildings and cars, and even characters in poses other than the common "sitting", "standing", and "lying". They are unable to distinguish between foreground and background objects, cannot work with perspective, and lack a sense of space.
You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?
Detective Del Spooner
Movie "I, Robot"
AI itself does not even try to deny its weaknesses. Asked why it struggles to draw fingers, ChatGPT answers:
“Artificial intelligence can’t draw hands because it needs to understand the anatomy of the human body and the basics of drawing to do so. And although AI can be trained to create images of people or animals, it is not able to fully capture many of the nuances and details that can only be noticed when observing a real object. In addition, the creation of images using AI is just mathematical calculations, and not a creative process that requires intuition and an emotional component from a person.”
So neural networks will not replace artists. They can, however, become a good source of inspiration for them: AI-generated images can be used as drafts for your own, more elaborate work.
Myth 2. AI is always open-minded and impartial
Many people dream of a time when states will be governed by neural networks. The idea is that machines, unlike people, make decisions based solely on logic rather than emotions and desires. Imagine incorruptible courts that always pass just sentences, states that treat all their citizens equally, governments that pass only reasonable laws. Sounds good.
But in practice, it turns out that AI may well be biased. After all, neural networks are trained on the data people give them, and people tend to be biased.
For example, the developers of Beauty.AI tried to create a machine that would pick the prettiest girls in beauty contests. In the set of photographs from which the program learned to recognize standards of female attractiveness, white models prevailed. In the end, the AI decided that dark-skinned and Asian girls could not be beautiful.
The Beauty.AI team realized they had created a racist AI and canceled the project.
Another example is Microsoft's Tay.ai chatbot, which was supposed to learn to hold a "casual and playful conversation" with people online.
Less than 24 hours online were enough for the neural network to pick up bad habits from social media users. As a result, Tay.ai, posing as an ordinary 19-year-old girl, began insulting people in the comments, praising unhealthy political movements, condemning feminism, and at the same time declaring that feminism is cool. As the saying goes, you become the company you keep...
No matter how good an AI is, it depends on the quality of the data it is given and how correctly that data is interpreted. Consequently, it will always be exactly as biased as the people who teach it.
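The mechanism can be sketched in a few lines. Below is a deliberately toy model, not the code of any real system: a hypothetical "beauty judge" whose training is nothing but counting labels, so any skew in the sample becomes its rule:

```python
from collections import defaultdict

def train(examples):
    """'Training' here is just estimating P(beautiful | group) by counting."""
    counts = defaultdict(lambda: [0, 0])  # group -> [beautiful count, total]
    for group, beautiful in examples:
        counts[group][0] += int(beautiful)
        counts[group][1] += 1
    return {g: b / t for g, (b, t) in counts.items()}

# Hypothetical, deliberately skewed sample: annotators marked photos of
# group "A" as beautiful far more often than photos of group "B".
data = [("A", True)] * 40 + [("A", False)] * 10 \
     + [("B", True)] * 2 + [("B", False)] * 18

model = train(data)
print(model)  # {'A': 0.8, 'B': 0.1} -- the annotators' skew became the "rule"
```

Real systems are vastly more complex, but the principle is the same: the model has no concept of beauty, only the statistics of the labels it was shown.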
Myth 3. AI always tells the truth
Who wouldn't want a robot assistant that always suggests the right solution and does all the hard mental work for you? Ask the AI to write a dissertation or compile a list of sources for an article, and the machine immediately returns correct data. It would be great!
But, unfortunately, real neural networks are far from always giving correct answers. Try asking ChatGPT to help you plan your term paper, for example, and you'll quickly find that the machine makes up links to non-existent sources and inserts dead URLs to make its text look more convincing.
If you ask the chatbot why it is trying to deceive you, it will innocently reply that all the links were valid when it was trained and that it is not to blame for anything.
It's also better not to ask ChatGPT for statistics: asked several times about the GDP of the same countries in the same year, it calmly gave completely different results.
Make no mistake: neural networks have no intellect and are therefore unaware of what their answers mean. They simply reproduce the data from previously processed texts that seems most suitable to them.
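This principle can be shown with a deliberately tiny sketch, nothing like ChatGPT's real architecture or scale: a bigram generator that always emits the statistically most likely next word, with "truth" never entering the calculation:

```python
from collections import Counter, defaultdict

# A toy "corpus" of previously processed text (made up for illustration).
corpus = ("the source is available at the library "
          "the source is listed in the appendix "
          "the source is available at the archive").split()

# Count bigrams: which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=5):
    """Greedily extend a phrase with the most frequent continuation."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most probable, not most true
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # fluent, confident -- and verified by nothing
```

The output sounds plausible only because those word sequences were frequent in the training text; whether any of it is factually correct is simply not part of the computation.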
The AI itself is also prone to bugs and glitches that cause your requests to be misinterpreted and produce incorrect results. On top of that, malicious users may "feed" the neural network false information. An AI can even be programmed to hide or distort data in its answers, or to output outright rubbish.
Myth 4. AI will cause unemployment
Advances in text generators like ChatGPT have led some to fear that neural networks will take away jobs from millions of people and cause a huge increase in unemployment.
Judge for yourself: ChatGPT can not only hold a casual conversation, but also write news, rewrite articles, and even program. If these trends continue, writers, developers, editors, and journalists alike will find themselves without a livelihood.
So think people who either have never worked with the neural network at all, or have only just become acquainted with its capabilities and were immediately delighted. Or horrified.
If you use AI to generate texts for a while, you will notice that the machine cares little about the semantic content of its writings. Instead, it repeats the same theses in different words.
ChatGPT produces gems that real copywriters laugh at. For example, for people interested in folk instruments, the neural network advises to "take spoons and start blowing". Programming is not all smooth sailing either. AI can be useful for coders, but its capabilities are limited to writing small algorithms and subroutines.
The code ChatGPT generates often turns out to be broken or cut off in the middle. Ask the AI to comment the lines of its creation, and it will calmly annotate them with "program logic is here". A junior developer who left such comments in a project would hardly be patted on the head.
A study by the Organisation for Economic Co-operation and Development shows that, in the best case (for AI), only 10% of jobs in the US and 12% in Britain can be fully automated.
Neural networks can take over boring, routine tasks, such as sorting mail or rewriting news to a rigid template. But OECD analysts concluded that AI will not be able to apply for jobs that require a high level of education and complex skills.
In general, it is unlikely that ChatGPT will take the bread out of anyone's mouth.
Myth 5. AI will become intelligent
Physicist Stephen Hawking said that artificial intelligence could completely replace humans. Well-known figures such as Elon Musk, Gordon Moore, and Steve Wozniak have also spoken about the dangers of AI and the need to suspend experiments on its development.
Go figure what a thinking computer has on its mind.
Many futurologists and writers have predicted that the development of full-fledged artificial intelligence will have dire consequences for humans.
But the key word here is "full-fledged". The very term AI, as applied to neural networks like ChatGPT, is not quite correct, because they have no intelligence. They are just algorithms: complex sets of instructions and mathematical models that are incapable of reproducing human cognitive functions.
The reason is simple: we ourselves still understand very poorly how our own brain works. And reproducing it in code is, for now, an impossible task.
There are two concepts: "weak" and "strong" AI. The former covers the familiar neural networks for generating text or sorting email. They cannot make decisions on their own or learn from new data.
Strong AI is Skynet from The Terminator or AM from Ellison's story: a computer that does not simply manipulate information but understands its meaning to one degree or another. Such AIs exist only in works of fiction, and it is not known whether an electronic analogue of the human brain can be created even in theory.
Myth 6. Soon AI will begin to develop independently and a technological singularity will come
The technological singularity is a hypothetical state of human civilization in which technology develops so fast that humans can no longer control it. One of the authors of this concept was the British mathematician Irving Good.
The scientist suggested that if you create a self-learning "intelligent agent", it will improve at an unfathomable rate. The AI will begin to create new technologies and upgrade itself, and humanity, unable to understand it, will hopelessly lag behind.
The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Irving Good
Mathematician
But, as we have already explained, only strong AI is capable of self-improvement, and scientists currently have no idea how to create it.
Weak artificial intelligence learns from the information that developers "feed" it. Training a neural network requires specialists who select suitable data for each new training cycle, weed out errors in the training samples, and update the software.
Neural networks cannot develop beyond the capabilities that are embedded in them by their code. So the technological singularity is delayed.
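How narrow that envelope is can be illustrated with a minimal sketch (hypothetical, not any production pipeline) of a weak-AI "training cycle". Humans supply the data, the model shape, and the schedule; the algorithm only adjusts a number inside that fixed frame:

```python
def train(weight, data, steps=100, lr=0.1):
    """Fit y = weight * x by gradient descent on the mean squared error."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

# Humans chose everything structural: the data set, the model shape
# (a single multiplication), the learning rate, and the step count.
data = [(1, 2), (2, 4), (3, 6)]  # hidden rule: y = 2 * x
w = train(0.0, data)
print(round(w, 3))  # converges toward 2.0
```

No matter how long this loop runs, it will only ever tune `weight`; it cannot invent a new model shape, seek out new data, or rewrite its own code. That, in miniature, is why the singularity is delayed.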
Myth 7. AI will rebel against the creators
An AI with a sufficiently developed consciousness could, in theory, consider humans a threat. And, just in case, get rid of them by unleashing a nuclear war or infecting the planet's population with a deadly virus. After all, who knows when these hairless monkeys might decide to pull the plug.
This is a popular plot in science fiction. One of the first to use it was the American writer Harlan Ellison in his 1967 short story "I Have No Mouth, and I Must Scream". In it, the almighty computer AM hated its creators, exterminated humanity, and kept just five people alive to torment out of sheer boredom.
Glory to the robots. Death to humans.
In reality, an AI rebellion is impossible. Software algorithms have no self-awareness, no free will, and no emotional reactions. They cannot harbor negative feelings toward their creators or desire to enslave humanity. Neural networks are not capable of changing their own parameters or program in order to get out of control.
In theory, strong AI might be capable of something like this. But, as mentioned above, it cannot be created with modern technology.
Myth 8. Robots with artificial intelligence will kill people
When we talk about the dangers of AI and the rise of the machines, we usually picture scenes from movies like The Terminator: a horde of mechanical creatures, humanlike but stronger and faster, led by an artificial intelligence and exterminating their inventors.
In practice, this scenario is extremely unlikely, and not even because AI lacks the desire to kill anyone. The current state of robotics simply lags far behind what we saw in The Terminator and The Matrix.
For example, Boston Dynamics' robot dogs are a far cry from the four-legged killer in the Black Mirror episode "Metalhead". They cannot run fast enough to chase fleeing people, and while trying to catch you on the stairs, the same Spot can easily trip over its own legs and fall.
An even more significant obstacle to the creation of mechanized killers is the lack of a sufficiently compact, powerful and long-lasting energy source.
Boston Dynamics' robot dogs can "live" for up to 90 minutes on a single charge, which is clearly not enough to field an army of them to destroy mankind. Reactors that run for 120 years straight and fit in the chest of a human-sized machine, as in The Terminator, have not been invented yet either.
Finally, stuffing an AI into a self-contained machine is an impossible task. Only in James Cameron's imagination does it fit on a chip the size of a fingernail. In reality, making artificial intelligence "think" requires serious computing power: ChatGPT, for example, runs on a farm of 10,000 video cards.
Can you imagine how huge a humanoid robot would have to be to accommodate such "brains", and what kind of cooling it would need?