Drones and unmanned vehicles as weapons: why we should be afraid of hackers
Technologies / December 19, 2019
No one would deny that artificial intelligence can take our lives to a new level. AI is able to solve many problems that are beyond human capability.
However, many believe that a superintelligence will want to destroy us, like Skynet, or begin conducting experiments on people, like GLaDOS from the game Portal. The irony is that artificial intelligence, whether good or evil, can only be created by people.
Researchers from Yale, Oxford, Cambridge, and the company OpenAI have published a report on the malicious use of artificial intelligence. It argues that the real danger comes from hackers: with the help of malicious code, they can disrupt the operation of automated systems under AI control.
The researchers fear that technology created with good intentions will be used to cause harm. For instance, surveillance systems can be applied not only to catch terrorists but also to spy on ordinary citizens. The researchers are also worried about commercial food-delivery drones: they are easy to intercept and load with something explosive.
Another scenario for the destructive use of AI involves unmanned vehicles. Changing just a few lines of code is enough to make the machine start ignoring safety rules.
The scientists believe the threats fall into three categories: digital, physical, and political.
- Artificial intelligence is already being used to probe program code for vulnerabilities. In the future, hackers could create a bot capable of bypassing any protection.
- With AI, a person can automate many processes: for example, controlling a swarm of drones or a group of vehicles.
- With technologies such as DeepFake, attackers can influence a state's political life by spreading false information about world leaders via internet bots.
These frightening examples exist only as hypotheses. The study's authors do not call for a complete rejection of the technology. Instead, they believe that governments and large companies should take care of security while the artificial intelligence industry is still in its infancy.
Politicians should study the technology and work together with experts in the field to effectively regulate the creation and use of artificial intelligence.
Developers, in turn, should assess the danger posed by high technology and try to anticipate and prevent its worst consequences. The report calls on AI developers to team up with security experts in other fields and find out whether the principles that ensure the safety of those technologies can also be applied to protecting artificial intelligence.
The full report describes the problem in more detail, but the main point is that AI is a powerful tool. All stakeholders need to study the new technology and make sure it is not used for criminal purposes.