Fragments of Reality
Challenges from malicious use of AI
Published: Monday, 18 May 2020
Artificial intelligence (AI) and machine learning are growing at an unprecedented speed. AI is active in many aspects of our society; it is at the heart of every internet search and every app. One of the recent advances that has made AI more interesting is machine learning, which involves the development and evaluation of algorithms that enable a computer to learn functions from a dataset. Deep learning is the subfield of machine learning that focuses on creating large neural network models capable of making accurate data-driven decisions. Many AI developments can improve our lives, but some will have unintended consequences that threaten important aspects of human life.
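The core idea of machine learning, extracting a function from data, can be sketched in a few lines. The example below is purely illustrative: it fits a straight line to a handful of invented data points using ordinary least squares, then uses the learned function to predict an unseen value.

```python
# Minimal illustration of "learning a function from data":
# fit y = a*x + b by closed-form ordinary least squares.
# The data points below are invented for illustration only.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy "dataset": points generated by the rule y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

a, b = fit_line(xs, ys)
print(a, b)          # the learned function: y = 2.0x + 1.0
print(a * 10 + b)    # prediction for an unseen input, x = 10 -> 21.0
```

A real machine learning system differs only in scale: more data, richer function families (such as the deep neural networks mentioned above), and iterative rather than closed-form fitting.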
In recent times, the threat from the malicious use of AI (MUAI) has been gaining importance, particularly in the targeted psychological destabilization of political systems and the system of international relations. This factor sets new requirements for ensuring international psychological security (IPS). When the question of cybersecurity arises, some scientists ask whether humans could simply be taken out of the loop, because often the problem is in the chair and not in the computer (the PICNIC problem: "problem in chair, not in computer").
In 2019, an international group of experts on research into IPS threats through MUAI was formed to collaborate on joint research, conferences and scientific seminars. The group members formed a panel, "The Malicious Use of Artificial Intelligence and International Psychological Security", at the Second International Conference on Information and Communication in the Digital Age: Explicit and Implicit Impacts. A monograph titled Strategic Communication in EU-Russia Relations: Tensions, Challenges and Opportunities (edited by Evgeny Pashentsev) has been prepared for publication by Palgrave Macmillan. This essay presents some of the research findings of an international group of experts from France, Romania, Russia and Vietnam, and focuses on the risks of MUAI by state and non-state actors to destabilize the psychological stability of society, as well as on relevant activities to neutralize such threats.
Evgeny N. Pashentsev, a leading researcher and Professor at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation and at Saint Petersburg State University, in his paper titled 'The levels of MUAI and IPS', suggests that certain objective and subjective negative factors and consequences of AI development may actually threaten IPS. He opined that these threats are artificially created, and identified three levels of threat. At the first level, MUAI threats to IPS are associated with a deliberately distorted interpretation of the circumstances and consequences of AI development for the benefit of antisocial groups. At the second level, MUAI is aimed primarily not at managing target audiences in the psychological sphere but at committing other malicious actions, for example the destruction of critical infrastructure.
At the third level, MUAI aims primarily at causing damage in the psychological sphere. This level deserves particularly close attention because it directly targets IPS. The author maintains that "the impacts of the first two levels of threats to the IPS affect human consciousness and behavior to varying degrees. However, the impact of the third level can at a certain stage of development facilitate the influence or control by egoistic groups over public consciousness; this can result in sudden destabilization of the situation in a particular country or the international situation as a whole, especially in the time of coronavirus pandemic and its more and more dangerous consequences on global economy, social stability and international security."
In her paper titled 'How do terrorists prepare to use artificial intelligence technologies? Psychological aspect of the problem', Darya Bazarkina, a researcher from Saint Petersburg State University, suggests that the development of advanced technologies, including AI, significantly increases their availability not only to state institutions but also to a number of non-state actors, as well as to a wide range of potentially dangerous subjects of world politics. Machine learning technologies are becoming increasingly available. Darya identified new target audiences, such as young people who are passionate about technology, ICT professionals, and a wide audience of people interested in political issues, science fiction literature and advanced technologies.
Darya further suggests that if terrorists gain possession of a predictive analytics mechanism, they may stage large-scale terrorist acts during periods of social unrest. To prevent such a situation, it is advisable for state and supranational agencies to make wide use of predictive analytics themselves, preventing social unrest through timely social, economic and political measures and thereby achieving social stability in the long term.
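The early-warning logic behind such predictive analytics can be illustrated schematically. The sketch below is a deliberately simplified toy, not any agency's actual method: it smooths a weekly "unrest index" (invented numbers) with an exponential moving average and raises an alert when the latest reading jumps well above the trend.

```python
# Toy early-warning sketch: smooth a time series with an exponential
# moving average (EMA) and flag a sudden spike above the trend.
# The "unrest index" values are invented for illustration only.

def ema(values, alpha=0.5):
    """Exponentially weighted average: recent values count more."""
    avg = values[0]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
    return avg

def alert(series, factor=1.5):
    # Alert if the newest observation exceeds the trend of the
    # preceding history by the given factor.
    baseline = ema(series[:-1])
    return series[-1] > factor * baseline

calm  = [10, 11, 9, 12, 11, 10]   # stable: no alert
spike = [10, 11, 9, 12, 11, 30]   # sudden jump: alert
print(alert(calm))   # False
print(alert(spike))  # True
```

Real predictive systems use far richer data and models, but the principle is the same: estimate a baseline from past observations and act on significant deviations before a crisis escalates.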
Marius Vacarel from Romania, in his paper titled 'Malicious use of artificial intelligence in electoral campaigns: psychological manipulation and political risks', argued that the disproportion between the huge information-processing capacity of artificial intelligence and that of the human brain favours strong psychological manipulation, because AI never tires and is always ready to find the best argument to convince any mind, whatever its level of education. Marius suggests that the ethical dimension of life and of artificial intelligence must take higher priority, because only morality can protect people against the abuse of power. Creating new techniques for the genuine use of artificial intelligence along an ethical road is therefore needed.
In a paper titled 'Malicious use of artificial intelligence in the political area: the case of deepfakes', Konstantin Pantserev from Saint Petersburg State University observed that contemporary psychological warfare has a number of instruments, including deepfakes, in which a human image is synthesized using AI algorithms. At first, deepfakes appeared for entertainment: special software based on artificial intelligence offers the opportunity to create clones that look, speak and act just like their templates. Konstantin opined that the potential for deepfakes to be used maliciously is growing, whereby one creates a clone of a well-known figure and manipulates his or her words. He also proposed that solving this challenge will only be possible by combining technological and legislative methods.
At the legislative level, it is necessary to elaborate a legal understanding of the malicious use of deepfakes and of who should be responsible for detecting and blocking the toxic content. At the same time, a workable AI-based algorithm aimed at quickly identifying and blocking deepfakes created for malicious purposes should be developed. The difficulty is that no algorithm yet exists that can detect deepfakes with 100% accuracy, so distinguishing true information from fake while navigating the information space remains a serious challenge.
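Because no detector is perfectly accurate, automated blocking is usually combined with human review. The schematic below illustrates that triage pattern; the artifact score is a stand-in for the output of a trained deepfake classifier, and the thresholds and video titles are invented for illustration.

```python
# Schematic triage pipeline for suspected deepfakes: block only when the
# (hypothetical) detector is confident, escalate borderline cases to a
# human moderator, and allow the rest. All numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    artifact_score: float  # 0.0 = looks authentic, 1.0 = clearly synthetic

BLOCK_THRESHOLD = 0.9    # confident enough to block automatically
REVIEW_THRESHOLD = 0.5   # uncertain: escalate to a human moderator

def triage(video: Video) -> str:
    if video.artifact_score >= BLOCK_THRESHOLD:
        return "block"
    if video.artifact_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

videos = [
    Video("press briefing", 0.12),
    Video("viral speech clip", 0.97),
    Video("interview excerpt", 0.64),
]
for v in videos:
    print(v.title, "->", triage(v))
# press briefing -> allow
# viral speech clip -> block
# interview excerpt -> human_review
```

The design reflects the point made above: since 100% accuracy is unattainable, the system's job is to route uncertainty to humans rather than to decide everything automatically.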
Two researchers from Vietnam, Phan Cao Nihat Anh and Dam Van Nhich, in their paper titled 'Malicious use of artificial intelligence and psychological security in South-east Asia', suggest that there is no regional security system in Southeast Asia that would protect and serve all the countries of the region. The Vietnamese researchers argue that the use of AI to destabilize international relations through targeted high-tech informational-psychological impacts on people is an obvious danger, and they opined that all countries in the region should closely cooperate in the field of AI in order to control, prevent and minimize the risks of MUAI.
In a paper titled 'Artificial Intelligence and Geopolitical Competition: What is the new Challenge and Role for Europe regarding MUAI and IPS in a Context of Great Power Competition?', Pierre-Emmanuel Thomann from France, President of Eurocontinent, suggests that "the world is facing an increasing geopolitical fragmentation with the multiplication of actors, the reinforcement of the power gap between states and the changing of previous geopolitical hierarchies.
Moreover, geopolitical confrontation is increasingly playing out in the theatre of hybrid warfare including psychological warfare. In this context, digitalization associated with the emergence of AI is being used as a geopolitical weapon through the destabilization of IPS." The author observed that the European Union's main focus regarding AI is on its ethical and economic aspects, as reflected in its main communication strategy. This is in line with the EU's promotion of 'multilateralism' as an international doctrine, but the author questioned whether this is sufficient to deal with MUAI and the threats to IPS in a context of great power rivalry.
It is clear from the papers discussed above that big data is crucial to combating the public health crisis of the current coronavirus pandemic. How AI and big data can in future contribute to a positive outcome for international cooperation in times of acute global crisis, when societies face destabilization from natural, industrial or biological risks as well as threats from hostile actors, remains an open challenge. There is no easy solution for our future security in cyberspace, but the work of these researchers shows some light at the end of the tunnel. In addition, we should consider the moral and ethical issues raised by the use of artificial intelligence in our daily lives.
The writer is a UK-based academic, environmentalist, columnist and author.