Tate Nurkin discusses the intricacies of AI technologies applied to the military domain, gives an overview of AI-powered military programs and what they mean for the future of warfare, and touches on the ethical issues involved.
Tate Nurkin is the founder of OTH Intelligence Group and a Non-Resident Senior Fellow at the Atlantic Council.
Interviewer: Arnaud Sobrero
This is the ITSS Verona Member Series Video Podcast by the Cyber, AI and Space Team.
ITSS Verona - The International Team for the Study of Security Verona is a not-for-profit, apolitical, international cultural association dedicated to the study of international security, ranging from terrorism to climate change, from artificial intelligence to pandemics, from great power competition to energy security.
The use of artificial intelligence may change how war is conducted
In 2020, amidst the biggest pandemic the world had seen since the Spanish Flu of 1918, two ex-Soviet states were battling over an area of just 4,400 km² in the mountainous region of Nagorno-Karabakh. Armenia and Azerbaijan, so close and yet so far, are two mortal enemies sharing a common DNA.
This war, at first, seemed like a faraway regional conflict between two neighboring states, distant from western Europe and even further from the United States. A closer inspection, however, reveals why it deserves far more attention: the conflict illustrates how the extensive use of artificial intelligence-enabled drones can be instrumental in shifting the outcome of a war. The application of artificial intelligence (AI) in the military domain is thus disrupting the way we approach conventional warfare.
'Harpy' and 'Harop' loitering munitions (LM) are autonomous weapon systems produced by Israel Aerospace Industries (IAI), a state-owned aerospace and aviation manufacturer. A loitering munition or 'kamikaze drone' is an unmanned aerial vehicle (UAV) with a built-in warhead that loiters over an area searching for targets. Once a target is located, the LM strikes it, detonating on impact. The significant advantage of these systems is that, during loitering, the attacker can decide when and what to strike; should no target be found, the LM returns to base. In addition, these systems are equipped with machine learning algorithms that can make decisions without human involvement, allowing them to process large amounts of data and act instantly, revolutionizing the speed and accuracy of their actions.
Conducting Warfare through AI – Ethical Implications
Wars fought with lethal autonomous weapon systems (LAWS) equipped with AI are not a vision of a distant future. These weapons are being deployed today, and such 'market disruptors' will once and for all change the way wars are fought. Former CIA Director and retired Gen. David Petraeus claims that “drones, unmanned ships, tanks, subs, robots, computers are going to transform how we fight all campaigns. Over time, the man in the loop may be in developing the algorithm, not the operation of the unmanned system itself.”
However, military operations conducted without human involvement raise many ethical questions and debates. On one side, supporters argue that AI-equipped LAWS generate fewer casualties thanks to their high precision and, lacking emotions, could even eliminate war crimes. On the other side, machine learning bias in input data may produce unpredictable mistakes, and AI decision-making may result in flash wars and rapid escalation of conflicts with catastrophic consequences. By lowering the cost of war, LAWS might thus increase the likelihood of conflict.
Furthermore, transferring decision-making responsibility entirely to the machine will drastically distance humans from the act of killing, calling into question the morality and ethics of applying AI for military purposes. The lack of international laws and regulations has created a Wild West in which developed countries act as both sheriffs and outlaws. Vigorous debates are already taking place among academics and military organizations in the western world as they try to keep up with accelerating technological developments. These discussions triggered the creation of a Group of Governmental Experts on LAWS at the United Nations in 2016. Despite the ongoing United Nations discussions, an international ban or other regulation of military AI is unlikely in the near term. Consequently, until we can fully grasp the consequences of applying artificial intelligence in the military domain and creating "killer robots", a more cautious approach is recommended: limiting the deployment of AI systems to less-lethal operations such as bomb disposal, mine clearance and reconnaissance missions.
For all the potential applications of AI in the military domain, the question remains: Will it help us sleep better at night, or prevent us from sleeping at all?