Authors: Andrea Rebora, Federica Montanaro and Oleg Abdurashitov.
Although the concept of artificial intelligence is quite complex and nuanced, many people imagine its use in warfare as a brutal slaughter conducted by evil robots. The reality is much different, and the current conflict between Russia and Ukraine offers a glimpse of what AI is being used for on today’s battlefields.
Researchers and military experts have spent years trying to visualize and understand military operations conducted with the support of artificial intelligence systems. One of the most prominent examples is the use of lethal autonomous weapons (LAWs), systems designed to locate, identify, and engage targets based on programmed information without requiring constant human control. Russia has deployed its KUB-BLA during the invasion of Ukraine, a loitering munition (commonly known as a kamikaze drone) designed to identify and attack ground targets using AI technology. However, the Russian aerial campaign leveraging these drones appears weak overall, and its fleet surprisingly small. On the Ukrainian side, the Bayraktar TB2 drone fleet arguably appears to be its most potent force, alongside the "kamikaze drone fleet," with an estimated 20-30% of registered Ukrainian kills resulting from the successful employment of these systems.
Another application envisioned on the battlefield is the use of AI to automate the mobility of vehicles, such as tanks and vessels, and make them more effective at identifying routes and prioritizing target selection and engagement.
AI is being increasingly included in the military decision-making process, from the straightforward calculation of aircraft or missile trajectories to the identification of targets during sensitive operations via automated target recognition. In 2021, Secretary of the Air Force Frank Kendall confirmed that AI had already been used during at least one “live operational kill chain,” demonstrating its effectiveness on the battlefield.
Finally, the idea of artificial intelligence being applied to cyber operations is a thought that keeps many professionals up at night. Cyberattacks have already become one of the most pervasive issues of this decade and, if enhanced by AI, they could be used to cause significant damage and potentially destabilize entire countries.
The war in Ukraine, however, seems to be a far cry from swarms of unmanned drones and autonomous vehicles clashing with the adversary's robotic systems, as envisioned by several researchers of future warfare. While AI algorithms do reveal themselves on the battlefield, they do so in far more mundane ways.
The most illustrative example is Ukraine's specialist application for artillery (GIS Arta), which combines conventional geo-mapping tools with the ability to sift information on the enemy's location from a variety of sources and data types, including military and civilian drones, smartphones, GPS trackers, and radars. The intuitive and data-agnostic system has become a force multiplier for the outgunned Ukrainian artillery, increasing the precision of its strikes.
The algorithmic geo-mapping itself, however, is not new: both high-resolution maps and image-processing algorithms are widely available and used across a range of civilian apps, from online maps to food delivery. GIS Arta blends imagery intelligence (IMINT) and signals intelligence (SIGINT) feeds into an actionable targeting solution at low cost. It proves the ingenuity of Ukrainian developers and the army in adapting civilian AI and machine learning technology for military use, but it also highlights that the use of technology is shaped by the conditions and demands of the battlefield, not vice versa.
Russia, in turn, claims it is working on updating its reconnaissance and reconnaissance-strike drones with "electronic [optical and infrared] images of military equipment adopted in NATO countries" obtained through the application of neural network training algorithms. With images and videos of NATO-supplied equipment requested by Ukraine widely available on the internet in almost ready-made datasets, the use of neural networks may be justified. Given that both Russia and Ukraine rely on human operators of unmanned aerial vehicles (UAVs), whose ability to identify snippets of objects is dramatically outmatched by image recognition technology, such an AI-augmented approach may lead to better target prioritization and increased accuracy of Russian strikes.
Another example is the use of AI-enabled face recognition algorithms, made possible largely by the ubiquity of visual content, ranging from smartphone videos and security camera feeds to social media pages. The use of face recognition ranges from inspecting people and vehicles at checkpoints to identifying the potential perpetrators of war crimes. While lacking the immediacy often required on a battlefield, face recognition may become an essential deterrent component of warfare, helping to prevent the most horrendous crimes.
The use and credibility of such technology are not without controversy, since AI algorithms are prone to bias and software flaws. Notably, the first person officially accused by Ukraine of war crimes in Bucha, identified from camera footage in a courier service office in a Belarusian town used by Russian soldiers to send looted goods back to Russia, is a Belarusian citizen who vehemently denies even serving in the military.
What emerges from this evolving environment is that the employment of AI on the Ukrainian battlefield is very human-centered and not very different from what has been seen in other conflicts. Despite technological progress and innovation, there is still no clear evidence of the use of fully autonomous weapons in Ukraine. The continued presence of human beings "in the loop" means this conflict has not yet produced a paradigmatic change in the employment of AI. Artificial intelligence is leveraged as an instrument that shapes and facilitates decision-making and enables the implementation of decisions already taken, but it is still not allowed to, or capable of, making autonomous decisions.
Despite the relatively limited use of advanced AI systems, the conflict in Ukraine provides a significant amount of operational and technical information. On the operational side, AI systems are being examined, tested, and deployed in varying degrees and scopes of application, allowing researchers and officials to understand the advantages and challenges of leveraging such systems in active conflict. On the technical side, the data collected, such as images, audio, and geographical coordinates, can be used to train and improve current and future systems capable of, for example, recognizing camouflaged enemy vehicles, identifying optimal attack and counterattack routes, and predicting enemy movements.
The conflict in Ukraine provides an overview of the AI military capabilities of the two countries and the level of risk they are willing to accept with their top-of-the-line AI systems. After all, the cost-benefit analysis associated with using autonomous weapons in a low-intensity conflict differs greatly from using the same weapons in open conflict, where the risk of losing even a single AI-powered system is high and the loss particularly costly. The military invasion of Ukraine is unfortunately not over yet, but militaries around the world will study its execution and aftermath for years to understand how to leverage AI systems for offensive and, most importantly, defensive applications.