June 18, 2022

AI goes to War: Observations from the Battlefields in Ukraine

Authors: Andrea Rebora, Federica Montanaro, and Oleg Abdurashitov

Although the concept of artificial intelligence is quite complex and nuanced, many people imagine its use in warfare as a brutal slaughter conducted by evil robots. The reality is much different, and the current conflict between Russia and Ukraine offers a glimpse of what AI is being used for on today’s battlefields.

Researchers and military experts have spent years trying to visualize and understand military operations conducted with the support of artificial intelligence systems. One of the most prominent examples is the use of lethal autonomous weapons (LAWs): systems designed to locate, identify, and engage targets based on programmed information without requiring constant human control. During the invasion of Ukraine, Russia has been using the KUB-BLA, a loitering munition (commonly known as a kamikaze drone) designed to identify and attack ground targets using AI technology.[1] However, the Russian aerial campaign leveraging these drones appears weak overall, and its fleet surprisingly small.[2] On the Ukrainian side, the Bayraktar TB2 drone fleet arguably constitutes its most potent force,[3] alongside its own kamikaze drones, with an estimated 20-30% of registered Ukrainian kills resulting from the successful employment of these systems.[4]

Another application envisioned on the battlefield is the use of AI to automate the mobility of vehicles, such as tanks and vessels, and make them more effective at identifying routes and prioritizing target selection and engagement.

AI is being increasingly included in the military decision-making process, from the straightforward calculation of aircraft or missile trajectories to the identification of targets during sensitive operations via automated target recognition. In 2021, Secretary of the Air Force Frank Kendall confirmed that AI had already been used during at least one “live operational kill chain,” demonstrating its effectiveness on the battlefield.[5]

Finally, the idea of artificial intelligence being applied to cyber operations is a thought that keeps many professionals up at night. Cyberattacks have already become one of the most pervasive issues of this decade and, if enhanced by AI, they could be used to cause significant damage and potentially destabilize entire countries.[6]

The war in Ukraine, however, seems to be a far cry from swarms of unmanned drones and autonomous vehicles clashing with the adversary's robotic systems, as envisioned by several researchers of future warfare.[7] Where AI algorithms do reveal themselves on the battlefield, they do so in far more mundane ways.

The most illustrative example is the use of Ukraine's specialist application for artillery (GIS Arta), which combines conventional geo-mapping tools with the ability to sift information on the enemy's location from a variety of sources and data types, including military and civilian drones and smartphones, GPS trackers, and radars.[8] The intuitive and data-agnostic system has since become a force multiplier for the outgunned Ukrainian artillery, increasing the precision of its strikes.

Algorithmic geo-mapping itself, however, is not new: both high-resolution maps and image-processing algorithms are widely available and used across a range of civilian apps, from online maps to food delivery. GIS Arta blends imagery intelligence (IMINT) and signals intelligence (SIGINT) feeds into an actionable solution at low cost. It proves the ingenuity of Ukrainian developers and the army in adapting civilian AI and machine learning technology for military use, but it also highlights that the use of technology is shaped by the conditions and demands of the battlefield, not vice versa.
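The kind of data-agnostic fusion described above can be caricatured very simply: position reports from heterogeneous sources, each with a confidence weight, are blended into a single target estimate. The sketch below is purely illustrative; the function name, weights, and coordinates are all hypothetical, and GIS Arta's actual algorithms are not public.

```python
def fuse_reports(reports):
    """Confidence-weighted average of (lat, lon, confidence) position reports."""
    total = sum(conf for _, _, conf in reports)
    lat = sum(la * conf for la, _, conf in reports) / total
    lon = sum(lo * conf for _, lo, conf in reports) / total
    return round(lat, 4), round(lon, 4)

# Hypothetical reports on the same target from three different sensor types.
reports = [
    (50.4501, 30.5234, 0.9),  # drone imagery
    (50.4510, 30.5220, 0.6),  # smartphone report
    (50.4495, 30.5240, 0.8),  # radar track
]
print(fuse_reports(reports))
```

The design point is that the fusion step does not care where a report came from; any source that can produce a coordinate and a confidence can feed the same pipeline.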

Russia, in turn, claims it is working on updating its reconnaissance and reconnaissance-strike drones with "electronic [optical and infrared] images of military equipment adopted in NATO countries" obtained through the application of neural network training algorithms.[9] With images and videos of the NATO-supplied equipment requested by Ukraine widely available on the internet in almost ready-made datasets, the use of neural networks may be justified. Given that both Russia and Ukraine rely on human operators of unmanned aerial vehicles (UAVs), whose ability to identify snippets of objects is dramatically outmatched by image recognition technology, such an AI-augmented approach may lead to better target prioritization and increased accuracy of Russian strikes.

Another example is the use of AI-enabled face recognition algorithms, made possible largely by the ubiquity of visual content, ranging from smartphone videos and security camera feeds to social media pages.[10] The use of face recognition ranges from inspecting people and vehicles at checkpoints to identifying the potential perpetrators of war crimes. While lacking the immediacy often required on a battlefield, face recognition may become an essential deterrent, helping prevent the most horrendous crimes of warfare.

The use and credibility of such technology are not without controversy, since AI algorithms are prone to bias and software flaws. Notably, the first person officially accused by Ukraine of war crimes in Bucha, identified after being caught on camera in a courier service office in a Belarusian town that Russian soldiers used to send looted goods back to Russia, is a Belarusian citizen who vehemently denies ever having served in the military.[11]

What emerges from this evolving environment is that the employment of AI on the Ukrainian battlefield is very human-centered and not very different from what has been seen in other conflicts. Despite technological progress and innovation, there is still no clear evidence of the use of fully autonomous weapons in Ukraine. The presence of human beings "in the loop" still prevents us from identifying a paradigm shift in the employment of AI in this conflict. Artificial intelligence is leveraged as an instrument that shapes and facilitates decision-making and enables the implementation of decisions already taken, but it is still not allowed to, or capable of, making autonomous decisions.

Despite the relatively limited use of advanced AI systems, the conflict in Ukraine provides a significant amount of operational and technical information. On the operational side, AI systems are being examined, tested, and deployed to varying degrees and in varying scopes of application, allowing researchers and officials to understand the advantages and challenges of leveraging such systems in active conflict. On the technical side, the data collected, such as images, audio, and geographical coordinates, can be used to train and improve current and future systems capable of, for example, recognizing camouflaged enemy vehicles, identifying optimal attack and counterattack routes, and predicting enemy movements.

The conflict in Ukraine provides an overview of the AI military capabilities of the two countries and the level of risk they are willing to accept with their top-of-the-line AI systems. After all, the cost-benefit analysis of using autonomous weapons in a low-intensity conflict is very different from that of using the same weapons in open conflict, where the probability of losing even a single AI-powered system is high and each loss is particularly expensive. The military invasion of Ukraine is unfortunately not over yet, but militaries around the world will study its execution and aftermath for years to understand how to leverage AI systems for offensive and, most importantly, defensive applications.

February 28, 2022

Agriculture 4.0 – The Revolutionary Power of Artificial Intelligence

Authors: Zrinka Boric, Giorgia Zaghi, and Beatrice Gori

According to current estimates, the global population will reach 9.7 billion people by 2050. To meet the resulting growth in food demand, global food production will need to increase by 70% in the coming decades. At the same time, the agricultural sector faces several challenges, such as the limited availability of arable land and fresh water, a slowdown in the growth of crop yields, the consequences of climate change, and COVID-19. The UN's second Sustainable Development Goal (SDG 2) aims to end hunger, double agricultural productivity, and ensure sustainable food production systems by 2030. To successfully address these challenges and achieve food security, digital technologies are expected to become a foundation of future food production. At the World Summit on Food Security in 2009, the four pillars of food security were identified as availability, access, utilization, and stability.

Recently, the Focus Group on Artificial Intelligence (AI) and Internet of Things (IoT) for Digital Agriculture (FG-AI4A) was formed, in cooperation with the Food and Agriculture Organization (FAO), to explore the potential of these technologies in the acquisition and handling of the necessary data and the optimization of agricultural production processes, and ultimately to identify the best ways (and possible challenges) of using such technologies within the agricultural domain.

Artificial intelligence technologies are forecast to add US$15 trillion to the global economy by 2030. According to the Government AI Readiness Index 2019, the governments of high-income countries are better positioned to capture these gains than those of low-income countries. There is therefore a risk that low-income countries could be left behind by the fourth industrial revolution.

Image Source: https://www.pexels.com/it-it/foto/piante-a-foglia-verde-2132171/

Examples of the use of digital technologies in agriculture

AI: The combination of AI and human intelligence can increase farmers' capabilities and knowledge and improve the sustainability of their production. Farmers can better manage their resources and obtain superior production rates. Sustainable green farms with optimal yields are a fundamental step towards Sustainable Development Goal 12, which calls for "responsible consumption and production." Farms produce massive amounts of data daily, which AI and machine learning models can use to increase agricultural productivity while minimizing harmful practices (e.g., extensive use of pesticides, monocropping).

Image data (drones and satellites): Agricultural technology (AgriTech) drones are powerful tools that can help monitor the most inaccessible and vulnerable areas and design and support adequate farming operations. By surveying and mapping fields, drones provide information and predictions on crop growth and help prevent anomalies and disruptions of production. Satellite image data paired with AI technology aims to help governments and organizations address agricultural challenges by providing granular insight and data analysis.

GPS (Global Positioning System) remote sensing: GPS technology is already widely used to enhance agricultural processes and productivity, and it provides insight into the quantity of food produced relative to the units of water consumed.

Internet of Things (IoT): The IoT refers to devices with sensors that enable them to transmit data through a network. It allows the collection and analysis of data and enables better performance tracking, informed decision-making, and increased efficiency and sustainability.

Yield monitoring and mapping: During the harvest, a dataset is collected (using various sensors and GPS technology) that can later be analyzed with specialized software. This valuable dataset provides information that helps improve yield management, make rational use of available resources, develop future nutrient strategies, and ultimately achieve more sustainable agriculture with lower production costs.

Automation: Different forms of automation help farms operate more efficiently and increase productivity. Automation appears in many forms, from the simple automatic watering systems used in many households to specialized agricultural drones, robots (such as harvest robots), and even driverless tractors.

AI in low-income countries

AI has the potential for significant impact in low-income countries, where it could open up new solutions to current problems in agriculture and numerous other fields. AI is a tool that can be directed towards development, the so-called "AI4D" (AI for development). It could bring about infrastructural and qualitative development in terms of societal empowerment and change.

Moreover, one of the most relevant improvements in the agricultural sector would be making the use of scarce resources more efficient.

Specialized technologies and systems can target specific needs and problems at exactly the right time and in exactly the right quantities. The cases of Israel and China exemplify the relevance of AI for development and resilience.

Both countries have invested massively in smart agriculture to increase yields and productivity and to improve precision agriculture, given the growing scarcity of natural resources. China and Israel have improved their agricultural output to the point where they can be considered "nations that feed the world." Moreover, both have been able to export basic technologies that allow other countries to implement such "smart tools" and strengthen their own agricultural export sectors. Israel's technology, for instance, has been successfully adopted by countries such as Indonesia and Thailand to improve their agricultural sectors and exports.

While the adoption of AI technology in the agricultural practices of low-income countries may seem like an easy way to solve development problems, many risks and barriers remain. In particular, compared to the costs of traditional systems, the initial infrastructure costs of AI are extremely high; this calls for greater participation from transnational organizations and technology companies to assist and supply basic infrastructure in low-income countries.

Conclusion

To conclude, the opportunities that AI holds for the agricultural sector have the potential to accomplish part of the SDG agenda for 2030. This argument certainly applies to Western countries with the investment capacity to carry out a fourth agricultural revolution. Optimization of precision agriculture and the efficient use of scarce resources are essential steps in the fight against world hunger and climate change.

However, new technologies come with high entry-level costs and such investment could be too risky or too high for low-income countries and small-scale food producers. 

While a new agricultural revolution will benefit the countries and food producers who can afford to bring about sustainable development, it is necessary to acknowledge a significant risk that lies ahead: leaving out the have-nots in favor of the sole development of the haves.

November 30, 2021

How Different Political Powers Approach the Issue of Ethics in the Development of Artificial Intelligence

By: Zrinka Borić

Image Source: https://www.pexels.com/photo/person-reaching-out-to-a-robot-8386434/

The advancement of artificial intelligence (AI) technology is expected to drive progress and change in the military, economic, and informational spheres. This so-called "fourth industrial revolution" opens up various possibilities, the most probable of which is the further development and prosperity of those able to reap the benefits, resulting in the further strengthening of existing inequalities in the global state system.

The main concern an average person has regarding AI is the idea of a post-apocalyptic world in which robots and AI have completely taken over the Earth, as depicted in many famous science-fiction works. Two things should be kept in mind when approaching this topic. First, strong AI (also called artificial general intelligence, or AGI), which would simulate human reasoning and match human intelligence, does not yet exist, and experts cannot agree on when this type of AI might emerge. Second, artificial intelligence systems rely heavily on data; the quantity, quality, and availability of data are therefore crucial. In the long term, an ethical and responsible approach to data collection for AI development and implementation aims to guarantee balanced and responsible innovation.

The United States and the European Union countries, for example, have expressed their dedication to developing trustworthy and ethical AI. Countries like China and Russia, on the other hand, have shown no such dedication in the development and employment of their autonomous weapons systems. Cyber policy and security expert Herbert Lin warns that, because of this lower regard for ethical and safety issues, their weapons are likely to be more militarily effective and to be developed sooner.

Different forms of government approach AI development and implementation differently. China is an authoritarian and hierarchical state, the United States is a federal republic with a democratically elected government, and the European Union is a political and economic union that operates through a combination of supranational and intergovernmental decision-making.

PEOPLE’S REPUBLIC OF CHINA

China defines artificial intelligence research and development as key to boosting national economic and manufacturing competitiveness and to providing national security. China's vigorous approach to AI development is driven by the potential future economic benefit: experts assume that China will reap the highest relative economic gain from AI technologies, since AI is envisioned to improve its productivity and manufacturing capacity and thus help it meet future GDP targets. China therefore faces the risk of developing and applying AI without paying enough attention to its responsible use or preparing its citizens to adapt to the changes brought by widespread AI adoption. China has fallen into the trap of recklessly rushing into uncontrolled progress once before, which led to an unsustainable level of growth accompanied by a set of negative effects on its economy. China's clear competitive advantage lies in its abundance of data, which will most likely become one of the crucial elements in the future development of AI technology, as well as in its relatively loose privacy laws, vibrant start-ups, and a steadily rising number of AI engineers.

THE EUROPEAN UNION

The structure of a state shapes the design of its AI policy and implementation. When discussing the EU, it is important to keep in mind that it is not a country but an economic and political organization, both supranational and intergovernmental. Since the economic prosperity and national security of its members remain firmly in the hands of national governments, it is easy to see why the Union's organizational structure hinders the quick, concrete decision-making that international competition demands. The EU has nevertheless succeeded in publishing joint plans and policies on AI, such as the Civil Law Rules on Robotics, the Declaration of Cooperation on Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Policy and Investment Recommendations for Trustworthy AI.

The European Union pays special attention to studying the potential impact of artificial intelligence technology on society. This research usually involves social aspects such as data protection (e.g., the GDPR), network security, and AI ethics. "There are more substantial ethical or normative discussions when it comes to developing human-centered and trustworthy AI technologies. [...] Developing the culture of trustworthy AI and not only when it comes to security and defense, but more broadly about AI enabled technologies. This is at the forefront of the policy and political thinking in Brussels," claims Raluca Csernatoni, an expert on European security and defense with a specific focus on disruptive technologies.

In 2018, member states signed the Declaration of Cooperation on Artificial Intelligence, agreeing to cooperate in various fields of AI development and implementation, including ensuring an adequate legal and ethical framework that builds on EU fundamental rights and values.

THE UNITED STATES

During the Obama administration, the National Science and Technology Council (NSTC) Committee on Technology drafted the report Preparing for the Future of Artificial Intelligence in 2016. Concerns about safeguarding "justice, fairness, and accountability" if AI were tasked with consequential decisions about people had previously been raised in the administration's Big Data: Seizing Opportunities, Preserving Values report and its Big Data and Privacy: A Technological Perspective report. Regarding governance and safety, the report advises that the use of AI technology must be controlled by "technical and ethical supervision."

Later, during the Trump administration, the 2019 AI R&D Strategic Plan identified seven main fields of interest, one of which is understanding the ethical, legal, and societal implications of AI. The recent EU-US Trade and Technology Council (TTC) makes clear that the current administration continues to support the development of responsible and trustworthy AI.

THE U.S. – EU COOPERATION 

The most recent U.S.-EU cooperation on AI, the TTC, was launched on September 29, 2021 in Pittsburgh. Its working groups are discussing technology standards, data governance and technology platforms, the misuse of technology threatening security and human rights, and many other issues. The United States and the European Union affirmed their commitment to a human-centered approach and to developing a mutual understanding of the principles of trustworthy and responsible AI. However, both expressed significant concerns that authoritarian governments are piloting social scoring systems with the aim of implementing social control at scale. They agree that these systems "pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems."

CONCLUSION

Different forms of government differ immensely in their approach to the development and implementation of AI, as well as to the necessary principles of ethics and responsibility. Governments need to take further action, but with great caution: implemented carelessly, without taking ethics and safety into consideration, AI could end up ineffective or, even worse, dangerous. Governments need to implement AI in a way that builds trust and legitimacy, which ideally requires legal and ethical frameworks for handling and protecting citizens' data and for algorithm use.

November 2, 2021

The United States’ Race for Supremacy in Artificial Intelligence

By: Zrinka Boric

“Where we choose to invest speaks to what we value as a Nation. This year’s Budget, the first of my Presidency, is a statement of values that define our Nation at its best.” - Joseph R. Biden, Jr. (The Budget Message of the President)

This article navigates the landscape of AI policymaking and tracks efforts of the United States to promote and govern AI technologies. 

Technological advancement has become a new approach to increase a state’s political, military, and economic strength. The Cold War and the arms race between the two then strongest nations in the world, the United States of America (USA) and the Soviet Union (USSR), revealed the potential that lay in the development of technology. Today, the United States is again at the forefront in the race for supremacy in the potentially world-changing technology: artificial intelligence (AI). 

Artificial intelligence has the potential to fundamentally change the strategy, organization, priorities, and resources of any national community that manages to develop the technology, drive further innovation, and eventually apply it. Artificial intelligence is undergoing major evolution, and its potential is increasing at a rapid rate. Progress is visibly accelerating, and our social, political, and economic systems will be greatly affected. One of the important questions is how to seize all the opportunities AI technology can offer while avoiding or managing its risks.

The American AI Initiative

The United States is characterized by a skilled workforce, an innovative private sector, good data availability, and effective governance, all key factors in a government's ability to enable the effective development and adoption of AI.

The United States published its national AI strategy, the American AI Initiative, in 2019. The White House is the responsible organization, and its priorities are to increase federal investment in AI research and development (R&D) and to ensure technical standards for the safe development and deployment of AI technology. The American AI Initiative expresses a commitment to collaborate with foreign partners while promoting U.S. leadership in AI. It is important to note, however, that the initiative is not particularly comprehensive, especially compared to those of other leading nations, and is characterized by a lack of both funding and palpable policy objectives.

In 2019, U.S. policymakers were advised to advance the American AI Initiative with concrete goals and clear policies aimed at advancing AI, such as spurring public-sector AI adoption and allocating new funding for AI R&D rather than simply repurposing existing funds.

AI in the USA Budget for FY2022 

President Biden's budget for FY2022 includes approximately $171.3 billion for research and development (R&D), which is an 8.5% ($13.5 billion) increase compared to the FY2021 estimated level of $157.8 billion. 

According to the 2021 AI Index Report, in FY 2020 the USA federal departments and agencies spent a combined $1.8 billion on unclassified AI-related contracts. This represents an increase of more than 25% from the amount spent in FY 2019. 

One of the agencies with a major R&D program is the National Institute of Standards and Technology (NIST). President Biden is requesting $1,497.2 million for NIST in FY2022, an increase of $462.7 million (44.7%) from the FY2021 level of $1,034.5 million. The second-highest program budget increase within NIST is for Partnerships, Research, and Standards to Advance Trustworthy Artificial Intelligence, at $45.4 million (an increase of $15 million over FY2021).

Some departments are expecting large percentage increases in R&D funding, most notably the Department of Commerce (DOC), with an increase of up to 29.3%. It is interesting to note that one of the DOC's latest projects is the creation of the National Artificial Intelligence Advisory Committee (NAIAC), which will be discussed below.

Numerous policymakers in Congress are particularly interested in funding for the Department of Defense Science and Technology (DOD S&T) program. An increasingly popular belief in the defense community holds that support for S&T activities is necessary to maintain the United States' military superiority in the world.

The budget request represents President Biden's R&D priorities, and Congress may adopt it partially, completely, or not at all. It is safe to say that AI has gained the attention of Congress: the 116th Congress (January 3, 2019 - January 3, 2021) was the most AI-focused congressional session in history, mentioning AI more than three times as often as the 115th Congress (486 mentions versus 149).
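The percentage figures quoted in this section can be sanity-checked directly from the dollar amounts cited above. The sketch below is illustrative only; the helper function name is hypothetical, and the inputs are simply the numbers quoted in the text.

```python
def pct_increase(new, old):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Total federal R&D, FY2021 -> FY2022 (US$ billions); cited as roughly 8.5%.
rd = pct_increase(171.3, 157.8)

# NIST budget, FY2021 -> FY2022 (US$ millions); cited as 44.7%.
nist = pct_increase(1497.2, 1034.5)

# AI mentions in Congress, 115th (149) vs 116th (486); "more than three times".
ratio = 486 / 149

print(f"R&D: {rd:.1f}%  NIST: {nist:.1f}%  mentions: {ratio:.1f}x")
```

Each cited percentage is consistent with the underlying dollar amounts to within rounding.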

National and International Efforts

As indicated in its national AI strategy, the United States takes part in various intergovernmental AI initiatives, such as the Global Partnership on AI (GPAI), the OECD Network of Experts on AI (ONE AI), and the Ad Hoc Expert Group (AHEG) for the Recommendation on the Ethics of Artificial Intelligence, and has participated in global summits and meetings such as the AI Partnership for Defense and the AI for Good Global Summit. In addition, the United States announced a bilateral declaration on AI cooperation with the United Kingdom in December 2020.

On September 8, 2021, U.S. Secretary of Commerce Gina Raimondo announced the establishment of the National Artificial Intelligence Advisory Committee (NAIAC). The main purpose of the NAIAC is to advise the President and the National AI Initiative Office (NAIIO) on issues related to AI. "AI presents an enormous opportunity to tackle the biggest issues of our time, strengthen our technological competitiveness, and be an engine for growth in nearly every sector of the economy. But we must be thoughtful, creative, and wise in how we address the challenges that accompany these new technologies," Raimondo said.

The United States or China? 

The United States is showing increasing interest in developing and implementing artificial intelligence, as reflected in its growing federal AI-related budget, the establishment of new committees, intergovernmental AI initiatives, bilateral agreements, and participation in global summits. Yet a constant comparison is drawn between the USA and China. Should the future battle over artificial intelligence be between the two, the question arises: who will win this battle for AI supremacy?

Recently, a former Pentagon expert claimed that the race is already over and China has won. The Pentagon's first chief software officer resigned over the slow pace of technological advances in the U.S. military, claiming that the USA has no fighting chance against China in the coming years and that the outcome is already a done deal.

At the same time, artificial intelligence expert Kai-Fu Lee, former president of Google China, disagrees. He notes that the US has a clear academic lead in artificial intelligence, pointing out that all 16 Turing Award recipients in AI are American or Canadian and that the top 1% of published papers are still predominantly American. China is simply faster at commercializing technologies and has more data.

Artificial intelligence already has numerous uses (academic, military, medical, etc.), and when assessing countries' reach in AI technology it is important to distinguish between these different uses.

To answer the question of whether the United States or China will win the AI 'race', or whether a new force will emerge, it is necessary to closely monitor the development of artificial intelligence technology and to compare countries using a uniform set of criteria before reaching a conclusion. Another potential scenario, highlighted by Kai-Fu Lee in his book AI 2041: Ten Visions for Our Future, is the possibility of the United States and China co-leading the world in technology.

Image Source: https://www.pexels.com/photo/blue-bright-lights-373543/

June 14, 2021

The disruptive power of Artificial Intelligence

By: Renata Safina and Arnaud Sobrero

The use of artificial intelligence may change how war is conducted

In 2020, amidst the biggest pandemic the world had seen since the Spanish Flu of 1918, two ex-Soviet states were battling over an area of just 4,400 km² in the mountainous region of Nagorno-Karabakh. Armenia and Azerbaijan, so close and yet so far, are two mortal enemies sharing a common DNA.

At first, this war seemed like a faraway regional conflict between two neighboring states, distant from western Europe and even further from the United States. Closer inspection, however, shows that the conflict deserves far more attention: it illustrates how the extensive use of artificial intelligence-enabled drones can be instrumental in shifting the outcome of a war. The application of artificial intelligence (AI) in the military domain is thus disrupting the way we approach conventional warfare.

AI means of warfare

The use of advanced technological weapons, drones, and loitering munitions supplied by both Israel and Turkey practically won this war for Azerbaijan. In particular, AI-enabled weaponized drones with increasingly autonomous strike and surveillance capabilities significantly disrupted the battlefield. The deployment of these drones, such as the Turkish TB2 unmanned combat aerial vehicle (UCAV), had a substantial impact: Azeri forces were able to destroy 47% of Armenia's combat vehicles and 93% of its artillery.

'Harpy' and 'Harop' loitering munitions (LM) are autonomous weapon systems produced by Israel Aerospace Industries (IAI), a state-owned aerospace and aviation manufacturer. A loitering munition, or 'kamikaze drone', is an unmanned aerial vehicle (UAV) with a built-in warhead that loiters over an area searching for targets. Once a target is located, the LM strikes it, detonating on impact. The significant advantage of these systems is that, during loitering, the attacker can decide when and what to strike; should no target be found, the LM returns to base. In addition, these systems are equipped with machine learning algorithms that can make decisions without human involvement, allowing them to process large amounts of data and decide instantly, revolutionizing the speed and accuracy of their actions.

Conducting Warfare through AI – Ethical Implications

These developments in emerging technologies such as artificial intelligence are already creating technological surrogates that disrupt how we conduct warfare.

Wars fought with lethal autonomous weapons (LAWS) equipped with AI are not a vision of a distant future. These weapons are being deployed today, and these 'market disruptors' will change once and for all the way wars are fought. Former CIA Director and retired Gen. David Petraeus claims that "drones, unmanned ships, tanks, subs, robots, computers are going to transform how we fight all campaigns. Over time, the man in the loop may be in developing the algorithm, not the operation of the unmanned system itself."

However, military operations conducted without human involvement raise many ethical questions and debates. On one side, supporters argue that AI-equipped LAWS generate fewer casualties thanks to their high precision and that, being free of emotion, they could even eliminate war crimes. On the other side, machine learning bias in input data may produce unpredictable mistakes, and AI decision-making may result in flash wars and the rapid escalation of conflicts, with catastrophic consequences. Moreover, by lowering the cost of war, LAWS might increase the likelihood of conflicts.

Furthermore, transferring decision-making responsibility entirely to machines will drastically distance humans from the act of killing, calling into question the morality and ethics of applying AI for military purposes. The lack of international laws and regulations has created a Wild West in which developed countries act as both sheriffs and outlaws. Vigorous debates are already taking place among academics and military organizations in the Western world as they try to keep up with accelerating technological developments. These discussions triggered the creation of a group of governmental experts on LAWS at the United Nations in 2016. Despite ongoing United Nations discussions, however, an international ban or other regulation of military AI is unlikely in the near term. Consequently, until we can fully grasp the consequences of applying artificial intelligence in the military domain and before we start creating "killer robots", a more cautious approach is recommended: limiting the deployment of AI systems to less-lethal operations such as bomb disposal, mine clearance, and reconnaissance missions.

For all the potential applications of AI in the military domain, the question remains: will it help us sleep better at night, or prevent us from sleeping at all?