June 3, 2024

The viability of small Unmanned Aerial Systems (UAS) fulfilling conventional battlefield roles

by Joseph Moses - Military Strategy & Intelligence Team

Drone warfare has been one of the salient capabilities of the First World’s futuristic military arsenal through the conflicts of the 2000s and 2010s. The war that broke out in Ukraine in 2022, and the battlefields in Myanmar, Sudan and Gaza, have since become the arena for another emerging technology that has proven decisive in tactical situations: small drones, ranging from first-person-view (FPV) variants to commercial hobby drones such as those made by DJI.

This essay focuses on small aerial systems because they give a force cheap options while granting significant stealth, mobility, equipment safety, and Intelligence, Surveillance and Reconnaissance (ISR) collection capabilities, and because they provide cheaper tactical offensive capabilities than ground-based or water-based systems. Another reason to focus on small drones is their assignment to smaller troop formations, which makes their [semi]autonomous use very versatile and efficient. Furthermore, the integration of Artificial Intelligence (AI) potentially removes the need to connect the drones to nodal points such as larger drones or nearby aircraft, giving them relative immunity from detection and electronic countermeasures. This essay also focuses on electronic countermeasures (jamming) rather than on energy weapons and kinetic countermeasures, because jamming uniquely counters the threat posed by drones by rendering them temporarily inert, whereas kinetic and energy countermeasures destroy these systems.

Turkish and Iranian drones were at the forefront of remotely controlled vehicles for precision strikes, kamikaze strikes and monitoring purposes in the initial months of the Ukraine war, providing cheaper options for drone warfare. Over the course of the war, however, smaller drones have not only emerged but have caused major disruptions on the battlefield, dispatching enemy tanks and armored vehicles, coordinating artillery strikes, and engaging small, mobile targets such as individual combatants and vehicles. Initially, they were improvised to drop grenades and mortar rounds on enemy combatants and to conduct surveillance. In recent months, these drones have begun to reach the battlefield on an industrial scale, with manufactured “pylons” to attach munitions to them. Modified FPV drones are used as guided kamikaze weapons. These drones also have extended ranges, improving their surveillance and strike capabilities.

These smaller drones can by no means replace, in the short term, the bigger fixed-wing drones that are conventional in militaries, nor will they have the costly sensor arrays and range of conventional kamikaze drones and other loitering munitions, but they do provide stealth, loitering and precision-strike capability and, above all, are expendable. In addition, given their precise nature, these drones also reduce the collateral damage to both life and materiel that is inevitable with protracted infantry engagements, artillery and air strikes.

In many ways they fulfil the roles of snipers and forward observers, provide a good vantage point for observing, calculating firing solutions for artillery and conducting battle damage assessments, and one could argue that the precision strikes by kamikaze and loitering drones are a form of Close Air Support (CAS). Another recent development, as of the time of writing, is the emergence of AI being implemented in drone strikes in Ukraine to autonomously identify and strike targets. If developed, this would amount to a form of highly manoeuvrable and precise fire-and-forget missile and would give the drone launcher and operator significant situational awareness by relieving them of the targeting process. It is yet to be seen at scale whether autonomy is reliable across the ‘kill chain’ of target detection, discrimination, acquisition, and engagement.

While the improvised nature of the first iterations of loitering bomber drones carrying grenades and mortar shells was considered an asymmetrical and unconventional tactic, the new technologies being integrated into these systems, and the potential for these smaller and cheaper systems to take over conventional battlefield roles with the same quality of data collection and precision, could transform them into significant conventional capabilities and force multipliers.

Replacing conventional battlefield roles:

The Drone Swarm: The recent attack on Israel by Iranian kamikaze drones and the series of attacks on oil refineries in Russia by Ukrainian drones are isolated examples of the drone swarm. While these drones can be intercepted, there have yet to be incidents involving swarms of smaller drones used against frontline troops, where interception is much more difficult and where these drones would have the effect of cluster munitions or rocket strikes. The chief difference is that these munitions would be far more accurate and manoeuvrable, and could potentially loiter, change course, make quick decisions, change targets, and so on.

These drones are difficult to intercept because of how small, silent and cheap they are. If expensive air defence systems are used to soak up these swarms, they are left vulnerable to counterstrikes and with reduced magazines in their positions. If doctrine dictates ignoring smaller drones, this would prove catastrophic for frontline troops, isolated patrols, and vehicles. The American Defense Advanced Research Projects Agency (DARPA) has been experimenting since 2016 with integrating autonomous systems into swarm drone tactics through its OFFensive Swarm-Enabled Tactics (OFFSET) program. The US Department of Defense unveiled its “Replicator” program in 2023 and plans to field thousands of autonomous systems across multiple domains by 2025 to counter the Chinese military build-up in the Indo-Pacific. It has been confirmed that one of the systems in this program is the Switchblade loitering munition, in association with the Low Altitude Stalking and Strike Ordnance (LASSO) program. The first iteration of the Replicator program aims to tackle the problem of slowing or defeating an invading force with swarms of lethal surface drones and overhead loitering munitions. This would shift the most dangerous battlefield roles of frontline repellence and area denial from manned vehicles and posts to unmanned, semi-autonomous or autonomous swarms, creating a highly fluid, dynamic and unpredictable ‘minefield’ for an advancing enemy force.

With the battlefield potential of swarm strikes, and with the novel use of AI in Ukraine to discriminate targets and evade electronic warfare measures, it is only a matter of time before swarm tactics are implemented with smaller UASs to harass frontline troops and significantly degrade an enemy’s tactical defensive posture at little economic and commercial cost to the user. These drones could range from disposable FPV drones to loitering bombers, to drones fitted with machine guns, rockets and other anti-drone countermeasures. While these systems can be assigned at squad level, they can also be assigned at higher levels and used in fire missions and joint operations across wider active theaters.

Source: Image by Pexels from Pixabay

CAS and sniper/precision-strike roles: The mobility and silence of drones allow fighting forces, in urban settings and open terrain alike, to map out, reconnoitre, stalk and even kill hiding and isolated combatants. Their manoeuvrability can be used to attack fixed encampments, as seen in the conflict in Gaza between Hamas and the Israel Defense Forces (IDF). Similarly, we have seen videos of Israeli forces using small copter drones to enter houses and search for Hamas combatants room by room, after which they can either engage with squads or call in an artillery or air strike against the entrenched enemy. While such strikes can be expensive, we can soon expect to see them taken up by the same reconnoitring drone. This economically relieves artillery batteries, tanks and aircraft, while also freeing up the squad to focus on other operations and activities.

A similar argument can be made for these systems being used in a CAS role. Given the precision and payloads these drones can now carry and/or drop onto enemy positions, and their ability to strike and harass mobile enemies, they can fulfil a very versatile CAS role while assigned to smaller units such as squads and platoons. In a swarm scenario, or in a dynamic battlefield not prepared by shaping operations, deconfliction could be a challenge for controlled and autonomous systems operated by smaller troop formations, yet this is already being done in Ukraine against individual targets and to harass and survey larger formations. While these small drones may not be able to conduct large-scale Suppression of Enemy Air Defenses (SEAD) operations, they can be used as decoys, perform smaller precision strikes against less guarded enemy air defences, and, when used strategically, psychologically demotivate an infantry fighting force.

The primary advantages of conventional CAS systems are their precision, the damage they deal and the psychological effect of their munitions and guns, with the disadvantage of being vulnerable to enemy ground fire, from small handheld systems to surface-to-air missiles. With small drone systems, the primary advantages are a relative degree of visual and audible stealth, their expendable nature and the shock and awe they inflict on an enemy not expecting a highly mobile and smart targeting system. These systems can also drastically reduce the time required for CAS to reach the battlefield, as they will be in the hands of the squad or platoon members.

Electronic Warfare and AI:

The primary non-energy and non-kinetic deterrent against these drones has been electronic jamming countermeasures used to sever the connection between the drone operator and the drone. This has led to drone captures and to the loss of battlefield information and intelligence from drones that store data onboard rather than recording it on the operator’s system as a redundancy measure.


With AI being integrated into kamikaze and FPV drones, these systems no longer need operator control. An improvement already made to non-autonomous FPV systems was for the operator to designate an immobile or mobile target and place the drone on a maximum-velocity trajectory towards it. This tactic negates electronic countermeasures, as severing the connection cannot stop the momentum of the inbound kamikaze drone. Integrating AI into these smaller and faster systems is a new concept but is already being implemented on the Ukrainian and Russian battlefields. Such drones offer the tactical advantage of identifying and discriminating targets without requiring an operator, and are also proving largely immune to electronic countermeasures, as target and terrain information and parameters are pre-loaded into them.


While the disadvantage of AI is the lack of human supervision across the entire ‘kill chain’, the real-world consequences and the level of permissible and reliable autonomy are yet to be seen at scale. This will be a contentious matter, especially on a battlefield with a civilian population or in urban counter-terror and counter-insurgency operations, and could cause deconfliction challenges in an active and fluid battlefield. Real-time machine decision-making relieves the operator while leaving the door open to the aforementioned challenges when targeting mobile targets or operating in a highly dynamic battlefield. Active deconfliction of a changing battlefield would require a constant connection between frontline troops, the drone operator and the drone.

While the possibilities are many, it remains to be seen how the current improvisations and augmentations of conventional battlefield roles, with the added potential of machine autonomy, will affect the composition of smaller combat units and doctrine.


Where conventional CAS, ISR and precision-strike systems and roles are expensive, require scarce expertise and are not expendable, these UASs are the exact opposite while also acting as a force multiplier. It remains to be seen, however, whether they will have the same psychological effect and efficacy needed to either augment or replace major decisive battlefield interdiction roles.

May 22, 2024

The year of Generative AI elections: reviewing risks and mitigations

Author: Piero Soave and Wesley Issey Romain - AI, Cyber Security & Space Team

The year 2024 is sure to be remembered when it comes to elections: first, never before have so many people around the world been called to cast their vote; second, these elections will be the first to take place in a world of widespread Generative Artificial Intelligence (GenAI). The combination of these two elements is likely to have a lasting impact on democracy. This article looks at how GenAI can influence the outcome of elections, reviews examples of risks from recent elections, and investigates possible mitigations.

The year of high-stakes elections

In over 70 elections throughout 2024, some 800mn voters will head to the ballot box in India, 400mn in Europe, 200mn in the United States of America, and many more across Indonesia, Mexico, and South Africa1. In many cases, these elections will be polarized and will feature candidates from populist backgrounds. Previous electoral rounds have scarcely been an example of moderation, featuring instead accusations of foreign interference and a deadly assault on the US Congress. Whoever wins the most votes will make decisions on topics as consequential as the US-EU relationship, the future of NATO, trade wars, the geopolitical equilibrium in the Middle East, Hindu-Muslim relations, and more. With so much at stake, the risk of election interference warrants a closer look.

Enter GenAI

The launch of OpenAI’s ChatGPT at the end of 2022 brought GenAI to the mainstream. GenAI denotes an AI system that can create content in the form of text, audio or video. Since ChatGPT popularized the technology, thousands of applications have become readily available at minimal to no cost. These systems have been trained on billions of elements of text, sound or video, and are able to respond to a user query and create synthetic content in those formats.

The existing legal and regulatory frameworks are poorly suited to mitigate the risks deriving from GenAI. Since the launch of ChatGPT, there have already been lawsuits related to intellectual property2, sanctions against corner-cutting lawyers3, egregious reinterpretations of historical facts4, as well as general concern about the bias inherent in these systems5. One specific problem related to GenAI is that of deepfakes, i.e. audio or video files that show people saying or doing things they never in fact said or did. This content is so realistic that it is all but impossible to determine whether what is in front of us is reality or an artificial creation. The consequences are far-ranging, from the potential increase in financial and other fraud6, to the infringement of privacy and individual rights7. But it is in the domain of politics that deepfakes are particularly troubling. They can be used for a variety of bad purposes, from misleading voters about where, when and how they can vote, to spreading fake content attributed to well-recognizable public figures, to generating inflammatory messages that lead to violence8.

GenAI and misinformation in elections

Misinformation is not a new phenomenon, and it certainly is older than artificial intelligence. However, technology can exacerbate and multiply its effects. By some accounts, “25% of tweets spread during the 2016 US presidential elections were fake or misleading”9. GenAI has the potential to turbocharge the creation of fake content, as this no longer requires sophisticated tools and expertise - anyone with an internet connection could do it. 

Examples of deepfake interference in the political process abound10, despite the relatively young age of the technology. In what is perhaps the most consequential event to date, Gabon’s President Ali Bongo appeared in a 2019 video in good health, despite having recently suffered a stroke. The media started questioning the veracity of the video - which is still being debated - ultimately triggering an attempted coup11. Crucially, Schiff et al suggest that “the mere existence of deepfakes may allow for plausible claims of misinformation and lead to significant social and political harms, even when the authenticity of the content is disputed or disproved”12.

During Argentina’s 2023 presidential elections, both camps made extensive use of AI generated content. Ads featured clearly fake propaganda images of candidates as movie heroes, dystopian villains or zombies. In an actual deepfake video - labeled as AI generated - “Mr Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views”13. Also in 2023, synthetic content featured in mayoral elections14 in Toronto and Chicago, the Republican primaries in the US, Slovakia’s parliamentary elections - all the way to New Zealand15.

In the run-up to general elections in India, the Congress party shared a deepfake video of a Bharat Rashtra Samiti leader calling to vote for Congress. The video was shared on social media and messaging apps as voters went to the ballot, and was viewed over 500,000 times before the opposing campaign could contain the damage. AI is being widely used in India to create holograms of candidates, and translate speeches across multiple local languages - as well as for less ethical and transparent objectives16.

In an attempt to simulate bad actors’ attempts to generate misinformation, researchers tested four popular AI image generators and found that the tools “generated images constituting election disinformation in 41%” of cases. This is despite these tools having policies in place that should prevent the creation of misleading materials about elections. The same researchers looked for evidence of misuse and found that individuals “are already using the tool to generate content containing political figures, illustrating how AI image tools are already being used to produce content that could potentially be used to spread election disinformation”17.

Source: Markus Spiske. - https://www.pexels.com/photo/technology-computer-desktop-programming-113850/

Controls and mitigations

Regulation around AI is moving fast in response to even faster technological advancements. Perhaps the most thorough attempt at creating a regulatory framework is the EU AI Act18, approved in March 2024. In the US, a mix of federal and state initiatives seek to address several concerns related to AI, from bias to GenAI, and data privacy. These include the 2023 Presidential Executive Order and related OMB guidance; the NIST Risk Management Framework; and state legislation, from the early New York City Law 144 to the more recent California guidance and proposed bills. Other countries, from Singapore to Australia and China, have approved similar rules. 

Looking at election integrity specifically, the EU adopted in March a new regulation “on the transparency and targeting of political advertising, aimed at countering information manipulation and foreign interference in elections”. This focuses mostly on making political advertising clearly recognizable, but most of the provisions won’t enter into force before the autumn of 202519. Also in March, the European Commission leveraged the Digital Services Act - which requires very large online platforms to mitigate the risks related to electoral processes - to issue guidelines aimed at protecting the June European Parliament elections. The guidelines include the labeling of GenAI content. Although these are just best practices, the Commission can start formal proceedings under the Digital Services Act if it suspects a lack of compliance20. In the US, two separate bipartisan bills have been introduced in the Senate: the AI Transparency in Elections Act21 and the Protect Elections from Deceptive AI Act22.

These frameworks have yet to stand the test of time, and the proliferation of open-source models and APIs makes it an uphill struggle for regulators. Regulation around deepfakes specifically is scarce and complex, as it needs to address two separate issues: the creation of the synthetic material and its distribution. What regulation does exist tends to focus on sexual content23, although in some cases political content is also covered24. Existing norms around privacy, defamation or cybercrime can offer some support, but are ultimately inadequate to prevent harm25. Some tech solutions are available, such as watermarks, detection algorithms to verify authenticity, or embedding provenance tags in content26. Whether these techniques are able to prevent or counter the creation and spread of deepfakes at scale remains an open question - and some of them may have unintended drawbacks27. The experience of social media platforms in tackling the spread of harmful content and misinformation is mixed at best28. Platforms’ efforts to mitigate harm (from content moderation to the provision of trustworthy information), and solutions proposed by other parties (such as the removal of the reshare option), are steps in the right direction - but seem unlikely to move the needle.
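To make the provenance-tag idea concrete, the sketch below is a deliberately simplified illustration, not the actual C2PA/Content Credentials specification referenced above: a publisher attaches a signed manifest containing a hash of the media file, and a verifier recomputes the hash and checks the signature. All names are hypothetical, and the HMAC shared secret is a stand-in for the public-key signing that real provenance schemes use.

```python
import hashlib
import hmac
import json

# Simplified, hypothetical sketch of a content-provenance tag.
# Real schemes (e.g. C2PA) use standardized manifests signed with
# asymmetric keys; a shared secret stands in for the signing step here
# to keep the example self-contained.

SECRET_KEY = b"publisher-signing-key"  # assumption: placeholder for a real signing key


def create_provenance_tag(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a manifest binding the media hash to its claimed origin, then 'sign' it."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "generator_tool": tool,  # e.g. flags the content as AI-generated
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance_tag(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is untampered and that the media matches its hash."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )


if __name__ == "__main__":
    image = b"...synthetic image bytes..."
    tag = create_provenance_tag(image, creator="Example Campaign", tool="image-generator")
    print(verify_provenance_tag(image, tag))          # True: content matches its tag
    print(verify_provenance_tag(image + b"x", tag))   # False: the content was altered
```

Even a scheme like this only helps if platforms actually check the tags and treat the absence of a valid tag as a warning sign - which is precisely why the guidelines discussed above focus on labeling and platform obligations rather than on the cryptography itself.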

It is possible that tech developments in the near future will make it easier to detect and disrupt the flow of disinformation, fake news and deepfakes that threaten to sway elections - such as the recently released OpenAI detector29. But the best tool available right now might be literacy interventions, which can make readers more alert to fake news3031. For example, news media literacy aims to provide the tools to assess information more critically and to identify false information. Hameleers found that this type of intervention is effective at reducing the perceived accuracy of false information, although importantly it does not reduce agreement with it (when the reader’s beliefs align with its message)32.

Conclusions

2024 will be a critical year for liberal democracies and election processes worldwide, from the Americas and Europe to Africa and Asia. Election outcomes will play a crucial role in shaping the orientation of the most pressing issues in world affairs.

The advent of AI tools such as Generative AI threatens electoral processes in democratic countries as it increases the risks of disinformation, potentially swaying voting outcomes. GenAI effectively gives anyone the ability to create synthetic content and deploy it in the form of robocalls, phishing emails, realistic deepfake photography or video, and more. Once this content is online, previous experience teaches that it is very difficult to moderate or eliminate, especially on social media platforms.

While continuing to support tech-based initiatives to detect or tag synthetic content, governments and educational institutions should invest in information literacy programs to equip people with the tools to critically evaluate information and make informed electoral decisions.


  1. Keating, Dave. “2024: the year democracy is voted out?” Gulf Stream Blues (blog). Substack. Dec 29, 2023.<https://davekeating.substack.com/p/2024-the-year-democracy-is-voted?r=wx462&utm_campaign=post&utm_medium=web&triedRedirect=true> ↩︎
  2. Grynbaum, Michael M., and Ryan Mac. “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work.” New York Times. Dec 27, 2023. <https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html> ↩︎
  3. Merken, Sara. “New York lawyers sanctioned for using fake ChatGPT cases in legal brief.” Reuters. June 26, 2023. <https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22> ↩︎
  4. Grant, Nico. “Google Chatbot’s A.I. Image Put People of Color in Nazi-Era Uniforms.” New York Times. Feb 22, 2024. <https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html> ↩︎
  5. Nicoletti, Leonardo., and Dina Bass. “Humans Are Biased. Generative AI Is Even Worse.” Bloomberg. June 9, 2023. <https://www.bloomberg.com/graphics/2023-generative-ai-bias/> ↩︎
  6. Sheng, Ellen. “Generative AI financial scammers are getting very good at duping work email.” CNBC. Feb 14, 2024. <https://www.cnbc.com/2024/02/14/gen-ai-financial-scams-are-getting-very-good-at-duping-work-email.html> ↩︎
  7. Weatherbed, Jess. “Trolls have flooded X with graphic Taylor Swift AI fakes.” The Verge. Jan 25, 2024. <https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending> ↩︎
  8. Alvarez, Michael R., Frederick Eberhardt., and Mitchell Linegar. “Generative AI and the Future of Elections” California Institute of Technology Center for Science, Society, and Public Policy (CSSP), July 21, 2023. <https://lindeinstitute.caltech.edu/documents/25475/CSSPP_white_paper.pdf> ↩︎
  9. Bovet, Alexandre., and Hernán A. Makse. “Influence of fake news in Twitter during the 2016 US presidential election.” Nature Communications. Vol. 10(1), 7. Jan 2, 2019. <https://pubmed.ncbi.nlm.nih.gov/30602729/> ↩︎
  10. Bontcheva, Kalina., Symeon Papadopoulous., Filareti Tsalakanidou., Riccardo Gallotti., et al. “Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities”. European Digital Media Observatory (EDMO), February 2024. <https://edmo.eu/edmo-news/new-white-paper-on-generative-ai-and-disinformation-recent-advances-challenges-and-opportunities/> ↩︎
  11. Delcker, Janosche. “Welcome to the age of uncertainty”. Politico. Dec 17, 2019. <https://www.politico.eu/article/deepfake-videos-the-future-uncertainty/> ↩︎
  12. Bueno, Natalia., Daniel Schiff., and Kaylyn Jackson Schiff. “The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media.” Georgia Institute of Technology GVU Center. <https://gvu.gatech.edu/research/projects/liars-dividend-impact-deepfakes-and-fake-news-politician-support-and-trust-media> ↩︎
  13. Nicas, Jack., Lucia Cholakian Herrera. “Is Argentina the First A.I. Election?” New York Times. Nov 15, 2023. <https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html> ↩︎
  14. Wirtschafter, Valerie. “The Impact of Generative AI in a Global Election Year”. Brookings Institution. Jan 30, 2024. <https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year> ↩︎
  15. Hsu, Tiffany., and Steven Lee Myers. “A.I. Use in Elections Sets Off a Scramble for Guardrails.” New York Times. June 25, 2023. <https://www.nytimes.com/2023/06/25/technology/ai-elections-disinformation-guardrails.html> ↩︎
  16.  Sharma, Yashraj. “Deepfakes democracy: Behind the AI trickery shaping India’s 2024 election.” Aljazeera. Feb 20, 2024. <https://www.aljazeera.com/news/2024/2/20/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections> ↩︎
  17. “Fake image factory: How image generators threaten election integrity and democracy.” Center for Countering Digital Hate (CCDH). March 6, 2024. <https://counterhate.com/wp-content/uploads/2024/03/240304-Election-Disinfo-AI-REPORT.pdf> ↩︎
  18. Abdurashitov, Oleg., and Caterina Panzetti. “AI Regulatory Landscape in the US and the EU: Regulating the Unknown.” ITSS Verona. Jan 18, 2024. <https://www.itssverona.it/ai-regulatory-landscape-in-the-us-and-the-eu-regulating-the-unknown-ai-cybersecurity-space-group> ↩︎
  19. “EU introduces new rules on transparency and targeting of political advertising.” Council of the European Union. March 24, 2024. <https://www.consilium.europa.eu/en/press/press-releases/2024/03/11/eu-introduces-new-rules-on-transparency-and-targeting-of-political-advertising/> ↩︎
  20. “Commission publishes guidelines under the DSA” European Commission. March 26, 2024. <https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1707> ↩︎
  21. “Murkowski, Klobuchar Introduce Bipartisan Legislation to Require Transparency in Political Ads with AI-Generated Content.” Lisa Murkowski, United States Senator for Alaska. March 6, 2024. <https://www.murkowski.senate.gov/press/release/murkowski-klobuchar-introduce-bipartisan-legislation-to-require-transparency-in-political-ads-with-ai-generated-content> ↩︎
  22. “Klobuchar, Hawley, Coons, Collins Introduce Bipartisan Legislation to Ban the Use of Materially Deceptive AI-Generative Content in Elections.” Amy Klobuchar, United States Senator. September 12, 2023. ↩︎
  23. UK Ministry of Justice., and Laura Farris MP. “Government cracks down on ‘deepfakes’ creation.” Press Release. April 16, 2024. <https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation> ↩︎
  24. Ahmed, Trisha. “Minnesota advances deepfakes bill to criminalize people sharing altered sexual, political content.” Associated Press (AP). May 11, 2023. <https://apnews.com/article/deepfake-minnesota-pornography-elections-technology-5ef76fc3994b2e437c7595c09a38e848> ↩︎
  25. Jodka, Sara H. “Manipulating reality: the intersection of deepfakes and the law.” Reuters. Feb 1, 2024. ↩︎
  26. Content Authenticity Initiative Website: <https://contentauthenticity.org/↩︎
  27. Wirtschafter, Valerie. “The Impact of Generative AI in a Global Election Year” <https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year> ↩︎
  28. Aïmeur, Esma., Sabrine Amri., and Gilles Brassard. “Fake news, disinformation and misinformation in social media: a review.” Social Network Analysis and Mining. Vol. 13, 30, 2023. <https://link.springer.com/article/10.1007/s13278-023-01028-5#Fn18> ↩︎
  29. Metz, Cade., and Tiffany Hsu. “OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers.” New York Times. May 7, 2024. <https://www.nytimes.com/2024/05/07/technology/openai-deepfake-detector.html> ↩︎
  30. Jones-Jang, S Mo., Tara Mortensen., and Jingjing Liu. “Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t.” American Behavioral Scientist. Vol. 65(2). <https://journals.sagepub.com/doi/10.1177/0002764219869406> ↩︎
  31. Helmus, Todd C. “Artificial Intelligence, Deepfakes, and Disinformation: A Primer”. RAND Corporation, July 2022. <http://www.jstor.org/stable/resrep42027> ↩︎
  32. Hameleers, Michael. “Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands.” Information, Communication & Society. Vol 25(1). 2022. <https://www.tandfonline.com/doi/full/10.1080/1369118X.2020.1764603> ↩︎

January 18, 2024

AI Regulatory Landscape in the US and the EU: Regulating the Unknown – AI, Cybersecurity, Space Group 

Author: Oleg Abdurashitov and Caterina Panzetti - AI, Cyber Security & Space Team

Among other things, 2023 was a year of AI regulation in the EU, the US and well beyond. The fundamental challenge that policymakers face in the case of AI is that, in essence, they are often dealing with unknowns resulting from the complexity of the technology itself and the break-neck speed of its development and adoption. Given the incessant debate on whether AI poses an existential risk to humanity that needs to be addressed at an early stage, or whether such existential risks are merely a smokescreen obscuring the far more urgent and practical implications of widespread AI deployment for privacy, copyright, human rights and the labor market, setting regulatory priorities appears challenging. Analyzing what regulators in the US and Europe chose to focus on, and how they framed their AI regulatory doctrine, may help to better understand not just their priorities but also the differences in their respective institutional, political and economic environments and their approaches to dealing with emerging technologies.

United States 

Despite the existential threat narrative peddled by the largest industry players, including at the Senate hearings, the Presidential Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence seems more grounded in current reality when assessing AI's potential risks. While the Order attempts to address several critical security issues - from AI-enabled cyber operations to the threat of Weapons of Mass Destruction (WMD) development - it can nonetheless be viewed as an effort to prepare the American economy and society for the age of AI use across numerous (if not all) sectors.

The Order’s approach is based on the unique strengths of the US economy and governance model, which relies heavily on the enormous capacity of the US tech sector, as well as on the diverse environment of the nation’s civil society, where educational institutions, think tanks, and the legal system all play a role in shaping and implementing regulations. Probably the most critical aspect of the AI regulatory environment is that in the US the recent AI breakthroughs are funded by private capital, as opposed to state budgets as in China[1] or, largely, the EU[2]. This, on the one hand, allows the US to retain its competitive edge in the AI race, with the so-called MAGMA[3] companies bearing a large share of the R&D costs of developing breakthrough commercial AI products. On the other, it puts the US government in a position where sectoral regulation must be balanced against the interests of commercial players and must enable, rather than control, technology development and adoption.

The Order implicitly acknowledges this complex interplay between commercial interests, the interests of the state, and American society’s demands. Section 2 (Policy and Principles) in particular broadly outlines the many aspects of AI development - from safety to impact on the workforce - that need to be balanced against each other. Again, given the enormity of such a task, the Order is short on specific details - and when such details are given, they often leave open the question of whether it will be able to address the long-term security implications of AI development.

For instance, in Section 4 the Order puts the “dual-use foundation models” that may pose “a serious risk to security, national economic security, national public health or safety” under increased regulatory and technical scrutiny. The definition of such a model as one containing “at least tens of billions of parameters” covers the leading large language models (LLMs) behind ChatGPT and Google Bard, each having more than 100 billion parameters. The Order’s approach to regulating such powerful models relies largely on industry guidelines (such as the NIST AI Risk Management Framework[4]) developed in collaboration with the private sector players themselves, complemented by a series of government-funded testbeds for risk assessment.

It is important to note that while the commonly agreed approach to AI model training can be described as “greater is better”, there is evidence that the output of models with a far smaller number of parameters (1.5B to 2.7B) can be somewhat comparable to that of larger models[5]. Additionally, while larger models are generally controlled by specific entities, open-source models (such as Meta’s Llama, available in 7B, 13B, and 70B parameter versions[6]) may be used by a far wider range of actors developing their own powerful models, potentially falling outside regulatory scrutiny and export control measures.

Moreover, the Order explicitly focuses on very large models as subject to regulatory restrictions, such as “[a] model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations”. That number, for instance, is significantly higher than the rumoured estimate for the most advanced model on the market today - OpenAI’s GPT-4 - which currently stands at around 2.15 x 10^25 FLOPs (floating-point operations)[7]. If the field sustains the current pace of innovation, this threshold may well be crossed shortly. However, there is so far little evidence that such models would indeed represent “potential capabilities that could be used in malicious cyber-enabled activity”, since malicious cyber operations today require far less computing power and the “cyber-enabled” definition may simply be too broad to have meaning in a regulatory context.
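To put the headroom in perspective - taking the Order’s threshold and the rumoured GPT-4 estimate cited above at face value, and treating both figures as rough approximations:

$$\frac{10^{26}}{2.15 \times 10^{25}} \approx 4.7$$

That is, a model would need nearly five times GPT-4’s estimated training compute before falling under the Order’s threshold, which is why sustained scaling could plausibly cross the line within a few model generations.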

Of course, the proposed control regime for the large ‘dual-use models’ need not necessarily fully address the issue of AI-powered malicious activity as of today. Instead, the Order directs federal agencies to study the best practices and guidelines of the critical infrastructure sectors to manage “AI-specific cybersecurity risks” and “develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards” as well as assess the risks of AI usage in the critical infrastructure and government systems. From this point of view, the Order implicitly acknowledges the fact that AI models are already largely deployed both in the private and public sectors and calls for measures to discover and reduce the risks of such use. 

Notably, the US DoD’s Data, Analytics, and Artificial Intelligence Adoption Strategy[8], released months earlier, in June 2023, prioritizes speed of deployment over the careful risk assessment that the Presidential Order entails. To the military, the “[AI deployment] risks will be managed not by flawless forecasting, but by continuous deployment powered by campaigns of learning”. Moreover, the DoD calls for the mitigation of policy barriers through consensus building and closer relations with vendors, as well as with the AI community at large. Despite the risks of AI deployment being no less profound in the military sector than in civilian affairs, the US Government as a customer may well prefer the speed of decision-making - and the many other benefits that AI can potentially bring to warfighting - over a more careful and balanced approach.

Source: https://www.pexels.com/photo/blue-bright-lights-373543/

EU AI Act

2021 was marked by a race towards gaining normative authority in the field of Artificial Intelligence-enabled services. The European Union has been leading this chase by adopting a comprehensive, risk-oriented approach to AI regulation, providing a broad regulatory framework to ensure security and the protection of fundamental rights.

The Commission has indeed proposed a model founded on a decreasing scale of risk-based obligations to which providers will have to adhere in order to continue conducting their business in the European Union, irrespective of their place of establishment[1]. Service providers which surpass the threshold of what the legislator has referred to as “high risk” face a ban under the AI Act and will not be allowed to distribute their services in the Union, as they are deemed to pose an unacceptable risk to the livelihood and safety of the users of such services. Just a tier below these forbidden services, providers labelled as “high risk” will have to comply with the most burdensome obligations. Notably, the proposed regulation will not have any impact on AI systems implemented for military purposes.

High-risk providers are identified with critical infrastructures and services which deeply affect users’ daily lives and which could potentially implement discriminatory or harmful practices. The non-exhaustive list comprises providers that supply technologies applied for transportation or employment purposes, migration management, the administration of justice and law enforcement processes[2]. Providers of such services will be asked to supply, among other requirements, adequate risk assessments, a high level of robustness and security to make sure that “AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities”[3], and detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance[4]. Although the Parliament has included all biometric identification systems as high risk, Italy, Hungary and France have been pushing for a more lenient regime for the employment of biometric identification instruments for surveillance purposes. The outcome of this debate will be seen at the moment of ratification of the Act.

Despite the praiseworthy effort of the European legislator in setting up standards which prioritize fundamental rights and security for its citizens, reinforced by clear enforcement measures and fines directed at non-compliant providers, it is pivotal to highlight some challenges that regulating AI will pose for future legislative attempts.

Firstly, the main area of concern is the tug of war between maintaining a firm hold over high-risk service providers and, on the other hand, ensuring the smooth progress of AI innovation in the EU. We will likely witness a certain degree of lobbying from what are arguably the top-tier AI companies based in the US (mainly Meta, OpenAI, Google DeepMind, etc.), watering down the original scope of the Act. This concern rapidly escalated into a concrete debate over the regulation of foundation models. “The foundation models, such as GPT-3.5 - the large language model that powers OpenAI’s ChatGPT - are trained on vast amounts of data and are able to carry out a wide range of tasks in a number of use cases. They are some of the most powerful, valuable and potentially risky AI systems in existence”[5]. While the proposed Act was keen on firmly regulating foundation models, a three-way initiative by the German, Italian and French governments sought to loosen the grip over these providers by proposing a self-regulation system, strongly criticising Brussels for over-regulating service providers’ conduct and hindering innovation in the Union[6]. The leaders of said countries also expressed deep concern that smaller European-based companies will not be able to keep up with the obligations raised by the Act[7]. While the Parliament maintains a firm formal position on the impossibility of excluding foundation models, it is apparent that this opposition could potentially trigger a stalemate in the legislative process.

A second critical issue is the exclusion from the scope of the Act of AI instruments applied for military, national security and national defence purposes. Civil society organizations have indeed expressed major concern about the possibility that technologies which would theoretically be labelled as posing an unacceptable risk could be deployed if they fall under the umbrella of defending national security, and, additionally, that dual-use technology could be employed without any regulatory restriction[8].

Finally, the Act faces issues regarding its coordination with the US Order. Although both legislative instruments are based on a risk-based approach, the Senate has been more hesitant to espouse the European hard line. As Alex Engler - an associate in governance studies at The Brookings Institution - wrote for Stanford University: “There’s a growing disparity between the U.S. and the EU approach in regulating AI. The EU has moved forward on laws around data privacy, online platforms, and online e-commerce regulation, and more, while similar legislation is absent in the U.S”[9]. Furthermore, the US Order struggles to draw clear-cut enforcement measures against companies which happen to be in breach of their obligations; it is therefore clear that the priority of the American legislator lies mostly in maintaining international competitiveness[10]. Needless to say, the lack of homogeneous standards hinders both natural and legal persons, the latter being obliged to change their operations depending on the country in which their services are distributed.

Despite the said shortcomings, the Act will hopefully be the kick-starter of a broader strategy able to compensate for the strict approach adopted in the regulation, thus attracting investment and levelling the competition with the US.


[1] https://www.lawfaremedia.org/article/a-comparative-perspective-on-ai-regulation

[2] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[4] Ibid.

[5] https://time.com/6338602/eu-ai-regulation-foundation-models/

[6] Ibid.

[7] Ibid.

[8] https://www.stopkillerrobots.org/news/what-are-the-ai-act-and-the-council-of-europe-convention/

[9] https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement

[10] https://www.oii.ox.ac.uk/news-events/the-eu-and-the-us-two-different-approaches-to-ai-governance/


[1] https://cset.georgetown.edu/article/in-out-of-china-financial-support-for-ai-development/

[2] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[3] https://twitter.com/ylecun/status/1662375684612685825?lang=en

[4] https://www.nist.gov/itl/ai-risk-management-framework

[5] https://www.scientificamerican.com/article/when-it-comes-to-ai-models-bigger-isnt-always-better/

[6] https://ai.meta.com/llama/

[7] https://hackernoon.com/the-next-era-of-ai-inside-the-breakthrough-gpt-4-model

[8] https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF

October 28, 2023

‘From Knowing to the Impossibility of Not Knowing’ – Imposing International Criminal Responsibility on Human Combatants for War Crimes Committed by Autonomous Weapons

Author: Vendela Laukkanen - AI, Cyber Security & Space Team

Introduction

The United Nations Secretary-General and the President of the International Committee of the Red Cross (ICRC) called on States ‘to take decisive action now to protect humanity’, referring to the threat posed by autonomous weapons systems (AWS).1 The joint call further referred to the restrictions on certain weapons under International Humanitarian Law (IHL) and elucidated the accountability of States and individuals for any violations, since impunity threatens peace and security.2 This post will focus on the latter: the principal criminal responsibility of individuals when the acts of an AWS result in war crimes. The discussion will be twofold: the ethical concern of who will bear responsibility in such a scenario, followed by the legal dilemma of an ‘accountability gap’3 in front of the International Criminal Court (ICC). The definition of AI weapons for the purpose of this discussion is:

‘Any weapon system with autonomy in its critical functions—that is, a weapon system that can select… and attack… targets without human intervention’.4

Ethical Dilemma

To maintain peace and security, responsibility for the most egregious breaches of IHL is crucial; the bearers of such responsibility have thus far been human combatants.5 The purpose of criminal responsibility is firstly to ‘deter future violations’6 and secondly to ensure justice for victims. In the case of AWS, the question is where such responsibility ought to be placed to satisfy both factors of criminal responsibility (deterrence and justice): the manufacturer that produced the machine; the combatant deploying the weapon; or the AWS itself?

Proponents of AWS argue that a machine will be better equipped than a human to distinguish between military targets and civilian persons/objects7; it could thus be presumed that if the AWS strikes indiscriminately it is due to a malfunctioning of the system, and the manufacturer ought to be responsible. However, as Sparrow claims, if the risk of mistargeting has been acknowledged to the person deploying the weapon, or if the weapon has sufficient autonomy to act outside of its initial programming, to hold the manufacturer accountable ‘would be analogous to holding parents responsible for the actions of their children once they have left their care’.8 The second scenario - holding the combatant that deployed the weapon responsible - is not unproblematic either: as Sparrow points out, the distinguishing factor of AWS compared to other weapons is their ability to choose targets independently of human control, so imposing responsibility on the combatant would be unfair.9 However, the human deploying the weapon ought to be aware of its autonomous nature - that the machine can be involved in mistargeting is therefore a foreseeable risk, leading to the argument that the combatant accepts that risk when deciding to deploy the AWS. The final scenario is to impose responsibility on the weapon itself. It must therefore be possible to punish the machine and, as Sparrow claims, make it suffer - the purpose of punishment.10 Whether a machine is able to suffer and feel remorse in a way consistent with the human idea that ‘justice has been done’, refrain from repeating the behaviour, and deter other machines from committing war crimes, is highly questionable.

Legal Dilemma

The ICC’s mission - to fight impunity for the most serious crimes - risks being undermined if AWS are deployed with little to no chance (or risk) of criminal responsibility; such an ‘accountability gap’ therefore threatens to increase war crimes and destabilise the laws of war.11 If we are to retain morality in war, the most plausible solution is to hold the combatant deploying the AWS responsible for grave IHL breaches, as with any other weapon. Criminal law requires proof of mens rea (‘guilty mind’) and actus reus (‘guilty act’). The Rome Statute of the ICC requires intention and knowledge as the default mens rea12, and the war crime of ‘intentionally directing attacks against the civilian population…’13 relates to the IHL rule of distinction. A combatant who intentionally and knowingly deploys an AWS incapable of functioning lawfully would be a clear-cut case under the ICC regime - but the combatant deploying the AWS without the knowledge and intention to attack civilian targets (acting with dolus eventualis) bestrides the accountability gap.14 However, a probative practice that allows a mental element to be inferred from conduct and circumstances with no reasonable alternative explanation15 may provide a solution. Indeed, ICC case law appears to suggest that intent:

‘...may be inferred from various factors establishing that civilians… were the object of the attack…’16; and:

‘...lack of discrimination or precaution in attack may constitute an attack against civilian targets…’17

Source: https://www.itssverona.it/wp-content/uploads/2023/10/5a2e5ec681cb80d9c17f3e8af8e252f1-1-e1700696117407.webp

This is also evident in the case law of other international tribunals18. As stated by the Court itself, ‘…it must be established that in the circumstances… a reasonable person could not have believed that the individual or group… attacked was… directly participating in hostilities’19, thus shifting the mens rea analysis from the subjective state of mind of the combatant to the objective standard of what the reasonable person must have known of the civilian status in the circumstances.20 The accountability gap is therefore mitigated by establishing that it was impossible for the combatant not to have known of the civilian status of the targets; the combatant who still deploys the AWS must therefore have intended the attack, ‘from knowing to the impossibility of not knowing’21.

Conclusion

This post has attempted to briefly discuss the ethical and legal dilemmas of AI used in warfare. Whilst no simple answers exist, it is clear that if autonomous weapons are to be used in times of armed conflict, humans with the moral capacity to suffer and feel remorse must be the bearers of responsibility for war crimes. The accountability gap may be mitigated by allowing a dolus eventualis mens rea standard at the ICC, which finds support in the case law. After all, the impossibility of holding humans criminally responsible for war crimes committed by AWS calls into question whether the international community is ready to abandon the laws of war and the last 25 years of fighting impunity for the most serious crimes.


1 ICRC ‘Joint Call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to establish new prohibitions and restrictions on Autonomous Weapons Systems’ (icrc.org, 05 October 2023) <https://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems> accessed 12 October 2023.

2 ibid.

3 Davison, N., ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (2017) No. 30 UNODA Occasional Papers 16.

4 ibid 5.

5 See also: Davison, N., ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (2017) No. 30 UNODA Occasional Papers 19.

6 ICRC ‘Joint Call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to establish new prohibitions and restrictions on Autonomous Weapons Systems’ (icrc.org, 05 October 2023) <https://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems> accessed 12 October 2023.

7 Dawes, J., ‘The case for and against autonomous weapon systems’ (2017) 1(9) Nature Human Behaviour 613.

8 Sparrow, R., ‘Killer Robots’ (2007) 24(1) Journal of Applied Philosophy 70.

9 ibid 71.

10 ibid 72.

11 See also: Dawes, J., ‘The case for and against autonomous weapon systems’ (2017) 1(9) Nature Human Behaviour 614.

12 Rome Statute Art. 30.

13 Rome Statute Arts. 8(2)(b)(i), 8(2)(b)(ii) and 8(2)(e)(i).

14 See also: Davison, N., ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (2017) No. 30 UNODA Occasional Papers 16; Abhimanyu, G., ‘Autonomous cyber capabilities and individual criminal responsibility for war crimes’ (2021) Autonomous Cyber Capabilities Under International Law 8.

15 See also: Abhimanyu, G., ‘Autonomous cyber capabilities and individual criminal responsibility for war crimes’ (2021) Autonomous Cyber Capabilities Under International Law 10.

16 Katanga trial judgment (n 35) para 807.

17 Ntaganda trial judgment (n 35) para 921.

18 Prosecutor v Dragomir Milošević (TC) [2007] International Criminal Tribunal for the Former Yugoslavia IT-98-29/1-T [948]. See also, Prosecutor v Stanislav Galić (AC) [2006] International Criminal Tribunal for the Former Yugoslavia IT-98-29-A [132]; Prosecutor v Tihomir Blaškić (TC) [2000] International Criminal Tribunal for the Former Yugoslavia IT-95-14-T [501–12].

19 Ntaganda trial judgment (n 35) para 921.

20 Abhimanyu, G., ‘Autonomous cyber capabilities and individual criminal responsibility for war crimes’ (2021) Autonomous Cyber Capabilities Under International Law 14.

21 Ibid 15.

February 20, 2023

Diletta Huyskes interviewed on AI and Human Rights

Diletta Huyskes, Head of Advocacy at Privacy Network, talks about the latest developments regarding Artificial Intelligence. In particular, this episode deals with the challenges that AI poses to the protection of Human Rights and how this issue is tackled in the upcoming AI Act.

Interviewers: Ilaria Lorusso and Luca Mattei

November 21, 2022

Artificial Intelligence in the World of Art: A Human Rights Dilemma

Author: Maria Makurat.

Artificial Intelligence, or “AI”, is already widely used for various purposes, whether in analysing marketing trends, modern warfare or, most recently, reproducing artwork. Since around 2021, various articles have discussed AI being developed to reproduce an artist’s style and even create new artwork, raising the ethical issue of whether artists are in danger of losing the copyright claim to their own work. This issue is very new, and one cannot say for sure where the development is going or whether one should be concerned in the first place. This article explains the recent debate and the issues being addressed, drawing upon classical AI theory from warfare and highlighting possible suggestions.

Artificial intelligence not only in the military realm

“In April this year, the company announced DALL-E 2, which can generate photos, illustrations, and paintings that look like they were produced by human artists. This July OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.”

An article by Wired, “Algorithms Can Now Mimic Any Artist. Some Artists Hate It,” discusses how an AI called “DALL-E 2” can reproduce an artist’s style and make new photos, digital art and paintings. In theory, anyone can use the programme to mimic another artist, or artists can use it to make new art based on their old work. This brings many issues to light, such as whether one can copyright an art style (as is also discussed in the article), what exactly one wants to achieve by using AI to recreate more art, and how this will be handled in the future if artwork is indeed stolen. An earlier article by the Los Angeles Times from 2020, “Edison, Morse ... Watson? Artificial intelligence poses test of who’s an inventor,” already addressed this issue by discussing who exactly the “inventor” is when AI can develop, for instance, computer games and other inventions. It is true that a human being must develop the AI programme; however, can that person also be called the inventor if the programme develops its own ideas and perhaps its own artwork? In relation to the general debate, one should consider Article 27 of “The Universal Declaration of Human Rights”: “Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.”
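To give a sense of how low the technical barrier has become, the sketch below shows roughly what “anyone can use the programme” looks like in practice. It is a minimal, hypothetical illustration that assumes OpenAI’s publicly documented image-generation REST endpoint; the prompt, environment variable and output handling are my own assumptions, not details reported in the Wired article.

```python
import os
import requests

# Hypothetical illustration: a minimal request to OpenAI's documented
# image-generation endpoint (as publicly described for DALL-E style models).
# The prompt below is invented for the example.
API_URL = "https://api.openai.com/v1/images/generations"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "prompt": "a harbour at dusk in the style of a well-known living painter",
        "n": 1,
        "size": "1024x1024",
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["data"][0]["url"])  # link to the generated image
```

A handful of lines like these are enough to request an image “in the style of” a named artist, which is precisely the accessibility that fuels the copyright debate described above.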

Some recent debate centres not only on the “ethics” of artificial intelligence but goes one step back to understand the term “intelligence”. Joanna J. Bryson writes: “Intelligence is the capacity to do the right thing at the right time. It is the ability to respond to the opportunities and challenges presented by context.”[i] Whilst the authors consider AI in relation to law, they do point out that: “Artificial intelligence only occurs by and with design. Thus, AI is only produced intentionally, for a purpose, by one or more member of the human society.”[ii] Bryson further discusses that the word “artificial” means that something has been made by humans, which again raises a key question in AI: whether the human or the programme is responsible.[iii] When we consider this in relation to human rights issues and ethics, it may be true that AI in the world of art is produced with a purpose by humans, but the problematic issue remains of what that purpose is. We need a clear outline of why an AI programme has been made for the art world, and for what purpose, in order to be able to answer further questions.

It has been pointed out that one should consider this development as nothing new, since AI was already used in the 1950s and 1960s to generate certain patterns and shapes. Many see it as a tool that helps artists in these areas to work faster and be more precise, and it has been argued that one should not be worried about AI replacing humans at all, since it lacks the human touch in the first place. Yet it remains to be seen how far AI can learn and adapt, since it is programmed to do exactly that. And if one should not be concerned about AI replacing human artists, then why is the debate happening in the first place?

Credits: unsplash.com

The continuing need for clearer definitions

It is not only a matter of the AI replicating art, but of how we can determine whether the system has crossed the line into copyright infringement: “(…) lawsuits claiming infringement are unlikely to succeed, because while a piece of art may be protected by copyright, an artistic style cannot.” This only shows again that one needs to define more clearly, and quickly, what an “artistic style” and an “artwork” are, and to what extent AI should even be allowed to replicate a style.

One can draw a comparison to AI in warfare, where debates concern themes such as the responsibility gap, moral offloading and taking humans out of the loop (discussed by scholars such as Horowitz, Asaro, Krishnan and Schwarz). Dear argues, for example, that psychological analyses show we suffer from cognitive bias and that AI (in terms of military defence) will change our decision-making process.[iv] The campaign “Stop Autonomous Weapons”, to take the example of drone warfare, depicts how drones can be used without directly sending humans into battle, how such a system can get out of hand, and how people distance themselves from responsibility. This type of warfare affects the decision-making process by distancing soldiers and strategists from the battlefield. Bearing in mind, of course, that using AI in the art world does not involve possible casualties, one can still observe a similar distancing from responsibility and moral offloading. It comes back to the recurring question of who is responsible if an AI system decides by itself which choices to make, how to make them, and what the output will be. No humans are involved while the art pieces are being made or “replicated”; however, an individual was present during the development of the AI. I would call this a problematic ethical circle of debate in the art world.

Even though the idea of using AI to copy an art style or artworks altogether is quite new and perhaps even underdeveloped, one should consider certain methods more seriously in order to bring some control and management into the game. Nick Bostrom, for instance, discusses what a superintelligence would entail, arguing that certain incentive methods are needed for the AI to learn and adapt to human society: “Capability control through social integration and balance of power relies upon diffuse social forces rewarding and penalizing the AI. (…) A better alternative might be to combine the incentive method with the use of motivation selection to give the AI a final goal that makes it easier to control.”[v]
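Read very loosely, Bostrom’s “incentive method” can be pictured as simple reward shaping. The toy sketch below is my own illustration of that idea, not something drawn from Bostrom or the sources above; the action names, the approved set and the penalty value are invented for the example.

```python
# Toy illustration of an "incentive method" as reward shaping: the system's raw
# task reward is reduced whenever its action falls outside a human-approved set.
APPROVED_ACTIONS = {"create_original_work", "cite_source_artist"}

def shaped_reward(action: str, task_reward: float, penalty: float = 10.0) -> float:
    """Return the task reward, minus a penalty for non-approved behaviour."""
    return task_reward - (0.0 if action in APPROVED_ACTIONS else penalty)

print(shaped_reward("create_original_work", 5.0))  # 5.0  -> incentivised
print(shaped_reward("replicate_style", 5.0))       # -5.0 -> penalised
```

Even in this trivial form, the design question Bostrom raises is visible: someone has to decide what counts as approved behaviour and how heavy the penalty should be, and that someone is human.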

Conclusion

It is not only problematic for the art world that an AI is able to copy any artist’s style; it is also concerning how much further this development could go, taking an artist’s style, creating an entire new series and thereby blurring the line between the original artist and the artificial one. As others have already pointed out, better definitions are needed, but this must be stressed more strongly: we need clearer definitions of who is an “artist”, an “inventor” or a “digital artist” when AI enters the discussion and is apparently here to stay. A clear distinction must be made between a human artist and a ‘programme artist (AI)’. Can artists still call themselves artists when they use AI to produce art? All these questions should be discussed further in the near future, since AI has entered the art realm and will likely play an even larger role, perhaps with the development of the metaverse.


[i] Markus Dirk Dubber, Frank Pasquale and Sunit Das (eds), The Oxford Handbook of Ethics of AI (Oxford: Oxford University Press, 2020), p. 4.

[ii] Ibid., p. 6.

[iii] Ibid., p. 5.

[iv] Dear, Keith, “Artificial Intelligence and Decision-Making,” p. 18.

[v] Bostrom, Nick, Superintelligence: Paths, Dangers, Strategies, p. 132.

November 30, 2021No Comments

How Different Political Powers Approach the Issue of Ethics in the Development of Artificial Intelligence

By: Zrinka Borić

Image Source: https://www.pexels.com/photo/person-reaching-out-to-a-robot-8386434/

Advancement of artificial intelligence (AI) technology is expected to drive progress and change in the military, the economy, and the information domain. This so-called “fourth industrial revolution” opens up various possibilities, the most probable of which is the further development and prosperity of those able to reap the benefits, thereby entrenching existing inequalities in the global state system.

The main concern an average person has regarding AI is the idea of a post-apocalyptic world in which robots and AI have completely overtaken the Earth, as depicted in many famous science-fiction publications. To approach this topic it is necessary to keep two things in mind. First, strong AI (also called Artificial General Intelligence, or AGI), meaning systems that simulate human reasoning and match human intelligence, does not currently exist, and experts cannot agree on when this type of AI is to be expected. Second, artificial intelligence systems rely heavily on data; the quantity, quality and availability of data are therefore crucial. In the long term, an ethical and responsible approach to data collection for AI development and implementation aims to guarantee balanced and responsible innovation.

For example, the United States and the European Union countries have expressed a commitment to developing trustworthy and ethical AI. Countries like China and Russia, on the other hand, have not shown such commitment in the development and employment of their autonomous weapons systems. Cyber policy and security expert Herbert Lin expresses the concern that, because of this lower regard for ethical and safety issues, their weapons may well prove more militarily effective and be fielded sooner.

Different forms of government take different approaches to AI development and implementation. China is an authoritarian and hierarchical state, the United States is a federal republic with a democratically run government, while the European Union is a political and economic union that operates through a combination of supranational and intergovernmental decision-making.

PEOPLE’S REPUBLIC OF CHINA

China defines artificial intelligence research and development as key to boosting national economic and manufacturing competitiveness as well as to providing national security. China’s vigorous approach to AI development is driven by the potential future economic benefit: experts assume that China will see the highest relative economic gain from AI technologies, since AI is expected to improve its productivity and manufacturing capacity and thereby help it meet future GDP targets. China therefore faces the risk of developing and applying AI without giving enough attention to its responsible use and to preparing its citizens to adapt to the changes that widespread AI adoption may bring. China has already once fallen into the trap of recklessly rushing into uncontrolled progress, which led to an unsustainable level of growth accompanied by a set of negative effects on its economy. China’s clear competitive advantage lies in its abundance of data, which will most likely become one of the crucial elements in the future development of AI technology, along with relatively loose privacy laws, vibrant start-ups, and a steady rise in the number of AI engineers.

THE EUROPEAN UNION

The state structure shapes the design of AI policy and its implementation. When discussing the EU it is important to keep in mind that the EU is not a country, but an economic and political organisation that is both supranational and intergovernmental. Considering that the economic prosperity and national security of the European Union are still firmly in the hands of the national governments, it is easy to understand why the organisational structure of the Union hinders the quick, concrete decision-making that is always favourable under conditions of international competition. The EU has nevertheless succeeded in publishing joint plans and policies regarding AI, such as the Civil Law Rules on Robotics, the Declaration of Cooperation on Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Policy and Investment Recommendations for Trustworthy AI.

The European Union pays special attention to studying the potential impact of artificial intelligence technology on society. The research usually involves social aspects such as data protection (e.g. the GDPR), network security and AI ethics. “There are more substantial ethical or normative discussions when it comes to developing human-centered and trustworthy AI technologies. [...] Developing the culture of trustworthy AI and not only when it comes to security and defense, but more broadly about AI enabled technologies. This is at the forefront of the policy and political thinking in Brussels,” claims Raluca Csernatoni, an expert on European security and defence with a specific focus on disruptive technologies.

In 2018 member states signed the Declaration of Cooperation on Artificial Intelligence, in which the participating states agreed to cooperate in various fields of AI development and implementation, including ensuring an adequate legal and ethical framework, building on EU fundamental rights and values.

THE UNITED STATES

During the Obama administration, the National Science and Technology Council (NSTC) Committee on Technology drafted the report Preparing for the Future of Artificial Intelligence in 2016. Concerns about safeguarding “justice, fairness, and accountability” if AI were to be tasked with consequential decisions about people had previously been raised in the Administration’s Big Data: Seizing Opportunities, Preserving Values report and the Big Data and Privacy: A Technological Perspective report. Regarding governance and safety, the report advises that the use of AI technology must be controlled by “technical and ethical supervision”.

Later, during the Trump Administration, the 2019 AI R&D Strategic Plan set out seven main fields of interest, one of which is understanding the ethical, legal, and societal implications of AI. Judging by the recent EU-US Trade and Technology Council (TTC), the current administration clearly continues to support the development of responsible and trustworthy AI.

THE U.S.-EU COOPERATION

The most recent U.S.-EU cooperation on AI, the TTC, was launched on September 29, 2021 in Pittsburgh. TTC working groups are discussing issues including technology standards, data governance and technology platforms, and the misuse of technology threatening security and human rights, among many others. The United States and the European Union affirmed their commitment to a human-centered approach and to developing a mutual understanding of the principles of trustworthy and responsible AI. However, both have expressed significant concerns that authoritarian governments are piloting social scoring systems with the aim of implementing social control at scale. They agree that these systems “pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems”.

CONCLUSION

Different forms of government differ immensely in their approach to the development and implementation of AI, as well as to the necessary principles of ethics and responsibility. However, governments need to take further action with great caution. When implemented carelessly, without taking ethics and safety into consideration, AI could end up being ineffective or, even worse, dangerous. Governments need to implement AI in a way that builds trust and legitimacy, which ideally requires legal and ethical frameworks to be in place for handling and protecting citizens’ data and governing algorithm use.

November 2, 2021No Comments

The United States’ Race for Supremacy in Artificial Intelligence

By: Zrinka Borić

“Where we choose to invest speaks to what we value as a Nation. This year’s Budget, the first of my Presidency, is a statement of values that define our Nation at its best.” - Joseph R. Biden, Jr. (The Budget Message of the President)

This article navigates the landscape of AI policymaking and tracks the efforts of the United States to promote and govern AI technologies.

Technological advancement has become a new approach to increasing a state’s political, military, and economic strength. The Cold War and the arms race between the two then-strongest nations in the world, the United States of America (USA) and the Soviet Union (USSR), revealed the potential that lay in the development of technology. Today, the United States is again at the forefront of the race for supremacy in a potentially world-changing technology: artificial intelligence (AI).

Artificial intelligence has the potential to fundamentally change the strategy, organization, priorities, and resources of any national community that manages to develop AI technology, drive further innovation, and eventually apply it. Artificial intelligence is going through major evolution, and its potential is increasing rapidly. Progress is visibly accelerating, and our social, political, and economic systems will be greatly affected. One of the important questions is how to capture all the opportunities AI technology can offer while avoiding or managing the risks.

The American AI Initiative

The United States is characterized by a skilled workforce, an innovative private sector, good data availability, and effective governance, which are all key factors in the government’s ability to enable effective development and adoption of AI.

The United States published its national AI strategy, the American AI Initiative, in 2019. The responsible organization is the White House, and its priority is to increase federal government investment in AI research and development (R&D) and to ensure technical standards for safe AI development and deployment. The American AI Initiative expresses a commitment to collaborate with foreign partners while promoting U.S. leadership in AI. Nevertheless, it is important to note that the American AI Initiative is not particularly comprehensive, especially when compared with those of other leading nations, and is characterized by a lack of both funding and palpable policy objectives.

In 2019, U.S. policymakers were advised to advance the American AI Initiative with concrete goals and clear policies aimed at advancing AI, such as spurring public sector AI adoption and allocating new funding for AI R&D rather than simply repurposing existing funds.

AI in the USA Budget for FY2022 

President Biden's budget for FY2022 includes approximately $171.3 billion for research and development (R&D), which is an 8.5% ($13.5 billion) increase compared to the FY2021 estimated level of $157.8 billion. 

According to the 2021 AI Index Report, in FY 2020 U.S. federal departments and agencies spent a combined $1.8 billion on unclassified AI-related contracts. This represents an increase of more than 25% from the amount spent in FY 2019.

One of the agencies with a major R&D program is the National Institute of Standards and Technology (NIST). President Biden is requesting $1,497.2 million for NIST in FY2022, an increase of $462.7 million (44.7%) from the FY2021 level of $1,034.5 million. The second-highest program budget increase within NIST is for Partnerships, Research, and Standards to Advance Trustworthy Artificial Intelligence, at $45.4 million (an increase of $15 million compared to FY2021).
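As a quick back-of-the-envelope check of the figures cited above and of the FY2022 R&D totals mentioned earlier, the short sketch below reproduces the arithmetic; the inputs are the rounded values from the text, which is why the total R&D percentage lands a notch above the reported 8.5%.

```python
# Check of the budget figures cited in the text (all values in millions of USD).
fy21_rd, fy22_rd = 157_800, 171_300  # total federal R&D: FY2021 estimate vs FY2022 request
print(fy22_rd - fy21_rd)                              # 13500 -> the reported $13.5 billion increase
print(round((fy22_rd - fy21_rd) / fy21_rd * 100, 1))  # 8.6   -> close to the reported 8.5%

fy21_nist, fy22_nist = 1_034.5, 1_497.2  # NIST budget: FY2021 vs FY2022 request
print(round(fy22_nist - fy21_nist, 1))                      # 462.7 -> matches the reported increase
print(round((fy22_nist - fy21_nist) / fy21_nist * 100, 1))  # 44.7  -> matches the reported 44.7%
```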

Some departments are expecting large percentage increases in R&D funding, among them the Department of Commerce (DOC), with an increase of up to 29.3%. At the same time, it is interesting to note that one of DOC’s latest projects is the creation of the National Artificial Intelligence Advisory Committee (NAIAC), which will be discussed below.

Numerous policymakers in Congress are particularly interested in Department of Defense Science and Technology (DOD S&T) program funding. An increasingly popular belief in the defense community holds that sustained support for S&T activities is necessary to maintain the United States’ military superiority in the world.

The budget request represents President Biden’s R&D priorities, and Congress may agree with it partially, completely, or not at all. It is safe to say that AI has gained the attention of Congress: the 116th Congress (January 3, 2019 - January 3, 2021) was the most AI-focused congressional session in history, mentioning AI more than three times as often as the 115th Congress (486 mentions versus 149).

National and International Efforts

As indicated in its national AI strategy, the United States takes part in various intergovernmental AI initiatives, such as the Global Partnership on AI (GPAI), the OECD Network of Experts on AI (ONE AI) and the Ad Hoc Expert Group (AHEG) for the Recommendation on the Ethics of Artificial Intelligence, and has participated in global summits and meetings such as the AI Partnership for Defense and the AI for Good Global Summit. In addition, the United States announced a declaration of a bilateral agreement on AI with the United Kingdom in December 2020.

On September 8, 2021, U.S. Secretary of Commerce Gina Raimondo announced the establishment of the National Artificial Intelligence Advisory Committee (NAIAC). The main purpose of the NAIAC will be to advise the President and the National AI Initiative Office (NAIIO) on issues related to AI. “AI presents an enormous opportunity to tackle the biggest issues of our time, strengthen our technological competitiveness, and be an engine for growth in nearly every sector of the economy. But we must be thoughtful, creative, and wise in how we address the challenges that accompany these new technologies,” Raimondo said.

The United States or China? 

The United States is showing an increasing interest in developing and implementing artificial intelligence through an increased federal AI-related budget, the establishment of new committees, intergovernmental AI initiatives, bilateral agreements, and participation in global summits, but the constant comparison being made is with China. Should the future battle over artificial intelligence be between the USA and China, the question arises: who will win this battle for AI supremacy?

Recently, a former Pentagon expert claimed that the race is already over and that China has won. The Pentagon’s first chief software officer resigned over the slow pace of technological advances in the U.S. military, claiming the USA has no fighting chance against China in the upcoming years and that it is already a done deal.

At the same time, artificial intelligence expert Kai-Fu Lee, former President of Google China, disagrees with this claim. He notes that the US has a clear academic lead in artificial intelligence, supporting this by pointing out that all 16 Turing Award recipients in AI are American or Canadian and that the top 1% of published papers are still predominantly American. China is simply faster at commercializing technologies and has more data.

Artificial intelligence already has numerous uses (academic, military, medical, etc.), and when assessing countries’ reach in AI technology it is important to distinguish between these different uses.

To answer the question of whether the United States or China will win the AI ‘race’, or whether a new force will emerge, it is necessary to closely monitor AI technology development and compare countries using a uniform set of criteria before reaching a conclusion. Another potential scenario, highlighted by Kai-Fu Lee in his book AI 2041: Ten Visions for Our Future, is the possibility of the United States and China co-leading the world in technology.

Image Source: https://www.pexels.com/photo/blue-bright-lights-373543/

October 5, 2021No Comments

The Disruptive Power of AI applied to Drones

Tate Nurkin talks about the intricacies of AI technologies applied to the military domain, gives us an overview of AI-powered military programs and what they mean for the future of warfare, and touches on ethical issues.

Tate Nurkin is the founder of OTH Intelligence Group and a Non-Resident Senior Fellow at the Atlantic Council.

Interviewer: Arnaud Sobrero

This is an ITSS Verona Member Series Video Podcast by the Cyber, AI and Space Team.

ITSS Verona - The International Team for the Study of Security Verona is a not-for-profit, apolitical, international cultural association dedicated to the study of international security, ranging from terrorism to climate change, from artificial intelligence to pandemics, from great power competition to energy security.