Author: Vendela Laukkanen - AI, Cyber Security & Space Team
Introduction
The United Nations Secretary-General and the President of the International Committee of the Red Cross (ICRC) called on States ‘to take decisive action now to protect humanity’, referring to the threat posed by autonomous weapons systems (AWS).1 The joint call further referred to the restrictions on certain weapons under International Humanitarian Law (IHL) and elucidated the accountability of States and individuals for any violations, since impunity threatens peace and security.2 This post will focus on the latter: the principal criminal responsibility of individuals when the acts of an AWS result in war crimes. The discussion will be twofold: first, the ethical question of who ought to bear responsibility in such a scenario; second, the legal dilemma of an ‘accountability gap’3 before the International Criminal Court (ICC). For the purpose of this discussion, AI weapons are defined as:
‘Any weapon system with autonomy in its critical functions—that is, a weapon system that can select… and attack… targets without human intervention’.4
Ethical Dilemma
To maintain peace and security, responsibility for the most egregious breaches of IHL is crucial; the bearers of such responsibility have thus far been human combatants.5 The purpose of criminal responsibility is firstly to ‘deter future violations’6 and secondly to ensure justice for victims. In the case of AWS, the question is where such responsibility ought to be placed to satisfy both factors (deterrence and justice): on the manufacturer that produced the machine; on the combatant deploying the weapon; or on the AWS itself?
Proponents of AWS argue that a machine will be better equipped than a human to distinguish between military targets and civilian persons and objects.7 It could thus be presumed that if the AWS strikes indiscriminately, this is due to a malfunction of the system, and the manufacturer ought to be responsible. However, as Sparrow claims, if the risk of mistargeting has been acknowledged to the person deploying the weapon, or if the weapon has sufficient autonomy to act outside of its initial programming, holding the manufacturer accountable ‘would be analogous to holding parents responsible for the actions of their children once they have left their care’.8 The second scenario, holding the combatant that deployed the weapon responsible, is not unproblematic either. As Sparrow points out, the distinguishing factor of AWS from other weapons is their ability to choose targets independently of human control; imposing responsibility on the combatant would therefore be unfair.9 However, the human deploying the weapon ought to be aware of its autonomous nature; that the machine may mistarget is therefore a foreseeable risk, leading to the argument that the combatant accepts that risk when deciding to deploy the AWS. The final scenario is to impose responsibility on the weapon itself. It must then be possible to punish the machine and, as Sparrow claims, make it suffer, this being the purpose of punishment.10 Whether a machine is able to suffer and feel remorse in a way consistent with the human idea that ‘justice has been done’, to refrain from repeating the behaviour, and to deter other machines from committing war crimes, is highly questionable.
Legal Dilemma
The ICC’s mission to fight impunity for the most serious crimes risks being undermined if AWS are deployed with little to no chance (or risk) of criminal responsibility; such an ‘accountability gap’ therefore threatens to increase war crimes and destabilise the laws of war.11 If we are to retain morality in war, the most plausible solution is to hold the combatant deploying the AWS responsible for grave IHL breaches, as with any other weapon. Criminal law requires proof of mens rea (‘guilty mind’) and actus reus (‘guilty act’). The Rome Statute of the ICC requires intention and knowledge as the default mens rea,12 and the war crime of ‘intentionally directing attacks against the civilian population…’13 relates to the IHL rule of distinction. A combatant who intentionally and knowingly deploys an AWS incapable of functioning lawfully would be a clear-cut case under the ICC regime, but the combatant who deploys the AWS without knowledge and intention to attack civilian targets (acting with dolus eventualis) falls into the accountability gap.14 However, a probative practice that allows a mental element to be inferred from conduct and circumstances admitting no reasonable alternative explanation15 may provide a solution. Indeed, ICC case law appears to suggest that intent:
‘...may be inferred from various factors establishing that civilians… were the object of the attack…’16; and:
‘...lack of discrimination or precaution in attack may constitute an attack against civilian targets…’17
This is also evident in the case law of other international tribunals.18 As stated by the Court itself, ‘…it must be established that in the circumstances… a reasonable person could not have believed that the individual or group… attacked was… directly participating in hostilities’,19 thus shifting the mens rea analysis from the subjective state of mind of the combatant to the objective standard of what a reasonable person must have known of the civilian status in the circumstances.20 The accountability gap is therefore mitigated by establishing that it was impossible for the combatant not to have known of the civilian status of the targets; the combatant who nonetheless deploys the AWS must therefore have intended the attack, moving ‘from knowing to the impossibility of not knowing’.21
Conclusion
This post has briefly discussed the ethical and legal dilemmas of AI used in warfare. Whilst no simple answers exist, it is clear that if autonomous weapons are to be used in times of armed conflict, humans with the moral capacity to suffer and feel remorse must be the bearers of responsibility for war crimes. The accountability gap may be mitigated by allowing a dolus eventualis mens rea standard at the ICC, which finds support in the case law. After all, the impossibility of holding humans criminally responsible for war crimes committed by AWS calls into question whether the international community is ready to abandon the laws of war and the last 25 years of fighting impunity for the most serious crimes.
1 ICRC ‘Joint Call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to establish new prohibitions and restrictions on Autonomous Weapons Systems’ (icrc.org, 05 October 2023) <https://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems> accessed 12 October 2023.
2 ibid.
3 Davison, N., ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (2017) No. 30 UNODA Occasional Papers 16.
4 ibid 5.
5 See also: Davison, N., ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (2017) No. 30 UNODA Occasional Papers 19.
6 ICRC ‘Joint Call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to establish new prohibitions and restrictions on Autonomous Weapons Systems’ (icrc.org, 05 October 2023) <https://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems> accessed 12 October 2023.
7 Dawes, J., ‘The case for and against autonomous weapon systems’ (2017) 1(9) Nature Human Behaviour 613.
8 Sparrow, R., ‘Killer Robots’ (2007) 24(1) Journal of Applied Philosophy 70.
9 ibid 71.
10 ibid 72.
11 See also: Dawes, J., ‘The case for and against autonomous weapon systems’ (2017) 1(9) Nature Human Behaviour 614.
12 Rome Statute Art. 30.
13 Rome Statute Arts. 8(2)(b)(i), 8(2)(b)(ii) and 8(2)(e)(i).
14 See also: Davison, N., ‘A legal perspective: Autonomous weapon systems under international humanitarian law’ (2017) No. 30 UNODA Occasional Papers 16; Abhimanyu, G., ‘Autonomous cyber capabilities and individual criminal responsibility for war crimes’ (2021) Autonomous Cyber Capabilities Under International Law 8.
15 See also: Abhimanyu, G., ‘Autonomous cyber capabilities and individual criminal responsibility for war crimes’ (2021) Autonomous Cyber Capabilities Under International Law 10.
16 Katanga trial judgment (n 35) para 807.
17 Ntaganda trial judgment (n 35) para 921.
18 Prosecutor v Dragomir Milošević (TC) [2007] International Criminal Tribunal for the Former Yugoslavia IT-98-29/1-T [948]. See also, Prosecutor v Stanislav Galić (AC) [2006] International Criminal Tribunal for the Former Yugoslavia IT-98-29-A [132]; Prosecutor v Tihomir Blaškić (TC) [2000] International Criminal Tribunal for the Former Yugoslavia IT-95-14-T [501–12].
19 Ntaganda trial judgment (n 35) para 921.
20 Abhimanyu, G., ‘Autonomous cyber capabilities and individual criminal responsibility for war crimes’ (2021) Autonomous Cyber Capabilities Under International Law 14.
21 ibid 15.