Diletta Huyskes, Head of Advocacy at Privacy Network, talks about the latest developments in Artificial Intelligence. In particular, this episode deals with the challenges that AI poses to the protection of human rights and how this issue is tackled in the upcoming AI Act.
Artificial Intelligence, or “AI”, is already widely used for various purposes, whether in analysing marketing trends, in modern warfare or, more recently, in reproducing artwork. Since around 2021, various articles have discussed AI being developed to reproduce an artist’s style and even create new artwork, raising the ethical question of whether artists are in danger of losing their copyright claim on their own work. The issue is very new, and one cannot say for sure where this development is going or whether one should be concerned in the first place. This article explains the recent debate and the issues being addressed, drawing upon classical AI theory from warfare and highlighting possible suggestions.
Artificial intelligence not only in the military realm
“In April this year, the company announced DALL-E 2, which can generate photos, illustrations, and paintings that look like they were produced by human artists. This July OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.”
An article by Wired, “Algorithms Can Now Mimic Any Artist. Some Artists Hate It,” discusses how an AI called DALL-E 2 can reproduce an artist’s style and generate new photos, digital art and paintings. In theory, anyone can use the programme to mimic another artist, or artists can use it to make new art based on their old work. This brings many issues to light: whether an art style can be copyrighted (as the article also discusses), what exactly one wants to achieve by using AI to recreate art, and how this will be handled in the future if artwork is indeed stolen. An earlier article in the Los Angeles Times from 2020, “Edison, Morse ... Watson? Artificial intelligence poses test of who’s an inventor,” already addressed this question by discussing who exactly the “inventor” is when AI can develop, for instance, computer games and other inventions. It is true that a human being must develop the AI programme; however, can that person still be called the inventor if the programme develops its own ideas and perhaps its own artwork? In relation to the general debate, one should consider Article 27 of the Universal Declaration of Human Rights: “Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.”
Some recent debate centres not only on the “ethics” of artificial intelligence but, going one step back, on understanding the term “intelligence” itself. Joanna J. Bryson writes: “Intelligence is the capacity to do the right thing at the right time. It is the ability to respond to the opportunities and challenges presented by context.”[i] While the authors consider AI in relation to law, they point out that: “Artificial intelligence only occurs by and with design. Thus, AI is only produced intentionally, for a purpose, by one or more member of the human society.”[ii] Bryson further notes that the word “artificial” means that something has been made by humans, which again raises a key question in AI: whether the human or the programme is responsible.[iii] When we consider this in relation to human rights and ethics, it may be true that AI in the world of art is produced by humans with a purpose, but the problematic issue remains of what that purpose is. We need a clear outline of why an AI programme has been introduced into the art world, and for what purpose, in order to answer further questions.
It has been pointed out that one should consider this development as nothing new, since AI was already used in the 1950s and 1960s to generate certain patterns and shapes. Many see it as a tool that helps artists work faster and more precisely; others argue that one should not worry about AI replacing humans at all, since it lacks the human touch in the first place. Yet it remains to be seen how far AI can learn and adapt, since it is programmed to do exactly that. And if one should not be concerned about AI replacing human artists, then why is the debate happening in the first place?
The continuing need for clearer definitions
It is not only a matter of AI replicating art, but of how we can define whether a system has crossed the line into copyright infringement: “(…) lawsuits claiming infringement are unlikely to succeed, because while a piece of art may be protected by copyright, an artistic style cannot.” This shows once more that we urgently need clearer definitions of what an “artistic style” and an “artwork” are before we can judge whether AI should even be allowed to replicate a style.
One can draw a comparison to AI in warfare, where debates concern themes such as the responsibility gap, moral offloading and taking humans out of the loop (discussed by scholars such as Horowitz, Asaro, Krishnan and Schwarz). Keith argues, for example, that psychological analyses show we suffer from cognitive bias and that AI (in the context of military defence) will change our decision-making processes.[iv] The campaign “Stop Autonomous Weapons” depicts how drones can be used without directly sending humans into battle, showing a system getting out of hand and people distancing themselves from responsibility. Such warfare affects the decision-making process, distancing soldiers and strategists from the battlefield. Bearing in mind, of course, that using AI in the art world does not involve possible casualties, one can still observe a similar distancing from responsibility and moral offloading. It comes back to the recurring question of who is responsible when an AI system decides by itself which choices to make, how to make them, and what output to produce. No humans are involved while the art pieces are being made or “replicated”; however, an individual was present during the development of the AI – what I would call a problematic ethical circle of debate in the art world.
Even though the idea of using AI to copy an art style, or artworks altogether, is quite new and perhaps even undeveloped, one should give serious consideration to methods for bringing control and oversight into the game. Nick Bostrom, for instance, discusses what a superintelligent AI would entail, arguing that certain incentive methods would be needed for the AI to learn and adapt to human society: “Capability control through social integration and balance of power relies upon diffuse social forces rewarding and penalizing the AI. (…) A better alternative might be to combine the incentive method with the use of motivation selection to give the AI a final goal that makes it easier to control.”[v]
It is not only problematic for the art world that an AI is able to copy any artist’s style; it is concerning how much further this development could go, taking an artist’s style to create an entirely new series and thereby blurring the line between the original and the fictional artist. As others have already pointed out, better definitions are needed, but it must be stressed more strongly: we need clearer definitions of who is an “artist”, an “inventor” or a “digital artist” when AI enters the discussion and is apparently here to stay. One needs to make a clear distinction between a human artist and a “programme artist” (AI). Can artists call themselves artists when they use AI to produce art? All these questions should be discussed further in the near future, since AI has entered the art realm and will likely continue to play an ever larger role, perhaps even with the development of the metaverse.
[i] Markus Dirk Dubber, Frank Pasquale and Sunit Das (eds.) (2020) The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press, p. 4.
The advancement of artificial intelligence (AI) technology is expected to drive progress and change in the military, the economy, and information. This so-called “fourth industrial revolution” opens up various possibilities, the most probable of which is further development and prosperity for those able to reap the benefits, further entrenching existing inequalities in the global state system.
The main concern an average person has regarding AI is the idea of a post-apocalyptic world in which robots and AI have completely taken over the Earth, as depicted in many famous works of science fiction. To approach this topic, two things must be kept in mind. First, strong AI (also called Artificial General Intelligence, or AGI) – systems that would simulate human reasoning and achieve machine intelligence equal to a human’s – does not currently exist, and experts cannot agree on when this type of AI might appear. Second, artificial intelligence systems rely heavily on data, so the quantity, quality and availability of data are crucial. In the long term, an ethical and responsible approach to data collection for AI development and implementation aims to guarantee balanced and responsible innovation.
For example, the United States and the European Union countries have expressed a commitment to developing trustworthy and ethical AI. Countries like China and Russia, on the other hand, have shown no such dedication in the development and employment of their autonomous weapons systems. Cyber policy and security expert Herbert Lin expresses the concern that, due to this lower regard for ethical and safety issues, their weapons are likely to be more militarily effective and developed sooner.
Different forms of government take different approaches to AI development and implementation. China is characterized as an authoritarian and hierarchical state; the United States is a federal republic with a democratically run government; and the European Union is described as a political and economic union that operates through a combination of supranational and intergovernmental decision-making.
PEOPLE’S REPUBLIC OF CHINA
China defines artificial intelligence research and development as key to boosting national economic and manufacturing competitiveness as well as to national security. China’s vigorous approach to AI development is driven by its potential future economic benefits. Experts assume that China will reap the highest relative economic gain from AI technologies, since AI is envisioned to improve its productivity and manufacturing capacity and thereby help it meet future GDP targets. China therefore risks developing and applying AI without paying enough attention to its responsible use or to preparing its citizens to adapt to the changes that widespread AI adoption may bring. China has already once fallen into the trap of recklessly rushing into uncontrolled progress, which led to an unsustainable level of growth accompanied by a set of negative effects on its economy. China’s clear competitive advantage lies in its abundance of data – likely to become one of the crucial elements in the future development of AI technology – as well as relatively loose privacy laws, vibrant start-ups, and a steady rise in the number of AI engineers.
THE EUROPEAN UNION
The structure of a state shapes the design of AI policy and its implementation. When discussing the EU, it is important to keep in mind that the EU is not a country but an economic and political organization, both supranational and intergovernmental. Given that the economic prosperity and national security of the European Union remain firmly in the hands of national governments, it is easy to understand why the Union’s organizational structure hinders the quick, concrete decision-making that international competition favours. The EU has nevertheless succeeded in publishing joint plans and policies regarding AI, such as the Civil Law Rules on Robotics, the Declaration of Cooperation on Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Policy and Investment Recommendations for Trustworthy AI.
The European Union pays special attention to studying the potential impact of artificial intelligence technology on society. This research usually involves social aspects such as data protection (e.g. the GDPR), network security and AI ethics. “There are more substantial ethical or normative discussions when it comes to developing human-centered and trustworthy AI technologies. [...] Developing the culture of trustworthy AI, and not only when it comes to security and defense, but more broadly about AI-enabled technologies – this is at the forefront of the policy and political thinking in Brussels,” says Raluca Csernatoni, an expert on European security and defense with a specific focus on disruptive technologies.
In 2018, member states signed the Declaration of Cooperation on Artificial Intelligence, in which the participating states agreed to cooperate in various fields of AI development and implementation, including ensuring an adequate legal and ethical framework built on EU fundamental rights and values.
Later, during the Trump Administration, the 2019 AI R&D Strategic Plan set out seven main fields of interest, one of which is understanding the ethical, legal, and societal implications of AI. Judging by the recent EU-US Trade and Technology Council (TTC), the current administration clearly continues to support efforts towards responsible and trustworthy AI.
THE U.S. – EU COOPERATION
The most recent U.S.-EU cooperation on AI, the TTC, was launched on September 29, 2021 in Pittsburgh. TTC working groups are discussing, among many other issues, technology standards, data governance and technology platforms, and the misuse of technology threatening security and human rights. The United States and the European Union affirmed their commitment to a human-centered approach and to developing a mutual understanding of the principles of trustworthy and responsible AI. However, both have expressed significant concerns that authoritarian governments are piloting social scoring systems aimed at implementing social control at scale. They agree that these systems “pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems”.
Different forms of government differ immensely in their approach to the development and implementation of AI, as well as to the necessary principles of ethics and responsibility. In any case, governments need to take further action with great caution. When implemented carelessly, without taking ethics and safety into consideration, AI could end up being ineffective or, even worse, dangerous. Governments need to implement AI in a way that builds trust and legitimacy, which ideally requires legal and ethical frameworks for handling and protecting citizens’ data and for algorithm use.
Operation Falcon Strike 21 was launched from the Italian Air Force base in Amendola (FG), Italy on 6 June 2021. It was promoted by the Stato Maggiore della Difesa (Defence Staff) in partnership with NATO allies, mainly the United States and the United Kingdom, and with Israel. The involvement of the Israeli Air Force (IAF) in the twelve days of air and naval training, and its consequent collaboration with the Italian, American and British military forces, attracted Italian public attention and raised questions about the role of ethics in the decisions made by the NATO powers.
While few details are available on how Operation Falcon Strike 21 originated, it facilitated the integration of fourth- and fifth-generation fighter aircraft and increased cooperation between the powers in logistics and in the transfer of F-35 fighter jets, thus strengthening the interoperability of allied air forces and partners during joint operations. Exercises to master the use of the most advanced missile defence systems took place between Sardinia and the regions of Southern Italy (Il Manifesto, 2021).
As the operation began, an old debate on the role of ethics in military training re-emerged, fuelled by public fear of how the knowledge developed could be exploited by Israel in its own military operations; this Middle Eastern power was, after all, participating in exercises strategically designed to test the firepower of the newly provided F-35 fighter-bombers. The debate in fact dates back to 2016, when the IAF received its first F-35 fighter jets. Initial training in collaboration with Western military forces, in particular the Royal Air Force and the US Marines, started in 2019 with the Tri-Lightning exercise and continued with the Enduring Lightning exercises organised by Israel and the United States (Aviation Report, 2021).
Since the IAF received its first F-35 fighter jets, it has strived to add more to its fleet, in groups of two and three over the past years, reaching a current total of 27 aircraft. A further 23 F-35 jets are due to be delivered by 2024, meeting the Israel Defence Forces’ (IDF) goal of acquiring 50 aircraft in total. Israeli officials have also confirmed that they plan to purchase more of these aircraft (The Times of Israel, 2021).
Undoubtedly, the possibility of Israel deploying the acquired assets to fight its own wars in the Middle East generates an evident threat, which has indeed raised concerns among the public. In particular, protests have been organised in areas close to the Amendola air force base, from where the operation was launched, as pro-Palestine organisations mobilized to show their disapproval of the partnership with Israel (Rete Italiana Pace e Disarmo, 2021). Disagreement with practices strengthening Western powers’ relationship with Israel deepened after a senior IAF official stated that the extensive training conducted in Italy would be a historic chance to prepare its pilots for future wars in the Middle East, particularly against Iran (Il Manifesto, 2021).
Hence, the rise of concern among pro-Palestine groups about the consequences of including Israel in the training was inevitable. Yet this only seems to have caught the attention of worried civilians: even after the IAF statement, the operation was carried out as planned.
Operation Falcon Strike 21 was arguably an implicit declaration that it is ultimately ethical to include powers such as Israel in advanced military training conducted by NATO powers, regardless of the knowledge that the former might use the abilities and means gained to fight its own wars. The United States, the United Kingdom and Italy have provided Israel with an outstanding opportunity to improve its military capabilities by supplying it with arms and helping it develop the knowledge to use them. This directly contradicts the values of human rights and peacekeeping that these Western powers claim to uphold. It appears, though, that for the Western powers the role of ethics in strategic military decisions is overshadowed wherever there is a need to build partnerships with key powers such as Israel.
Tate Nurkin talks about the intricacies of AI technologies applied to the military domain, gives us an overview of AI-powered military programs and what they mean for the future of warfare, and touches on ethical issues.
Tate Nurkin is the founder of OTH Intelligence Group and a Non-Resident Senior Fellow at the Atlantic Council.
Interviewer: Arnaud Sobrero
This is ITSS Verona Member Series Video Podcast by the Cyber, AI and Space Team.
ITSS Verona - The International Team for the Study of Security Verona is a not-for-profit, apolitical, international cultural association dedicated to the study of international security, ranging from terrorism to climate change, from artificial intelligence to pandemics, from great power competition to energy security.