Diletta Huyskes, Head of Advocacy at Privacy Network, talks about the latest developments in Artificial Intelligence. In particular, this episode deals with the challenges that AI poses to the protection of human rights and how this issue is tackled in the upcoming AI Act.
Artificial Intelligence, or “AI”, is already being widely used for various purposes, whether in analysing marketing trends, in modern warfare or, more recently, in reproducing artwork. Since around 2021, various articles have discussed the issue of AI being developed to reproduce an artist’s style and even create new artwork, raising the ethical question of whether artists are in danger of losing their copyright claim on their own work. This issue is very new, and one cannot say for sure where the development is going or whether one should be concerned in the first place. This article explains the recent debate and the issues being addressed, while drawing upon classical AI theory from warfare and highlighting possible suggestions.
Artificial intelligence not only in the military realm
“In April this year, the company announced DALL-E 2, which can generate photos, illustrations, and paintings that look like they were produced by human artists. This July OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.”
A Wired article, “Algorithms Can Now Mimic Any Artist. Some Artists Hate It,” discusses how an AI called “DALL-E 2” can reproduce an artist’s style and make new photos, digital art and paintings. In theory anyone can use the programme to mimic another artist, or artists can use it to make new art based on their old work. This of course brings many issues to light, such as whether one can copyright an art style (as is also discussed in the article), what exactly one wants to achieve by using AI to recreate more art, and how this will be handled in the future if artwork is indeed stolen. An earlier article in the Los Angeles Times from 2020, “Edison, Morse ... Watson? Artificial intelligence poses test of who’s an inventor,” already addressed this issue by discussing who exactly the “inventor” is when AI can develop, for instance, computer games and other inventions. It is true that a human being must develop the AI programme; however, can that person still be called the inventor if said programme develops its own ideas and perhaps its own artwork? In relation to the general debate, one should consider Article 27 of the Universal Declaration of Human Rights: “Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.”
Some of the recent debate centres not only on the “ethics” of artificial intelligence but also, going one step back, on understanding the term “intelligence”. Joanna J. Bryson writes: “Intelligence is the capacity to do the right thing at the right time. It is the ability to respond to the opportunities and challenges presented by context.”[i] While the authors consider AI in relation to law, they point out that: “Artificial intelligence only occurs by and with design. Thus, AI is only produced intentionally, for a purpose, by one or more member of the human society.”[ii] Bryson further notes that the word “artificial” means that something has been made by humans, which again raises a key question in AI: whether the human or the programme is responsible.[iii] When we consider this in relation to human rights and ethics, it may be true that AI in the world of art is produced by humans with a purpose, but the problematic issue of what that purpose is remains. We need a clear outline of why an AI programme has been made for the art world, and for what purpose, in order to be able to answer further questions.
It has been pointed out that one should consider this development nothing new, since AI was already being used in the 1950s and 1960s to generate certain patterns and shapes. Many see it as a tool that helps artists in these areas to work faster and more precisely; it has also been argued that one should not worry about AI replacing humans, since it lacks the human touch in the first place. Yet it remains to be seen how far AI can learn and adapt, since it is programmed to do exactly that. And if one should not be concerned about AI replacing human artists, why is the debate happening in the first place?
Credits: unsplash.com
The continuing need for clearer definitions
It is not only a matter of AI replicating art, but of how we can define whether a system has crossed the line into copyright infringement: “(…) lawsuits claiming infringement are unlikely to succeed, because while a piece of art may be protected by copyright, an artistic style cannot.” This only shows again that we quickly need clearer definitions of what an “artistic style” and an “artwork” are before deciding whether AI should even be allowed to replicate a style.
One can draw a comparison with AI in warfare and the debates concerning the following themes: the responsibility gap, moral offloading, and taking humans out of the loop (discussed by scholars such as Horowitz, Asaro, Krishnan and Schwarz). Keith argues, for example, that psychological analyses show that we suffer from cognitive bias and that AI (in terms of military defence) will change our decision-making process.[iv] The campaign “Stop Autonomous Weapons” depicts how drones can be used without directly sending humans into battle, and shows the system getting out of hand as people distance themselves from responsibility. Such warfare affects the decision-making process by distancing soldiers and strategists from the battlefield. Bearing in mind, of course, that using AI in the art world does not involve possible casualties, one can still observe a similar distancing from responsibility and a similar moral offloading. It comes back to the recurring question of who is responsible if an AI system decides by itself which choices to make, how to make them, and what the output will be. No humans are involved during the process of making or “replicating” the art pieces; however, an individual was present during the development of the AI. I would call this a problematic ethical circle of debate in the art world.
Even though the idea of using AI to copy an art style, or artworks altogether, is quite new and perhaps even underdeveloped, one should consider certain methods more seriously in order to bring some control and management into the game. Nick Bostrom, for instance, discusses what a superintelligence would entail, arguing that certain incentive methods would be needed for the AI to learn and adapt to human society: “Capability control through social integration and balance of power relies upon diffuse social forces rewarding and penalizing the AI. (…) A better alternative might be to combine the incentive method with the use of motivation selection to give the AI a final goal that makes it easier to control.”[v]
Conclusion
It is not only problematic for the art world that an AI is able to copy any artist’s style; it is also concerning how much further this development could go in taking an artist’s style, creating an entirely new series, and thereby blurring the line between the original artist and the fictional one. Others have already pointed out the need for better definitions, but it must be stressed more strongly: we need clearer definitions of who is an “artist”, an “inventor” or a “digital artist” when AI enters the discussion and is apparently here to stay. One needs to make a clear distinction between a human artist and a ‘programme artist’ (AI). Can artists call themselves artists when they use AI to produce their art? All these questions should be discussed further in the near future, since AI has entered the art realm and will continue to stay, possibly playing an even larger role in the future, perhaps with the development of the metaverse.
[i] Markus Dirk Dubber, Frank Pasquale, and Sunit Das (eds.) (2020) The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press, p. 4.
The advancement of artificial intelligence (AI) technology is expected to drive progress and change in the military, economic, and information spheres. This so-called “fourth industrial revolution” opens up various possibilities, the most probable of which is the further development and prosperity of those who are able to reap the benefits, thereby strengthening existing inequalities in the global state system.
The main concern an average person has regarding AI is the idea of a post-apocalyptic world in which robots and AI have completely taken over the Earth, as depicted in many famous science-fiction works. To approach this topic it is necessary to keep two things in mind. First, strong AI (also called Artificial General Intelligence, or AGI), a system that would simulate human reasoning and match human intelligence, does not currently exist, and experts cannot agree on when this type of AI might emerge. Second, artificial intelligence systems rely heavily on data; therefore, the quantity, quality and availability of data are crucial. In the long term, an ethical and responsible approach to data collection for AI development and implementation aims to guarantee balanced and responsible innovation.
For example, the United States and the European Union countries have expressed a commitment to developing trustworthy and ethical AI. On the other hand, countries like China and Russia have not shown such a commitment in the development and employment of their autonomous weapons systems. Cyber policy and security expert Herbert Lin expresses the concern that, because of this lower regard for ethical and safety issues, their weapons are likely to be more militarily effective and to be developed sooner.
Different forms of government have different approaches towards AI development and implementation. China is characterized as an authoritarian and hierarchical state, the United States is a federal republic with a democratically run government, while the European Union is a political and economic union that operates through a combination of supranational and intergovernmental decision-making.
PEOPLE’S REPUBLIC OF CHINA
China defines artificial intelligence research and development as key to boosting national economic and manufacturing competitiveness as well as to providing national security. China’s vigorous approach to AI development is driven by the potential future economic benefit. Experts assume that China will see the highest relative economic gain from AI technologies, since AI is envisioned to improve its productivity and manufacturing capacity and therefore help it meet future GDP targets. As a result, China faces the risk of developing and applying AI without giving enough attention to its responsible use or to preparing its citizens to adapt to the changes that widespread AI adoption may bring. China has already once fallen into the trap of recklessly rushing into uncontrolled progress, which led to an unsustainable level of growth accompanied by a set of negative effects on its economy. China’s clear competitive advantage lies in its abundance of data, which will most likely become one of the crucial elements in the future development of AI technology, its relatively loose privacy laws, its vibrant start-ups, and a steady rise in the number of AI engineers.
THE EUROPEAN UNION
The state structure shapes the design of AI policy and its implementation. When discussing the EU it is important to keep in mind that the EU is not a country but an economic and political organization that is both supranational and intergovernmental. Considering that the economic prosperity and national security of the European Union are still firmly in the hands of national governments, it is easy to understand why the organizational structure of the Union hinders the concrete, quick decision-making that is always favourable under conditions of international competition. The EU has nevertheless succeeded in publishing joint plans and policies regarding AI, such as the Civil Law Rules on Robotics, the Declaration of Cooperation on Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Policy and Investment Recommendations for Trustworthy AI.
The European Union pays special attention to studying the potential impact of artificial intelligence technology on society. This research usually involves social aspects such as data protection (e.g. the GDPR), network security and AI ethics. “There are more substantial ethical or normative discussions when it comes to developing human-centered and trustworthy AI technologies. [...] Developing the culture of trustworthy AI, and not only when it comes to security and defense, but more broadly about AI-enabled technologies. This is at the forefront of the policy and political thinking in Brussels,” claims Raluca Csernatoni, an expert on European security and defense with a specific focus on disruptive technologies.
In 2018 member states signed the Declaration of Cooperation on Artificial Intelligence, in which the participating states agreed to cooperate in various fields of AI development and implementation, including ensuring an adequate legal and ethical framework building on EU fundamental rights and values.
THE UNITED STATES
During the Obama administration, the National Science and Technology Council (NSTC) Committee on Technology drafted the 2016 report Preparing for the Future of Artificial Intelligence. Concerns about safeguarding “justice, fairness, and accountability” if AI were to be tasked with consequential decisions about people had previously been raised in the administration’s Big Data: Seizing Opportunities, Preserving Values report and its Big Data and Privacy: A Technological Perspective report. Regarding governance and safety, the report advises that the use of AI technology must be controlled by “technical and ethical supervision”.
Later, during the Trump administration, the 2019 AI R&D Strategic Plan set out seven main fields of interest, one of which is understanding the ethical, legal, and societal implications of AI. Judging by the recent EU-US Trade and Technology Council (TTC), the current administration clearly continues to support efforts to develop responsible and trustworthy AI.
THE U.S. – EU COOPERATION
The most recent U.S.-EU cooperation on AI advancement, the TTC, was launched on September 29, 2021 in Pittsburgh. TTC working groups are cooperating on issues such as technology standards, data governance and technology platforms, and the misuse of technology threatening security and human rights, among many others. The United States and the European Union affirmed their commitment to a human-centered approach and to developing a mutual understanding of the principles of trustworthy and responsible AI. However, both have expressed significant concerns that authoritarian governments are piloting social scoring systems with the aim of implementing social control at scale. They agree that these systems “pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems”.
CONCLUSION
Different forms of government differ immensely in their approach to the development and implementation of AI, as well as to the necessary principles of ethics and responsibility. However, governments need to take further action with great caution. When implemented carelessly, without taking ethics and safety into consideration, AI could end up being ineffective or, even worse, dangerous. Governments need to implement AI in a way that builds trust and legitimacy, which ideally requires legal and ethical frameworks to be in place for handling and protecting citizens’ data and for governing the use of algorithms.
“Where we choose to invest speaks to what we value as a Nation. This year’s Budget, the first of my Presidency, is a statement of values that define our Nation at its best.” - Joseph R. Biden, Jr. (The Budget Message of the President)
This article navigates the landscape of AI policymaking and tracks efforts of the United States to promote and govern AI technologies.
Technological advancement has become a new approach to increase a state’s political, military, and economic strength. The Cold War and the arms race between the two then strongest nations in the world, the United States of America (USA) and the Soviet Union (USSR), revealed the potential that lay in the development of technology. Today, the United States is again at the forefront in the race for supremacy in the potentially world-changing technology: artificial intelligence (AI).
Artificial intelligence has the potential to fundamentally change the strategy, organization, priorities, and resources of any national community that manages to develop AI technology, drive further innovation, and eventually apply it. Artificial intelligence is going through major evolution and development, and its potential is increasing rapidly. Progress is visibly accelerating, and our social, political, and economic systems will be greatly affected. One of the important questions is how to define and seize all the opportunities AI technology can offer while avoiding or managing its risks.
The United States published its national AI strategy, the American AI Initiative, in 2019. The responsible organization is the White House, and its priorities are to increase federal government investment in AI research and development (R&D) and to ensure technical standards for the safe development and deployment of AI technology. The American AI Initiative expresses a commitment to collaborate with foreign partners while promoting U.S. leadership in AI. Nevertheless, it is important to note that the American AI Initiative is not particularly comprehensive, especially when compared with the strategies of other leading nations, and is characterized by a lack of both funding and tangible policy objectives.
President Biden's budget for FY2022 includes approximately $171.3 billion for research and development (R&D), which is an 8.5% ($13.5 billion) increase compared to the FY2021 estimated level of $157.8 billion.
According to the 2021 AI Index Report, in FY 2020 the USA federal departments and agencies spent a combined $1.8 billion on unclassified AI-related contracts. This represents an increase of more than 25% from the amount spent in FY 2019.
One of the agencies with a major R&D program is the National Institute of Standards and Technology (NIST). President Biden is requesting $1,497.2 million for NIST in FY2022, an increase of $462.7 million (44.7%) over the FY2021 level of $1,034.5 million. The second-highest program budget increase within NIST is for Partnerships, Research, and Standards to Advance Trustworthy Artificial Intelligence, at $45.4 million (an increase of $15 million compared to FY2021).
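As a quick sanity check on the figures above, the short Python sketch below (not part of any budget document; the numbers are simply copied from this article) recomputes the year-on-year increases for the total federal R&D request and for NIST.

```python
# Minimal sketch: recompute the budget increases quoted in the text.
# All figures are taken from the article itself (FY2021 vs FY2022 requests).

def increase(new: float, old: float) -> tuple[float, float]:
    """Return (absolute increase, percentage increase) from old to new."""
    return new - old, (new - old) / old * 100

# Total federal R&D request, in billions of dollars
rd_abs, rd_pct = increase(171.3, 157.8)
print(f"R&D: +${rd_abs:.1f}B ({rd_pct:.1f}%)")    # +$13.5B (~8.6%, close to the 8.5% quoted)

# NIST request, in millions of dollars
nist_abs, nist_pct = increase(1497.2, 1034.5)
print(f"NIST: +${nist_abs:.1f}M ({nist_pct:.1f}%)")  # +$462.7M (44.7%, matching the quoted figure)
```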
Some departments are expecting large percentage increases in R&D funding, among them the Department of Commerce (DOC), with an increase of up to 29.3%. At the same time, it is interesting to note that one of the DOC’s latest projects is the creation of the National Artificial Intelligence Advisory Committee (NAIAC), which will be discussed below.
Numerous policymakers in Congress are particularly interested in funding for the Department of Defense Science and Technology (DOD S&T) program. An increasingly popular belief in the defense community holds that support for S&T activities is necessary to maintain the United States’ military superiority in the world.
The budget request represents President Biden’s R&D priorities, and Congress may agree with it partially, completely, or not at all. It is safe to say that AI has gained the attention of Congress: the 116th Congress (January 3, 2019 - January 3, 2021) was the most AI-focused congressional session in history, with AI mentioned more than three times as often as in the 115th Congress (149 mentions in the 115th versus 486 in the 116th).
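The ratio behind that “more than three times” comparison can be verified in one line, again using only the two mention counts quoted above.

```python
# Compare AI mentions in the 115th vs the 116th Congress (counts from the text).
mentions_115th, mentions_116th = 149, 486
print(f"{mentions_116th / mentions_115th:.2f}x")  # -> 3.26x, i.e. more than three times as many
```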
National and International Efforts
As indicated in its national AI strategy, the United States takes part in various intergovernmental AI initiatives, such as the Global Partnership on AI (GPAI), the OECD Network of Experts on AI (ONE AI), and the Ad Hoc Expert Group (AHEG) for the Recommendation on the Ethics of Artificial Intelligence, and has participated in global summits and meetings such as the AI Partnership for Defense and the AI for Good Global Summit. In addition, the United States announced a bilateral declaration on AI cooperation with the United Kingdom in December 2020.
On September 8, 2021, U.S. Secretary of Commerce Gina Raimondo announced the establishment of the National Artificial Intelligence Advisory Committee (NAIAC). The main purpose of the NAIAC will be to advise the President and the National AI Initiative Office (NAIIO) on issues related to AI. “AI presents an enormous opportunity to tackle the biggest issues of our time, strengthen our technological competitiveness, and be an engine for growth in nearly every sector of the economy. But we must be thoughtful, creative, and wise in how we address the challenges that accompany these new technologies,” Raimondo said.
The United States or China?
The United States is showing an increasing interest in developing and implementing artificial intelligence through an increased federal AI-related budget, the establishment of new committees, intergovernmental AI initiatives, bilateral agreements, and participation in global summits, but a constant comparison is being drawn between the USA and China. If the future battle over artificial intelligence is indeed between the USA and China, the question arises: who will win this battle for AI supremacy?
Recently, a former Pentagon expert said that the race is already over and that China has won. The Pentagon’s first chief software officer resigned over the slow pace of technological advancement in the U.S. military. He claims the USA has no fighting chance against China in the coming years and that it is already a done deal.
At the same time, artificial intelligence expert Kai-Fu Lee, former president of Google China, disagrees with this claim. He notes that the US has a clear academic lead in artificial intelligence, supporting this by pointing out that all 16 Turing Award recipients in AI are American or Canadian and that the top 1% of published papers are still predominantly American. China, he argues, is simply faster at commercializing technologies and has more data.
Artificial intelligence already has numerous uses (academic, military, medical, etc.), and when assessing a country’s AI reach it is important to distinguish between these different uses of the technology.
To answer the question of whether the United States or China will win the AI 'race', or whether a new force will emerge, it is necessary to closely monitor the development of artificial intelligence technology and to compare countries using a uniform set of criteria before reaching a conclusion. Another potential scenario, highlighted by Kai-Fu Lee in his book AI 2041: Ten Visions for Our Future, is the possibility of the United States and China co-leading the world in technology.
Tate Nurkin talks about the intricacies of AI technologies applied to the military domain, gives us an overview of AI-powered military programs and what they mean for the future of warfare, and touches on ethical issues.
Tate Nurkin is the founder of OTH Intelligence Group and a Non-Resident Senior Fellow at the Atlantic Council.
Interviewer: Arnaud Sobrero
This is ITSS Verona Member Series Video Podcast by the Cyber, AI and Space Team.
ITSS Verona - The International Team for the Study of Security Verona is a not-for-profit, apolitical, international cultural association dedicated to the study of international security, ranging from terrorism to climate change, from artificial intelligence to pandemics, from great power competition to energy security.