March 11, 2024

Beijing and Washington try to talk to each other about AI

Author: Francesco Cirillo - U.S. Team

The sudden emergence of artificial intelligence applications spanning different fields of work has moved to the centre of governments' international agendas. The possible risks of AI have focused the debate on the need to find common ground, especially between China and the United States, the two major global economic powers and the key players in AI research and development, in both the public sector and private Big Tech.

In recent months, following the San Francisco summit in November 2023, where an important bilateral meeting between Joe Biden and Xi Jinping took place, China and the US have begun a dialogue on building global governance for AI. For several experts, including Sam Altman, CEO of OpenAI, Sino-American cooperation is crucial to avoiding a political-military race in the AI sector. From Beijing's perspective, the issue remains a priority given the delicate state of diplomatic relations with Washington. Between December 2023 and January 2024, two Chinese academics wrote papers attempting to define the new China-US relationship from Beijing's perspective.

Da Wei, a professor at Tsinghua University, highlights four key concepts needed to analyse the new 'normal' in Sino-US relations after the San Francisco summit. His paper argues that relations will remain predominantly negative in the long term, but also that neither power wants a direct confrontation on the economic level. Among Chinese academics there seems to be a recognition that China and the US must find a way to coexist, especially in order to consolidate dialogue on artificial intelligence.

Here too, Chinese academics and experts reiterate the need for high-level dialogue to build global governance on AI.

Even the world of independent academic research is moving to protect the impartiality of scientific research on AI.

On 5 March, a team of MIT experts published a letter, "A Safe Harbor for AI Evaluation and Red Teaming," calling on tech companies involved in generative AI research to implement independent evaluation systems for AI-related risks. The letter, signed by several experts, points to the confidentiality and corporate security practices within the R&D teams of many Big Tech firms, most notably OpenAI, which prevent an unbiased assessment of the risks associated with the sudden development of AI that has accelerated since December 2022.

Source: Tara Winstead - Pexels

Artificial Intelligence issues have entered the international debate because of their potential in various sectors, but also because of the competition/cooperation that both China and the United States will have to face in the coming years on issues related to the integration of AI systems in the economic, political and military spheres. 

International competition and the race to dominate AI could change the current shape of global governance, but it would not change the underlying game of great-power competition: technological innovation has always played a key role in the hegemony of whichever superpower gains a strategic advantage over the other great global powers.

October 4, 2023

The tension between China and the US also has an impact on the technological world

Author: Francesco Cirillo - U.S. Team

Washington and Beijing have planned strategies to increase semiconductor production with the advent of Artificial Intelligence. For this reason, Washington is concerned about China's chip production capabilities; the concern grew after Huawei unveiled its new Mate 60 Pro smartphone. The chip at the heart of this product is notable because it is produced entirely by semiconductor companies in the People's Republic. US analysts believe this demonstrates Beijing's technological capabilities and China's ability to become independent in technology production.


For many companies, particularly Nvidia, it is crucial to maintain a steady relationship with the People's Republic; the US government takes a different view. To slow the growth of the industry, and especially the development of artificial intelligence, the White House, Congress, and the defence and intelligence apparatuses have pursued a containment strategy targeting the semiconductor supply chain. The New York Times reports that China has used artificial intelligence tools in particular to pursue disinformation campaigns. According to Microsoft researchers, this indicates that Beijing is eager to use generative AI to produce images and disseminate them online as part of disinformation operations.


For Beijing, the technology race is one way China could obtain the resources to compete with the US in that area. In a recent report translated by CSIS, "At the Seventh Collective Study Session of the CCP Central Committee Politburo, Xi Jinping Emphasized Comprehensively Strengthening Military Governance and Using High-Standard Governance to Promote High-Quality Military Development" [习近平在中共中央政治局第七次集体学习时强调 全面加强军事治理 以高水平治理推动我军高质量发展] (originally published 2023), a working group of the Political Bureau of the Central Committee of the Communist Party of China outlines guidelines for integrating HiTech tools into the Chinese armed forces.

Source: https://www.pexels.com/it-it/foto/luce-blu-e-rossa-dal-computer-1933900/


Beijing recently announced a new investment fund of approximately USD 40 billion intended to support the industrial growth of technology companies. The contention between the People's Republic of China and the US has also reached the artificial intelligence sector. The US is imposing restrictions on American companies that operate in and sell technology products to China, in order to hinder Chinese access to the industrial chain.


The Semiconductor Industry Association has stated that China purchased chips and semiconductors worth around $180 billion in 2022, and only a few companies, including Intel, Nvidia and Qualcomm, have a significant relationship with Beijing. These companies are the only ones authorised by the US authorities to sell chips for Huawei's smartphones. In an economic and technological competition through which Washington hopes to limit China's growth and development in the HiTech sector, further trade conflicts could also hurt US companies themselves.

November 21, 2022

Artificial Intelligence in the World of Art: A Human Rights Dilemma

Author: Maria Makurat

Artificial Intelligence, or "AI", is already widely used for various purposes, whether in analysing marketing trends, in modern warfare or, more recently, in reproducing artwork. Since around 2021, various articles have discussed AI being developed to reproduce an artist's style and even create new artwork, raising the ethical issue of whether artists are in danger of losing copyright claims on their own work. The issue is very new, and one cannot say for sure where this development is going or whether one should be concerned in the first place. This article explains the recent debate, draws on classical AI theory from warfare, and highlights possible suggestions.

Artificial intelligence not only in the military realm

“In April this year, the company announced DALL-E 2, which can generate photos, illustrations, and paintings that look like they were produced by human artists. This July OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.”

An article by Wired, "Algorithms Can Now Mimic Any Artist. Some Artists Hate It," discusses how an AI called DALL-E 2 can reproduce an artist's style and make new photos, digital art and paintings. In theory anyone can use the programme to mimic another artist, or artists can use it to make new art based on their old work. This brings many issues to light: whether one can put a copyright on an art style (as the article also discusses), what exactly one wants to achieve by using AI to recreate more art, and how this will be handled in the future if artwork is indeed stolen. An earlier article by the Los Angeles Times from 2020, "Edison, Morse ... Watson? Artificial intelligence poses test of who's an inventor," already addressed this issue by discussing who exactly the "inventor" is when AI can develop, for instance, computer games and other inventions. It is true that a human being must develop the AI programme; however, can that person also be called the inventor if the programme develops its own ideas and perhaps its own artwork? In relation to the general debate, one should consider Article 27 of "The Universal Declaration of Human Rights": "Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author."

Some recent debate centres not only on the "ethics" of artificial intelligence but goes one step back to the term "intelligence" itself. Joanna J. Bryson writes: "Intelligence is the capacity to do the right thing at the right time. It is the ability to respond to the opportunities and challenges presented by context."[i] While the authors consider AI in relation to law, they point out that: "Artificial intelligence only occurs by and with design. Thus, AI is only produced intentionally, for a purpose, by one or more member of the human society."[ii] Bryson further notes that the word "artificial" means that something has been made by humans, which again raises a key question in AI: whether the human or the programme is responsible.[iii] When we consider this in relation to human rights and ethics, it may be true that AI in the world of art is produced by humans with a purpose, but the problematic issue of what that purpose is remains. We need a clear outline of why an AI programme has been made for the art world, and for what purpose, in order to answer further questions.

It has been pointed out that one could consider this development nothing new, since AI was already used in the 1950s and 1960s to generate certain patterns and shapes. Many see it as a tool that helps artists work faster and more precisely, and it has been argued that one should not worry about AI replacing humans, since it lacks the human touch in the first place. Yet it remains to be seen how far AI can learn and adapt, since it is programmed to do exactly that. If one should not be concerned about AI replacing human artists, then why is the debate happening in the first place?

Credits: unsplash.com

The continuing need for clearer definitions

It is not only a matter of AI replicating art, but of how we can define whether a system has crossed the line into copyright infringement: "(…) lawsuits claiming infringement are unlikely to succeed, because while a piece of art may be protected by copyright, an artistic style cannot." This shows again that one quickly needs to define more clearly what an "artistic style" and an "artwork" are, in relation to whether AI would even be allowed to replicate the style.

One can draw a comparison to AI in warfare, where debates concern themes such as the responsibility gap, moral offloading and taking humans out of the loop (discussed by scholars such as Horowitz, Asaro, Krishnan and Schwarz). Keith Dear argues, for example, that psychological analyses show we suffer from cognitive bias and that AI (in terms of military defence) will change our decision-making process.[iv] The campaign "Stop Autonomous Weapons" depicts how drones can be used without directly sending humans into battle, showing the system getting out of hand and people distancing themselves from responsibility. Such warfare affects the decision-making process, distancing soldiers and strategists from the battlefield. Bearing in mind that using AI in the art world does not involve possible casualties, one can still observe a similar distancing from responsibility and moral offloading. It comes back to the recurring question of who is responsible if an AI system decides by itself which choices to make, how to make them, and what the output is. No humans are involved while the art pieces are made or "replicated"; however, an individual was present during the development of the AI. I would call this a problematic ethical circle of debate in the art world.

Even though the idea of using AI to copy an art style or entire artworks is quite new and perhaps even undeveloped, one should consider certain methods more seriously in order to bring some control and a managing system into the game. Nick Bostrom, for instance, discusses what a superintelligence in relation to AI would entail, saying that certain incentive methods would be needed for the AI to learn and adapt to human society: "Capability control through social integration and balance of power relies upon diffuse social forces rewarding and penalizing the AI. (…) A better alternative might be to combine the incentive method with the use of motivation selection to give the AI a final goal that makes it easier to control."[v]

Conclusion

It is not only problematic for the art world that an AI is able to copy any artist's style; it is also concerning how much further this development could go in taking an artist's style, creating an entirely new series, and thereby blurring the line between the old artist and the fictional one. As others have already pointed out, better definitions are needed, but this must be stressed more strongly: we need clearer definitions of who is an "artist", "inventor" or "digital artist" once AI enters the discussion and is apparently here to stay. One needs to make a clear distinction between a human artist and a 'programme artist (AI)'. Can artists call themselves artists when they use AI to produce art? All these questions should be discussed in the near future, since AI has entered the art realm and will likely continue to play an ever larger role, perhaps even with the development of the metaverse.


[i] Markus Dirk Dubber, Frank Pasquale, Sunit Das (2020), The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press, p. 4.

[ii] Ibid., p. 6.

[iii] Ibid., p. 5.

[iv] Dear, Keith, "Artificial intelligence and decision making," p. 18.

[v] Bostrom, Nick, Superintelligence: Paths, Dangers, Strategies, p. 132.

June 18, 2022

AI goes to War: Observations from the Battlefields in Ukraine

Authors: Andrea Rebora, Federica Montanaro and Oleg Abdurashitov

Although the concept of artificial intelligence is quite complex and nuanced, many people imagine its use in warfare as a brutal slaughter conducted by evil robots. The reality is much different, and the current conflict between Russia and Ukraine offers a glimpse of what AI is being used for on today’s battlefields.

Researchers and military experts have spent years trying to visualize and understand military operations conducted with the support of artificial intelligence systems. One of the most prominent examples is the use of lethal autonomous weapons (LAWs), systems designed to locate, identify, and engage targets based on programmed information without requiring constant human control. During its invasion of Ukraine, Russia has been using the KUB-BLA, a loitering munition (commonly known as a kamikaze drone) designed to identify and attack ground targets using AI technology.[1] However, the Russian aerial campaign leveraging these drones appears weak overall, and its fleet surprisingly small.[2] On the Ukrainian side, the Bayraktar TB2 drone fleet arguably appears to be its most potent force,[3] alongside the "kamikaze drone fleet," with an estimated 20-30% of registered Ukrainian kills resulting from the successful employment of these systems.[4]

Another application envisioned on the battlefield is the use of AI to automate the mobility of vehicles, such as tanks and vessels, and make them more effective at identifying routes and prioritizing target selection and engagement.

AI is being increasingly included in the military decision-making process, from the straightforward calculation of aircraft or missile trajectories to the identification of targets during sensitive operations via automated target recognition. In 2021, Secretary of the Air Force Frank Kendall confirmed that AI had already been used during at least one “live operational kill chain,” demonstrating its effectiveness on the battlefield.[5]

Finally, the idea of artificial intelligence being applied to cyber operations is a thought that keeps many professionals up at night. Cyberattacks have already become one of the most pervasive issues of this decade and, if enhanced by AI, they could be used to cause significant damage and potentially destabilize entire countries.[6]

The war in Ukraine, however, seems to be a far cry from swarms of unmanned drones and autonomous vehicles clashing with robotic systems of the adversary, as envisioned by several researchers of future warfare[7]. While AI algorithms reveal themselves on the battlefield, they do so in far more mundane aspects. 

The most illustrative example is the use of Ukraine’s specialist application for artillery (GIS Arta), which combines the conventional geo-mapping tools with the ability to sift information on the enemy’s location from a variety of sources and data types, including military and civilian drones and smartphones, GPS trackers and radars.[8] The intuitive and data-agnostic system has since become a force multiplier for the outgunned Ukrainian artillery, increasing the precision of their strikes.

The algorithmic geo-mapping itself, however, is not new: high-resolution maps and image-processing algorithms are widely available and used across a range of civilian apps, from online maps to food delivery. GIS Arta blends imagery intelligence (IMINT) and signals intelligence (SIGINT) feeds into an actionable solution cheaply. It proves the ingenuity of Ukrainian developers and the army in adapting civilian AI and machine learning technology for military use, but also highlights that the use of technology is shaped by the conditions and demands of the battlefield, not vice versa.
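The core data-fusion idea in this description can be sketched in a few lines: geolocated reports of the same target from different sources are combined into a single coordinate estimate. GIS Arta's actual pipeline is not public; the report format, source names, and the simple averaging step below are invented purely for illustration.

```python
# Toy sketch of multi-source geolocation fusion: reports of the same
# target from different sensors are averaged into one position estimate.
# Source names and coordinates are illustrative assumptions only.

def fuse_reports(reports):
    """reports: list of (source, lat, lon) -> (mean_lat, mean_lon)."""
    lat = sum(r[1] for r in reports) / len(reports)
    lon = sum(r[2] for r in reports) / len(reports)
    return round(lat, 5), round(lon, 5)

reports = [
    ("drone",    49.8401, 24.0297),
    ("radar",    49.8405, 24.0301),
    ("observer", 49.8399, 24.0293),
]
print(fuse_reports(reports))
```

A real system would also have to weight sources by reliability, discard stale reports, and cluster sightings of distinct targets; plain averaging only conveys the blending idea.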

Russia, in turn, claims it is working on updating its reconnaissance and reconnaissance-strike drones with "electronic [optical and infrared] images of military equipment adopted in NATO countries" obtained through the application of neural network training algorithms.[9] With images and videos of NATO-supplied equipment requested by Ukraine widely available on the internet in almost ready-made datasets, the use of neural networks may be justified. Given that both Russia and Ukraine rely on human operators of unmanned aerial vehicles (UAVs), whose ability to identify snippets of objects is dramatically outmatched by image recognition technology, such an AI-augmented approach may lead to better prioritization and increased accuracy of Russian strike targeting.
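The recognition step described above can be reduced to a schematic matching problem: given a feature representation of an image, pick the closest known equipment class. Real systems use trained neural networks on raw imagery; the nearest-centroid toy below, including its class names and feature values, is an invented stand-in that only illustrates the matching logic.

```python
# Toy nearest-centroid classifier standing in for image recognition.
# Classes and "feature" vectors are illustrative assumptions, not a
# real model; actual systems learn features with neural networks.
import math

CENTROIDS = {
    "tank":      [0.9, 0.1, 0.4],
    "transport": [0.2, 0.8, 0.5],
    "clutter":   [0.5, 0.5, 0.5],
}

def classify(features):
    """Return the class whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda c: dist(features, CENTROIDS[c]))

print(classify([0.85, 0.15, 0.35]))  # falls closest to the "tank" centroid
```

The point of the sketch is only that machine classification scales to far more frames per second than a human drone operator can review, which is what makes the AI-augmented approach attractive.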

Another example is the use of AI-enabled face recognition algorithms largely thanks to the ubiquity of visual content, ranging from smartphone videos and security camera feeds to social media pages.[10] The use of face recognition ranges from inspecting people and vehicles at checkpoints to identifying the potential perpetrators of war crimes. While lacking the immediacy often required on a battlefield, face recognition may become an essential deterring component of warfare preventing the most horrendous crimes on the battlefield. 

The use and credibility of such technology are not without controversy, since AI algorithms are prone to bias and software flaws. Notably, the first person officially accused by Ukraine of war crimes in Bucha, who was caught on camera in a courier service office in a Belarusian town used by Russian soldiers to send looted goods back to Russia, is a Belarusian citizen who vehemently denies ever having served in the military.[11]

What emerges from this evolving environment is that the employment of AI on the Ukrainian battlefield is very human-centered and not very different from what has been seen in other conflicts. Despite the technological progress and innovations, there is still no clear evidence of the use of fully autonomous weapons in Ukraine. The presence of human beings “in the loop” still prevents us from finding a paradigmatic change in the employment of AI in this conflict. Artificial intelligence is leveraged as an instrument, which shapes and facilitates decision making and enables the implementation of decisions already taken, but is still not allowed to, or capable of, making autonomous decisions.

Despite the relatively limited use of advanced AI systems, the conflict in Ukraine provides a significant amount of operational and technical information. On the operational side, AI systems are being examined, tested, and deployed in various degrees and scope of application, allowing researchers and officials to understand the advantages and challenges in leveraging such systems in active conflict. On the technical side, the data collected such as images, audio, and geographical coordinates, can be used to train and improve current and future systems capable of, for example, recognizing camouflaged enemy vehicles, identifying optimal attack and counterattack routes, and predicting enemy movements.

The conflict in Ukraine provides an overview of the AI military capabilities of the two countries and the level of risk they are willing to accept with their top-of-the-line AI systems. After all, the cost-benefit analysis associated with using autonomous weapons in low-intensity conflict is much different from using the same weapons in open conflict, where the risk of losing just one AI-powered system is very high and particularly expensive. The military invasion of Ukraine is unfortunately not over yet, but militaries around the world will study its execution and aftermath for years to understand how to leverage AI systems for offensive and, most importantly, defensive applications.

February 28, 2022

Agriculture 4.0 – The Revolutionary Power of Artificial Intelligence

Author: Zrinka Boric, Giorgia Zaghi, and Beatrice Gori

According to estimates, the global population will reach 9.7 billion by 2050. To meet the growing demand for food, world food production will need to increase by 70% in the coming decades. At the same time, the agricultural sector faces several challenges, such as the limited availability of arable land and fresh water, a slowdown in the growth of crop yields, the consequences of climate change, and COVID-19. The UN's second Sustainable Development Goal (SDG 2) aims to end hunger, double agricultural productivity, and ensure sustainable food production systems by 2030. To address these challenges and achieve food security, digital technologies are expected to become a foundation of future food production. At the World Summit on Food Security in 2009, the four pillars of food security were identified as availability, access, utilization, and stability.

Recently, the Focus Group on Artificial Intelligence (AI) and Internet of Things (IoT) for Digital Agriculture (FG-AI4A) was formed in cooperation with the Food and Agriculture Organization (FAO) to explore the potential of these technologies in acquiring and handling the necessary data, optimizing agricultural production processes, and ultimately identifying the best ways (and possible challenges) of using such technologies within the agricultural domain. Artificial intelligence technologies are forecast to add US$15 trillion to the global economy by 2030. According to the Government AI Readiness Index 2019, the governments of high-income countries have better odds of capturing these gains than low-income countries. There is therefore a risk that low-income countries could be left behind by the fourth industrial revolution.

Image Source: https://www.pexels.com/it-it/foto/piante-a-foglia-verde-2132171/

Examples of the use of digital technologies in agriculture

AI: The utilization of AI and human intelligence can increase the capabilities and knowledge of farmers and improve the sustainability of their production, while farmers can better manage their resources and obtain superior production rates. Sustainable green farms with optimal yields are a fundamental step towards Sustainable Development Goal 12, which provides for "responsible consumption and production." Farms produce massive amounts of data daily, which AI and machine learning models could use to increase agricultural productivity while minimizing harmful practices (e.g. extensive use of pesticides, monocropping).

Image data (drones and satellites): Agricultural technology, or AgriTech, drones are powerful tools that can help monitor the most inaccessible and vulnerable areas and design and support adequate farming operations. By surveying and mapping the fields, drones provide information and predictions on crop growth and help prevent anomalies and disruptions of production. Satellite image data paired with AI technology helps governments and organizations address agricultural challenges by providing granular insight and data analysis.

GPS (Global Positioning System) remote sensing technology: GPS technology is already in steady use to enhance agricultural processes and productivity, and provides insight into the quantity of food produced in proportion to the units of water used.

Internet of Things: The IoT refers to devices with sensors that enable them to transmit data through a network. The IoT enables the collection and analysis of data, better tracking of performance, informed decision-making, and greater efficiency and sustainability.

Yield monitoring and mapping: During the harvest, a dataset is collected (using different sensors and GPS technology) that can later be analyzed with specialized software. This valuable dataset provides information that helps improve yield management and the rational use of available resources, develop future nutrient strategies, and ultimately achieve more sustainable agriculture with lower production costs.

Automation: Different forms of automation are used in agriculture to help farms operate more efficiently and increase productivity. Automation appears in many forms, from the simple automatic watering systems used in many households to specialized agricultural drones, robots (such as harvest robots), and even driverless tractors.
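As a minimal illustration of how IoT sensor data can feed an automated farming decision of the kind listed above, the sketch below aggregates soil-moisture readings per field zone and flags the zones that need irrigation. The zone names, sample values, and the 30% moisture threshold are assumptions for illustration only, not real AgriTech parameters.

```python
# Toy sketch: aggregating soil-moisture readings from field sensors
# and flagging zones that need irrigation. Zone names and the 30%
# threshold are illustrative assumptions, not real AgriTech values.

def zones_needing_water(readings, threshold=30.0):
    """readings: {zone: [moisture % samples]} -> sorted list of dry zones."""
    dry = []
    for zone, samples in readings.items():
        avg = sum(samples) / len(samples)  # mean moisture per zone
        if avg < threshold:
            dry.append(zone)
    return sorted(dry)

readings = {
    "north": [28.0, 25.5, 31.0],  # mean ~28.2 -> below threshold
    "south": [41.0, 39.5, 44.0],  # mean ~41.5 -> adequate
    "east":  [22.0, 24.0, 21.5],  # mean ~22.5 -> below threshold
}
print(zones_needing_water(readings))
```

A production system would add the pieces the table describes, such as drone or satellite imagery, weather forecasts, and automated actuation of the watering system, but the decision loop from sensed data to action is the same in outline.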

AI in low-income countries

AI could have a significant impact on low-income countries, as it could offer new solutions to current problems in agriculture and numerous other fields. AI can be a tool directed towards development, the so-called "AI4D" (AI for Development). It could bring about infrastructural and qualitative development in terms of societal empowerment and change.

Moreover, one of the most relevant improvements in the agricultural sector would be a more efficient use of scarce resources.

Specialized technologies and systems can target specific needs and problems at exactly the right time and in the right quantities. The cases of Israel and China exemplify the relevance of AI for development and resilience.

Both countries have massively invested in smart agriculture to increase yields and productivity and to improve precision agriculture in the face of growing scarcity of natural resources. China and Israel have improved their agricultural output to the extent that they can be considered "nations that feed the world". Moreover, both export basic technologies to other countries, enabling the latter to implement such "smart tools" and strengthen their agricultural export sectors. Countries like Indonesia and Thailand, for instance, have successfully used Israeli technology to improve their agricultural sectors and exports.

While the adoption of AI technology in agricultural practices of low-income countries seems like an easy way to solve relevant problems related to development, there are still many risks and barriers that ought to be considered. More specifically, compared to the costs of traditional systems, initial infrastructure costs for AI are extremely high – this would call for more participation from transnational organizations and technology companies to assist and supply basic infrastructure in low-income countries. 

Conclusion

To conclude, the opportunities that AI holds in the agricultural sector seem to have the potential to accomplish part of the SDGs agenda for 2030. This is certainly an argument that can be applied to Western countries with the investment capacity to carry on a fourth agricultural revolution. Optimization of precision agriculture and the efficient use of scarce resources are essential steps to fight world hunger and climate change. 

However, new technologies come with high entry-level costs and such investment could be too risky or too high for low-income countries and small-scale food producers. 

While a new agricultural revolution will benefit countries and food producers who can afford to bring about sustainable development, it is necessary to acknowledge that a significant risk lies ahead: leaving out the have-nots in favor of the sole development of the haves. 

November 30, 2021

How Different Political Powers Approach the Issue of Ethics in the Development of Artificial Intelligence

By: Zrinka Borić

Image Source: https://www.pexels.com/photo/person-reaching-out-to-a-robot-8386434/

Advancement of artificial intelligence (AI) technology is expected to drive progress and change in the military, economic, and information spheres. This so-called "fourth industrial revolution" opens up various possibilities, the most probable of which is the further development and prosperity of those able to reap its benefits, further strengthening the existing inequalities in the global state system.

The main concern an average person has regarding AI is the idea of a post-apocalyptic world in which robots and AI have completely taken over the Earth, as depicted in many famous science-fiction publications. To approach this topic it is necessary to keep two things in mind. First, strong AI (also called Artificial General Intelligence, or AGI), which would simulate human reasoning and constitute machine intelligence equal to that of humans, does not currently exist, and experts cannot agree on when this type of AI might appear. Second, artificial intelligence systems rely heavily on data, so the quantity, quality, and availability of data are crucial. In the long term, an ethical and responsible approach to data collection for AI development and implementation aims to guarantee balanced and responsible innovation.

For example, the United States and the European Union countries have expressed a commitment to developing trustworthy and ethical AI. On the other hand, countries like China and Russia have shown no such commitment in the development and employment of their autonomous weapons systems. Cyber policy and security expert Herbert Lin expresses the concern that, owing to this lower regard for ethical and safety issues, their weapons are likely to be more militarily effective and developed sooner.

Different forms of government take different approaches to AI development and implementation. China is an authoritarian, hierarchical state; the United States is a federal republic with a democratically elected government; and the European Union is a political and economic union that operates through a combination of supranational and intergovernmental decision-making.

PEOPLE’S REPUBLIC OF CHINA

China defines artificial intelligence research and development as key to boosting national economic and manufacturing competitiveness as well as to national security. China's vigorous approach to AI development is driven by the technology's potential future economic benefit. Experts expect China to reap the largest relative economic gain from AI, since the technology is envisioned to improve its productivity and manufacturing capacity and thereby help meet future GDP targets. China therefore runs the risk of developing and applying AI without paying enough attention to its responsible use or to preparing its citizens for the changes that widespread AI adoption may bring. China has fallen into the trap of rushing recklessly into uncontrolled progress before, and it led to an unsustainable level of growth accompanied by a set of negative effects on the economy. China's clear competitive advantages lie in its abundance of data, which will most likely become a crucial element in the future development of AI technology, its relatively loose privacy laws, its vibrant start-ups, and a steady rise in its number of AI engineers.

THE EUROPEAN UNION

State structure shapes the design and implementation of AI policy. When discussing the EU it is important to keep in mind that the EU is not a country but an economic and political organization combining supranational and intergovernmental elements. Given that the economic prosperity and national security of its member states remain firmly in the hands of national governments, it is easy to see why the Union's organizational structure hinders the quick, concrete decision-making that international competition favors. The EU has nevertheless succeeded in publishing joint plans and policies on AI, such as the Civil Law Rules on Robotics, the Declaration of Cooperation on Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Policy and Investment Recommendations for Trustworthy AI.

The European Union pays special attention to studying the potential impact of artificial intelligence technology on society. This research usually covers social aspects such as data protection (e.g. the GDPR), network security, and AI ethics. "There are more substantial ethical or normative discussions when it comes to developing human-centered and trustworthy AI technologies. [...] Developing the culture of trustworthy AI, and not only when it comes to security and defense, but more broadly about AI-enabled technologies. This is at the forefront of the policy and political thinking in Brussels," claims Raluca Csernatoni, an expert on European security and defense with a specific focus on disruptive technologies.

In 2018, member states signed the Declaration of Cooperation on Artificial Intelligence, in which they agreed to cooperate on various aspects of AI development and implementation, including ensuring an adequate legal and ethical framework built on EU fundamental rights and values.

THE UNITED STATES

During the Obama administration, the National Science and Technology Council (NSTC) Committee on Technology drafted the 2016 report Preparing for the Future of Artificial Intelligence. Concerns about safeguarding "justice, fairness, and accountability" if AI were tasked with consequential decisions about people had previously been raised in the Administration's Big Data: Seizing Opportunities, Preserving Values report and its Big Data and Privacy: A Technological Perspective report. Regarding governance and safety, the report advises that the use of AI technology be controlled by "technical and ethical supervision".

Later, during the Trump administration, the 2019 AI R&D Strategic Plan set out several strategic priorities, one of which is understanding the ethical, legal, and societal implications of AI. Judging by the recent EU-US Trade and Technology Council (TTC), the current administration continues to support efforts to develop responsible and trustworthy AI.

THE U.S. – EU COOPERATION 

The most recent U.S.-EU cooperation on AI, the TTC, was launched on September 29, 2021 in Pittsburgh. TTC working groups are discussing issues including technology standards, data governance and technology platforms, and the misuse of technology threatening security and human rights. The United States and the European Union affirmed their commitment to a human-centered approach and to developing a mutual understanding of the principles of trustworthy and responsible AI. Both, however, expressed significant concern that authoritarian governments are piloting social scoring systems with the aim of implementing social control at scale. They agree that these systems "pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems".

CONCLUSION

Different forms of government differ immensely in their approach to the development and implementation of AI, as well as in the principles of ethics and responsibility they consider necessary. Governments nonetheless need to proceed with great caution. Implemented carelessly, without regard for ethics and safety, AI could end up being ineffective or, worse, dangerous. Governments need to implement AI in a way that builds trust and legitimacy, which ideally requires legal and ethical frameworks for handling and protecting citizens' data and governing algorithm use.

November 2, 2021

The United States’ Race for Supremacy in Artificial Intelligence

By: Zrinka Boric

“Where we choose to invest speaks to what we value as a Nation. This year’s Budget, the first of my Presidency, is a statement of values that define our Nation at its best.” - Joseph R. Biden, Jr. (The Budget Message of the President)

This article surveys the landscape of AI policymaking and tracks the United States' efforts to promote and govern AI technologies.

Technological advancement has become a new way to increase a state's political, military, and economic strength. The Cold War and the arms race between the two strongest nations of the time, the United States of America (USA) and the Soviet Union (USSR), revealed the potential that lay in technological development. Today, the United States is again at the forefront of the race for supremacy in a potentially world-changing technology: artificial intelligence (AI).

Artificial intelligence has the potential to fundamentally change the strategy, organization, priorities, and resources of any national community that manages to develop the technology, drive further innovation, and eventually apply it. Artificial intelligence is undergoing major evolution, and its potential is increasing rapidly. Progress is visibly accelerating, and our social, political, and economic systems will be greatly affected. One of the important questions is how to seize all the opportunities AI technology can offer while avoiding or managing its risks.

The American AI Initiative

The United States is characterized by a skilled workforce, an innovative private sector, good data availability, and effective governance, all of which are key factors in the government's ability to enable effective development and adoption of AI.

The United States published its national AI strategy, the American AI Initiative, in 2019. The initiative is led by the White House, and its priorities are to increase federal government investment in AI research and development (R&D) and to ensure technical standards for safe AI development and deployment. The American AI Initiative expresses a commitment to collaborate with foreign partners while promoting U.S. leadership in AI. Nevertheless, it is important to note that the initiative is not particularly comprehensive, especially compared to those of other leading nations, and is characterized by a lack of both funding and tangible policy objectives.

In 2019, U.S. policymakers were advised to advance the American AI Initiative with concrete goals and clear policies, such as spurring public-sector AI adoption and allocating new funding for AI R&D rather than simply repurposing existing funds.

AI in the USA Budget for FY2022 

President Biden's budget for FY2022 includes approximately $171.3 billion for research and development (R&D), which is an 8.5% ($13.5 billion) increase compared to the FY2021 estimated level of $157.8 billion. 

According to the 2021 AI Index Report, in FY 2020 the USA federal departments and agencies spent a combined $1.8 billion on unclassified AI-related contracts. This represents an increase of more than 25% from the amount spent in FY 2019. 

One of the agencies with a major R&D program is the National Institute of Standards and Technology (NIST). President Biden is requesting $1,497.2 million for NIST in FY2022, an increase of $462.7 million (44.7%) over the FY2021 level of $1,034.5 million. The second-highest program budget increase within NIST is for Partnerships, Research, and Standards to Advance Trustworthy Artificial Intelligence, at $45.4 million (an increase of $15 million over FY2021).

Some departments expect large percentage increases in R&D funding, among them the Department of Commerce (DOC), with an increase of up to 29.3%. It is also worth noting that one of the DOC's latest projects is the creation of the National Artificial Intelligence Advisory Committee (NAIAC), discussed below.

Numerous policymakers in Congress are particularly interested in Department of Defense Science and Technology (DOD S&T) program funding. A view increasingly popular in the defense community holds that supporting S&T activities is necessary to maintain the USA's military superiority in the world.

The budget request represents President Biden's R&D priorities, and Congress may adopt it in whole, in part, or not at all. It is safe to say that AI has gained the attention of Congress: the 116th Congress (January 3, 2019 - January 3, 2021) was the most AI-focused congressional session in history, mentioning AI more than three times as often as the 115th (486 mentions versus 149).
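For readers who want to check the figures, the percentage increases cited in this section can be recomputed in a few lines of Python. The dollar amounts are the ones quoted above; the script only redoes the arithmetic (the R&D figure recomputes to roughly 8.6%, so the stated 8.5% presumably reflects rounding in the underlying estimates):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# FY2022 R&D request vs. FY2021 estimate (billions of dollars):
# the stated rise is $13.5 billion, i.e. roughly 8.5-8.6%.
assert round(171.3 - 157.8, 1) == 13.5
print(round(pct_increase(157.8, 171.3), 1))    # 8.6

# NIST request (millions of dollars): a $462.7 million, 44.7% increase.
assert round(1497.2 - 1034.5, 1) == 462.7
print(round(pct_increase(1034.5, 1497.2), 1))  # 44.7

# Mentions of AI in Congress: 486 (116th) vs. 149 (115th),
# "more than three times" as many.
print(round(486 / 149, 2))                     # 3.26
```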

National and International Efforts

As indicated in its national AI strategy, the United States takes part in various intergovernmental AI initiatives, such as the Global Partnership on AI (GPAI), the OECD Network of Experts on AI (ONE AI), and the Ad Hoc Expert Group (AHEG) for the Recommendation on the Ethics of Artificial Intelligence, and has participated in global summits and meetings such as the AI Partnership for Defense and the AI for Good Global Summit. In addition, the United States announced a bilateral declaration on AI cooperation with the United Kingdom in December 2020.

On September 8, 2021, U.S. Secretary of Commerce Gina Raimondo announced the establishment of the National Artificial Intelligence Advisory Committee (NAIAC). The main purpose of the NAIAC will be to advise the President and the National AI Initiative Office (NAIIO) on issues related to AI. "AI presents an enormous opportunity to tackle the biggest issues of our time, strengthen our technological competitiveness, and be an engine for growth in nearly every sector of the economy. But we must be thoughtful, creative, and wise in how we address the challenges that accompany these new technologies," Raimondo said.

The United States or China? 

The United States is showing an increasing interest in developing and implementing artificial intelligence, through a growing federal AI-related budget, new committees, intergovernmental AI initiatives, bilateral agreements, and participation in global summits. Yet the comparison is constantly drawn with China. Should the future battle over artificial intelligence be between the USA and China, the question arises: who will win this battle for AI supremacy?

Recently, a former Pentagon expert claimed that the race is already over and China has won. The Pentagon's first chief software officer resigned over the slow pace of technological advances in the U.S. military, arguing that the USA has no fighting chance against China in the coming years and that the outcome is already a done deal.

At the same time, AI expert Kai-Fu Lee, former president of Google China, disagrees with this claim. He notes that the US has a clear academic lead in artificial intelligence, pointing out that all 16 Turing Award recipients in AI are American or Canadian and that the top 1% of published papers are still predominantly American. China, he argues, is simply faster at commercializing technologies and has more data.

Artificial intelligence already has numerous uses (academic, military, medical, etc.), and when assessing countries' reach in AI it is important to distinguish between these different uses of the technology.

To answer the question of whether the United States or China will win the AI 'race', or whether a new force will emerge, it is necessary to closely monitor the development of artificial intelligence technology and compare countries against a uniform set of criteria before reaching a conclusion. Another potential scenario, highlighted by Kai-Fu Lee in his book AI 2041: Ten Visions for Our Future, is the possibility of the United States and China co-leading the world in technology.

Image Source: https://www.pexels.com/photo/blue-bright-lights-373543/

June 14, 2021

The disruptive power of Artificial Intelligence

By: Renata Safina and Arnaud Sobrero

The use of artificial intelligence may change how war is conducted

In 2020, amidst the biggest pandemic the world had seen since the Spanish Flu of 1918, two ex-Soviet states were battling over an area of just 4,400 km² in the mountainous region of Nagorno-Karabakh. Armenia and Azerbaijan, so close and yet so far, are two mortal enemies sharing a common DNA.

At first, this war seemed like a faraway regional conflict between two neighboring states, far from Western Europe and even further from the United States. Closer inspection, however, shows that it deserves much more attention: the conflict illustrates how the extensive use of artificial intelligence-enabled drones can be instrumental in shifting the outcome of a war. The application of artificial intelligence (AI) in the military domain is thus disrupting the way we approach conventional warfare.

AI Means of Warfare

The use of advanced technological weapons, drones, and loitering munitions supplied by both Israel and Turkey practically won this war for Azerbaijan. In particular, AI-enabled weaponized drones with increasingly autonomous strike and surveillance capabilities significantly disrupted the battlefield. The deployment of such drones, notably the Turkish TB2 unmanned combat aerial vehicle (UCAV), had a substantial impact, as Azeri forces were able to destroy 47% of Armenia's combat vehicles and 93% of its artillery.

'Harpy' and 'Harop' loitering munitions (LM) are autonomous weapon systems produced by Israel Aerospace Industries (IAI), a state-owned aerospace and aviation manufacturer. A loitering munition, or 'kamikaze drone', is an unmanned aerial vehicle (UAV) with a built-in warhead that loiters over an area searching for targets. Once a target is located, the LM strikes it, detonating on impact. The significant advantage of these systems is that, while the munition loiters, the attacker can decide when and what to strike; should no target be found, the LM returns to base. In addition, these systems are equipped with machine learning algorithms that can make decisions without human involvement, allowing them to process large amounts of data and decide instantly, revolutionizing the speed and accuracy of such actions.
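The engagement logic just described (search while loitering, commit to a strike only if a target appears, otherwise return to base) can be abstracted as a small state machine. The sketch below is purely illustrative and models only the fully autonomous decision flow mentioned above; `detect_target` stands in for a hypothetical sensor or machine-learning classifier and does not correspond to any real system's interface:

```python
from enum import Enum, auto
from typing import Callable, Optional

class LMState(Enum):
    """Terminal outcomes of one loiter cycle."""
    STRIKING = auto()
    RETURNING = auto()

def loiter_cycle(detect_target: Callable[[], Optional[str]],
                 max_loiter_steps: int) -> LMState:
    """Abstract decision loop of a loitering munition: poll the (hypothetical)
    detector while loitering; commit to a strike if a target is found,
    otherwise return to base once loiter endurance is exhausted."""
    for _ in range(max_loiter_steps):
        target = detect_target()  # stand-in for onboard sensing/classification
        if target is not None:
            return LMState.STRIKING
    return LMState.RETURNING

# Example: no target is ever detected, so the munition returns to base.
print(loiter_cycle(lambda: None, max_loiter_steps=10))  # LMState.RETURNING
```

In the human-in-the-loop variant the paragraph also mentions, the strike transition would instead wait for an operator's confirmation rather than fire automatically.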

Conducting Warfare through AI – Ethical Implications

These developments in emerging technologies such as artificial intelligence are already creating technological surrogates that disrupt how we conduct warfare.

Wars fought with lethal autonomous weapon systems (LAWS) equipped with AI are not a vision of a distant future. These weapons are being deployed today; they are a huge game changer, and these 'market disruptors' will once and for all change the way wars are fought. Former CIA Director and retired Gen. David Petraeus claims that "drones, unmanned ships, tanks, subs, robots, computers are going to transform how we fight all campaigns. Over time, the man in the loop may be in developing the algorithm, not the operation of the unmanned system itself."

However, military operations conducted without human involvement raise many ethical questions and debates. On one side, supporters argue that AI-equipped LAWS generate fewer casualties thanks to their high precision and, lacking emotions, could even eliminate war crimes. On the other side, machine learning bias in input data may produce unpredictable mistakes, and AI decision-making may result in flash wars and the rapid escalation of conflicts with catastrophic consequences. By lowering the cost of war, LAWS might thus increase the likelihood of conflict.

Furthermore, transferring decision-making responsibility entirely to the machine will drastically distance humans from the act of killing, calling into question the morality and ethics of applying AI for military purposes. The lack of international laws and regulations has created a Wild West in which developed countries act as both sheriffs and outlaws. Vigorous debates are already taking place among academics and military organizations in the Western world as they try to keep up with accelerating technological development; these discussions led to the creation of a United Nations group of governmental experts on LAWS in 2016. Despite the ongoing UN discussions, an international ban or other regulation of military AI is unlikely in the near term. Consequently, until we can fully grasp the consequences of applying artificial intelligence in the military domain, a more cautious approach is recommended: limiting the deployment of AI systems to less-lethal operations such as bomb disposal, mine clearance, and reconnaissance missions, rather than building "killer robots".

For all the potential applications of AI in the military domain, the question remains: will it help us sleep better at night, or prevent us from sleeping at all?