Authors: Oleg Abdurashitov and Caterina Panzetti - AI, Cyber Security & Space Team
Among other things, 2023 was a year of AI regulation in the EU, the US and well beyond. The fundamental challenge policymakers face with AI is that they are often dealing with unknowns arising from the complexity of the technology itself and the breakneck speed of its development and adoption. Given the incessant debate over whether AI poses an existential risk to humanity that must be addressed at an early stage, or whether such existential risks are merely a smokescreen for the far more urgent and practical implications of widespread AI deployment for privacy, copyright, human rights and the labor market, setting regulatory priorities is challenging. Analyzing what regulators in the US and Europe chose to focus on, and how they framed their AI regulatory doctrines, may help us better understand not just their priorities but also the differences in their institutional, political and economic environments and in their approaches to dealing with emerging technologies.
United States
Despite the existential threat narrative peddled by the largest industry players, including at the Senate hearings, the Presidential Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence seems more grounded in current reality when assessing AI's potential risks. While the Order attempts to address several critical security issues - from AI-enabled cyber operations to the threat of WMD (Weapons of Mass Destruction) development - it can nonetheless be viewed as an effort to prepare the American economy and society for the age of AI use across numerous (if not all) sectors.
The Order’s approach is based on the unique strengths of the US economy and governance model, which relies heavily on the enormous capacity of the US tech sector as well as on the nation’s diverse civil society, where educational institutions, think tanks, and the legal system all play a role in shaping and implementing regulations. Probably the most critical aspect of the US AI regulatory environment is that recent AI breakthroughs have been funded by private capital, as opposed to state budgets as in China[1] or, largely, the EU[2]. On the one hand, this allows the US to retain its competitive edge in the AI race, with the so-called MAGMA[3] companies bearing a large share of the R&D costs of developing breakthrough commercial AI products. On the other, it puts the US government in a position where sectoral regulation must be balanced against the interests of commercial players and must enable, rather than control, the development and adoption of the technology.
The Order implicitly acknowledges this complex interplay between commercial interests, the interests of the state, and the demands of American society. Section 2 (Policy and Principles) in particular broadly outlines the many aspects of AI development - from safety to impact on the workforce - that need to be balanced against each other. Given the enormity of such a task, the Order is short on specific details - and where details are given, they often leave open the question of whether the Order will be able to address the long-term security implications of AI development.
For instance, in Section 4 the Order places “dual-use foundation models” that may pose “a serious risk to security, national economic security, national public health or safety” under increased regulatory and technical scrutiny. The definition of such a model as one containing “at least tens of billions of parameters” covers the leading large language models (LLMs) behind ChatGPT and Google Bard, each of which has more than 100 billion parameters. The Order’s approach to regulating such powerful models relies largely on industry guidelines (such as the NIST AI Risk Management Framework[4]) developed in collaboration with the private-sector players themselves, complemented by a series of government-funded testbeds for risk assessment.
It is important to note that while the commonly agreed approach to AI model training can be described as “bigger is better”, there is evidence that the output of models with a far smaller number of parameters (1.5B to 2.7B) can be broadly comparable to that of larger models[5]. Additionally, while larger models are generally controlled by specific entities, open-source models (such as Meta’s Llama, available in versions of 7B, 13B, and 70B parameters[6]) may be used by a far wider range of actors to develop their own powerful models, potentially falling outside regulatory scrutiny and export control measures.
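As a rough illustration of how a parameter-count criterion draws the line, the sketch below checks the model sizes mentioned above against a hypothetical reading of “tens of billions of parameters” as a 10-billion-parameter cutoff; both the cutoff value and the model list are assumptions made here for illustration, not figures taken from the Order itself.

```python
# Illustrative sketch only: which of the publicly discussed model sizes would fall
# under a parameter-count criterion, assuming (hypothetically) that the Order's
# "at least tens of billions of parameters" is read as a 10-billion-parameter cutoff.
ASSUMED_PARAM_CUTOFF = 10e9  # assumed reading, not an official figure

model_sizes = {
    "small open model (1.5B)": 1.5e9,
    "small open model (2.7B)": 2.7e9,
    "Llama 7B": 7e9,
    "Llama 13B": 13e9,
    "Llama 70B": 70e9,
    "ChatGPT/Bard-class LLM (100B+)": 100e9,
}

for name, params in model_sizes.items():
    status = "in scope" if params >= ASSUMED_PARAM_CUTOFF else "out of scope"
    print(f"{name}: {params / 1e9:.1f}B parameters -> {status}")
```

Under this assumed reading, the 13B and 70B open-source variants would already clear the bar, which is precisely why their wide availability complicates enforcement.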
Moreover, the Order explicitly focuses on very large models as subject to regulatory restrictions, namely “[a] model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations”. That number, for instance, is significantly higher than the rumoured estimate for the most advanced model on the market today, OpenAI’s GPT-4, which currently stands at around 2.15 x 10^25 FLOPs (floating-point operations)[7]. If the field sustains its current pace of innovation, this threshold may well be crossed shortly. However, there is so far little evidence that such models would indeed represent “potential capabilities that could be used in malicious cyber-enabled activity”, since malicious cyber operations today require far less computing power and the “cyber-enabled” definition may simply be too broad to be meaningful in a regulatory context.
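To put the two figures side by side, here is a back-of-the-envelope comparison. Both inputs are the numbers quoted above - the 10^26 FLOP threshold named in the Order and the rumoured ~2.15 x 10^25 FLOP estimate for GPT-4 - so the output should be read as illustrative arithmetic rather than an authoritative measurement.

```python
# Back-of-the-envelope comparison of the Order's compute threshold with the
# rumoured GPT-4 training-compute estimate cited above. Both values are
# thresholds/estimates from the text, not measured or official figures.
ORDER_THRESHOLD_FLOPS = 1e26      # reporting threshold named in the Order
GPT4_ESTIMATED_FLOPS = 2.15e25    # rumoured estimate for GPT-4 training compute

share_of_threshold = GPT4_ESTIMATED_FLOPS / ORDER_THRESHOLD_FLOPS
headroom = ORDER_THRESHOLD_FLOPS / GPT4_ESTIMATED_FLOPS

print(f"GPT-4 estimate is ~{share_of_threshold:.0%} of the 1e26 FLOP threshold")
print(f"A training run would need roughly {headroom:.1f}x the estimated GPT-4 compute to cross it")
```

In other words, on these estimates the threshold sits roughly a factor of five above today’s frontier, which is why it may be crossed within a generation or two of models.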
Of course, the proposed control regime for large ‘dual-use models’ is not necessarily meant to fully address the issue of AI-powered malicious activity as it exists today. Instead, the Order directs federal agencies to study the best practices and guidelines of the critical infrastructure sectors for managing “AI-specific cybersecurity risks” and to “develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards”, as well as to assess the risks of AI use in critical infrastructure and government systems. From this point of view, the Order implicitly acknowledges that AI models are already widely deployed in both the private and public sectors and calls for measures to discover and reduce the risks of such use.
Notably, the US DoD’s Data, Analytics, and Artificial Intelligence Adoption Strategy[8], released months earlier in June 2023, prioritizes speed of deployment over the careful risk assessment that the Presidential Order entails. To the military, “[AI deployment] risks will be managed not by flawless forecasting, but by continuous deployment powered by campaigns of learning”. Moreover, the DoD calls for the mitigation of policy barriers through consensus building and closer relations with vendors, as well as with the AI community at large. Although the risks of AI deployment are no less profound in the military sector than in civilian affairs, the US Government as a customer may well prefer the speed of decision-making - and the many other benefits that AI can potentially bring to warfighting - over a more careful and balanced approach.
EU AI Act
2021 was marked by a race towards gaining normative authority in the field of Artificial Intelligence-enabled services. The European Union has been leading this chase by adopting a comprehensive, risk-oriented approach to AI regulation, providing a broad regulatory framework to ensure security and the protection of fundamental rights.
The Commission has indeed proposed a model founded on a decreasing scale of risk-based obligations with which providers will have to comply in order to continue conducting their business in the European Union, irrespective of their place of establishment[1]. Service providers that surpass the threshold of what the legislator has referred to as “high risk” are banned under the AI Act and will not be allowed to distribute their services in the Union, as they are deemed to pose an unacceptable risk to the livelihood and safety of users. Just a tier below these forbidden services, providers labelled as “high risk” will have to comply with the most burdensome obligations. Notably, the proposed regulation will not have any impact on AI systems implemented for military purposes.
High-risk providers are identified with critical infrastructures that deeply affect users’ daily lives and that could potentially implement discriminatory or harmful practices. The non-exhaustive list comprises providers supplying technologies applied to transportation or employment, migration management, the administration of justice and law enforcement[2]. Providers of such services will be asked to supply, among other requirements, adequate risk assessments and a high level of robustness and security to make sure that “AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities”[3], as well as detailed documentation providing all the information on the system and its purpose necessary for authorities to assess its compliance[4]. Although the Parliament has classified all biometric identification systems as high risk, Italy, Hungary and France have been pushing for a more lenient regime for the use of biometric identification instruments for surveillance purposes. The outcome of this debate will become clear at the moment of the Act’s ratification.
Despite the praiseworthy effort of the European legislator in setting standards that prioritize fundamental rights and security for EU citizens, reinforced by clear enforcement measures and fines directed at misbehaving providers, it is pivotal to highlight some challenges that regulating AI will pose for future legislative attempts.
Firstly, the main area of concern is the tug of war between maintaining a firm hold over high-risk service providers and, on the other hand, ensuring the smooth progress of AI innovation in the EU. We will likely witness a certain degree of lobbying from what are arguably the top-tier AI companies based in the US (mainly Meta, OpenAI, Google, DeepMind, etc.), watering down the original scope of the Act. This concern rapidly escalated into a concrete debate over the regulation of foundation models. “The foundation models, such as GPT-3.5 - the large language model that powers OpenAI’s ChatGPT - are trained on vast amounts of data and are able to carry out a wide range of tasks in a number of use cases. They are some of the most powerful, valuable and potentially risky AI systems in existence”[5]. While the proposed Act was keen on firmly regulating foundation models, trilateral discussions were initiated between the German, Italian and French governments to loosen the grip on these providers by proposing a self-regulation system, strongly criticising Brussels for over-regulating service providers’ conduct and hindering innovation in the Union[6]. The leaders of these countries also expressed deep concern that smaller European-based companies would not be able to keep up with the obligations imposed by the Act[7]. While the Parliament formally maintains a firm position on the impossibility of excluding foundation models, it is apparent that this opposition could potentially bring the legislative process to a stalemate.
A second point of criticism concerns the exclusion from the scope of the Act of AI instruments applied for military, national security and national defence purposes. Civil society organizations have indeed expressed major concern that technologies which would theoretically be labelled as posing an unacceptable risk could be deployed if they fall under the umbrella of defending national security, and, additionally, that dual-use technology could be employed without any regulatory restriction[8].
Finally, the Act faces issues regarding its coordination with the US Order. Although both legislative instruments rely on a risk-based approach, the Senate has been more hesitant to espouse the European hard line. As Alex Engler - associate in governance studies at The Brookings Institution - wrote for Stanford University: “There’s a growing disparity between the U.S. and the EU approach in regulating AI. The EU has moved forward on laws around data privacy, online platforms, and online e-commerce regulation, and more, while similar legislation is absent in the U.S”[9]. Furthermore, the US Order struggles to draw clear-cut enforcement measures against companies that breach their obligations; the priority of the American legislator therefore appears to lie mostly in maintaining international competitiveness[10]. Needless to say, the lack of homogeneous standards burdens both natural and legal persons, the latter being obliged to change how they operate depending on the country in which their services are distributed.
Despite these shortcomings, the Act will hopefully be the kick-starter of a broader strategy able to compensate for the strict approach adopted in the regulation, thus attracting investment and levelling the competition with the US.
[1] https://www.lawfaremedia.org/article/a-comparative-perspective-on-ai-regulation
[2] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
[4] Ibid.
[5] https://time.com/6338602/eu-ai-regulation-foundation-models/
[6] Ibid.
[7] Ibid.
[8] https://www.stopkillerrobots.org/news/what-are-the-ai-act-and-the-council-of-europe-convention/
[9] https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement
[10] https://www.oii.ox.ac.uk/news-events/the-eu-and-the-us-two-different-approaches-to-ai-governance/
[1] https://cset.georgetown.edu/article/in-out-of-china-financial-support-for-ai-development/
[2] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
[3] https://twitter.com/ylecun/status/1662375684612685825?lang=en
[4] https://www.nist.gov/itl/ai-risk-management-framework
[5] https://www.scientificamerican.com/article/when-it-comes-to-ai-models-bigger-isnt-always-better/
[6] https://ai.meta.com/llama/
[7] https://hackernoon.com/the-next-era-of-ai-inside-the-breakthrough-gpt-4-model
[8] https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF