January 18, 2024

AI Regulatory Landscape in the US and the EU: Regulating the Unknown – AI, Cybersecurity, Space Group 

Authors: Oleg Abdurashitov and Caterina Panzetti - AI, Cyber Security & Space Team

Among other things, 2023 was a year of AI regulation in the EU, the US and well beyond. The fundamental challenge that policymakers face with AI is that they are, in essence, often dealing with unknowns resulting from the complexity of the technology itself and the breakneck speed of its development and adoption. Given the incessant debate on whether AI poses an existential risk to humanity that must be addressed at an early stage, or whether such existential risks are merely a smokescreen for the far more urgent and practical implications of widespread AI deployment for privacy, copyright, human rights and the labor market, setting regulatory priorities is challenging. Analyzing what regulators in the US and Europe chose to focus on, and how they framed their AI regulatory doctrines, may help to better understand not just their priorities but also the differences in their respective institutional, political and economic environments and their approaches to dealing with emerging technologies.

United States 

Despite the existential threat narrative peddled by the largest industry players, including at the Senate hearings, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence seems more grounded in current reality when assessing AI’s potential risks. While the Order attempts to address several critical security issues - from AI-enabled cyber operations to the threat of WMD (weapons of mass destruction) development - it can nonetheless be viewed as an effort to prepare the American economy and society for the age of AI use across numerous (if not all) sectors.

The Order’s approach builds on the unique strengths of the US economy and governance model, which relies heavily on the enormous capacity of the US tech sector as well as on the diverse environment of the nation’s civil society, where educational institutions, think tanks, and the legal system all play a role in shaping and implementing regulations. Probably the most critical aspect of the US AI regulatory environment is that the recent AI breakthroughs are funded by private capital, as opposed to state budgets as in China[1] or, largely, the EU[2]. On the one hand, this allows the US to retain its competitive edge in the AI race, with the so-called MAGMA[3] companies bearing a large share of the R&D costs of developing breakthrough commercial AI products. On the other, it puts the US government in a position where sectoral regulation must be balanced against the interests of commercial players and must enable, rather than control, technology development and adoption.

The Order implicitly acknowledges this complex interplay between commercial interests, the interests of the state, and American society’s demands. Section 2 (Policy and Principles) in particular broadly outlines the many aspects of AI development - from safety to impact on the workforce - that need to be balanced against each other. Given the enormity of such a task, the Order is short on specific details - and where such details are given, they often leave open the question of whether the Order will be able to address the long-term security implications of AI development.

For instance, in Section 4 the Order puts “dual-use foundation models” that may pose “a serious risk to security, national economic security, national public health or safety” under increased regulatory and technical scrutiny. The definition of such a model as one containing “at least tens of billions of parameters” covers the leading large language models (LLMs) behind ChatGPT and Google Bard, each of which has more than 100 billion parameters. The Order’s approach to regulating such powerful models relies largely on industry guidelines (such as the NIST AI Risk Management Framework[4]) developed in collaboration with the private sector players themselves, complemented by a series of government-funded testbeds for risk assessment.

It is important to note that while the commonly agreed approach to AI model training can be described as “bigger is better”, there is evidence that the output of models with a far smaller number of parameters (1.5B to 2.7B) can be somewhat comparable to that of larger models[5]. Additionally, while larger models are generally controlled by specific entities, open-source models (such as Meta’s Llama, available in versions of 7B, 13B, and 70B parameters[6]) may be used by a far wider range of actors to develop their own powerful models, potentially falling outside regulatory scrutiny and export control measures.

Moreover, the Order explicitly focuses on very large models as subject to regulatory restrictions, such as “[a] model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations”. That number is significantly higher than the rumoured estimate for the most advanced model on the market today - OpenAI’s GPT-4 - which currently stands at around 2.15 × 10^25 FLOPs (floating-point operations)[7]. If the field sustains its current pace of innovation, this threshold may well be crossed shortly. However, there is so far little evidence that such models would indeed represent “potential capabilities that could be used in malicious cyber-enabled activity”, since today’s malicious cyber operations require far less computing power, and the “cyber-enabled” definition may simply be too broad to be meaningful in a regulatory context.
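For scale, the gap between the Order’s reporting threshold and the rumoured GPT-4 figure can be checked with simple arithmetic (a back-of-the-envelope sketch; both numbers are the estimates cited above, not confirmed values):

```python
# Back-of-the-envelope comparison of the Executive Order's compute
# threshold with the rumoured GPT-4 training-compute estimate.
threshold_flops = 1e26      # Order's reporting threshold (10^26 operations)
gpt4_estimate = 2.15e25     # rumoured GPT-4 training compute (FLOPs)

ratio = threshold_flops / gpt4_estimate
print(f"Threshold is ~{ratio:.1f}x the rumoured GPT-4 estimate")  # ~4.7x
```

In other words, a model trained with roughly five times GPT-4’s rumoured compute would trip the threshold - a gap that the current pace of scaling could plausibly close within a few model generations.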

Of course, the proposed control regime for large ‘dual-use models’ does not necessarily fully address the issue of AI-powered malicious activity as it exists today. Instead, the Order directs federal agencies to study the best practices and guidelines of the critical infrastructure sectors to manage “AI-specific cybersecurity risks” and “develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards”, as well as to assess the risks of AI usage in critical infrastructure and government systems. From this point of view, the Order implicitly acknowledges that AI models are already widely deployed in both the private and public sectors, and calls for measures to discover and reduce the risks of such use.

Notably, the US DoD’s Data, Analytics, and Artificial Intelligence Adoption Strategy[8], released months earlier in June 2023, prioritizes speed of deployment over the careful risk assessment that the Presidential Order entails. To the military, the “[AI deployment] risks will be managed not by flawless forecasting, but by continuous deployment powered by campaigns of learning”. Moreover, the DoD calls for the mitigation of policy barriers through consensus building and closer relations with vendors, as well as with the AI community at large. Despite the risks of AI deployment being no less profound in the military sector than in civilian affairs, the US Government as a customer may well prefer speed of decision-making - and the many other benefits that AI can potentially bring to warfighting - over a more careful and balanced approach.


EU AI Act

2021 marked the beginning of a race towards normative authority in the field of Artificial Intelligence-enabled services. The European Union has been leading this chase by adopting an all-encompassing, risk-oriented approach to AI regulation, providing a broad regulatory framework to ensure security and the protection of fundamental rights.

The Commission has proposed a model founded on a decreasing scale of risk-based obligations to which providers must adhere in order to continue conducting business in the European Union, irrespective of their place of establishment[1]. Service providers that surpass the threshold of what the legislator has referred to as “high risk” face an outright ban under the AI Act and will not be allowed to distribute their services in the Union, as they are deemed to pose an unacceptable risk to the livelihood and safety of users. Just one tier below these prohibited services, providers labelled as “high risk” will have to comply with the most burdensome obligations. Notably, the proposed regulation will not apply to AI systems implemented for military purposes.

High-risk providers are associated with critical infrastructures that deeply affect users’ daily lives and could potentially implement discriminatory or harmful practices. The non-exhaustive list comprises providers supplying technologies used for transportation or employment purposes, migration management, the administration of justice, and law enforcement[2]. Providers of such services will be required to supply, among other requirements, adequate risk assessments; a high level of robustness and security, to ensure that “AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities”[3]; and detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance[4]. Although the Parliament has classified all biometric identification systems as high risk, Italy, Hungary and France have been pushing for a more lenient regime for the use of biometric identification instruments for surveillance purposes. The outcome of this debate will become clear at the moment of adoption of the Act.

Despite the praiseworthy effort of the European legislator in setting standards that prioritize fundamental rights and security for its citizens, reinforced by clear enforcement measures and fines directed at misbehaving providers, it is pivotal to highlight some challenges that regulating AI will pose for future legislative attempts.

Firstly, the main area of concern is the tug of war between maintaining a firm hold over high-risk service providers and, on the other hand, ensuring the smooth progress of AI innovation in the EU. We will likely witness a degree of lobbying from what are arguably the top-tier AI companies based in the US (mainly Meta, OpenAI, Google DeepMind, etc.), potentially watering down the original scope of the Act. This concern rapidly escalated into a concrete debate over the regulation of foundation models. “The foundation models, such as GPT-3.5 - the large language model that powers OpenAI’s ChatGPT - are trained on vast amounts of data and are able to carry out a wide range of tasks in a number of use cases. They are some of the most powerful, valuable and potentially risky AI systems in existence”[5]. While the proposed Act was keen on firmly regulating foundation models, the German, Italian and French governments jointly pushed to loosen the grip on these providers by proposing a self-regulation system, strongly criticising Brussels for over-regulating service providers’ conduct and hindering innovation in the Union[6]. The leaders of these countries also expressed deep concern that smaller European-based companies would not be able to keep up with the obligations imposed by the Act[7]. While the Parliament maintains a firm formal position on the impossibility of excluding foundation models, it is apparent that this opposition could potentially bring the legislative process to a stalemate.

A second criticism concerns the exclusion from the scope of the Act of AI instruments applied for military, national security and national defence purposes. Civil society organizations have expressed major concern that technologies which would theoretically be labelled as posing an unacceptable risk could be deployed if they fall under the umbrella of defending national security, and, additionally, that dual-use technology could be employed without any regulatory restriction[8].

Finally, the Act faces issues of coordination with the US Order. Although both legislative instruments are based on a risk-based approach, the Senate has been more hesitant to espouse the European hard line. As Alex Engler, associate in governance studies at The Brookings Institution, wrote for Stanford University: “There’s a growing disparity between the U.S. and the EU approach in regulating AI. The EU has moved forward on laws around data privacy, online platforms, and online e-commerce regulation, and more, while similar legislation is absent in the U.S”[9]. Furthermore, the US Order struggles to draw clear-cut enforcement measures against companies in breach of their obligations; the priority of the American legislator evidently lies mostly in maintaining international competitiveness[10]. Needless to say, the lack of homogeneous standards burdens both natural and legal persons, the latter being obliged to adapt their operations to the country in which their services are distributed.

Despite these shortcomings, the Act will hopefully be the kick-starter of a broader strategy able to compensate for the strict approach adopted in the regulation, thus attracting investments and levelling the competition with the US.


[1] https://www.lawfaremedia.org/article/a-comparative-perspective-on-ai-regulation

[2] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[4] Ibid.

[5] https://time.com/6338602/eu-ai-regulation-foundation-models/

[6] Ibid.

[7] Ibid.

[8] https://www.stopkillerrobots.org/news/what-are-the-ai-act-and-the-council-of-europe-convention/

[9] https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement

[10] https://www.oii.ox.ac.uk/news-events/the-eu-and-the-us-two-different-approaches-to-ai-governance/


[1] https://cset.georgetown.edu/article/in-out-of-china-financial-support-for-ai-development/

[2] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[3] https://twitter.com/ylecun/status/1662375684612685825?lang=en

[4] https://www.nist.gov/itl/ai-risk-management-framework

[5] https://www.scientificamerican.com/article/when-it-comes-to-ai-models-bigger-isnt-always-better/

[6] https://ai.meta.com/llama/

[7] https://hackernoon.com/the-next-era-of-ai-inside-the-breakthrough-gpt-4-model

[8] https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF

November 6, 2023

Israel’s Possible War Scenarios: From a Temporarily Restrained Conflict to a Prolonged All-out War

Author: Omri Brinner - Middle East Team

With the beginning of its ground invasion of the Gaza Strip, Israel is at a crossroads it hoped it would never reach. It can be argued that any route Israel takes at this historic juncture will lead to regional escalation, even if only in the long run. It is safe to assume, then, that even if there is no immediate backlash to the Israeli ground invasion, another front will sooner or later follow.

The most popular Israeli approach to responding to the October 7 Hamas attack is that the IDF’s infantry and armored brigades would invade the Gaza Strip, backed by heavy artillery and actionable intelligence and preceded by intense aerial bombardment (as is happening). Israel, it has been argued, must respond forcefully, or else it would signal to its enemies that it will refrain from war at all costs.

The ground invasion itself is meant to root out Hamas from the Gaza Strip and to disable its military capabilities. The other objective is the release of the 239 Israeli and foreign hostages, most of whom are civilians. Ideally – from Israel’s point of view – the IDF would achieve its goals in the Gaza Strip without having to fight on another front simultaneously, as its capacity to fight on multiple fronts at once is limited, and such a scenario would force Israel to change its objectives. However, this is the least likely scenario. Total victory against Hamas is not guaranteed – and even unlikely within the limits of military power – and the ground operation could last for months. What is more likely is that Israel will embark on a limited ground incursion (due to American pressure and the possibility of another front opening elsewhere), achieve some tactical victories against Hamas, and then force a ceasefire on more favorable terms – one that would lead to the release of some hostages (most likely women, children, and the elderly). However, the restrained war effort in Gaza will surely be followed by war and terror on other fronts, possibly simultaneously.

One ongoing front is in the West Bank and East Jerusalem, where Hamas, armed militias, and lone-wolf terrorists take up arms against Israeli civilians and security personnel. At the time of the Hamas attack on October 7, most of the IDF was stationed in the West Bank, demonstrating the area’s symbolic and strategic importance to Israel, which would have to react forcefully to any significant development there. It is in Hamas’ interest to start a new intifada in the West Bank, and possibly in Israeli cities, in order to destabilize and weaken Israel.

The other ongoing front, where Israel might face a full-scale war, is in the north. Hezbollah, with its arsenal of 150,000 projectiles (of short, medium, and long range) and an army of approximately 100,000 soldiers, most of them well-trained and some with battle experience, poses a strategic threat – even bigger than the one Hamas poses.

Thus far, Hezbollah – which is backed by Iran and serves as its most strategic proxy in the region – has been reacting to Israel’s limited ground invasion, albeit with restraint. While Hezbollah needs to show it is committed to the Palestinian cause, it aims to avoid an all-out war with Israel. 


According to Israeli calculations, an all-out war is not fully in Hezbollah’s interests, nor in Iran’s. According to this theory, both Iran and Hezbollah would rather open an all-out war with Israel only once Iran has secured usable nuclear military capabilities, which, in the long run, seems inevitable. This means that, from Israel’s point of view – and contrary to the best-case scenario described above – it would be better for Israel to engage with Hezbollah and Iran before the latter becomes a nuclear power.

Israel, then, might choose to attack Hezbollah and either drag it into the war – thereby eliminating the element of surprise in Hezbollah’s reaction – or, if Hezbollah chooses not to retaliate, re-establish its deterrence in the north. While this may seem like an act of self-harm, the Israeli public would view a Hezbollah surprise attack as another failure of the government, the IDF, the Shin Bet and the Mossad. In a way, then, these institutions hope to project to the public that Israel is on the front foot, and that if a war with Hezbollah and Iran is inevitable in the long run, then better now than later. It is important to note that while Israel calculates that the two Shia powers would rather avoid an all-out war prior to Iran’s nuclearization, Israel’s working assumption that Hamas was deterred and would opt to avoid an armed conflict fell apart with the October 7 attack. Therefore, there are no guarantees that any theory that held before the attack is still relevant.

Would Iran and Hezbollah wait peacefully for an Israeli strike, or for Israel to finish its fighting in Gaza? Unlikely. From their point of view, Iran and Hezbollah are happy to let Israel keep guessing whether they will join the war. From Israel’s standpoint, it cannot afford to be surprised again. While a ground invasion from the north, like the one from the Gaza Strip on October 7, is less likely, an extensive missile attack on central Israel would be just as bad.

But initiating a war with Hezbollah – and Iran – would force the US into the conflict, as it would be extremely challenging – verging on impossible – for Israel to conduct an all-out war with Hamas, Hezbollah, and Iran simultaneously. At the same time, if US forces end up fighting alongside Israel, it is likely that other Iranian allies would engage US forces elsewhere in the region (such as in Yemen, Iraq, and Syria). While a recent poll shows that the vast majority of Americans oppose US military involvement in the Middle East, the US would feel it has to protect its allies and interests in the region.

It seems, then, that the region faces a long period – whether months or years – of armed conflict.

February 2, 2022

The Ukrainian Crisis which Washington wants Resolved Quickly

By: Francesco Cirillo


With the letter delivered to Moscow in response to the security guarantees put forward by the Russian Federation, the difficult task begins of keeping open a channel aimed at reducing tensions on the Russian-Ukrainian border.

Moscow now needs time for Russian President Vladimir Putin to carefully analyze all the documents received from both the United States and NATO; however, Russian Foreign Minister Lavrov himself has said that both Washington and the Atlantic Alliance rejected Russia’s request to suspend NATO’s eastward expansion.

Both NATO Secretary General Stoltenberg and US Secretary of State Antony Blinken have stated that they are ready for dialogue with the Kremlin, which so far has given no sign of reducing its forces (according to some reports, almost 100,000 men plus armored vehicles) near the border with Ukraine. To increase the pressure on the Russian leadership and Putin, Blinken stated that in the event of a Russian invasion, Washington would implement a strategy with Berlin to block the completion of the Nord Stream 2 gas pipeline to Europe. The Chinese Foreign Ministry has asked the United States to take Russian concerns seriously.

The US dilemma in the Ukrainian crisis concerns the desire to resolve it quickly to avoid bogging down the other dossiers that the Biden administration considers vital, first and foremost the domestic economic situation and the pandemic, followed by the Indo-Pacific and the confrontation with China. It is vital for Washington to resolve the issue in Europe while avoiding direct engagement, leaving the field to its European allies in the EU and NATO. In recent days, Jens Stoltenberg declared that NATO will not send alliance troops to Kiev, a statement echoed by the White House spokesman, who explicitly stated that the United States does not intend to send troops to Ukraine.

Kiev thus finds itself caught between the need to prepare for a possible Russian invasion and only informal and diplomatic support, with economic and military aid coming from the Baltic countries, Poland and the UK. Meanwhile, Moscow is keeping its units near the Ukrainian border, and the US has put 8,500 troops on alert, ready to be deployed to NATO allied countries. Another burden concerns the possible negotiations between Washington and Moscow on the "security guarantees" that the latter expects to discuss. The Kremlin aims to gain recognition of its spheres of influence over neighboring countries and to block the entry of Moldova, Georgia and Ukraine into the Atlantic Alliance. On the opposite front, both Washington and NATO, in the documents delivered to Moscow, ask the Russians to start a diplomatic path leading to a discussion of Russian requests and a possible de-escalation, while rejecting the request to suspend expansion towards Eastern Europe.

The dialogue between Moscow and Washington, NATO and the EU continues, but with 100,000 Russian Federation troops close to the Ukrainian borders.

October 8, 2021

South Africa’s strategies in the international arena: is it an “atypical” African country?

The “International System & World Order - Africa” team interviews Riaan Eksteen from the University of Johannesburg. Mr Eksteen was a member of the South African Foreign Service for 27 years and served at the South African embassy in Washington, D.C. He was also Ambassador and Head of Mission at the UN in New York, and in Namibia, Geneva, and Turkey. Riaan Eksteen talks about the international role of South Africa in the BRICS, with the new US administration, and in the African Union.

Interviewers: Michele Tallarini and Rebecca Pedemonte

April 4, 2021

US-China Geopolitical Competition In Indo-Pacific and Asia-Pacific

Dr Zeno Leoni, ITSS-Verona Executive Director, discusses the dynamics and implications of US-China geopolitical tensions in Asia.