Challenges and Effects of the AI Act


1. Introducing the AI Act

Artificial Intelligence (AI) is utilised within various aspects of our lives and is developing at a rapid pace. That is why in April 2021 the European Commission proposed a regulatory framework for the creation and employment of AI which was then adopted and came into force in August 2024 [1]. The European Parliament explained that the goal of Regulation 2024/1689, otherwise known as the Artificial Intelligence (AI) Act, is to ensure “that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly” [2].

2. Characteristics

The AI Act aims to regulate AI systems. They are defined in Article 3 as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” [3].

Since the focus of the Act is to ensure environmental and consumer protection, it addresses those who are involved in the creation and distribution of AI systems – “providers, deployers, importers, distributors, and product manufacturers” [4]. The Act also sets out rules for “any natural or legal person, including a public authority, agency or other body, using an AI system under its authority” [5]. It is clear that the Act is not intended to regulate everyday consumer use of AI systems, but rather their placement on the market by certain entities.

The AI Act divides AI systems into four types, depending on the amount of risk they pose – unacceptable risk, high-risk, limited risk and minimal risk [6]. Each of the four types is regulated differently. Unacceptable risk AI systems involve certain “prohibited practices” [7] and are “harmful, abusive and in contradiction with EU values” [8]; the EU does not allow the distribution of such systems. High-risk AI systems “are deemed a high risk to the health and safety or fundamental rights of individuals” [9] and are regulated very strictly [10]. Limited risk AI systems are those which need to be regulated to ensure their transparency [11]. Minimal risk AI systems, examples of which are “AI-enabled recommender systems and spam filters” [12], do not have any regulations imposed on them by the Act.

The AI Act also regulates general-purpose AI (GPAI) models, defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market” [13]. Particularly strict regulations are provided for general-purpose AI models which have “systemic risk” [14].

3. Regulation Challenges

There was concern that the AI Act would not satisfactorily regulate GPAI models due to their “versatile and often unpredictable applications” [15], meaning that it might be difficult to cover all risks associated with GPAI models and regulate them [16]. General-purpose AI models, an example of which is ChatGPT, are very widely used, and if they are not regulated adequately, the legislation will fail in its objective of ensuring the various protections it promises.

However, a General-Purpose AI Code of Practice is currently being developed, which aims to provide holistic regulation of GPAI models. The GPAI Code of Practice is being drafted by “[i]ndependent experts” [17] with the participation of many stakeholders, to ensure the comprehensive regulation of GPAI models [18]. Some of the focus points of the Code of Practice will be “risk identification” [19], safety, proportionality, transparency, and copyright [20].

It is vital for the GPAI Code of Practice to successfully supplement the AI Act, so GPAI models are suitably regulated, fulfilling the goals of the Act.

4. Innovation Implications

Another potential issue which might be caused by the AI Act is the impeding of innovation through “overregulation” [21]. Economist Henrique Schneider identifies different ways in which this could happen [22]. He explains that having to ensure compliance with a variety of regulations might cause delays in the creation of AI, resulting in a slower process of establishing new developments. Moreover, accommodating regulations might result in more expenses for companies – financial concerns might deter smaller entities from engaging in AI development. Small companies might also not want to engage in AI creation due to “fear of non-compliance and the potential financial penalties” [23]. This would likely lead to the dominance of larger companies and potential competition law problems. Furthermore, having fewer companies engaged in the creation of AI would likely result in fewer products, so the EU market for AI might develop more slowly than that of other jurisdictions. In addition, if AI products from other jurisdictions do not comply with EU regulations, then they cannot become part of the EU market, depriving it of innovations [24].

Despite the potential issues caused by “overregulation” [25], it is clear that regulation of AI is imperative due to a variety of dangers, among which are environmental, security, privacy and bias concerns [26]. Regulation of AI is necessary because it “promotes better governance and safer use of AI” [27], and having legal certainty would allow “companies to pursue AI-driven innovation with confidence” [28].

It remains to be seen what effect the AI Act and its supplementary legislation will have on AI innovation and the EU market – whether the newly implemented regulations will benefit AI development and deployment, or whether the potential negative impact of the Act will predominate.

5. Global Implications

It has been established that if companies want to have their product within the EU market, they will have to comply with the regulations within the AI Act. This is known as the “Brussels effect” [29], more specifically its “de facto” [30] form [31]. It is likely that companies from other jurisdictions will want to be a part of the EU market, as it is “one of the largest (perhaps the largest) market for AI systems and outputs” [32]. This would thus prompt companies to develop products according to the requirements in the AI Act [33].

There is also the possibility that the AI Act will have a “de jure” [34] effect within other jurisdictions which, following EU legislation, decide to take a similar regulatory approach to AI [35]. It appears that even though the AI Act is EU legislation, it could have strong global effects on the way companies develop AI.

6. Conclusion

A regulatory framework is undeniably a welcome development within the AI sector, considering the rapid growth of AI and the potential risks it brings. The upcoming years will reveal the full extent of the positive and negative aspects of the AI Act, as well as the global impact it will have on the creation and use of AI.

References

[1] European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament, 18 June 2024) <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 18 January 2025.

[2] ibid.

[3] Regulation (EU) 2024/1689 (AI Act) [2024] OJ L1/144, art 3(1).

[4] Tim Hickman and others, ‘Long awaited EU AI Act becomes law after publication in the EU’s Official Journal’ (White & Case, 16 July 2024) <https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal> accessed 18 January 2025.

[5] AI Act (n 3).

[6] Claudio Novelli and others, ‘Taking AI risks seriously: a new assessment model for the AI Act’ (Springer Nature Link, 12 July 2023) <https://link.springer.com/article/10.1007/s00146-023-01723-z> accessed 18 January 2025.

[7] AI Act (n 3) art 5.

[8] Tim Hickman (n 4).

[9] Latham & Watkins, ‘EU AI Act: Navigating a Brave New World’ (July 2024) <https://www.lw.com/en/admin/upload/SiteAttachments/EU-AI-Act-Navigating-a-Brave-New-World.pdf> accessed 18 January 2025.

[10] ibid.

[11] ibid.

[12] European Commission, ‘European Artificial Intelligence Act comes into force’ (European Commission, 1 August 2024) <https://ec.europa.eu/commission/presscorner/detail/ov/ip_24_4123> accessed 18 January 2025.

[13] AI Act (n 3).

[14] ibid art 55.

[15] Claudio Novelli (n 6).

[16] Henrique Schneider, ‘The AI Act: The EU’s serial digital overregulation’ (GIS Reports Online, 10 October 2024) <https://www.gisreportsonline.com/r/ai-act-eu-regulation-innovation/> accessed 18 January 2025.

[17] European Commission, ‘First Draft of the General-Purpose AI Code of Practice published, written by independent experts’ (European Commission, 14 November 2024) <https://digital-strategy.ec.europa.eu/en/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts> accessed 18 January 2025.

[18] ibid.

[19] Laura Caroli, ‘The EU Code of Practice for General-Purpose AI: Key Takeaways from the First Draft’ (Center for Strategic & International Studies, 21 November 2024) <https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft> accessed 18 January 2025.

[20] ibid.

[21] Peter Borner, ‘EU AI Regulation – A Balancing Act Between Innovation and Overregulation’ (The Data Privacy Group, 25 October 2024) <https://thedataprivacygroup.com/blog/eu-ai-regulation-a-balancing-act-between-innovation-and-overregulation/> accessed 18 January 2025.

[22] Henrique Schneider (n 16).

[23] ibid.

[24] ibid.

[25] Peter Borner (n 21).

[26] Tableau, ‘What are the risks of artificial intelligence (AI)?’ (Tableau) <https://www.tableau.com/data-insights/ai/risks> accessed 18 January 2025.

[27] Thierry Kellerhals, ‘Is AI regulation threatening innovation and ChatGPT?’ (KPMG, 29 April 2024) <https://kpmg.com/ch/en/insights/artificial-intelligence/eu-ai-act-challenge.html> accessed 18 January 2025.

[28] ibid.

[29] Juliette Faivre, ‘The AI Act: Towards Global Effects?’ [2023] SSRN 1, 6.

[30] ibid 6.

[31] ibid 6.

[32] Graham Greenleaf, ‘EU AI Act: Brussels Effect(s) or a Race to the Bottom?’ [2024] 1, 2.

[33] ibid 2.

[34] Juliette Faivre (n 29) 6.

[35] ibid 6.
