EU AI Act: what it is and what it requires
Expected to come into force in 2024, it will be the world’s first Artificial Intelligence regulation
Regulating artificial intelligence applications is a fundamental step towards the responsible and safe use of this powerful technology.
In recent years, Artificial Intelligence (AI) has taken a central role in the digital transformation of our world, influencing a wide range of sectors, from industry to healthcare, from mobility to finance. Inevitably, this rapid evolution has made regulating the technology increasingly necessary and urgent: by its nature (the ability to make decisions and to learn autonomously), AI carries significant and sensitive ethical implications.
Since April 2021, the EU has therefore been working on the so-called AI Act, the first regulatory framework for Artificial Intelligence, with final approval expected at the end of 2023 and entry into force between 2024 and 2025.
What Is the AI Act and Why Is It Important
The objective of the AI Act is to ensure that AI systems used within the European Union are fully aligned with EU rights and values, guaranteeing human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.
AI applications are increasingly capable of real-time analysis and can “decide” which actions to take based on available data. It is precisely this ability to decide and learn that necessitates the ethical evaluations mentioned at the beginning of the article. If an AI application makes a wrong decision or one that unfairly benefits or harms a human being, who will be held accountable? This is why, with the proliferation of this technology, the EU decided to work on what will be the first AI regulation in the world.
It’s a very complex task: the rules governing the use of these technologies must protect citizens’ privacy and safety without unduly limiting the room to experiment with new applications.
What the AI Act Envisions
The AI Regulation establishes four risk levels into which AI applications will be categorized, each subject to a corresponding degree of oversight; a minimal code sketch of this tiering follows the four categories below.
Unacceptable risk
Applications using subliminal techniques or social scoring systems employed by public authorities are strictly prohibited. Additionally, real-time remote biometric identification systems used by law enforcement in publicly accessible spaces are banned.
High risk
These include applications related to transportation, education, employment, and welfare, among others. Before placing a high-risk AI system on the market or into service in the EU, companies must conduct a preliminary “conformity assessment” and meet a long list of requirements to ensure the system’s safety. As a practical measure, the regulation also mandates that the European Commission create and maintain a publicly accessible database where providers are required to supply information about their high-risk AI systems, ensuring transparency for all stakeholders.
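To make the registration requirement more concrete, here is a minimal sketch of the kind of record a provider might assemble for the EU database. The field names, the provider, and the system are hypothetical illustrations; the Act itself defines the actual information to be supplied.

```python
# Hypothetical sketch of the metadata a provider might supply to the EU
# database for a high-risk system. Field names, provider, and system are
# illustrative only; the Act defines the real requirements.
high_risk_system_record = {
    "provider": "Example Analytics S.r.l.",          # hypothetical provider
    "system_name": "HiringScreen",                   # hypothetical system
    "intended_purpose": "CV pre-screening for recruitment",
    "risk_category": "high",                         # employment-related use
    "conformity_assessment_done": True,              # must precede market placement
    "eu_declaration_of_conformity": "DOC-2023-001",  # illustrative reference
}

# Before placing the system on the EU market, the provider would verify
# that the conformity assessment has been completed.
assert high_risk_system_record["conformity_assessment_done"], \
    "High-risk AI systems require a conformity assessment before market placement"
```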
Limited risk
This category covers AI systems subject to specific transparency obligations. For example, an individual interacting with a chatbot must be informed that they are interacting with a machine so they can decide whether to proceed (or request to speak with a human), as in the sketch below.
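A minimal sketch of what that disclosure could look like in practice; the function names and message wording are illustrative assumptions, not text prescribed by the Act.

```python
# Sketch of the "limited risk" transparency obligation: the chatbot
# discloses up front that the user is talking to a machine and offers
# a human handover. All names and wording are illustrative.
def start_chat_session() -> str:
    # Disclosure shown before any automated conversation begins.
    return (
        "You are chatting with an automated assistant, not a human. "
        "Type 'human' at any time to be transferred to an operator."
    )

def handle_message(user_input: str) -> str:
    # Honor the user's choice to opt out of the automated interaction.
    if user_input.strip().lower() == "human":
        return "Transferring you to a human operator..."
    return "Automated reply: how can I help you?"

print(start_chat_session())
print(handle_message("human"))
```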
Minimal risk
These applications are already widely used and constitute the majority of AI systems we interact with today. Examples include spam filters, AI-enabled video games, and inventory management systems.
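Putting the four categories together, the sketch referenced earlier models the tiering as a simple data structure. The tier names follow the Act; the example systems and one-line obligation summaries are simplified illustrations, not legal text.

```python
from enum import Enum

# Illustrative model of the four risk tiers and the kind of obligation
# attached to each. Obligation summaries are simplified for illustration.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment + registration in the EU database"
    LIMITED = "transparency obligations (e.g. chatbot disclosure)"
    MINIMAL = "no new obligations"

# Hypothetical example systems mapped to tiers, following the article's
# own categories above.
example_systems = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in example_systems.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```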
Additionally, the AI Act establishes that primary responsibility will lie with the “providers” of AI systems; however, certain responsibilities will also be assigned to distributors, importers, users, and other third parties, impacting the entire ecosystem.
AI in Italy: Some Data
Meanwhile, AI applications in Italy continue to spread: the Artificial Intelligence Observatory of the Politecnico di Milano, in its latest report (February 2023), estimated that the Italian AI market has reached a value of 500 million euros, up 32% on 2021. In addition, 61% of large enterprises in Italy have already initiated an AI project, while the figure is only 15% among SMEs, although growth is expected over the next 24 months.
Regulating AI applications represents a fundamental step towards the responsible and safe use of this powerful technology, shaping a future where artificial intelligence is a reliable ally that respects human values. This contributes to creating a safer and more ethical digital environment for everyone. The key lies in balancing technological innovation with the protection of human rights, an objective that requires constant commitment from all stakeholders involved.