The EU Artificial Intelligence Act

Artificial Intelligence is becoming increasingly integrated into the everyday lives of both consumers and businesses and is used for everything from cancer treatment to translation tools and targeted advertising. Needless to say, clear regulation of AI is needed to make this new normal easier to navigate. This is why the AI Act was introduced in the European Union as a normative framework to manage and mitigate the risks of AI.

Highlights

The AI Act was passed in the European Parliament on 13 March 2024 and was approved by the EU Council on 21 May 2024.

The Act is expected to be published in the Official Journal of the EU this summer, and 20 days after being published it will enter into force.

Risk-based approach: the regulations will affect AI tools differently depending on each tool’s potential for causing harm.

Developers, distributors, vendors, importers, and users of AI will all be affected in one way or another.

Member states will be responsible for supervision and enforcement, and an AI Office will be created within the European Commission.

Penalties for non-compliance will be up to 35,000,000 EUR or 7% of a business’s total worldwide annual turnover – whichever is higher.

What is the background for the AI Act?

The AI Act seeks to regulate the use of artificial intelligence and ensure that AI systems are used in ways that respect European values and rules.

AI is indispensable for innovation and already plays a large role in many societies. A common European stance on AI aims to ensure trust and transparency in the public use of AI, foster innovation, and democratize the technology. To do so, the act sets boundaries for when AI can and cannot be applied, without limiting innovation and fair competition.

The initiative has a human-centered approach and will affect all industries and all types of artificial intelligence.

Who will the AI Act apply to?

The AI Act will have a broad reach, and the following can expect to be directly affected:

  • All businesses and organizations in and outside the EU, if the AI tool is available in the EU market, or if its use affects persons within the EU.
  • Developers, distributors, vendors, importers, and users of AI.
  • The following industries will be particularly affected:
    • Finance and insurance: When AI is used to determine insurance premiums or credit rating.
    • Health: When AI is used in medical devices or treatment systems.
    • Manufacturing: When AI is used in machinery, toys, gas and oil appliances, and elevators.
    • Transportation: Cars, airplanes, trains, ships, agricultural or forestry vehicles, two- or three-wheeled vehicles, and drones.
    • Critical infrastructure: Operations, safety, environment, pollution, and control systems.
    • The public sector: Systems used in casework regarding people’s vulnerabilities or to determine social benefits.
One of the expected knock-on effects may be that AI regulation is introduced outside the EU as well, much as data protection became a hot topic in non-EU countries after the GDPR.

Risk-based approach

The act is built on a risk-based approach: the risk associated with using a specific tool determines the level of requirements imposed on that tool.

Unacceptable risk (prohibited): Social scoring, exploiting an individual’s vulnerabilities, or biometric profiling of persons. Such tools may not be developed or marketed in the EU.

High risk (conformity assessment): Using AI to single out specific individuals in recruitment (CV scanning), criminal cases, or asylum processing. These tools concern critical infrastructure and fundamental rights.

Limited risk (transparency): Using AI as a personal assistant or for commercial purposes. These are the most commonly used tools in the EU and can be developed and used under current legislation without further legal obligations. However, as part of the transparency requirement, users must be made aware that they are interacting with AI.

Minimal risk (voluntary code of conduct): spam filters, video games, etc. The EU is expected to present a voluntary code of conduct later on.

AI Act risk levels

General-purpose AI models (e.g. generative AI) can serve as a foundation for other systems. Developers wishing to build on such models must have the necessary information to ensure that their system is safe and compliant with the AI Act. Therefore, the act compels providers of such models to make certain information available to downstream system providers.

Timeline for AI Act implementation

13 March 2024: The AI Act was passed in the European Parliament with 523 votes in favour, 46 against, and 49 abstentions.

21 May 2024: The AI Act was approved by the EU Council.

May/June/July 2024: The AI Act will be published in the Official Journal of the EU, and will enter into force 20 days later.

(+6 months) ~ January 2025: Bans will enter into force (unacceptable risk).

(+12 months) ~ June/July 2025: Requirements for general-purpose AI (GPAI/GenAI) will enter into force.

(+24 months) ~ June/July 2026: Requirements for high-risk systems listed in annex 3 will enter into force.

(+36 months) ~ June/July 2027: Requirements for high-risk systems embedded in products covered by EU harmonisation legislation (annex 1) will enter into force.
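
To make the staggered dates concrete, here is a minimal sketch in Python that derives each milestone from the entry-into-force date. The entry-into-force date below is an assumption (the act had not yet been published in the Official Journal when this was written); the month offsets are taken from the timeline above.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: pip install python-dateutil

# Assumed entry-into-force date (20 days after publication in the
# Official Journal); the real date depends on when the act is published.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Month offsets at which the staggered obligations begin to apply,
# as listed in the timeline above.
MILESTONES = {
    "Bans on unacceptable-risk AI": 6,
    "General-purpose AI (GPAI) requirements": 12,
    "High-risk systems, annex 3": 24,
    "High-risk systems in regulated products, annex 1": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: applies from {ENTRY_INTO_FORCE + relativedelta(months=months)}")
```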

Failure to comply

An AI Office will be created within the European Commission. This office will coordinate the development of governance systems together with the relevant national authorities.

The fines for non-compliance will be up to 35,000,000 EUR or 7% of the business’ total worldwide annual turnover for the preceding financial year – whichever is higher. For SMEs and startups, the lower of the two will constitute the fine. 

Again, the severity of the penalty will depend on the level of risk associated with the use of the technology. The penalty structure is tiered, and non-compliance with the articles on prohibited AI is punished most severely.
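
As an illustration of how the penalty ceiling works, here is a minimal sketch of the top tier (prohibited AI). The function name and the reduction to a single tier are illustrative assumptions; in practice the authorities set fines case by case within these ceilings.

```python
def max_fine_eur(annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling for the top penalty tier: 35,000,000 EUR or 7% of total
    worldwide annual turnover for the preceding financial year.
    Large businesses face whichever amount is higher; for SMEs and
    startups the lower of the two applies."""
    fixed_cap = 35_000_000
    turnover_cap = 0.07 * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with 1 billion EUR turnover: 7% (70M EUR) exceeds the fixed cap.
print(max_fine_eur(1_000_000_000))            # 70000000.0
# An SME with 10 million EUR turnover: the lower amount (700,000 EUR) applies.
print(max_fine_eur(10_000_000, is_sme=True))  # 700000.0
```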

AI Act Fines

Member states are responsible for control and enforcement at the national level. In Denmark, this will be handled by Digitaliseringsstyrelsen (the Danish Agency for Digital Government).