The AI Act – a summary

At the end of 2023, a political agreement was reached on the most ambitious regulation of artificial intelligence (“AI”) to date – the EU AI Act (“AI Act”). The Parliament has voted in favor of the Act, and only formalities remain before it is published in the Official Journal of the European Union. Below is a description of some important parts of the AI Act and what organizations using or planning to use AI technology should consider. The text is a simplified summary of the rules of the AI Act; it does not fully follow the terminology of the Act and should therefore not be read as a literal account of its provisions.

Risk-based structure

The AI Act has a risk-based structure: some AI applications are banned entirely; applications considered high risk are subject to a comprehensive set of requirements; and special rules apply to general-purpose AI models that can perform a wide variety of tasks, such as OpenAI’s ChatGPT and Google’s Gemini (formerly Bard). In addition, AI applications considered lower risk are subject only to limited requirements, such as providing certain information.

Ban on certain AI applications

Article 5 of the AI Act lists a number of prohibited practices, including:

  • AI systems that use subliminal techniques to materially distort a person’s behavior.
  • AI systems that exploit a person’s vulnerabilities (such as age or disability) to materially distort their behavior.
  • Biometric categorization of individuals based on sensitive characteristics, and social scoring.
  • Real-time remote biometric identification of persons in publicly accessible spaces.

There are significant exceptions to the prohibitions in some cases, such as for law enforcement purposes.

High-risk applications

The AI Act addresses many AI applications that are considered high risk and imposes extensive requirements that must be complied with before such an AI system is placed on the market and put into use. It is important to note that applications used only for testing and research are not covered by the requirements.

High-risk applications are defined in Annexes I and III of the AI Act. Annex I covers AI systems that are, or are safety components of, products regulated by the EU product safety legislation listed in that annex; for these, the AI Act’s requirements apply alongside the existing product rules. Annex III lists several other applications that are considered high risk, covering certain uses in the following sectors:

  • Biometric identification and categorization of natural persons
  • Critical infrastructure
  • Education and training
  • Working life
  • Access to and enjoyment of essential private services and essential public services and benefits (including eligibility for benefits and credit checks)
  • Law enforcement
  • Migration and border control
  • Administration of justice and democratic processes

The requirements apply if the use is covered by Annex III and at the same time can be considered high risk. If the assessment is that an application is not high risk even though the area of use is covered by Annex III, this assessment must be documented, and the system must still be registered. Rules on this are found in Article 6 of the AI Act.

The requirements for high-risk applications before they can be put into service include and relate to, inter alia:

  • Implementation and use of a risk management system
  • Data use and data quality
  • Technical documentation
  • Keeping of records
  • Transparency and information requirements
  • Human oversight
  • Accuracy, robustness and information security
  • CE marking, registration and attestation of conformity

General-purpose models / Generative AI

Given the development of generative AI such as OpenAI’s ChatGPT and Google’s Bard/Gemini over the past two years, additional regulation for general-purpose models has been added to the AI Act. The Act focuses on general-purpose models that can perform a wide range of different types of tasks. The requirements for general-purpose models depend on categorization. For models considered to pose systemic risks that may have a significant impact on society, the requirements are more extensive. A model is presumed to pose such systemic risks if the cumulative amount of computation used for its training exceeds 10^25 floating point operations (“FLOPs”) – a measure of total training compute, not of processing speed. General-purpose models released under open-source licenses are exempted from some of the requirements that otherwise apply to general-purpose models.
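
Because the 10^25 figure refers to the total amount of computation used in training, whether a model crosses the threshold can be roughly estimated from its size and training data. A minimal sketch in Python, assuming the widely used rule of thumb that training compute is approximately 6 × parameters × training tokens (the heuristic and the example model are illustrative, not part of the Act):

```python
# Rough estimate of cumulative training compute using the common scaling
# heuristic C ≈ 6 * N * D (N = model parameters, D = training tokens).
# The heuristic is an industry rule of thumb, not something the AI Act defines.

SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate the total floating point operations used to train a model."""
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")            # ~6.30e+24
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)  # False
```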

The requirements for general-purpose models include the following:

  • Technical documentation
  • Information to those who intend to integrate the general-purpose model into other AI systems
  • A policy to comply with EU copyright law when collecting and using training data
  • A sufficiently detailed, publicly available summary of the content used to train the model

For general-purpose models with systemic risks, the following requirements are added:

  • Continuous evaluation of the model
  • Cyber security
  • Incident reporting
  • Additional technical documentation requirements

Other obligations and provisions under the AI Act

The regulatory requirements of the AI Act are primarily related to prohibited applications, high-risk applications and general-purpose models. However, in some situations, there are disclosure requirements for AI systems that do not fall into any of these categories, such as chatbots, deepfakes and other generative AI models (AI-generated images, speech, video and text), to ensure that users understand the extent to which they are interacting with AI systems and AI-generated material.

The Act also includes some provisions to facilitate compliance and innovation for start-ups and other smaller organizations, through opportunities for testing and the use of regulatory sandboxes in cooperation with regulators.

Actors responsible under the AI Act

The most extensive responsibilities under the AI Act fall on AI system providers, i.e. actors that develop AI systems and make them available on the market in the EU. Providers have the main responsibility for complying with the requirements of the Act regarding high-risk applications and the obligations under the general-purpose model provisions. For high-risk applications, importers and distributors also have certain responsibilities, such as verifying that the provider has fulfilled its obligations. Organizations deploying AI systems also have responsibilities for their use of AI applications, such as ensuring adequate human oversight and keeping logs generated by the AI system. Any actor (regardless of category) that puts its brand on an AI system, modifies the system in a significant way, or changes its intended use, may assume the far more extensive responsibilities that providers have for high-risk applications.

Sanctions, supervision and supervisory authorities

The AI Act has a penalty system similar to that often used by the EU in recent regulations, such as the GDPR. The following maximum penalties – in each case the higher of the fixed amount and the share of turnover – may apply to different types of violations (a worked example follows the list):

  • Prohibited applications – up to €35 million or 7% of global annual turnover
  • High-risk applications and general-purpose models – up to €15 million or 3% of global annual turnover
  • Supply of incorrect or misleading information to authorities – up to €7.5 million or 1% of global annual turnover
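
Because each cap is the higher of a fixed amount and a share of turnover, the applicable maximum depends on the size of the undertaking. A minimal sketch of the computation (the tier labels are illustrative, and the special rule that the lower amount applies to SMEs is left out):

```python
# Maximum administrative fines under the AI Act's tiered penalty system.
# Each tier caps the fine at whichever is HIGHER: a fixed euro amount or a
# share of total worldwide annual turnover. (For SMEs the lower amount
# applies instead, which this sketch ignores.)

PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # (euro cap, turnover share)
    "high_risk_and_gpai":    (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a violation tier."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a company with EUR 2 billion in global annual turnover.
print(f"EUR {max_fine('prohibited_practices', 2e9):,.0f}")  # EUR 140,000,000
```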

To monitor compliance with the Act and to work on other AI issues at EU level, the European Commission has recently launched the EU AI Office. In addition to the AI Office, there will be national supervisory authorities, inspection and certification bodies working on compliance with the Act. It is not yet clear who these will be.

Adoption and implementation time

As mentioned above, the AI Act has been passed by the European Parliament. The Act enters into force 20 days after its publication in the Official Journal of the European Union. After entry into force, varying transition periods apply before the AI Act becomes fully applicable. The time from entry into force to the required application of the rules is as follows (a small date calculation follows the list):

  • Prohibited applications – 6 months (around November 2024)
  • General-purpose models – 12 months (around May 2025)
  • High-risk applications according to Annex III (see above) – 24 months (around May 2026)
  • High-risk applications according to Annex I and product safety legislation (see above) – 36 months (around May 2027)
  • Other requirements (information requirements) – 24 months (around May 2026)
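
The application dates follow mechanically from the publication date: entry into force 20 days after publication, then the transition periods above. A minimal sketch, assuming a hypothetical publication date of 1 May 2024 (which reproduces the approximate dates in the list):

```python
# Derive the application dates of the AI Act's rules from a publication date
# in the Official Journal. The publication date below is an assumption for
# illustration; entry into force is 20 days after publication.

from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month clamping ignored)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

publication = date(2024, 5, 1)                       # assumed, for illustration
entry_into_force = publication + timedelta(days=20)  # 2024-05-21

transition_months = {
    "Prohibited applications": 6,
    "General-purpose models": 12,
    "High-risk (Annex III) and information requirements": 24,
    "High-risk (Annex I / product safety)": 36,
}
for rules, months in transition_months.items():
    print(f"{rules}: applicable from {add_months(entry_into_force, months)}")
```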

What should we consider as an organization?

For some applications, there is still a long way to go before the provisions of the AI Act apply, but to be prepared, it is wise to start familiarizing yourself with the structure of the Act now and to consider how your organization will be affected:

  • Establish a routine so that every new AI system introduced or developed in the organization is checked against a regulatory checklist matching the requirements of the AI Act (a minimal sketch of such a checklist follows this list).
  • Is our use covered by the definition of AI in the AI Act?
  • If so, does our AI usage seem to fall under any of the categories of the Act, such as prohibited or high-risk applications?
  • What kind of actor are we most likely to be (provider, deployer, distributor, etc.)?
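
To operationalize the triage above, the questions can be captured in a simple intake record. A minimal sketch in Python; the field names, category labels and example system are illustrative assumptions, not a definitive mapping of the Act:

```python
# A minimal intake checklist mirroring the triage questions above.
# The category and role labels are illustrative, not terms defined by the Act.

from dataclasses import dataclass, field

@dataclass
class AIActIntake:
    system_name: str
    is_ai_system: bool   # covered by the Act's definition of an AI system?
    category: str        # "prohibited" | "high_risk" | "gpai" | "limited" | "minimal"
    actor_role: str      # "provider" | "deployer" | "importer" | "distributor"
    notes: list[str] = field(default_factory=list)

    def needs_in_depth_analysis(self) -> bool:
        """Flag systems that warrant an in-depth applicability analysis."""
        return self.is_ai_system and self.category in {"prohibited", "high_risk", "gpai"}

# Example: a CV-screening tool used in recruitment (a working-life use under
# Annex III) by an organization that merely deploys it.
intake = AIActIntake("cv-screening-tool", True, "high_risk", "deployer")
print(intake.needs_in_depth_analysis())  # True
```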

If it seems likely that your organization is within the scope of the AI Act, an in-depth analysis of its applicability should be carried out. If you conclude that your organization is covered, a compliance project should be launched.

Do not hesitate to contact us if you have any questions about the AI Act, applicability assessments and compliance projects.

Specialists in the area

Johan Hübner – Partner / Advokat, Stockholm

Agne Lindberg – Partner / Advokat, Stockholm

John Neway Herrman – Senior Associate / Advokat, Stockholm

Linus Larsén – Senior Associate / Advokat, Stockholm

Erik Ålander – Senior Associate / Advokat, Stockholm