AI Act: European Council approves the political agreement

On 9 December 2023, the European Parliament and Council reached a political agreement on the terms of the AI Act (previously reported by Wiggin). The Council formally approved that agreement on 2 February 2024, releasing the final compromise text. This version reveals further details of the extensive changes the co-legislators have made to the Commission’s original proposal published in 2021, some of which are highlighted below.

Definition of AI system

The Act regulates “AI systems”, defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The Recitals to the Act clarify that rules-based systems are excluded. Certain exclusions apply to providers that make AI systems available for research or non-professional activities, and to free and open-source AI.

Prohibited AI

Prohibited AI systems include:

- AI using subliminal or manipulative techniques, or exploiting vulnerabilities (e.g. due to age or disability), that materially distorts a person’s behaviour, causing (or being likely to cause) significant harm;
- biometric categorisation systems inferring sensitive information (e.g. race, political opinions or trade union membership), with an exception for law enforcement;
- social scoring resulting in detrimental or unfavourable treatment;
- assessing the risk of an individual committing a criminal offence solely on the basis of profiling or personal traits and characteristics (save where there is concrete evidence against the relevant person);
- the creation of facial recognition databases through untargeted scraping of the internet or CCTV footage; and
- emotion recognition systems used in the workplace or educational institutions (unless used for safety reasons).

The use of real-time remote biometric identification systems in public places for law enforcement remains prohibited save where strictly necessary to search for “specific” victims of abduction, human trafficking or sexual exploitation, or for missing persons; in cases of an imminent threat to life or of a terrorist attack; or to search for criminal suspects, in each case subject to specific safeguards.

High-risk AI

There are two categories of high-risk AI systems: those intended to be used as a safety component in the regulated products listed in Annex II (including machinery, toys and lifts), and AI systems used in the particular areas listed in Annex III, including non-banned biometric identification, categorisation and emotion recognition, critical infrastructure, education and training, employment, and law enforcement. Exceptions exist for Annex III AI systems that do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. The Commission must provide further guidelines on the classification of AI systems as high-risk no later than 18 months after the Act comes into force.

High-risk AI systems are subject to obligations relating to transparency, risk management, accountability, data governance, human oversight, accuracy, robustness and cybersecurity. There is also an obligation on certain deployers to conduct a fundamental rights impact assessment. Conformity assessment procedures, including certification and CE marking, will apply to Annex II AI systems.

Limited-risk AI

The disclosure requirements for AI systems intended to interact with natural persons, and for permitted emotion recognition and biometric categorisation systems, remain in the text. Further disclosure requirements apply to AI systems generating synthetic audio, image, video or text content, to AI-generated text relating to matters of public interest, and to deployers of AI systems that generate or manipulate image, audio or video content constituting a “deep fake”, i.e. AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful. Where the content forms part of an evidently artistic work or programme, the obligation is limited to disclosing the existence of the generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

General purpose AI

A general purpose AI system (“GPAI”) is defined as an “AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”. Providers of the underlying general purpose AI models are subject to obligations relating to the provision of technical documentation (e.g. in relation to training and testing), information for downstream AI providers who want to integrate the model into their own AI systems, and information about training data. GPAI models posing systemic risks are subject to further obligations, for example in relation to model evaluations, risk assessment and mitigation, response and reporting procedures, and cybersecurity.

The Recitals make it clear that the use of copyright-protected content requires the consent of the rightsholder unless an exception applies. Article 4 of the 2019 Copyright Directive permits text and data mining of lawfully accessible works unless rightsholders have expressly reserved their rights in an appropriate manner (e.g. in machine-readable form for online content). Where rights have been reserved, anyone wishing to copy or extract the relevant works will therefore need a licence. The text provides that a copyright policy must be put in place to ensure that an opt-out from the text and data mining exception can be identified and respected, and there is an obligation to publish a detailed summary of the content used to train the GPAI.
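By way of illustration only: one emerging convention for expressing a rights reservation in machine-readable form for online content is the W3C TDM Reservation Protocol. The minimal Python sketch below assumes that protocol’s “tdm-reservation” response header and /.well-known/tdmrep.json file layout; the AI Act itself does not prescribe any particular mechanism, and the function name is our own.

```python
import json
import urllib.request
from urllib.parse import urlparse

def tdm_rights_reserved(url: str) -> bool:
    """Best-effort check for a machine-readable text and data mining
    opt-out, following the conventions of the W3C TDM Reservation
    Protocol (an emerging convention, not something the AI Act
    mandates): a 'tdm-reservation' response header on the resource,
    or a site-wide /.well-known/tdmrep.json file."""
    # 1. Check the resource's own response headers.
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            if resp.headers.get("tdm-reservation") == "1":
                return True  # rights expressly reserved for this resource
    except OSError:
        pass  # fall through to the site-wide check

    # 2. Fall back to the site-wide well-known file.
    parts = urlparse(url)
    well_known = f"{parts.scheme}://{parts.netloc}/.well-known/tdmrep.json"
    try:
        with urllib.request.urlopen(well_known) as resp:
            rules = json.load(resp)
    except (OSError, ValueError):
        return False  # no machine-readable reservation found

    # tdmrep.json lists rules mapping location patterns to a
    # tdm-reservation value; a full implementation would match the URL
    # against each pattern, but this sketch treats any reservation on
    # the site as a signal that a licence should be sought.
    return any(rule.get("tdm-reservation") == 1 for rule in rules)
```

A crawler seeking to rely on the Article 4 exception would run a check of this kind before copying a work, and seek a licence where it returns True.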

Territorial scope, penalties and timing

The Act extends to providers in countries outside the EU that place AI systems or GPAI on the EU market or that put AI systems into service in the EU, and to providers or deployers of AI systems outside the EU where the output produced by the system is used within the EU.

Penalties for breach include fines ranging from €7.5m to €35m, or from 1.5% to 7% of worldwide annual turnover, whichever is higher, depending on the infringement and the size of the company.
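Purely as an illustration of the “whichever is higher” mechanic, the sketch below applies the top of the range quoted above (€35m or 7%) to a hypothetical undertaking; the figures for other infringements scale down accordingly.

```python
def max_fine(fixed_cap_eur: float, turnover_share: float,
             worldwide_turnover_eur: float) -> float:
    """The applicable cap is the higher of a fixed amount and a
    percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# Top of the range: EUR 35m or 7% of worldwide turnover.
# For an undertaking with EUR 1bn turnover, 7% (EUR 70m) exceeds the
# EUR 35m fixed figure, so EUR 70m is the applicable maximum.
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```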

Once the Act comes into force, the provisions in respect of prohibited AI systems will apply within six months, the provisions on GPAI will apply within 12 months and the remaining provisions will apply within 24 months, save in respect of high-risk AI under Annex II, which will apply within 36 months. GPAI models already on the market when the Act comes into force will be given a two-year grace period.

The Parliament is expected to approve the compromise text in April 2024.
