
The European Parliament’s rapporteurs on the AI Act circulated on Monday (13 February) an agenda for a key political meeting which includes new compromises on AI definition, scope, prohibited practices, and high-risk categories.
On Saturday, Brando Benifei and Dragoș Tudorache shared a new set of compromise amendments, obtained by EURACTIV, ahead of a shadow meeting on Wednesday meant to settle some of the most critical questions still open on the draft AI Act – a proposal to regulate Artificial Intelligence based on its capacity to cause harm.
AI definition
The definition of Artificial Intelligence is a fundamental issue as it determines the application of the AI rulebook. The leading EU lawmakers proposed using the definition of the US National Institute of Standards and Technology.
AI is thus defined as “an engineered or machine-based system that can, for a given set of objectives, generate output such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”.
The text’s preamble specifies that an AI system should be able to act with a minimum level of independence from human control, may possess learning capabilities (i.e. machine learning), and that the definition does not cover systems that are fully traceable and predictable.
A critical clarification relates to the ‘objectives’ of the AI model, as large language models known as General Purpose AI can be adapted to carry out various tasks.
“The reference to a given set of objectives is not related to the final goal or purpose of the system, but rather to the parameter optimisation process within the model. Such objectives may be both explicit and implicit,” the compromise reads.
In addition, the text now clarifies that whenever an AI solution is integrated into a more extensive system, all the components interacting with the new solution should be considered part of the system.
The AI definition was moved from the annex to the body of the law, meaning the European Commission will not be able to amend it later.
Scope
Regarding scope, the co-rapporteurs want to know whether the AI regulation should prevent EU providers not only from deploying prohibited AI solutions, such as social scoring systems, in the single market but also from exporting them abroad.
A partial exemption was proposed for open-source AI systems, which would apply “until those systems are put into service or made available on the market in return for payment, regardless of whether that payment is for the AI system itself, the provision of the AI system as a service, or the provision of technical support for the AI system as a service”.
Prohibited practices
AI systems that use biometric traits to categorise people by inferring sensitive or protected attributes have been added to the list of prohibited practices. Under the General Data Protection Regulation, protected information includes race, sexual orientation and religious beliefs.
The leading MEPs also want to ban AI models that populate facial recognition databases by indiscriminately scraping facial images from social media profile pictures, CCTV footage or any other use case listed among the high-risk areas.
High-risk categorisation
The AI Act defines some AI systems as having a high risk of causing harm, a category that will have to comply with stricter requirements. The regulation’s Annex III lists high-risk areas and use cases.
The initial proposal stated that AI models whose intended purpose fell under these areas and use cases were to be deemed high-risk. However, this notion of ‘intended purpose’ does not fit certain use cases, such as AI systems used by political parties in the democratic process or for scientific research.
If AI developers consider that their system is not high-risk, even though it falls under Annex III, they can notify the national authority, or the AI Office if more than one EU country is involved.
The initial compromise mandated the relevant authority to respond within one month. In the new version, the co-rapporteurs proposed a tacit consent clause, meaning the exemption will be considered granted if the authority does not reply within three months.
EU database
The original proposal required high-risk AI system providers to register in an EU-wide database. The lawmakers are proposing extending this obligation to AI deployers that are public bodies or private companies designated as gatekeepers under the Digital Markets Act.
General principles
A new article with general principles applying to all AI systems has been introduced, applying on a voluntary basis to all algorithms not falling under the high-risk category. The Commission and AI Office would have to issue recommendations on how to comply with these principles.
The principles include human oversight, technical robustness, compliance with data protection rules, appropriate explainability, non-discrimination and fairness, as well as social and environmental well-being.
AI literacy
A new measure was added requiring the EU and its member states to promote AI literacy among the general public. AI providers and deployers will also have to ensure AI literacy for their staff, including how to comply with the AI regulation.