STRASBOURG (FRANCE), 13 Mar. (EUROPA PRESS) –
The plenary session of the European Parliament on Wednesday approved the historic agreement reached last December between the EU institutions to establish the first rules limiting the risks of Artificial Intelligence (AI) in the European Union, a new framework whose full application is scheduled from 2026.
The new regulation was approved in the plenary session held in Strasbourg (France) with 523 votes in favor, 46 against and 49 abstentions, but the 27 member states must still give their approval and the regulation must pass a final legal-linguistic check before its entry into force.
The new standard takes a risk-based approach that categorizes risk levels and the accompanying restrictions according to scale, entailing outright prohibition in cases of “unacceptable” risk, such as biometric categorization systems, untargeted scraping of images to create facial recognition databases, emotion recognition, social scoring systems, or systems that manipulate behavior.
However, a series of strict exceptions is planned that will allow the use of biometric surveillance systems in public spaces, provided there is a prior court order and only for a strictly defined list of crimes.
Accordingly, real-time biometric surveillance will be limited in time and location and permitted only for the targeted search for victims of kidnapping, trafficking or sexual exploitation, to prevent a specific and present terrorist threat, or to locate or identify a person suspected of having committed one of the crimes listed in the law.
The regulation also defines AI systems that are permitted but considered high risk because of their significant impact on health, safety, fundamental rights, the environment and the rule of law.
Artificial Intelligence systems used to influence election outcomes and voter behavior are likewise classified as high risk, and citizens will have the right to file complaints and to receive explanations about decisions based on high-risk AI systems that affect their rights.
Another key point has been how to introduce specific rules for foundation models, such as the platforms behind ChatGPT or DALL-E, which emerged after the European Commission presented its first proposal for the regulation, so this chapter was developed over the course of the negotiations.
The pioneering legislation also provides for sanctions for non-compliance ranging from 35 million euros or 7 percent of global turnover down to 7.5 million euros, depending on the size of the company.
The objective of the new European regulation is to establish standards for security and fundamental rights that prevent the technology from being used for repressive, manipulative or discriminatory purposes, without this translating into over-regulation that hampers the competitiveness of the European Union.