
AI systems with ‘unacceptable risk’ are now banned in the EU

As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm.

2 February is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last year after years of development. The act officially came into force on 1 August; what follows is the first of its compliance deadlines.

The details are set out in Article 5, but broadly, the act is designed to cover a wide range of use cases where AI might appear and interact with individuals, from consumer applications to physical environments.

Under the bloc's approach, there are four broad risk levels: (1) minimal risk (for example, email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have light-touch regulatory oversight; (3) high risk (AI for healthcare recommendations is one example) will face heavy regulatory oversight; and (4) unacceptable-risk applications, the focus of this month's compliance requirements, will be prohibited entirely.

Some of the unacceptable activities include:

  • AI used for social scoring (for example, building risk profiles based on a person's behavior).
  • AI that manipulates a person's decisions subliminally or deceptively.
  • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
  • AI that attempts to predict people committing crimes based on their appearance.
  • AI that uses biometrics to infer a person's characteristics, such as their sexual orientation.
  • AI that collects “real-time” biometric data in public places for the purposes of law enforcement.
  • AI that tries to infer people's emotions at work or school.
  • AI that creates or expands facial recognition databases by scraping images online or from security cameras.

Companies that use any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for fines of up to €35 million (~$36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
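As a rough illustration of how that penalty ceiling works (a minimal sketch, not legal guidance; the revenue figures below are hypothetical), the cap is simply the greater of the flat amount and the revenue-based amount:

```python
def ai_act_max_fine(annual_revenue_eur: float) -> float:
    """Maximum possible fine for a prohibited AI practice:
    EUR 35 million or 7% of prior-year annual revenue,
    whichever is greater. Illustrative only."""
    FLAT_CAP_EUR = 35_000_000
    REVENUE_SHARE = 0.07
    return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)

# Hypothetical examples: a mid-size firm is bound by the flat cap,
# while a large firm's ceiling comes from the 7% revenue share.
print(f"{ai_act_max_fine(100_000_000):,.0f}")    # 35,000,000
print(f"{ai_act_max_fine(2_000_000_000):,.0f}")  # 140,000,000
```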

The fines won't take effect for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with WAN.

“Organizations are expected to be fully compliant by 2 February, but … the next big deadline that companies need to be aware of is in August,” Sumroy said. “By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect.”

Provisional promises

In some respects, the 2 February deadline is a formality.

Last September, more than 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories, including Amazon, Google, and OpenAI, committed to identifying AI systems likely to be categorized as high risk under the AI Act.

Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act's harshest critics, also chose not to sign.

That doesn't mean Apple, Meta, Mistral, or others that didn't agree to the Pact won't meet their obligations, including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases, most companies won't be engaging in those practices anyway.


“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time, and, crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”

Possible exemptions

There are exceptions to several of the AI Act's prohibitions.

For example, the act allows law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, an abduction victim, or help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the act stresses that law enforcement can't make a decision that “produces an adverse legal effect” on a person based solely on these systems' outputs.

The act also carves out exceptions for systems that infer emotions in workplaces and schools where there's a “medical or safety” justification, such as systems designed for therapeutic use.

The European Commission, the executive arm of the EU, said it would release additional guidelines in “early 2025,” following a consultation with stakeholders in November. However, those guidelines have yet to be published.

Sumroy said it's also unclear how other laws on the books might interact with the AI Act's prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.

“It's important for organizations to remember that AI regulation doesn't exist in isolation,” Sumroy said. “Other legal frameworks, such as the GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges, particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”

