AI in the EU: A Roadmap for Integration and Compliance under the New Legislation

Why does the EU need an AI Act?

The AI Act has several objectives:

  • To ensure that all AI systems on the EU market are safe and respect existing EU law.
  • To make AI investment and innovation easier by providing clear legal guidelines.
  • To improve how EU laws on rights and safety apply to AI and ensure they’re enforced properly.
  • To help create a unified market for safe and trustworthy AI and prevent it from becoming fragmented.

Who will be affected?

The AI Act covers a wide range of responsibilities across the AI lifecycle. It primarily focuses on how AI affects people’s wellbeing and fundamental rights. It applies to anyone offering or using AI systems within the EU, whether they’re based inside or outside the EU, including anyone who places AI on the EU market, regardless of their location.

What is the roadmap?

The AI Act is anticipated to be approved and published by the European Parliament and Council around Q2 or Q3 of 2024. Being an EU regulation, it will apply directly in Member States without requiring national implementing laws.

Timeline:

  • Q2-Q3 2024: Expected entry into force.
  • Immediately after entry into force: the European Commission establishes the AI Office, while Member States set up AI regulatory sandboxes.
  • Six months post-entry into force (Q4 2024-Q1 2025): Prohibitions take effect.
  • 12 months post-entry into force (Q2-Q3 2025, to be confirmed): Possible implementation of requirements for general-purpose AI (GPAI) models.
  • 24 months post-entry into force (Q2-Q3 2026): Full implementation of all other AI Act requirements.

All AI tools will need a classification

Due to the pending publication of the Act, the exact classification of individual AI applications remains unclear. The Act distinguishes four risk levels:

  • Prohibited AI systems (compliance level: prohibition): Prohibited because they risk harming people’s safety, security, and rights. Examples: social scoring potentially causing unfair treatment, workplace emotion recognition, inferring sensitive data from biometrics, and predicting individual behavior. Certain exceptions are allowed.
  • High-risk AI systems (compliance level: significant): Allowed if they meet the rules of the AI Act, including assessments before being placed on the market. Examples: AI in recruiting, biometric surveillance, safety features in medical and automotive devices, access to crucial public and private services (like credit evaluation, benefits, and insurance), and safeguarding critical infrastructure in energy and transportation.
  • Limited risk AI systems (compliance level: limited): Permitted, subject to specific transparency requirements, where uses pose limited risk. Examples: AI systems that directly engage with people (like chatbots) and visual or audio content created or manipulated by AI (“deepfakes”).
  • Minimal risk AI systems (compliance level: minimal): Permitted, with no further requirements, where use poses minimal risk. All remaining AI systems not mentioned earlier (like photo-editing tools, product recommendations, spam filters, and scheduling software) are included by default.

Cat. 1 — prohibited AI systems

The AI Act prohibits AI systems that pose unacceptable risks and could violate someone’s fundamental rights or cause harm. This includes:

  • AI systems that manipulate people, especially vulnerable groups such as children, the elderly, or people with disabilities, by exploiting their weaknesses or using hidden methods to control them against their will.
  • Systems that socially score or categorize people based on behavior or personal characteristics, resulting in unfair treatment.
  • AI that reads emotions at work or school (except for security reasons).
  • Using biometrics to infer sensitive details such as race or religion.
  • Collecting facial images from the Internet or CCTV without targeting specific individuals.
  • Predictive policing that guesses an individual’s likelihood of committing a crime.
  • Law enforcement using real-time biometric identification in public spaces, with some exceptions.

Cat. 2 — high-risk AI systems

This category covers AI systems used as safety components of products covered by EU harmonization legislation. That list is not yet published, but examples include motor vehicles, civil aviation, railways, marine equipment, medical devices, and toys. It also covers AI systems used in situations where there is a high risk of harm to health, safety, or rights. The latter includes:

  • Running critical infrastructure (like safety systems in traffic, water, gas, heating, and electricity).
  • Managing migration, asylum, and borders (such as monitoring migration trends or verifying travel documents).
  • Law enforcement tools, like fraud detection.
  • Using AI in legal and democratic processes (like interpreting laws or conducting legal research).
  • Identifying and categorizing people using biometrics.
  • Handling education and job-related systems (like admission to schools or assessing job performance).
  • Managing employment and self-employment (like hiring and tracking worker performance).
  • Controlling access to essential public and private services (like deciding who gets benefits or insurance).

Exceptions to being labelled high-risk apply where an AI system:
  • Only identifies decision patterns and flags inconsistencies, without directly influencing decisions.
  • Is intended to improve the quality of human work.
  • Performs a narrow, specific task without direct safety or security concerns.

What must providers of high-risk AI systems do?
  • Registering high-risk AI systems in the EU database.
  • Enabling and conducting human oversight.
  • Establishing and maintaining proper quality management systems.
  • Complying with accuracy and cybersecurity standards.
  • Ensuring transparency and providing information to users.
  • Implementing effective data governance.
  • Maintaining technical documentation and records.

As you can see, most of our services already meet these criteria, even though they may not be considered high-risk systems. We provide extensive logging, our users can export their full datasets at any time, and we give insight into what is part of the model and what is not. We also have a human-in-the-loop system for editors and AI managers, who act as gatekeepers deciding which data becomes part of a dataset and for how long. When deployed on-premises, we meet even the most stringent data security requirements. We have also built in a return path: if the system produces false positives, the user can trace back which training data caused the error and adjust it if necessary. This is one of the reasons why many public broadcasters in the EU trust us.
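
To make the return path described above more concrete, here is a minimal, purely illustrative sketch of how training-data provenance and a false-positive trace-back could be modelled. The class and function names (TrainingItem, Prediction, trace_false_positive) are hypothetical and do not refer to any real product API.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional

# Hypothetical sketch only: names and fields are invented for illustration.

@dataclass
class TrainingItem:
    item_id: str
    source: str                        # where the data came from (e.g. an editor's upload)
    approved_by: str                   # the editor / AI manager who admitted it to the dataset
    approved_on: date
    expires_on: Optional[date] = None  # how long the item may remain in the dataset

@dataclass
class Prediction:
    label: str
    confidence: float
    supporting_items: List[str] = field(default_factory=list)  # IDs of items that drove the result

def trace_false_positive(prediction: Prediction,
                         dataset: Dict[str, TrainingItem]) -> List[TrainingItem]:
    """Return the training items behind a flagged prediction so an editor can review or remove them."""
    return [dataset[i] for i in prediction.supporting_items if i in dataset]

# Example: an editor flags a false positive and inspects the data that produced it.
dataset = {"item-42": TrainingItem("item-42", "editor upload", "jane.editor", date(2024, 1, 15))}
flagged = Prediction(label="person_x", confidence=0.91, supporting_items=["item-42"])
for item in trace_false_positive(flagged, dataset):
    print(item.item_id, item.source, item.approved_by)
```

The point of such a record is simply that every prediction can be traced back to approved, time-limited training data, which is the behaviour described above.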

Cat. 3 — limited and minimal risk systems

AI systems that interact with people (like chatbots) must make clear that they are AI. People exposed to emotion recognition or biometric systems must be informed and give their consent. Likewise, any AI-generated or AI-altered visual or audio content must be disclosed and labelled.

Most of our use cases will fall into this category. Because our platform already meets more stringent criteria, and because we only generate metadata and never modify the media itself, you won’t need to make any changes to your existing workflows.
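
As a purely hypothetical illustration of what the labelling obligation above could look like when carried as sidecar metadata next to a media file, rather than being baked into the media itself (all field names are invented for this example):

```python
import json

def build_ai_disclosure(media_id: str, generator: str, altered: bool) -> str:
    """Produce a small JSON sidecar that discloses AI involvement for a media asset."""
    return json.dumps({
        "media_id": media_id,
        "ai_generated_or_altered": altered,  # the disclosure required for AI-created or AI-altered content
        "generator": generator,              # which system produced or altered the content
    }, indent=2)

print(build_ai_disclosure("clip-0001", "example-genai-model", altered=True))
```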

But how will AI innovation still be possible?

The AI Act requires the EU to set up regulatory sandboxes: supervised environments where companies, especially small and medium-sized ones, can test their AI systems before bringing them to market. Every EU country will run or join one, following the same rules. Companies that use these sandboxes receive a report showing they followed the AI Act, which helps them get approval to sell their AI systems faster.

Companies developing high-risk AI can also test these systems outside the regulatory sandboxes before selling them. To do this, they must first get approval from market surveillance authorities, follow EU data protection rules, inform users and obtain their consent, and ensure the AI’s decisions can be changed if needed. Testing is time-limited, usually up to six months, with a possible extension of another six months.

It is also important to ensure that the level of bureaucracy for SMEs does not become too high; that is part of the work of the regulators who now have to write the detailed legislation.

What will the regulation look like?

Authorities in each EU country will be empowered to oversee AI. At the EU level, there will be an independent AI Office in the EU Commission to oversee AI models, set standards, and coordinate with national authorities. This will be accompanied by an AI Board representing Member States to guide the AI Office, enforce AI law, and develop codes of conduct for AI models. In addition, a scientific panel of experts and an advisory forum with representatives from industry and civil society will assist the AI Office in developing evaluation methods for GPAI models and monitoring safety risks.

What are the next steps?

The EU is finalizing the technical details of the AI law in the coming weeks. Once agreed, it will be submitted to the European Parliament and Council for approval in the first half of 2024. After translation, it will be published and enter into force 20 days later. Once it is published, AI providers will be able to assess their services for compliance.

Internationally, the EU will work with various organizations such as the Council of Europe, the G7, the OECD and others to promote global rules compatible with the AI Act.
