AI in the EU: A Roadmap for Integration and Compliance under the New Legislation

Why does the EU need an AI Act?

The AI Act has several objectives:

  • To ensure that all AI systems in the EU market are safe and respect existing EU law.
  • To make AI investment and innovation easier by providing clear legal guidelines.
  • To improve how EU laws on rights and safety apply to AI and ensure they are enforced properly.
  • To help create a unified market for safe and trustworthy AI and prevent it from becoming fragmented.

Who will be affected?

The AI Act assigns responsibilities across the entire AI value chain, with a primary focus on how AI affects people’s wellbeing and fundamental rights. It applies to anyone offering or using AI systems within the EU, and to anyone who places AI systems on the EU market, regardless of where they are located.

What is the roadmap?

The AI Act is anticipated to be approved and published by the European Parliament and Council around Q2 or Q3 of 2024. As an EU regulation, it will apply directly in the Member States without requiring national implementing legislation.

Timeline:

  • Q2-Q3 2024: Expected entry into force.
  • Immediately after entry into force: European Commission establishes the AI Office, while Member States set up AI regulatory sandboxes.
  • Six months post-entry into force (Q4 2024-Q1 2025): Prohibitions will take effect.
  • 12 months post-entry into force (Q2-Q3 2025, to be confirmed): Possible implementation of requirements for general-purpose AI (GPAI) models.
  • 24 months post-entry into force (Q2-Q3 2026): Full implementation of all other AI Act requirements.

All AI tools will need a classification

Due to the pending publication of the Act, the exact classification of AI applications remains unclear.

The four risk categories, their compliance levels, and typical examples are:

Prohibited AI systems (compliance level: prohibition)
Prohibited because they risk harming people's safety, security, and rights. Examples: social scoring that can lead to unfair treatment, workplace emotion recognition, inferring sensitive data from biometrics, and predicting individual behavior. Certain exceptions are allowed.

High-risk AI systems (compliance level: significant)
Permitted if they meet the requirements of the AI Act, including conformity assessments before being placed on the market. Examples: AI in recruiting, biometric surveillance, safety components in medical and automotive devices, access to essential public and private services (such as credit evaluation, benefits, and insurance), and protection of critical infrastructure in energy and transportation.

Limited risk AI systems (compliance level: limited)
Permitted, subject to specific transparency obligations, where uses pose limited risk. Examples: AI systems that interact directly with people (such as chatbots) and AI-generated or manipulated visual or audio content ("deepfakes").

Minimal risk AI systems (compliance level: minimal)
Permitted, with no additional requirements, where use poses minimal risk. All remaining AI systems (such as photo-editing tools, product recommendations, spam filters, and scheduling software) fall into this category by default.

Cat. 1 - prohibited AI systems

The AI Act prohibits AI systems that pose unacceptable risks and could violate someone’s fundamental rights or cause harm. This includes:

  • AI systems that manipulate people by exploiting weaknesses or using hidden methods to control them against their will, especially vulnerable groups such as children, the elderly, or people with disabilities.
  • Systems that socially score or categorize people based on behavior or personal characteristics, resulting in unfair treatment.
  • AI that reads emotions at work or school (except for security reasons).
  • Using biometrics to infer sensitive details such as race or religion.
  • Untargeted scraping of facial images from the Internet or CCTV footage to build facial recognition databases.
  • Predictive policing that assesses an individual’s likelihood of committing a crime.
  • Law enforcement using real-time biometric identification in public spaces, with some exceptions.

Cat. 2 - high-risk AI systems

High-risk systems fall into two groups. The first covers AI systems used as a safety component of products covered by EU harmonization legislation; this list is not yet final, but examples include motor vehicles, civil aviation, railways, marine equipment, medical devices, and toys. The second covers AI systems used in situations where there is a high risk of harm to health, safety, or fundamental rights, including:

  • Running critical infrastructure (like safety systems in traffic, water, gas, heating, and electricity).
  • Managing migration, asylum, and borders (such as monitoring migration trends or verifying travel documents).
  • Law enforcement tools, like fraud detection.
  • Using AI in legal and democratic processes (like interpreting laws or conducting legal research).
  • Identifying and categorizing people using biometrics.
  • Handling education and job-related systems (like admission to schools or assessing job performance).
  • Managing employment and self-employment (like hiring and tracking worker performance).
  • Controlling access to essential public and private services (like deciding who gets benefits or insurance).

An AI system may be exempt from the high-risk classification if it:
  • Only identifies decision patterns or flags inconsistencies, without directly influencing decisions.
  • Is intended only to improve the quality of human work.
  • Performs a narrow, specific task with no direct safety or security impact.

What must providers of high-risk AI systems do?
  • Register high-risk AI systems in the EU database.
  • Enable and conduct human oversight.
  • Establish and maintain proper quality management systems.
  • Comply with accuracy and cybersecurity standards.
  • Ensure transparency and provide information to users.
  • Implement effective data governance.
  • Maintain technical documentation and records.

As you can see, most of our services already meet these criteria, even though they may not be considered high-risk systems. We provide extensive logging, our users can export their full datasets at any time, and we offer insight into what is and is not part of the model. We also have a human-in-the-loop system for editors or AI managers, who act as gatekeepers for what data becomes part of a dataset and for how long. When deployed as an on-premises system, we meet even the most stringent data security requirements. We have also built in a return path: if the system produces false positives, the user can trace back which training data caused the error and adjust it if necessary. This is one of the reasons many public broadcasters in the EU trust us.
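To make that return path more concrete, here is a minimal, purely illustrative sketch (the class and field names are hypothetical and do not represent our actual API) of how predictions can stay linked to the training records that produced them, so that a false positive can be traced back and the offending data retired:

```python
from dataclasses import dataclass


@dataclass
class TrainingRecord:
    record_id: str
    source: str           # e.g. which editor approved the record, and when
    active: bool = True   # editors can retire records that cause errors


@dataclass
class Prediction:
    label: str
    confidence: float
    contributing_records: list[str]  # IDs of the training records behind this output


class AuditLog:
    """Keeps predictions linked to training data for traceability."""

    def __init__(self, training_data: dict[str, TrainingRecord]):
        self.training_data = training_data
        self.predictions: list[Prediction] = []

    def log(self, prediction: Prediction) -> None:
        self.predictions.append(prediction)

    def trace(self, prediction: Prediction) -> list[TrainingRecord]:
        """Return the training records that contributed to a flagged prediction."""
        return [self.training_data[rid]
                for rid in prediction.contributing_records
                if rid in self.training_data]

    def retire(self, record_id: str) -> None:
        """Deactivate a training record that an editor identified as the cause of an error."""
        if record_id in self.training_data:
            self.training_data[record_id].active = False


# Example: a false positive is traced back and the responsible record is retired
log = AuditLog({"rec-1": TrainingRecord("rec-1", "approved by editor A, 2024-01-10")})
fp = Prediction(label="person_x", confidence=0.93, contributing_records=["rec-1"])
log.log(fp)
for record in log.trace(fp):
    log.retire(record.record_id)
```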

Cat. 3 - limited and minimal risk AI systems

AI systems that interact directly with people (such as chatbots) must make clear that they are AI. People exposed to emotion recognition or biometric systems must be informed and give their consent. In addition, any AI-generated or AI-altered visual or audio content must be disclosed and labelled.

Most of our use cases will fall into this category. Because our platform already meets even more stringent criteria, and because we only generate metadata and never modify the media itself, you won’t need to make any changes to your existing workflows.
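One way to meet the labelling obligation without touching the media itself is to record the disclosure as sidecar metadata. The sketch below is purely illustrative (the file layout and field names are hypothetical, not a published standard or our actual schema):

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def write_ai_disclosure(media_path: str, generator: str, ai_altered: bool) -> Path:
    """Write a sidecar JSON file declaring AI involvement, leaving the media file untouched."""
    sidecar = Path(media_path + ".disclosure.json")
    sidecar.write_text(json.dumps({
        "media_file": Path(media_path).name,
        "ai_altered": ai_altered,     # True for AI-generated or AI-manipulated content
        "generator": generator,       # which system produced or altered the content
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2))
    return sidecar


# Example: label a synthetic voice-over clip without modifying the audio itself
write_ai_disclosure("voiceover.wav", generator="example-tts-model", ai_altered=True)
```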

But how will AI innovations still be possible?

The AI Act requires the establishment of regulatory sandboxes across the EU: supervised environments where companies, especially small and medium-sized ones, can test their AI systems before bringing them to market. Every EU country will operate or join one, following the same rules. Companies that use these sandboxes receive a report showing they followed the AI Act, which helps them obtain approval to sell their AI systems faster.

Companies developing high-risk AI can also test these systems outside the regulatory sandboxes before selling them. To do so, they must first obtain approval from market surveillance authorities, follow EU data protection rules, inform users and obtain their consent, and ensure that the AI’s decisions can be reversed if needed. Testing is limited in duration, usually to six months, with a possible extension of another six months.

It is also important to ensure that the bureaucratic burden on SMEs does not become too high; that is part of the work of the regulators who will now have to draft the detailed implementing legislation.

What will the regulation look like?

Authorities in each EU country will be empowered to oversee AI. At the EU level, an independent AI Office within the European Commission will oversee AI models, set standards, and coordinate with national authorities. It will be accompanied by an AI Board representing the Member States, which will guide the AI Office, enforce the AI Act, and develop codes of conduct for AI models. In addition, a scientific panel of experts and an advisory forum with representatives from industry and civil society will assist the AI Office in developing evaluation methods for GPAI models and monitoring safety risks.

What are the next steps?

The EU is finalizing the technical details of the AI Act in the coming weeks. Once agreed, the text will be submitted to the European Parliament and Council for approval in the first half of 2024. After translation, it will be published and enter into force 20 days later. From publication onwards, providers of AI services will be able to assess their offerings for compliance.

Internationally, the EU will work with various organizations such as the Council of Europe, the G7, the OECD and others to promote global rules compatible with the AI Act.
