The EU AI Act: First Regulation on Artificial Intelligence

The use of artificial intelligence in the European Union (EU) will be regulated by the EU AI Act, the world’s first comprehensive AI law. Find out how it works.

Aleksander Tsuiman
Head of Regulatory Compliance
December 13, 2023
On this page
What is the EU's AI Act?
Is the EU's AI Act already adopted?
How will the EU regulate with the AI Act?
The AI Act contains:
To whom does the AI Act apply?
How to ensure compliance with the AI Act?
Enforcement and penalties of the AI Act

The European Union's Artificial Intelligence Act explained

Technology and artificial intelligence (AI) advances have raised the call for protective regulation to prevent risks and harmful outcomes for populations across the globe. One place where these rules are beginning to take shape is Europe. In April 2021, the European Union's Commission proposed the first comprehensive framework to regulate the use of AI. The EU's priority is ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

At Veriff, we're constantly developing our real-time remote biometric identification solutions. We use AI and machine learning to make our identification process faster, safer, and better. We operate globally, and our legal teams continually monitor the various local legal landscapes, so we are perfectly positioned to navigate these regulations.

What is the EU's AI Act?

The AI Act is a proposed European law on artificial intelligence – the first law on AI by a major economic power anywhere. It was proposed by the EU Commission in April 2021.


Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on your life wherever you may be. The EU’s AI regulation is already making waves internationally.

Is the EU's AI Act already adopted?

The AI Act is not yet in force - the law is currently being processed under the European Union's “ordinary legislative procedure”. This means that a legislative proposal was put forward by the EU Commission and currently the proposal is being examined by the two legislative bodies of the EU – the EU Parliament and the EU Council.

Now that the European Union has reached a political agreement on the AI Act, the agreed text must be formally adopted by both the Parliament and the Council to become law. It will enter into force 20 days after publication in the EU's Official Journal.

The AI Act would then become applicable two years after its entry into force, except for some specific provisions that will start applying after 6 to 12 months.

It is worth noting that the public-facing AI landscape has changed significantly since the EU Commission first published its legislative proposal in April 2021. Namely, the rise of large language models (including generative AI) and foundational models since the end of 2022 means there will be significant changes to the originally proposed text, specifically around the rules for large language models and foundational models (referred to as general-purpose AI).


How will the EU regulate with the AI Act?

The AI Act was proposed with the following objectives:

  • Ensure that AI systems placed on the EU market and used there are safe and respect existing laws on fundamental rights and Union values;
  • Ensure legal certainty to facilitate investment and innovation in AI;
  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
  • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

The EU will regulate AI systems based on the level of risk they pose to the health, safety, and fundamental rights of a person. This approach tailors the type and content of the rules to the intensity and scope of the risks (from high to minimal) that AI systems can generate. The law assigns applications of AI to three risk categories.

  • First, applications and systems that create an unacceptable risk are banned. Examples include untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, government-run social scoring, and AI systems or applications that manipulate human behavior to circumvent users' free will.
  • Second, high-risk applications, potentially such as systems that determine access to educational institutions or are used for recruiting people, are subject to specific and thorough legal requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity.
  • Lastly, applications not explicitly banned or listed as high-risk are regulated lightly. These are called minimal-risk systems, the category into which the vast majority of AI systems would presumably fall. The tiered structure is sketched in code below.
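
For readers who prefer code, the following minimal Python sketch captures the three-tier structure described above. It is purely illustrative: the tier names and obligation lists paraphrase this article, not the legal text.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # allowed, but strictly regulated
        MINIMAL = "minimal"            # lightly regulated

    # Illustrative, non-exhaustive obligations per tier, paraphrased from the text above.
    EXAMPLE_OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["banned from the EU market"],
        RiskTier.HIGH: [
            "risk-mitigation system",
            "high-quality data sets",
            "logging of activity",
            "detailed documentation",
            "clear user information",
            "human oversight",
            "robustness, accuracy, and cybersecurity",
        ],
        RiskTier.MINIMAL: ["light-touch rules only"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the illustrative obligations attached to a risk tier."""
        return EXAMPLE_OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))

Note that a single organization may operate systems in different tiers at once, which is why the mapping is per system rather than per company.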

A separate set of rules is introduced for general-purpose AI models, with the focus on ensuring transparency in the value chain where those models are used. This will include drawing up technical documentation, complying with EU copyright law, and providing summaries of the content used for training. There will also be a separate category of "high-impact general-purpose AI systems", which will be required to meet more stringent obligations. Although the exact criteria by which a system can be designated as "high impact" are not fully known, it is clear that providers meeting them will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, ensure cybersecurity, and report on their energy efficiency.

The AI Act contains:

  • Its scope and the definitions applicable to participants in the AI lifecycle and ecosystem
  • A list of prohibited AI systems
  • Thorough regulation of high-risk AI systems
  • Measures supporting innovation (e.g., regulatory sandboxes)
  • Provisions on the governance and implementation of the AI Act, and codes of conduct
  • Rules on fines

To whom does the AI Act apply?

Based on the EU Commission’s original proposal, we presume that the AI Act will apply to the following entities:

  • Providers placing AI systems on the market or putting them into service in the EU, irrespective of whether those providers are established within the EU or in a third country (e.g., the US);
  • Users of AI systems located within the EU;
  • Providers and users of AI systems that are located in a third country, but where the output produced by the system is used in the EU.

Please bear in mind that the AI Act's actual focus is on the AI systems themselves. Although the obligations are to be fulfilled by different actors in the AI value chain, they need to be fulfilled mainly in relation to specific AI systems. For example, a provider can offer both high-risk and low-risk systems and would therefore need to comply with different requirements per system. The same applies to users of AI systems.
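
To make the scoping rules above concrete, here is a minimal, purely illustrative decision helper in Python. The function name and parameters are our own shorthand for the three bullets, not terminology from the Act.

    def act_applies(role: str, eu_nexus: bool, output_used_in_eu: bool) -> bool:
        """Rough, illustrative reading of the applicability bullets above.

        role:              "provider" or "user" (both are covered by the Act).
        eu_nexus:          for a provider, the system is placed on the market or
                           put into service in the EU; for a user, the user is
                           located within the EU.
        output_used_in_eu: the system's output is used in the EU, even though
                           the provider or user sits in a third country.
        """
        if role not in ("provider", "user"):
            raise ValueError("the Act's obligations attach to providers and users")
        return eu_nexus or output_used_in_eu

    # Example: a US-established provider selling into the EU market is covered.
    print(act_applies("provider", eu_nexus=True, output_used_in_eu=False))  # True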

How to ensure compliance with the AI Act?

We advise closely monitoring developments around the AI Act and watching for the final text to be published. At the same time, the original text as proposed by the EU Commission is still worth reading to understand the baseline. Notwithstanding the foregoing, there are some tips and tricks to share:

  • Work your way through the original text - your organization needs to pinpoint where in the AI value chain it sits. For example, being a provider or a user of AI systems subjects you to potentially different obligations. Although the exact nature of those obligations is not yet fully clear, it is recommended to start mapping where your organization makes use of AI systems.
  • Work with your Legal, Risk, Quality, Engineering/Product development departments to identify risks around AI usage in general.
  • Technical and non-technical standards will be created extensively, either by the European Standardisation Organisations and/or by the European Commission engaging experts. This standardization work is worth monitoring.
  • Some standards that help in understanding certain obligations already exist, concerning the products identified earlier as high-risk AI systems. There are also established standards for quality management and risk management systems. These give a baseline for what must be considered if you fall into the high-risk AI provider category.

Enforcement and penalties of the AI Act

Fines under the AI Act would be €35 million or 7% of global annual turnover (whichever is higher) for violations of the prohibited AI applications, €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information.

According to information currently available, more proportionate caps on administrative fines are foreseen for SMEs and start-ups in the case of infringements of the AI Act.
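
The "whichever is higher" mechanic is easy to see in a short worked example. The Python sketch below is illustrative only, assuming the figures quoted above; it does not model the more proportionate caps foreseen for SMEs and start-ups.

    def max_fine_eur(violation: str, global_annual_turnover_eur: float) -> float:
        """Upper bound of the fine: the higher of a fixed amount and a share
        of global annual turnover, per the ranges quoted above (illustrative)."""
        caps = {
            "prohibited_ai": (35_000_000, 0.07),          # €35M or 7%
            "other_obligations": (15_000_000, 0.03),      # €15M or 3%
            "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
        }
        fixed, pct = caps[violation]
        return max(fixed, pct * global_annual_turnover_eur)

    # A company with €2B global turnover violating a prohibition faces
    # up to max(€35M, 7% x €2B) = €140M.
    print(max_fine_eur("prohibited_ai", 2_000_000_000))  # 140000000.0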

Want to learn more?

Talk to one of Veriff's compliance experts to see how IDV can help your business.
