Advances in technology and artificial intelligence (AI) have raised calls for protective regulation to prevent risks and harmful outcomes for populations across the globe. One place where these rules are beginning to take shape is Europe. In April 2021, the European Union's Commission proposed the first comprehensive framework to regulate the use of AI. The EU's priority is ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
At Veriff, we're constantly developing our real-time remote biometric identification solutions. We use AI and machine learning to make our identification process faster, safer, and better. We operate globally, and our legal teams continually monitor the various local legal landscapes, so we are well positioned to navigate these regulations.
The AI Act is a proposed European law on artificial intelligence – the first law on AI by a major economic power anywhere. It was proposed by the EU Commission in April 2021.
Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on your life wherever you may be. The EU’s AI regulation is already making waves internationally.
The AI Act is not yet in force: the law is currently being processed under the European Union's "ordinary legislative procedure". This means a legislative proposal was put forward by the EU Commission and is now being examined by the two legislative bodies of the EU, the EU Parliament and the EU Council.
Now that the European Union has reached a political agreement on the AI Act, the agreed text must be formally adopted by both the Parliament and the Council to become law, and it will enter into force 20 days after publication in the EU's Official Journal.
The AI Act would then become applicable two years after its entry into force, except for some specific provisions that will start applying after 6 to 12 months.
It is worth noting that the public-facing AI landscape has changed significantly since the EU Commission first published its legislative proposal in April 2021. In particular, the rise of large language models (including generative AI) and foundation models since the end of 2022 means there will be significant changes to the originally proposed text, specifically around the rules for these models (referred to as general purpose AI).
The EU will regulate AI systems based on the level of risk they pose to the health, safety, and fundamental rights of a person. This approach tailors the type and content of the rules to the intensity and scope of the risks (from high risk to minimal risk) that AI systems can generate. The law assigns applications of AI to three risk categories.
A separate set of rules is introduced for general purpose AI models, with a focus on ensuring transparency in the value chain where those models are used. This includes drawing up technical documentation, complying with EU copyright law, and providing summaries of the content used for training. There will also be a separate category of "high impact general purpose AI systems", which will be required to meet more stringent obligations. Although the exact criteria for designating a system as "high impact" are not fully known, it is clear that providers of systems meeting these criteria will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, ensure cybersecurity, and report on their energy efficiency.
Based on the EU Commission’s original proposal, we presume that the AI Act will apply to the following entities:
Please bear in mind that the AI Act's actual focus is on the AI systems themselves. Although the obligations fall on different actors in the AI value chain, they must be fulfilled mainly in relation to specific AI systems. For example, a provider may offer both high-risk and low-risk systems and would therefore need to comply with different requirements for each system. The same applies to users of AI systems.
We advise closely monitoring developments around the AI Act and watching for the final text to be published; at the same time, the original text proposed by the EU Commission is still worth reading to understand the baseline. Notwithstanding the foregoing, there are some tips and tricks to share:
Fines under the AI Act would be €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI applications, €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information.
According to information currently available, more proportionate caps are foreseen for administrative fines for SMEs and start-ups in case of infringements of the AI Act.
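As a simple illustration, the "whichever is higher" rule works as a maximum over the fixed cap and the turnover-based amount. The function name and the example turnover figure below are ours, chosen purely for illustration:

```python
def applicable_fine(fixed_cap_eur: float, turnover_pct: float,
                    global_turnover_eur: float) -> float:
    """Return the maximum possible fine under a 'whichever is higher'
    rule: the fixed cap or the given percentage of global annual
    turnover, whichever is greater."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-AI violation (€35m cap or 7% of turnover) for a
# hypothetical company with €2 billion in global annual turnover:
# 7% of €2bn = €140m, which exceeds the €35m fixed cap.
fine = applicable_fine(35_000_000, 0.07, 2_000_000_000)
print(f"€{fine:,.0f}")  # → €140,000,000
```

For a smaller company whose 7% share falls below €35 million, the fixed cap would apply instead, which is why the proportionate caps for SMEs and start-ups matter.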