Artificial Intelligence Act: Oops. EU did it again

If the European Union (EU) legislative process had anything to do with pop music, its latest regulation could be called “Oops I did it again”. To be fair, the Artificial Intelligence Act (AI Act) is probably not as enjoyable as Britney Spears’ 2000 hit, but it’s certainly as memorable. When passing the General Data Protection Regulation (GDPR) in 2016 to watch over data privacy, the EU set a precedent. The AI Act only confirms the EU’s steady positioning as a pioneer in regulating technology.

“The AI Act is a milestone, marking the first rules for AI in the world, aiming to make it safe and in respect of Europe’s fundamental rights.” That’s how the European Union’s Belgian Presidency celebrated a political vote on the regulation’s final text, which took place on 2 February 2024 in Brussels. “Historic, a world premiere,” added European Commissioner for Internal Market Thierry Breton that same day.

After that crucial political step, the final vote for the AI Act was expected to be a formality. It took place on 13 March 2024 in the European Parliament, marking the final adoption of what is widely regarded as a landmark in AI regulation globally.

As an EU regulation, the AI Act will apply in all Member States, with no further ratification at country level. When an AI system's output is used in the EU, the regulation also applies to AI providers or users based in non-EU countries. Its provisions will apply in stages, between six and thirty-six months after entry into force, depending on the level of risk associated with the AI system.

The regulation sets ground rules for designing, providing, using and monitoring AI tools in Europe in a safe and trustworthy way. It also sets the stage for fines in case of non-compliance. These fines can be so high that companies would be well advised to take the new regulation seriously. In this blog entry, we look into these rules and what they imply for Luxembourg regulated entities.

A “definition” of AI…

Expanding on the Organisation for Economic Co-operation and Development’s (OECD) 2019 definition, the AI Act sees AI as a “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

AI is a field of research and innovation that sees technological breakthroughs almost every month. The very definition of AI has been evolving over time and will probably never be definitive.

And so, just like they did with GDPR, EU lawmakers took a functional approach to capture an ever-changing notion, while making the text future-proof: considering AI for how it works and what it can do, instead of trying to set once and for all what it is.

…based on risk assessment

As mentioned, AI systems are labelled according to the level of risk they may pose. The AI Act defines risk as “the combination of the probability of an occurrence of harm, and the severity of that harm” to health, safety and fundamental rights.
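
To make that notion more concrete, here is a minimal, purely illustrative sketch of risk as the combination of probability and severity. The numeric scales, thresholds and band names are our own assumptions for the example; the Act itself defines its categories qualitatively, not with a formula.

```python
# Illustrative only: the AI Act defines risk qualitatively; the scales,
# thresholds and band names below are assumptions made for this sketch.

def risk_score(probability: float, severity: float) -> float:
    """Combine the probability of a harm occurring (0-1) with its severity (0-1)."""
    return probability * severity

def risk_band(score: float) -> str:
    """Map a score onto illustrative bands, not the Act's legal categories."""
    if score >= 0.5:
        return "high"
    if score >= 0.2:
        return "limited"
    return "minimal"

print(risk_band(risk_score(probability=0.7, severity=0.9)))  # -> high
```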

The most dangerous AI systems—those aimed at manipulating people’s behaviour, collecting extensive biometric data, or performing social credit scoring for example—are simply forbidden.

As for legitimate AI systems, most of them will fall under the “high-risk” category, except for those that neither threaten people’s health, safety or fundamental rights nor influence decision-making.

Finally, the AI Act gives special consideration to a third category: General Purpose AI (GPAI) systems. These systems rely on General Purpose AI models, such as the ones powering ChatGPT. GPAIs weren’t initially in the text’s scope, and were only added at a later stage of the negotiation. This might well be one of the first instances where lawmakers had to act swiftly to keep pace with technological evolution. Had they not, given the rapid evolution of AI, the Act risked being outdated before even coming into force.

GPAI systems fall into the high-risk systems category. Therefore, GPAI deployers need to comply with the generic requirements associated with this category. But on top of these, they will also have to consider the unique set of requirements the Act specifies for them.

Moreover, a GPAI system with high-impact capabilities (because of a very high computing power, for example) is classified as a GPAI model “with systemic risk”. In that case, the provider needs to notify the Commission.

Whatever the risk level, AI systems are all subject to minimum transparency obligations. This is to ensure a basic level of clarity and understanding, and notably to inform people that they are interacting with an AI system.

Provider or deployer? Know where you stand with high-risk AI systems

Switching from the computer side of things to human beings and their corporate extensions, the regulation makes a distinction between “providers” and users, or “deployers”. The former develop and place an AI system on the market. The latter—well—deploy such a system for professional use. Both can be a natural or a legal person and fall under the Act’s jurisdiction when acting in the EU or when the AI system’s output is used in the EU.

Each of these two categories carries obligations, but the burden isn’t shared equally, which is only logical. Picture a car: the driver just has to use it as intended and respect road safety rules. The car manufacturer, on the other hand, when designing the model, needs to adhere to far more rules regarding safety, ergonomics, sustainability and so on, to make sure downstream users are in a position to enjoy a safe and sound use of a potentially harmful product.

And so, the deployers of high-risk AI systems—our car drivers of sorts—first have to make proper use of the systems, in accordance with the instructions provided. They also need to assign people armed with the right set of skills and training to perform human oversight and monitor the AI systems’ operations.

In addition, when deployers use a system for credit scoring or for risk assessment and pricing in relation to people, which is common practice in the life and health insurance industry, they also need to perform a Fundamental Rights Impact Assessment (FRIA), aimed at mitigating possible harms of AI systems to individuals’ fundamental rights.

Most of the companies investing in AI systems in Luxembourg are likely to fall into the “deployer” category.

High-risk AI system providers—the AI Act’s version of car manufacturers—have obligations of their own, and considerably heavier ones. The first is to set up, document and maintain a risk and quality management system throughout the AI system’s whole life cycle. They also need to adopt appropriate data governance and management practices whenever data sets are involved, which is quite likely.

Moreover, high-risk AI system providers have to facilitate the regulatory authorities’ oversight. All of the following should be achieved by design:

  • Automatic logging of substantive changes and events that affect risk identification (see the sketch after this list);
  • Appropriate levels of robustness and cybersecurity;
  • Features enabling human monitoring.
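
By way of illustration, a provider could wire event logging directly into the system, as in the minimal Python sketch below. The event names and fields are hypothetical; the Act requires logging capabilities by design but does not prescribe any particular implementation.

```python
# Hypothetical sketch of "logging by design"; event names and fields are assumptions.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_system_audit")

def log_event(event_type: str, details: dict) -> None:
    """Record a substantive change or risk-relevant event with a UTC timestamp."""
    audit_log.info("%s | %s | %s",
                   datetime.now(timezone.utc).isoformat(), event_type, details)

# Two events that could plausibly affect risk identification:
log_event("model_updated", {"version": "2.1", "trigger": "retraining"})
log_event("anomalous_input", {"reason": "out-of-distribution", "score": 0.97})
```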

Lastly, they have to establish and share technical documentation to prove conformity, along with the relevant operating instructions, so that final users can comply with the regulation as well.

GPAI providers must meet a dedicated set of requirements:

  • Drawing up and maintaining the technical documentation on the model;
  • Making it available to providers of AI systems who plan to integrate the GPAI model in their own AI system;
  • Setting up a policy to respect EU copyright law and publishing a summary of the content used to train the GPAI; and
  • In the case of a GPAI with systemic risk, (i) performing model evaluation, (ii) assessing and mitigating possible systemic risks at the EU level, (iii) keeping track of, documenting, and reporting relevant information about serious incidents and possible corrective measures to address them and (iv) ensuring an adequate level of cybersecurity protection.

A “provider”, to be compliant, will need to invest much more than a “deployer” in terms of organisation, skills and resources. It’s therefore crucial for companies to quickly assess where they stand in this regard, and plan to fulfil their compliance requirements accordingly.

Besides, both providers and deployers should ensure that their staff dealing with AI have a sufficient level of AI literacy. In terms of data management, they both need to ensure input data quality and relevance, and promptly report any operational risks or incidents. When it comes to transparency and communication, they need to inform employees and comply with registration duties, prioritising clear communication about AI use.

AI systems that process personal data—which most of them are likely to do—also need to respect the GDPR rules on data protection. In some cases, these rules imply information obligations even prior to activating the system, along with a Data Protection Impact Assessment (DPIA) to identify and mitigate the data protection risks. For example, a bank screening its customers against a credit reference database, thus performing an extensive evaluation of a person, will need to conduct a DPIA.
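
As a rough illustration of that screening step, the sketch below flags when a processing operation would call for a DPIA under a simplified reading of the GDPR criteria. The trigger list and function name are our own simplification for the example, not the legal test itself.

```python
# Simplified illustration: the real DPIA test is a legal assessment, not a checklist.

DPIA_TRIGGERS = {
    "systematic_extensive_evaluation",   # e.g. credit scoring of natural persons
    "large_scale_special_category_data",
    "systematic_public_monitoring",
}

def needs_dpia(processing_characteristics: set) -> bool:
    """Return True if any simplified trigger applies to the processing."""
    return bool(DPIA_TRIGGERS & processing_characteristics)

# A bank screening customers against a credit reference database:
print(needs_dpia({"systematic_extensive_evaluation"}))  # -> True
```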

What’s in it for regulated entities?

As part of their Know Your Customer (KYC) process, and to avoid ending up financing illegal activities, banks collect data on their clients. Insurance companies constantly assess the risk associated with every policy. And by doing so, as mentioned, they are likely to give risk scores to a person. Risk mitigation is a notion that asset managers are also familiar with, as well as handling huge data sets.

These activities include quite a lot of tedious or repetitive administrative tasks. It’s precisely those that AI can help tackle with unparalleled efficiency. Banks, insurance companies, asset managers and the like will adopt AI tools sooner or later, if they haven’t done so already.

The sheer appeal of AI, combined with the very nature of their business, makes it a certainty that these companies—pretty much the entire financial sector, really—will fall under the AI Act’s scope. They should prepare accordingly. But how to proceed?

The EU AI Act is a principles-based text that sets high-level standards. It doesn’t provide specific rules industry by industry, nor does it go into the detail of operations, where the proverbial devil likes to hide.

That means there is no single way to achieve compliance, and companies can expect quite a bit of room to adapt their organisation and operationalise the compliance process. Freedom of action is a good thing for sure. It allows a high degree of fitness with the companies’ own needs and organisation, but it also gives way to mistakes—here’s your devil getting out of hiding.

As a consequence, every level of responsibility within any given company needs to take part in the sound adoption of AI systems. Considering the scale of the potential benefits of AI use on the one hand, and the changes required and risks involved on the other, it’s a strategic matter: one on which the Board typically has to take position. Hence, it will be the Board’s job to give strategic orientation and appoint the right people to make it a reality in the company.

Implementation then starts at C-suite level. It may be useful to create an office dedicated to AI—and to AI ethics in particular—much like the GDPR’s Data Protection Officer. This AI officer should conduct a gap analysis to assess the company’s needs related to AI systems, and allocate resources to AI design, monitoring and reporting.

Further down the corporate ladder, managers and staff will be responsible for organising operations. They will have to pay close attention to the type of AI (high-risk, General Purpose, or—perish the thought—prohibited AI) they run, while monitoring the level of risk associated.

AI risk monitoring isn’t an option

The EU legislator’s functional approach on the text’s key concepts has a major consequence, well worth mentioning. AI systems, risk levels, providers and deployers: none of these notions are set in stone. On the contrary, they are likely to change along with product developments or stakeholders’ choices, depending on the situation. As this risk status evolves up or down, so does the compliance burden.

A basic AI system can suddenly turn into a high-risk one as it gets more complex or starts carrying a systemic risk, thus triggering the obligation to set up a risk management programme and to automatically log events. It’s a bit like car tuning: the police will want to see some paperwork the moment you start planting jet engines in the trunk of your Renault 5 GT Turbo.
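
In practice, that monitoring can be as simple as re-running a classification check whenever a system’s characteristics change, as in the hedged sketch below. The flags and category names are crude assumptions standing in for the Act’s legal tests.

```python
# Crude illustration: the flags below stand in for the Act's legal criteria.

def classify(system: dict) -> str:
    """Return an illustrative category for an AI system description."""
    if system.get("prohibited_practice"):
        return "prohibited"
    if system.get("systemic_risk"):
        return "gpai_systemic_risk"
    if system.get("affects_health_safety_or_rights"):
        return "high_risk"
    return "minimal_risk"

system = {"affects_health_safety_or_rights": False}
print(classify(system))   # -> minimal_risk

# The same system repurposed, say, for credit scoring:
system["affects_health_safety_or_rights"] = True
print(classify(system))   # -> high_risk: risk management and logging duties now apply
```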

Paying close and constant attention to AI systems’ risk levels, as well as to the company’s status regarding these systems, is the only way to assess the degree of compliance required and make sure everyone is on the right side of the steering wheel.

All of this implies that companies will need to count on people with the relevant background in sufficient numbers: data scientists, data engineers, developers, to name a few. Basically, all profiles that are notoriously hard to source in the current business context. That may also involve significant investments in training to acquire, develop and maintain the right set of skills. Do you remember how tormented Leonard was about his upcoming AI training—and how relieved he felt reading this blog entry?

Governing the AI landscape: at EU or at Luxembourg level?

The European Commission (the EU’s main executive body) recently decided to create a European AI Office, in parallel with the AI Act. The AI Office will be tasked with providing guidance on the Act’s sound implementation. As the EU’s centre of AI expertise, it will help the Commission in its evaluation, control and sanctions duties across the AI landscape.

In dialogue with all of the stakeholders of the AI field—scientists, industry leaders, think tanks, or civil society representatives—the AI Office will provide knowledge, expertise and support to the authorities, both at European and national level, to foster well-informed decision making and set the grounds for a single European AI governance system.

However, this new Office’s operations shouldn’t affect national competent authorities’ powers and competences. This raises the question of a future Luxembourg AI supervisory authority.

The options are plentiful: Is there going to be a national equivalent of the AI Office that will control all things AI? Will the Institut luxembourgeois de la normalisation, de l’accréditation, de la sécurité et qualité des produits et services (ILNAS) take on the task? Or maybe the Commission Nationale de Protection des Données (CNPD), as part of its data protection mission? Or the Commission de Surveillance du Secteur Financier (CSSF), the Luxembourg financial sector regulator? Or will it be all of these authorities, each dealing with what belongs to their area of competency?

This question remains unanswered for the time being. Besides, it’s possible that the EU AI Office and Luxembourg’s future AI regulatory authority may have different readings of the AI Act or of its implementation. Until time and practice settle things, we may see some degree of friction between national and European authorities.

Following the GDPR’s entry into force, an oversight came to light: organisations had been left to navigate the regulation on their own, with little guidance on how to interpret it.

To address this, under the AI Act, the EU has engaged the European standardisation bodies: the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). These bodies have set up Joint Technical Committee 21 on Artificial Intelligence (JTC 21) to develop technical standards offering clear guidelines for organisations.

The International Organization for Standardization (ISO) has also set about this task, recently publishing its ISO/IEC 42001:2023 standard, specifically dedicated to the setup, maintenance and continuous improvement of Artificial Intelligence Management Systems. This initiative closely aligns with the European Union’s efforts to create a robust regulatory framework for AI.

To ensure coherence with EU policies, CEN and CENELEC will adopt ISO/IEC 42001 as a harmonised standard. This adoption will facilitate compliance with the AI Act, reinforcing the standard’s importance in the context of EU-wide AI regulation and the EU’s commitment to safe, transparent and accountable AI technologies.

Conclusion

AI systems have the potential to drive sweeping changes. In Google’s CEO Sundar Pichai’s own words, “AI is too important not to regulate, and too important not to regulate well.” However, reaching an agreement between 27 Member State representatives wasn’t exactly a picnic.

At times, the ambassadors had to deal with the opposition of major countries. France and Germany, for example, were suspected of trying to favour their interests by pushing for a minimal text that wouldn’t hamper their own national AI forerunners Mistral AI and Aleph Alpha.

Yet, the European Union tried to strike the best possible balance between preserving AI’s huge business potential and protecting the world’s biggest integrated market from its possible adverse developments. Only time will tell if the text will achieve its goals and to what extent, but the AI Act remains the world’s first comprehensive regulation on AI, a major milestone towards a wide adoption of Responsible AI.

As for businesses, the most advanced are already using or developing AI systems, and it’s only a matter of time before the most cautious do so as well. They all should take a long, hard look at the AI Act’s content to understand what is at stake for them in terms of risk management, reporting requirements and organisational change. Only then should they plan their AI journey accordingly.

By the sheer extent of its scope and the scale of the adaptations its requirements make necessary, the EU AI Act will induce changes on a strategic scale for many companies. Even those that barely engage with AI systems will need to ramp up their compliance response. Failing to fulfil their obligations in this regard means facing the prospect of dire, extremely dissuasive turnover-based fines.

Compliance with the EU AI Act should be the business leaders’ priority for the next six months if their company is even remotely involved with AI tools. In Luxembourg, in the EU, and potentially anywhere else in the world.

What we think

As the EU recognises financial services as potentially essential, the sector might soon fall into the AI Act’s high-risk category. This highlights the vital need for robust governance, effective risk management and human oversight. Far from being mere regulatory requirements, these are crucial for maintaining trust and safety in financial services.

Vojtech Volf, ICT Regulatory and Compliance Manager, PwC Luxembourg

Embracing the AI Act fosters trust and differentiates businesses in a crowded market. It encourages a strategic approach to AI deployment, where compliance enhances reputation and drives innovation. Prioritising ethical practices and risk management translates into tangible business value, positioning companies for future success in the evolving AI landscape.

Saharnaz Dilmaghani, PhD, Artificial Intelligence & Data Science Senior Associate, PwC Luxembourg
