Governance Intelligence's sister publication IR Magazine recently spoke with Dragos Tudorache, a member of the European Parliament, about the new EU Artificial Intelligence (AI) Act, which was finalized last month. Tudorache has been a key player in helping get the regulation across the finish line. Here, he explains what the regulation means and who it affects.
Why did you think AI needed to be regulated within the European space?
First of all, the impact was there. Unlike other technologies that change perhaps one sector or another, this one has a deep influence on economic and social relations. It cuts across society, across economies, across politics and human interaction, so it is a deeply transformative technology you cannot leave unchecked.
What does going unchecked mean? This is where it starts, with the realization that arrived quite early on: there was a growing conversation about risk. It is a technology with huge potential and huge benefits. But at the same time there are risks inherent to the way this technology is being used in certain contexts. How do you mitigate those risks? That is why we need regulation.
What steps do IROs need to consider when setting up an AI policy to ensure their firm is compliant with the upcoming regulation?
They need to look at their own business and see whether they are users of AI for HR or management purposes, or whether they are producers of AI. If they are simply users of AI, they need to see whether they are deploying it in one of the areas designated as high risk, such as human resources or education. If so, they will need to respect certain obligations linked to that.
But for any form of AI that does not touch on human interests and rights, no regulation is imposed.
The high-risk categories include AI used in banking, insurance, education, recruitment, safety features and critical infrastructure. So [you need to] look at the standards and see how you fit in with the significant-risk threshold in these categories. There is already a description in the text of what that means, and technical standards will also be adopted by the time the regulation comes into effect, so look at those descriptions, look at those standards, and decide where you fit in.
If you want to check, you can also go into a regulatory sandbox, which is a tool allowing businesses to explore and experiment with new products, under the supervision of a regulator, who will be able to advise.
Where does ESG fit into the categories?
Any kind of analytical tool that does not interfere with human rights or the values we are protecting in society, and is just an internal tool, is fine. An analytical tool simply tells you whether you believe you are socially compliant or sustainable; that has nothing to do with the areas the regulation targets. An AI-driven ESG tool is just a tool for doing an internal assessment.
Do you think this regulation affecting Europe will then impact other nations doing something similar?
Yes, I think so, for two main reasons. One of them is what is being called ‘the Brussels effect’. In other words, the European market is a big enough market, with sufficient consumers, that no company outside Europe is left untouched. Therefore, once you develop standards that are good for Europe, they tend to become the standards you apply in other markets and other jurisdictions as well.
There is also another reason, which is more structural, more content-based: it’s not only societies in Europe that have grown and become aware of the risks of AI. A similar debate is taking place in many other societies around the world. And it’s therefore more of a bottom-up request. Besides the Brussels effect and its consequences, there is also a genuine awareness and call for action in all these other jurisdictions.
How would it be enforced?
The regulation has very sharp teeth, with fines of up to 7 percent of global annual revenue or €40 mn ($44 mn), whichever is higher for the company. The regulation has a list of sanctions applicable to different types of infringement. Of course, you must give companies the chance to make corrections and adopt the changes but, if that doesn’t happen, then you take the product off the market and apply a penalty.
To read IR Magazine's latest playbook on AI, please click here. In the report, Erik Carlson, COO and CFO at Notified, talks about a recurring theme in discussions around the AI opportunity in IR: whether artificial intelligence will replace the IRO. ‘What I say is, No, AI is not going to replace you, but someone who can use AI better than you might,’ he notes.