As the draft regulation on artificial intelligence makes its way through the final stages of approval within the European institutions, Noshin Khan, Associate Director, Ethics & Compliance at OneTrust, shares her take on this landmark piece of legislation.

The proposal for a regulation on artificial intelligence, brought forward by the European Commission in April 2021, is entering the home straight. After the Council of the European Union announced its policy guidelines at the end of 2022, the draft regulation was amended and voted on by the European Parliament in plenary session in June 2023.

This was followed by the trilogue phase, involving negotiations between representatives of the European Parliament, the Council of the European Union and the European Commission. On 9 December 2023, an agreement was reached, paving the way for a final agreement in the short term (February or March 2024) and implementation in 2025.

"Above all, this text is reactive. By this I mean that it was written in response to the recent, and impressive, progress in the field, which has outpaced experts' initial forecasts by twenty or thirty years. Artificial intelligence has been around since the 1950s, but it has never really been regulated, although the GDPR does govern certain AI-related activities, mainly through its key principles of purpose, consent, transparency, individual rights, data protection by design and by default, and so on. However, this is the first time that an entire text has been devoted to regulating AI," says Noshin Khan, Associate Director, Ethics & Compliance at OneTrust.

A text based on a risk-level approach

"Given the risks that are now increasingly clearly identified (profiling based on health data, wrongful arrests of citizens following flawed facial recognition, the proliferation of biases), the AI Act was built on a risk-based approach. This allows us to consider anything that could be harmful to human beings in areas such as health and education. We must also be careful in lower-risk cases where self-certification is possible, because this opens the door to potential abuses: we all have biases inherent in our individuality, so how can we objectively self-certify?" asks Noshin Khan.

The main objectives of this European regulation on AI are to guarantee safety and fundamental rights in relation to the potential risks while encouraging innovation and adoption. "The problem is knowing how to protect while innovating. These rules are fairly restrictive and time-consuming, and there are many layers to them. To draw an analogy, the aim is to mirror citizens' rights from the real world in the virtual world. But closing your door in the real world is easier than closing off access to your private life in the digital world," says Noshin Khan.

Just as the segregation of certain populations has been fought throughout history, the challenge here is to prevent AI algorithms from reproducing any exclusion mechanism based on discriminatory criteria. "When you compare the situation in Europe with that in China, the United States and the rest of the world, the major economic players see these regulations as a drag on innovation, leaving the advantage to unregulated countries. But in time, I think the AI Act will become a global standard, just as the GDPR has influenced many countries around the world. Some countries, including India, are already working on their own AI regulations," adds Noshin Khan.

The CNIL as the regulatory authority in France?

What body will be designated to enforce the AI Act? For the moment, this remains unclear, but the CNIL seems likely to play this role in France. "The issue of personal data is very important in AI, so the CNIL is very likely to be appointed as the regulatory authority. We know that it has a department dedicated to AI, and it already publishes toolkits for businesses. That said, the CNIL runs the risk of missing the ethical aspects, which are not its specialty. That is why I believe a multidisciplinary authority (security, privacy, IT, ethics and compliance), separate from the CNIL, would be more appropriate," says Noshin Khan.

"Internally, when we wrote our employee policy on the use of artificial intelligence, several departments worked on it. The document was initially drafted by the ethics and compliance department, then reviewed by the DPO, the privacy department, security and IT. Our policy would not have been at all the same if it had been managed solely by Privacy," says Noshin Khan.

Inventory, third-party assessment and compliance

Given the challenges these new regulations pose, the question many companies are asking themselves is where to start. "The best practice we recommend is to carry out an inventory of your systems that use AI. This could be HR recruitment software, internal team collaboration software, chatbots, or IT development tools. This phase is important because you can only manage what is known and listed. You can then put in place the various controls required for your governance," says Noshin Khan.

OneTrust's "AI Governance" solution is a module that forms an integral part of the overall OneTrust platform. "This module incorporates all the current regulations, as it will the AI Act once it is definitively adopted, and adapts to these texts so as to reproduce their requirements in a highly operational way. The aim is to make compliance easier for users," explains Noshin Khan.

Another equally important aspect is monitoring the stakeholders a company works with on a daily basis. "The AI Act includes a strategic aspect relating to the control of third parties. For example, if you need to buy recruitment software for your HR team, you need to know how the data is collected and stored, whether it is secure, the ethical standard of the algorithm, and so on. The questionnaires for third parties are already available in the solution; you just need to send them to your business partners. Once you have received the responses, they fit naturally into your assessment and regulatory compliance processes," says Noshin Khan.
