In April 2021, the European Commission published its draft regulation on artificial intelligence (AI). With this text, the Commission reaffirms AI’s strategic importance for Europe while establishing a common, ethics-based vision aimed at mitigating the risks inherent in these technologies.
On February 23, 2022, the European Commission presented its proposal for a directive on corporate sustainability “due diligence”. Establishing a binding European corporate duty of care is a significant step towards a more sustainable European economic ecosystem, one more respectful of human rights, social rights and the environment across global value chains.
On September 1, 2022, the Waserman Law came into force in France, transposing into national law European Directive 2019/1937 of October 23, 2019, whose objective is to harmonize whistleblower protection across the EU. The transposition was carried out by amending the Sapin II law; the implementing decree was published on October 3, 2022.
These three texts are just a few examples of the many regulations that companies face around the world, especially in Europe, and of the pressure they are under in terms of compliance and ethics. “Companies are facing a flood of regulation. Technology is helping them to be compliant. It’s not about acquiring technology tools for the sake of it; it’s about being smart in choosing the most relevant solutions to cover risks and legal requirements, but also best practices,” says Noshin Khan, senior compliance counsel at OneTrust.
The younger generation: a target to be addressed with creativity
In companies and organizations, younger employees, i.e. those under 30, may not be the most aware of compliance and ethics issues. This is a real challenge for companies, especially in the technology sector, given the high proportion of these employees in their workforce. Engaging the younger generation on these topics requires innovative and attractive outreach strategies.
“If you need to use TikTok to reach your audience, you should go for it. The goal is to deliver the message where the young employees are. Moreover, we must keep in mind that the commitment of these employees is key, they are our eyes and ears. If they feel they belong to a group, to a ‘family’, young employees will get involved in projects related to compliance and ethics,” adds Noshin Khan.
As a result, Noshin Khan has developed a certification program called “Trust Troopers”. Inspired by the “Hunger Games” films, the program offers a full course made up of 20-minute training sessions, along with short videos and quizzes accessible on a smartphone. “This course allows everyone to take on the subject at their own pace. It consists of mandatory modules, but also a series of books and films we recommend, such as Philadelphia, Scandal, Worth, The Tinder Swindler, etc.,” notes Noshin Khan.
Building trust is a prerequisite for deploying compliance and ethics projects in companies. “Compliance and ethics departments can implement every conceivable mechanism, but if everyone is not on board, their task will be even more complex. Conversely, when an employee comes to you and asks if it’s okay to give a customer an Amazon card because he or she has a doubt, then you know the message has been delivered and understood,” says Noshin Khan.
The ethics of artificial intelligence algorithms: a sensitive subject
Another concern for companies is the very broad definition of artificial intelligence in the draft European “Artificial Intelligence Act” regulation. “Under this text, even an Excel spreadsheet could be considered AI, which would affect every company. In my opinion, we should focus on companies specializing in the most complex algorithms, such as machine learning or deep learning, for which pure AI is the core business. If the target is too broad, the real risks will fall through the cracks,” suggests Noshin Khan.
To address the sensitive topic of ethics in artificial intelligence algorithms, it is advisable to create a cross-functional committee spanning all of the company’s activities. Such a body makes it possible to maintain an overall view of the issues, to standardize processes across the entire organization, and to considerably reduce, or even eliminate, bias and discrimination.
“In the future, companies will also have to prove that they have taken into account all the issues related to artificial intelligence and implemented all the necessary measures,” adds Noshin Khan. Depending on the level of risk associated with the AI, some companies may be able to self-certify on the basis of in-house audits and reports provided by a specialized compliance software solution.
Another tip for complying with the legislation is to retain control over internally developed artificial intelligence algorithms at all times. In concrete terms, this means understanding, at any given moment, how the algorithms work and how they arrive at the results they propose.
“Artificial intelligence is already 25 years ahead of what was expected in 2020. It provides the results users expect, but they are sometimes no longer able to explain how the algorithm got there. They are faced with what we call ‘black boxes’,” adds Noshin Khan.
The proposed European AI regulation is much stricter than its counterparts elsewhere in the world
Faced with these black boxes, developers are forced to interpret the results, assuming that the artificial intelligence proceeded in a particular way. This is a gap the draft European regulation seeks to close by requiring that the workings of algorithms be explained.
“Both China and the United States are much less concerned than Europe about these aspects. The European text is much stricter than any other in the world, which could potentially harm us and cause us to fall behind. But it could also become an international reference that most countries end up aligning with, as happened with the GDPR,” concludes Noshin Khan.
In any case, whether the issue is artificial intelligence bias, whistleblower protection or corporate “due diligence”, complying with all these regulations and upholding core ethical principles requires substantial human and organizational resources, as well as technological tools. This amounts to a genuine corporate strategy, one that must be championed by company directors themselves.