An increasing number of compliance professionals and experts are calling for the development of an “augmented compliance” approach that uses new technologies, including AI, as an integral part of its implementation. But is this approach risk-free?

In France, the Sapin II law gave new impetus to the concept of compliance by requiring large companies to take appropriate steps to guard against the risks of corruption. Companies with more than 500 employees and annual turnover of at least €100 million must adopt specific measures to meet this preventive obligation, such as drawing up a code of conduct and mapping corruption risks. But compliance encompasses much more than this single requirement.

More broadly, compliance refers to a process aimed at ensuring that companies comply with standards predetermined by the public authorities. Legal expert Marie-Anne Frison-Roche describes these as “global” and “monumental” goals. They include not only the fight against corruption and market transparency, but also the protection of the environment, workers, gender equality, and so on.

To comply as fully as possible with these public goals, companies are developing and deploying a range of standard-setting soft law instruments, such as codes of conduct, compliance programs, ethical charters, and whistleblowing procedures. Furthermore, an increasing number of companies are turning to new technologies such as AI to guarantee more effective compliance. But this is not without its dangers.

AI at the heart of compliance by design

In a speech she made in Berlin in 2017, Margrethe Vestager, European Commissioner for Competition, encouraged companies to comply with antitrust rules through “compliance by design.” She went on to explain: “This means that pricing algorithms need to be built in a way that doesn’t allow them to collude.” Since then, some legal experts have adopted the phrase as a way of describing the prospect of compliance implemented through “digital self-regulation” or by training artificial intelligence in the law.

As legal expert Cécile Granier explains in the collective work Compliance Tools (2021), “Thinking about compliance by design means incorporating one or more standards into the very structure of a computer program—such as an interface, platform, application, software, or artificial intelligence—at the time you create it. The rules to follow are integrated into the design of the object and are an integral part of it. The technology is therefore put at the service of compliance and becomes a support for its implementation.” Some compliance professionals and specialists are now looking to AI to provide this “support.”
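To make the idea concrete, here is a deliberately simplified Python sketch of a rule built into a program's structure at design time. All names (`CompliantPricer`, `min_margin`, `audit_log`) are hypothetical and not drawn from the article; a real pricing system would be far more involved.

```python
from dataclasses import dataclass, field

@dataclass
class CompliantPricer:
    """Toy 'compliance by design' example: the pricing rule is part of
    the program's structure, and every decision is logged for audit."""
    min_margin: float = 0.05
    audit_log: list = field(default_factory=list)

    def set_price(self, unit_cost: float) -> float:
        # By design, the method accepts only the firm's own cost:
        # a competitor's price simply cannot enter the computation,
        # which is the spirit of Vestager's non-colluding algorithms.
        price = round(unit_cost * (1 + self.min_margin), 2)
        self.audit_log.append({"unit_cost": unit_cost, "price": price})
        return price

pricer = CompliantPricer(min_margin=0.10)
print(pricer.set_price(100.0))   # 110.0, and the decision is logged
```

The point is structural: the rule is not a check bolted on afterwards but a constraint the program cannot express a violation of.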

In a recent opinion piece, Luc Julia, Renault’s Scientific Director, and Julien Biot-Hadar, a compliance expert, wrote: “Today’s compliance departments can draw on new technologies (AI, machine learning, graph databases, etc.) to detect signals or behaviors that could indicate wrongdoing or criminal misconduct, such as non-compliance with internal processes, violation of regulations, or, more specifically, the detection of patterns associated with intrusion, corruption, collusion, and fraud, the complexity and frequency of which we are all too often powerless to prevent.”

There are several arguments that support their position. Firstly, by taking into account an ever-increasing number of standards, including those relating to corporate social responsibility (CSR), compliance is becoming ever more complex and expensive to implement. For example, as reported in the recent True Cost of Financial Crime Compliance Study from LexisNexis, the increase in the number of financial crime regulations is the primary factor driving up compliance costs, according to compliance professionals surveyed from all over the world.

Worse still, 78% of them claim that the growing complexity of these regulations is hampering their businesses. In light of this, accelerating the development of compliance by design through the use of AI is essential to limit compliance costs and streamline implementation.

Secondly, AI provides compliance officers with a twofold benefit: they can refine their risk analysis and improve the quality of how they carry out their duties. For example, unsupervised machine learning can group information according to similarities, without any predetermined rules, and thereby detect unusual behavior that may indicate unexpected anomalies. As RSM partner Jocelyn Grignon recently explained about the role of AI in compliance, “thanks to AI, we can ask the right questions and draw the right conclusions.”
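As an illustration of the kind of unsupervised grouping described above, the sketch below clusters synthetic "transactions" with DBSCAN and treats points that fit no group as potential anomalies. The data, feature choices, and thresholds are invented for the example; a real compliance pipeline would use richer features and human review of every flag.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic transactions: (amount in euros, hour of day), mostly routine
routine = rng.normal(loc=[100.0, 14.0], scale=[20.0, 2.0], size=(200, 2))
unusual = np.array([[5000.0, 3.0]])   # one large late-night transfer
X = np.vstack([routine, unusual])

# Scale features so the amount does not dominate the distance metric
Xs = StandardScaler().fit_transform(X)

# DBSCAN groups similar points with no predetermined rules;
# points belonging to no group are labelled -1 (candidate anomalies)
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(Xs)
anomalies = np.where(labels == -1)[0]
print(anomalies)   # the injected outlier (index 200) is flagged
```

Nothing here encodes what "fraud" looks like in advance: the unusual transfer stands out only because it resembles none of its neighbours, which is exactly the appeal of unsupervised methods for spotting unexpected behavior.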

AI in compliance: a danger?

Giving AI too prominent a role in compliance could nonetheless bring with it major legal dangers, as law professor Marie-Anne Frison-Roche explained in 2022 at a conference at the Court of Cassation entitled Artificial Intelligence and Corporate Management. Those companies that use this technology as a “complete and infallible solution” to compliance will be expected to comply with all applicable regulations from the get-go.

This means that, in the event of a dispute, the burden of proof will rest entirely on those companies. Worse still, Frison-Roche believes that this may even create “irrebuttable presumptions of non-compliance” for companies, in other words, presumptions that can neither be contested nor refuted.

Furthermore, as this very cautious observer of compliance by design notes, “the author of the standards will assume that a company chose not to comply on a voluntary basis, since it had the technological means to comply completely and infallibly. This means that case law—and there is a growing tendency for case law to do this […]—will turn everything into an obligation of result. It will say: ‘you knew everything, you understood everything, you planned for everything, so everything is an obligation of result.’”

This is why Frison-Roche suggests thinking of compliance from a “substance-based” rather than a process-based definition. In her view, compliance is not a matter of obeying all regulations in advance, but simply of pursuing “monumental goals” (achieving gender equality, averting financial, climatic, digital, or health disasters, and so on) through an obligation not of results, but of means. According to this line of thinking, AI should be a tool used only to a limited extent for compliance: “it should be a ‘huge help’, without ever claiming to be a complete and infallible solution, because it’s humans who should take center stage, not machines.”

Luc Julia and Julien Biot-Hadar also believe that humans must continue to play a key role in compliance. Although they are impressed by the potential uses of AI for compliance, they write in their above-mentioned opinion piece that “this innovation does not sound […] the death knell for intuition, original approaches, or simply the wealth of experience that means humans are sometimes more effective than digital automation at detecting fraudulent situations or blocking suspicious transactions.” In short, compliance officers can rest assured that AI will not easily replace them.
