In anticipation of the future European regulatory framework, the AI Act, there has been a proliferation of initiatives and labels promoting ethical and responsible artificial intelligence. A standardization process has also been launched to produce harmonized standards at a European level within the next two years.

How many times a day does AI influence your choices, or even make decisions for you? When an algorithm suggests a new song or series on a streaming platform, its impact is limited. But AI can literally change the course of your life when it comes to deciding on your studies through Parcoursup (the French admissions platform for the first year of higher education), or applying for a job or a loan.

Considering what is at stake, the trust we place in these algorithmic models becomes a key issue. According to the European Commission, a trustworthy AI must meet seven requirements, including model robustness, explainability, privacy, diversity and fairness. The Commission offers a self-assessment guide, called ALTAI (Assessment List for Trustworthy Artificial Intelligence), to check an AI’s compliance with these requirements.

Society’s embrace, or at least acceptance, of AI can only come through trust. According to an IFOP (French polling institute) study from December 2020, seventy-three percent of French people consider the development of trustworthy AI an important, even crucial, issue. Many companies’ business depends on it. Starting in mid-2025, with the implementation of the AI Act, they will benefit from a regulatory framework, just as the GDPR provides one for the protection of personal data.

The AI Act aims to categorize AI systems according to their level of risk. Applications and systems that pose an unacceptable risk, like the social credit system currently being implemented in China, will be banned. High-risk models, such as automatic resume screening tools, credit scoring and AI-assisted legal systems, will be regulated. The future European regulation will also set safeguards in areas such as self-driving vehicles, medical devices and biometric identification.
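To make this risk-based logic concrete, here is a minimal Python sketch of a tiered classification. The tiers and example use cases are taken from the article; the mapping itself is purely illustrative, since the AI Act assigns tiers through legal criteria rather than a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping only; real classification follows legal criteria.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "ai_assisted_justice": RiskTier.HIGH,
    "music_recommendation": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a named use case."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

for case in ("social_scoring", "credit_scoring", "music_recommendation"):
    print(f"{case}: {classify(case).value}")
```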

Planning for the implementation of the AI Act

According to Gwendal Bihan, CEO of Axionable, a consultancy firm specializing in sustainable AI, and vice-president of the Impact AI collective, which brings together over sixty companies (Microsoft, AXA, Orange, Deloitte…) to promote trustworthy AI, businesses would be wise to prepare for the new regulations as soon as possible. “They must start to document and track their AI in order to comply with the AI Act’s risk-based approach.”

In his view, the GDPR prepared mindsets, and companies want to avoid another slow start. “I sense better anticipation this time,” continues Gwendal Bihan. “We must avoid building up technical debt and having to retrofit models that were designed before the regulation.”

Indeed, the comparison with the GDPR comes naturally: as with personal data processing, the first step is to draw up an inventory of the AIs already in use within an organization. “It is easy enough to map models that were developed internally. Things get more complicated when AI hides inside off-the-shelf solutions used by human resources or financial and administrative departments.”
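As an illustration of what such an inventory might look like in practice, here is a hypothetical record schema in Python. Every field name is an assumption rather than a prescribed format; the point is to capture the distinction drawn above between internally developed models and AI embedded in vendor products.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organization's AI inventory (illustrative schema)."""
    name: str
    owner: str          # team accountable for the system
    purpose: str        # what decisions the system supports
    internal: bool      # built in-house vs. embedded in a vendor product
    risk_tier: str      # e.g., "high" for resume screening
    data_sources: list[str] = field(default_factory=list)
    last_reviewed: date | None = None

inventory = [
    AISystemRecord(
        name="resume-ranker",
        owner="HR analytics",
        purpose="pre-sort incoming job applications",
        internal=False,  # hidden inside a vendor HR suite
        risk_tier="high",
        data_sources=["vendor-supplied, undocumented"],
    ),
]

# Vendor-embedded, high-risk systems are the hardest to map,
# so flag them first for audit.
for record in inventory:
    if record.risk_tier == "high" and not record.internal:
        print(f"Audit priority: {record.name} (owner: {record.owner})")
```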

Once the mapping is done, the entity in charge of processing, namely the company, carries out a risk analysis. There should also be “a dedicated intermediary who has the means and authority to act”, the equivalent of the Data Protection Officer (DPO) under the GDPR. This may be the chief AI ethics officer, a title that is beginning to appear in the organization charts of some multinational corporations.

While awaiting the AI Act, businesses can rely on methodological frameworks to ensure the documentation, governance and traceability of their models. The Laboratoire national de métrologie et d’essais (LNE), a French organization that carries out product measurement and testing, offers a process certification for AI, while the firm GoodAlgo offers a certification label called ADEL. Likewise, the Labelia organization has launched a responsible and trustworthy AI label; after Axionable, MAIF (a French insurance company) was the second company to receive it.

AFNOR at work in France

After labels and certificates come standards. In France, AFNOR (the French standardization association) was tasked by the government in May 2021 with directing the standardization pillar of the Great Artificial Intelligence Challenge. This project is led by the General Secretariat for Investment (SGPI), which reports to the Prime Minister’s office, and is financed by the Future Investments Program (PIA) to the tune of 1.2 million euros.

The aim is to “create a trustworthy standard-setting environment that supports the tools and processes used to certify critical systems based on artificial intelligence.” “The goal of the AI standardization committee is to define and promote France’s stance on the subject, and to highlight French initiatives,” explains Louis Morilhat, the Great AI Challenge policy officer for AFNOR Group.

There are three levels to this standardization effort: the French level with AFNOR’s AI standardization committee, the European level with the European Committee for Standardization (CEN) and its CEN-CLC/JTC 21 technical committee on AI, and finally the international level, with the International Organization for Standardization (ISO) and the JTC 1/SC 42 committee.

To fuel debate on a national level, the Great AI Challenge is made up of three pillars. The first pillar, the Confiance.ai consortium, is technological and brings together manufacturers like Airbus, Thales, Renault, Air Liquide and Safran. The consortium outlines the methods and tools used to develop a trustworthy AI. The second pillar is dedicated to approval and testing. Finally, the standards pillar is represented by AFNOR.

The standards road map, released in April, covers six areas. One deals with the characteristics of a trustworthy AI (security, safety, explainability, robustness, transparency, fairness). Another focuses on risk analysis. Yet another centers on the means implemented to guarantee that AI systems remain controllable and that a human being can take over at any moment.
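That last controllability requirement can be pictured with a simple human-in-the-loop pattern. The sketch below is a generic illustration, not a mechanism defined by the road map; the confidence threshold and the escalation routine are assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float

def escalate_to_human(features: dict, suggestion: Decision) -> Decision:
    # Placeholder: in a real system, route the case to a review queue
    # and block the automated outcome until a person signs off.
    print(f"Escalating {features} (model suggested {suggestion.label!r})")
    return Decision(label="pending human review", confidence=1.0)

def decide_with_oversight(
    model: Callable[[dict], Decision],
    features: dict,
    threshold: float = 0.9,  # assumed cut-off, set by governance rules
) -> Decision:
    """Defer to a human reviewer whenever the model is not confident enough."""
    decision = model(features)
    if decision.confidence < threshold:
        return escalate_to_human(features, decision)  # human takes over
    return decision

# Toy usage: any callable returning a Decision will do.
toy_model = lambda f: Decision(label="approve", confidence=0.72)
print(decide_with_oversight(toy_model, {"applicant_id": 42}).label)
```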

“The goal is to bring the national strategy outlined by the road map as close as possible to the strategies set up at the European and global levels,” continues Caroline de Condé, head of the Great AI Challenge standardization project at AFNOR. This consultation process should lead to harmonized standards at the European level by October 31st, 2024. Companies that comply with these standards will benefit from a presumption of conformity with the future regulatory framework.

Ten major themes will run through the harmonized standards, including conformity assessment (that is, requirements covering certification bodies and audit mechanisms) and cybersecurity rules specific to AI. “France is currently behind two initiatives being pushed at the European level,” remarks Caroline de Condé. “The first one focuses on a unified approach to the characteristics of a trustworthy AI, and the second on the drafting of a catalog of risks.”

The CNIL (French Data Protection Authority), a future supervisory authority?

During this standard-setting process, consultation with the French business sector continues. Among the organizations concerned are startups, businesses, institutions, legal, academic and nonprofit players, but also collectives such as Hub France IA and France Digitale. “A platform to collect the thoughts and feedback of these players will be up and running by the end of September, early October,” points out Louis Morilhat.

Further down the road, once the AI Act comes into effect, the CNIL should become the national supervisory authority in charge of regulating AI systems, according to the recommendations of a French Council of State study published on August 30th.

To prepare for these future missions, the commission has, according to Usine Digitale, tested two measures for explaining algorithmic mechanisms, so as to avoid the “black box” effect. Moreover, the CNIL has already published an analysis grid that “allows organizations to assess for themselves the maturity of their AI systems with regard to the GDPR.”
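The article does not name the two measures the CNIL tested. As a generic example of how explainability techniques probe a “black box,” here is a sketch of permutation importance, a common model-agnostic method that estimates how strongly each input feature drives a model’s decisions.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Estimate each feature's importance by shuffling it and measuring
    how much the model's accuracy drops (a classic black-box probe)."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)  # accuracy on intact data
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's information
            drops.append(baseline - np.mean(predict(X_perm) == y))
        importances.append(np.mean(drops))
    return np.array(importances)
```

Features whose shuffling sharply degrades accuracy are the ones the model relies on most, which is precisely the kind of insight a supervisory authority would want surfaced.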
