As the AI Act may take time to fully come into force, experts are wondering whether this European law—intended as an ethical safeguard for artificial intelligence—could quickly become obsolete given the staggering speed at which forgers are advancing. This was one of the major themes of the Forum INCYBER 2026 in Lille.

“It now takes just five seconds of audio, taken from YouTube or any other recording, to copy a human voice convincingly and deceive any interlocutor,” says Jérôme Nevicato, a citizen reserve member of the gendarmerie (ComCyberMI) in charge of monitoring AI-related threats. Anything that can serve as evidence in court can now easily be forged by ordinary individuals.

“All the tools are available as open source, which has led to an explosion in the number of deepfakes and a drastic reduction in the complexity curve,” adds Adel Mebarki, CEO of the startup Foresight Data Agency, which specializes in analyzing digital phenomena. “There are indeed many standards and tools to identify falsifications and these deepfakes, but over time, none withstand misuse,” continues Mr. Nevicato. “This accumulation could ultimately lead to powerlessness.”

Especially “high-risk” AI

Front and center, therefore, is the AI Act, the law intended to regulate the use of artificial intelligence in Europe. Adopted a little over two years ago, the law could see its implementation delayed: the European Commission wants to avoid overly burdensome requirements that could weigh on companies. “It is mainly providers of ‘high-risk’ AI that are targeted,” explains Garance Mathias, a lawyer specializing in digital law, “when artificial intelligence constitutes a safety component or is integrated into a product subject to the European product safety framework (toys, elevators, medical devices, etc.).”

The “liar’s dividend”

We had been warned about fabricated images. We had not anticipated the reverse: the moment when even authentic images would lose their value as evidence. This is what researchers at Yale University have called the “liar’s dividend.” According to them, in some cases, crying “fake news” makes it easier to weather a scandal than remaining silent. It is now easier for a liar to call into question facts that are nevertheless indisputable.

A sign that not everything is permitted: the recent investigations launched in Europe against Grok, the AI tool developed by Elon Musk’s platform X, which have led to the American billionaire being implicated by the French justice system. “An investigation is underway concerning the generation and dissemination of non-consensual sexual content (deepfakes) and attacks targeting women and minors,” confirms Mr. Nevicato. “In absolute terms, even when an image is entirely generated by AI, with a synthetic face, it constitutes an offense as soon as it depicts minors in a pornographic context.”

A trillion-dollar market

“Today, the challenge is no longer just to distinguish human content from AI-generated content,” confirms Dr. Emilia Tantar of the Luxembourg House of Cybersecurity. “We must now consider more subtle and technical risks, such as universal adversarial noise (small perturbations introduced into the weaknesses of AI models, editor’s note), for example just a few pixels imperceptible in input data but capable of significantly disrupting systems.” The stakes are enormous: estimated at $244 billion last year, the artificial intelligence market could reach nearly $1 trillion by 2031.
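Dr. Tantar’s point about a few imperceptible pixels can be illustrated with a deliberately simplified sketch. The toy linear “detector” below is purely hypothetical (it is not any real detection system, and the weights and inputs are random): nudging every pixel by a tiny amount in the direction that works against the model’s weights is enough to flip its decision, even though no single pixel changes noticeably.

```python
import numpy as np

# Toy linear "detector": score = w . x, positive score => "genuine".
# Entirely illustrative -- random weights, random input, no real model.
rng = np.random.default_rng(42)
w = rng.normal(size=784)             # model weights (a 28x28 image, flattened)
x = rng.uniform(0.3, 0.7, size=784)  # a benign input "image"

def predict(img):
    return "genuine" if w @ img > 0 else "forged"

score = w @ x
# Adversarial step: move each pixel by epsilon against the sign of its
# weight. An epsilon just past |score| / sum(|w|) is guaranteed to flip
# the sign of the score, while each pixel shifts only slightly.
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print(predict(x), "->", predict(x_adv))          # the two decisions differ
print(f"max per-pixel change: {epsilon:.4f}")    # a tiny fraction of [0, 1]
```

The arithmetic behind the flip: the perturbed score is `score - sign(score) * epsilon * sum(|w|)`, which with this choice of epsilon equals `-0.1 * score`, so the sign always reverses. Real attacks (FGSM and its descendants) use the model’s gradient in the same way but against nonlinear networks.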

“Compliance is becoming a strategic lever rather than just a constraint,” concludes Quentin Cozette, a cybersecurity and AI governance expert, “especially in a landscape where multiple regulations coexist, such as NIS2 and the AI Act.” Rapid compliance could therefore ultimately bring advantages for companies, particularly in terms of their image with clients.

A legislative straitjacket

Beyond the AI Act, other initiatives exist, such as the Eufactcheck.eu program, set up by the European Journalism Training Association to debunk fake news related to politics and European issues. “This example illustrates the need for collaborative verification and an official, normative reference at the European level,” continues Dr. Tantar, “similar to the Académie française’s verification of dictionaries.” But could this announced legislative framework hinder innovation in AI? “Absolutely not,” replies Jérôme de Mercey. “It’s a myth! Thanks to advances in car brakes, we now drive faster and more safely.”

It remains to be seen who will be responsible for AI compliance within companies. “The DPO (data protection officer) and the CISO (chief information security officer) will be required to work together on a single reference document,” predicts Jérôme de Mercey, co-founder of the software publisher Dastra and a former member of the CNIL. “This is a legal matter, and the CISO will need clear guidelines! We must keep the same methodology as for the GDPR.”

An endless loop

“The problem is that we are consulted after an attack,” emphasizes Mr. Mebarki. “We are more often involved in reaction than in prevention, whereas we should instead identify weak signals that precede an operation as early as possible in order to assess the response. We identify ‘patterns’ of deepfakes, but they evolve at great speed—it’s an endless loop.”
