Who is afraid of AI, of its hallucinations, its ability to lie, to sexually humiliate and, quite literally, to kill? Whether they run one of the giants developing AI or feed it ‘by hand’, whether they are cyber prosecutors, run a deep-tech start-up or lead an NGO fighting online sexual violence, these people have little in common. They may even find themselves on opposite sides in future litigation. But they share the same conviction that humans must remain in control of AI. Here is a brief, non-exhaustive overview of a mosaic of personalities working to ensure that AI remains humanity's best friend.

On 26 February, the influential Dutch NGO Offlimits, which fights against online sexual violence, took legal action in the Netherlands against X to stop Grok AI from allowing the creation of sexual deepfakes. The following day, 27 February, the US government announced that it was severing ties with Anthropic over the US military’s use of Claude. What do these two events have in common? The idea that humans must retain primacy over what AI does.

Human oversight on decisions made by AI

What was at stake on 27 February at the Pentagon was ultimately the choice of how AI may decide on the death of an adversary. As Tariq Krim brilliantly summarised in his Cybernetica newsletter, should a military officer be involved in the decision-making process before any lethal use of force by AI? Or is it sufficient to hold those responsible to account after the fact, in accordance with applicable laws? In both cases, the US military remains in charge of its decisions. But it is clear that making the full deployment of AI conditional on human oversight (even strictly military oversight) is an operational constraint that slows things down. If two AIs are deployed on the battlefield, one under human oversight and the other with human responsibility established only after the fact, the one acting autonomously is more agile. Dario Amodei, CEO of Anthropic, chose ‘in good conscience’ (in his words) to keep a human in the loop, and he lost the Pentagon contract. His competitor OpenAI made the opposite choice, believing that US law would be sufficiently protective, and won the contract.

Giving humans back control over their dignity

The day before in the Netherlands, on 26 February, Offlimits filed a lawsuit demanding that X stop Grok AI from creating sexual deepfakes, with a hearing scheduled for 12 March and a penalty of €100,000 per day sought. The summons is not a world first: the Paris public prosecutor’s office, headed by prosecutor Laure Beccuau, preceded it with a search of X’s premises on 3 February, followed by summonses for voluntary interviews with Elon Musk on 20 April, as well as other employees, including former CEO Linda Yaccarino, over the same offences. Regulators’ concerns are international: authorities in Australia, Canada, the United Kingdom, the European Union (the Irish regulator and the European Commission), as well as India and Malaysia, have opened investigations or are closely assessing the situation. What is at stake is the ability of AI to autonomously decide to infringe on the dignity of an individual. In Europe, the United States and elsewhere, it is not so much the legal basis that is lacking, but experience. Each country is experimenting, taking legal action that serves as a test case, action all the less noticeable because these procedures stretch over long periods and are not coordinated. Robbert Hoving, director of Offlimits, says that these “nudification” applications are a “slow-motion disaster”. A response is being organised, but as with any rescue operation, its apparent slowness is hard for victims to bear.

Making AI wiser with human expertise in all its diversity

What if, rather than suffering AI's errors and its well-known hallucinations, we worked to educate it at the source, feeding it scientific expertise as well as the nuances of different languages and cultures? This is the task that Akash Pugalia, Chief Digital Officer at TP, a global player in customer experience management spanning call-centre agents and content moderators, has set himself. His two weapons: ‘specialized global expertise’ and the ‘power of diversity’. The idea is simple: since AI feeds on human knowledge, we might as well give it the best, drawing on real experts from all over the world and from all social backgrounds. Whether the need is for domain experts (medical, finance, legal, STEM), multilingual and cultural specialists, evaluation and safety professionals, RLHF and reasoning validation, or synthetic data and red-teaming capabilities, the needs are specialised, immense and global. Akash Pugalia set out this vision in a statement of principle published in Forbes in the last days of 2025. So 2026 looks set to be the year when human intelligence is injected into AI on a massive scale. Not only will this mass of experts become an AI ‘training infrastructure’ in itself, but it will also help provide the ‘human oversight’ that the European AI Act requires for high-risk AI systems from August 2026 onwards. The code of conduct currently being developed will provide further guidance on detecting sexual and other types of deepfakes.

Giving humans back the keys to detecting artificial content

AI has already reached a level of plausibility, of credibility, that makes its output undetectable even to the trained eye; hence the need for safeguards such as Article 50 of the AI Act, which imposes transparency obligations on AI providers and deployers. By 2028 – which seems like an eternity given the pace at which new AI models are being deployed – content that is artificially generated or manipulated by AI will have to explicitly indicate its origin. This means that files of all kinds (images, videos, sounds, texts) will have to be marked and enriched with metadata making clear that they are artificial or manipulated. In France, two companies play a role: Keeex for metadata and Label4.ai for content tagging and forensic detection (estimating the proportion of generative AI involvement in a piece of content). These are the only two French expert entities to have contributed to the European code of conduct currently being developed, the first draft of which was published in mid-December. Anthony Level, co-founder of Label4.ai, asserts that there is still no structured ecosystem to support the development of this transparency industry, so necessary for trust. Hence his determination to hammer home the message that AI content must be watermarked today, ahead of upcoming European regulations. From California to India, from the EU to China and South Korea, the movement for transparency in AI-generated content is underway.

How will it all come together?

From Paris prosecutor Laure Beccuau and Offlimits director Robbert Hoving to CEO Dario Amodei, whose company Anthropic is valued at $380 billion, from French transparency start-up entrepreneur Anthony Level to Akash Pugalia, who is building a curated global network of tens of thousands of experts, there is no orchestrated synergy. Yet each of these people consciously and deliberately contributes their part to ensuring that humans retain their authentic voice and their ability to decide their future. Their strength, and their chance of success, may lie precisely in the fact that they do not know each other: they are simply salient elements, disparate in appearance but convergent in depth, expressing the “collective wisdom” that will save AI from itself.
