Following X's lead, Meta has decided to discontinue the use of human fact-checkers, instead relying on AI and user self-regulation to moderate content on Facebook and Instagram. Is this a return to basics or a step backward? Opinions are divided.

On January 7, in a shocking announcement, Mark Zuckerberg declared the end of fact-checking policies on Facebook and Instagram, twice invoking a “return to our roots.” For the Meta CEO, “restoring free speech on [their] platforms” means ending the partnerships with American media outlets that were responsible for fact verification.

“We’ve reached a point where there are too many errors and too much censorship,” argues Mark Zuckerberg, citing the political agenda of “traditional media.” The end of fact-checking comes alongside the relocation of Meta’s moderation teams from California, seen as a progressive state, to more conservative Texas.

In the absence of human verifiers, Meta will continue to use artificial intelligence to automatically moderate content that is violent, terrorist, or depicts child sexual abuse. For other questionable content, the American giant is relying on a “community notes” system, similar to the one implemented by Elon Musk’s X platform.

On X, users with contributor status can add notes to any post they deem potentially misleading. The note is published if considered helpful by a sufficient number of contributors with differing viewpoints, as explained by X’s help center. Meanwhile, the “problematic” content remains online.

A Return to the Wild West?

The “return to roots” mentioned by Mark Zuckerberg refers to the early days of social networks—Facebook was launched in February 2004—when content moderation relied primarily on its members. It resembles a return to the Wild West, where might makes right and the loudest, most active profiles carry the most weight.

The timing of Zuckerberg’s announcement, just days before Donald Trump’s inauguration, can be read as a pledge of allegiance. With its macho rhetoric, it aligns the Meta CEO with the reactionary values of the re-elected president and his “friend” Elon Musk.

Although Meta’s decision does not (yet) concern Europe, we will inevitably feel its effects. “The internet has no borders, and content travels at the speed of light from one side of the Atlantic to the other,” recalls Laura-Blu Mauss, general coordinator of the French NGO Respect Zone. “There will inevitably be repercussions here.”

The Trust & Safety Forum, to be held on April 1st and 2nd alongside the InCyber Forum in Lille, will provide an opportunity to bring together various stakeholders. “All voices must be heard to assess the benefits and risks of ending fact-checking,” says Jean-Christophe Le Toquin, co-founder of this international event dedicated to online security and trust issues.

As a former president of Point de Contact, an association that allows internet users to report potentially illegal content such as child exploitation, hate speech, or terrorist content, Jean-Christophe Le Toquin knows from experience that developing a reporting culture is challenging.

“Within a community, the majority of members don’t dare to speak out due to fear, ignorance, or lack of technical skills,” he observes. “As always, it’s a handful of individuals who make their voices heard the most. How can we balance the opinions of this hyperactive and well-trained group on divisive topics like gender issues or immigration?”

The European DSA as a Safeguard

In response to these potential risks and in the name of combating misinformation, an open letter signed by leaders of Respect Zone, Point de Contact, and Internet Sans Crainte calls on Europe to “defend our fundamental freedoms.” The Old Continent has a solid regulatory framework for this purpose.

Enacted a year ago, the Digital Services Act (DSA) requires platforms to be more transparent about their content moderation methods. This has compelled Meta to conduct an impact assessment following the termination of its fact-checking program. Among other safeguards, the DSA requires platforms to cooperate with “trusted flaggers,” such as NGOs, associations, or professional organizations whose reports are prioritized.

Faced with these obligations, it is easy to see why Elon Musk and Mark Zuckerberg denounce what they consider excessive regulation in Europe. “While social networks aim to operate uniformly worldwide, we may end up with a very different American Facebook compared to Facebook Europe,” anticipates Jean-Christophe Le Toquin.

“The business model of social networks relies primarily on advertising, and their algorithms favor divisive content that generates the most interactions,” notes Laura-Blu Mauss. With the “community notes” system promoted by X and Meta, social networks, in her view, attempt to shift responsibility onto the user. “According to them, they merely provide a service to their users without acknowledging that they influence the editorial line with their algorithms.”

As some users leave X and Meta, Laura-Blu Mauss laments that the alternatives available are American, in the absence of a major European social network. “Freedom of expression as we understand it in Europe is not the same as that permitted by the American First Amendment.”

Mobilizing Civil Society

“Haters are organized and precise, while the rest of the world is not,” continues Laura-Blu Mauss. “Combating hateful content requires significant mobilization to remain consistently responsive.” Without replacing regulators, she believes that civil society and specialized associations have a growing role to play in response to changing moderation policies on platforms.

Respect Zone, celebrating its ten years of preventing online violence, intends to mobilize at its level. Without having the status of a “trusted flagger,” the NGO will continue to identify content inciting hatred or promoting sexist, racist, or anti-LGBTQI+ discrimination. “However, this demands significant resources for associations and extensive mobilization of volunteers and experts.”

Among other tools, Respect Zone offers internet stakeholders, as well as businesses and local communities, a charter dedicated to moderating speech in digital spaces. “It promotes responsible and respectful freedom of expression, in accordance with the law,” notes Laura-Blu Mauss.

Reviving Public Debate

Olivier Babeau offers a different perspective, which he elaborated on in a column published in Le Figaro in late November. The president and founder of the Institut Sapiens, a think tank, explains why he chooses to remain on the social network X despite the massive exodus of users and media. These departures, he argues, reflect a refusal of public debate and the confrontation of ideas.

While he acknowledges the issues of polarized debates, “fake news,” and online excesses, he finds Meta’s previous fact-checking system unsatisfactory. “This strict control of exchanges leads to sanitizing the debate or being weaponized by one side. Any expression that deviates from the line is violently attacked, and its authors are ostracized.”

Moreover, fact-checkers can consciously or unconsciously express viewpoints, and when they stick to factual grounds, “numbers can contradict other numbers.” Quoting the Roman poet Juvenal, “Who will guard the guards themselves?” he questions, “Who fact-checks the fact-checkers?”

Since the beginning, social networks have created informational chaos, “but it’s where opinions are now formed, not through the reading of a few influential media outlets as in the past.” And if the community system isn’t perfect, “it’s quite powerful.” “People will say false things or nonsense, but that’s part of freedom of expression. Alternative opinions shouldn’t be taken as an attack.”

To successfully develop debate, according to Olivier Babeau, it’s necessary to strengthen general culture and develop critical thinking in schools. The study of epistemology notably helps distinguish between a viewpoint and scientific knowledge.

In his view, the greatest risk doesn’t come from free expression but from the proliferation of fake profiles enabled by AI. “As these avatars easily pass the Turing test, we need a system to ensure a user is talking to another human.”

Furthermore, rather than complete anonymity, he advocates pseudonymity. This would allow authorities to identify a user in case of legal violations, while enabling professions bound by confidentiality, such as the military, or simply employees speaking about their employer, to express themselves freely.
