The role played by social media in this summer’s street violence, from the riots in France following the death of young Nahel to the incitement to loot stores in the UK, is yet another example of how these platforms can be misused. Social media sites are no longer just a place for personal views but an extension of public space into the cyberworld, and their users must therefore be protected. This is especially important now that new vulnerabilities have surfaced on Meta, Twitter, and Snapchat, including the disclosure of intimate material (“revenge porn”) and of personal details (doxxing).
The question of how to regulate social media then arises. Regulation means ensuring that measures can be taken to control the risks that a particular activity generates. In May 2018, Emmanuel Macron and Mark Zuckerberg joined forces to set out a regulatory policy, which was published in a report a year later.
From self-regulation to co-regulation
The report’s authors note, first, that the moderation measures voluntarily implemented by social media companies are inadequate and, second, that the results obtained are opaque. Nevertheless, the platforms themselves are best placed to implement a regulatory policy: only they have the technical capability to spot problematic posts at scale or to detect anomalies globally.
The role of public authorities is to make social media more accountable, by supervising their moderation efforts. This co-regulatory approach requires a dialogue between the two parties, focusing on ways to take action and make adjustments.
Regulating social media is nevertheless difficult. In its initial assessment of how the 2018 anti-fake news law is working, ARCOM (a merger of the French Higher Audiovisual Council (CSA) and the High Authority for the Distribution of Works and the Protection of Rights on the Internet (HADOPI)) noted that the social media sites surveyed, including Meta and Twitter (but not TikTok), had been cooperative, but that it was unclear from the information provided what measures they were using to curb fake news. The authority also pointed out that the very definition of fake news varied from one social media company to another. This unfavorable balance of power, due to a lack of digital sovereignty, also applies to the EU.
European regulatory policy is limited to a code of conduct agreed in 2016 and a European Commission communication on tackling illegal content online from September 2017, followed by recommendations on the same subject in 2018. This lack of binding rules stems from the 2000 European e-commerce directive, which stipulates that an information society service provider is not liable for the information transmitted over its network. Furthermore, the digital services these companies provide may not be restricted except under conditions that are extremely onerous for the authorities to meet.
The upshot: in February 2019, the European Commission estimated that 72% of flagged hate content was eventually removed; by 2022, the figure had fallen to just 63.5%.
Social media: the EU on the front line
A ruling by the Court of Justice of the European Union (CJEU) put an end to this situation in October 2019. The case involved a dispute between Facebook and an Austrian politician who wanted an abusive post about her removed. Although the Austrian courts, up to the Supreme Court, ruled in her favor, the CJEU was asked whether forcing Facebook to delete the post was compatible with the 2000 directive. The court ruled in favor of the plaintiff.
The social media company was required not only to delete the post but also any content deemed “identical” to it, and the decision applies worldwide: the illegal content must no longer be visible to users, regardless of the country from which they log on to Facebook. The ruling thus opened the way for EU legislation to update the now-obsolete 2000 directive.
The first strong measure to regulate social media is the April 29, 2021 regulation on combating the dissemination of terrorist content online. The definition of terrorist content is unambiguous: content that incites the commission of terrorist acts, glorifies such acts, or provides instructions for making weapons or explosive devices. In each member state, a competent authority issues removal orders (in France, the Central Office for Combating Information and Communication Technology-Related Crime (OCLCTIC)).
Social media regulation also comes in the form of the Digital Services Act (DSA). Adopted in October 2022 and applicable to the largest platforms since August 2023, the DSA requires social media companies whose traffic exceeds a certain threshold to assess the systemic risks they create and to publish “reasonable, effective, and proportionate” measures to mitigate them. Their algorithms must also be audited by specialists from the European Centre for Algorithmic Transparency (ECAT); these audits aim to detect any bias in the algorithms and to ensure they are subject to adequate human oversight.
A complaints handling system must clearly display the progress moderators have made in dealing with a report. Lastly, fines of up to 6% of worldwide annual revenue will be imposed on platforms that fail to comply.
Measures already applied in France
In France, the Avia Law of 2020, most of which was struck down by the Constitutional Council, led to the creation of the French Online Hate Crime Observatory, which documents online hate speech and analyzes how it spreads. A national unit for combating online hate crime has also been set up: this new judicial unit brings together judges specializing in digital offenses such as cyberbullying. A decree of August 2020 created PEReN, a center of expertise for digital platform regulation, tasked with providing technical tools for inspecting social media sites and proving the existence of anomalies such as algorithmic bias. Furthermore, parts of the DSA were brought into force ahead of schedule by the law of August 24, 2021, Article 42 of which makes the transparency measures set out in the European regulation mandatory.
The bill “to secure and regulate cyberspace” is intended to bring French law fully into line with the DSA. Although the bill was introduced in May 2023, the drafting of its content was disrupted by the riots, which spread via social media sites such as TikTok and Telegram, prompting a working group meeting on July 12, 2023.
The bill includes the banning of cyberbullying offenders from social media as an additional control measure. Social media companies would be responsible for enforcing this measure, and failure to do so would result in a fine. Passed by the National Assembly in October 2023, the bill will now be examined by a joint committee.
These measures are a positive step toward controlling the risks posed by social media. Implementing them remains a challenge, however, not only because of the amount of data that needs to be monitored and analyzed, but also because of doubts about the willingness or ability of social media companies to comply. While some of them, such as TikTok and Meta, have stated they will comply with the DSA, others will have to be stringently monitored, including Telegram, whose founders refuse to collaborate with public authorities, and X (formerly Twitter), which has drastically reduced its pool of moderators.
The recent Hamas terrorist attack was widely relayed on social media. Yet Elon Musk, Twitter’s new owner, stood idly by, prompting the European Commission to send him a letter reminding him of his obligations. Regulating social media is a task that requires constant vigilance on the part of public authorities.