Questions about the metaverse. Episode 2. How can we regulate toxic behaviour?
With hate speech and aggressive behaviour, new immersive worlds are, unsurprisingly, reproducing real-world abuse. With their future at stake, platforms are implementing moderation strategies that combine human operators and artificial intelligence.
With pro-Nazi statements and the risk of paedophile predators on Roblox, cases of harassment on Fortnite, and accusations of virtual rape on Meta’s Horizon Worlds, the metaverse’s beginnings recall those of Web 2.0, when online free speech quickly needed to be regulated. To avoid becoming a new Wild West, immersive worlds will also have to step up their moderation strategies quickly.
The difference is that, unlike on social networks, it is not just written content that needs moderating. New virtual worlds are multi-dimensional, and not just because they are designed in 3D: platforms have to regulate not only the text and audio conversations being exchanged, but also the behaviour of avatars.
In the case of the sexual assault suffered by a researcher from NGO SumOfUs, a video shows a group of men gathered together, passing round a bottle of vodka before trapping the young woman in a room. With this type of abuse, the metaverse could quickly become an unsafe space for women and visible minorities, who are already the main victims of harassment on social networks.
Regulation: a question of survival for the metaverse
Given the business stakes of the metaverse, platforms are taking the threat very seriously. Luxury and sportswear brands such as Nike, Balenciaga and Gucci are already taking part, but they could leave just as quickly if their reputation is damaged by violent, sexist or hateful content.
Especially since the future of the metaverse looks promising. Today, Roblox, The Sandbox, Decentraland and Meta’s Horizon Worlds are mainly used by an audience of tech-savvy users accustomed to the codes of massively multiplayer online roleplaying games (MMORPGs). In the future, the metaverse will be used by anyone and everyone. Meta (formerly Facebook), which also owns Instagram and WhatsApp, wants its billions of members to switch to this new world.
But how can it manage the excesses of such a mass of individuals interacting with each other in real time? The mission is "practically impossible", according to Andrew Bosworth, Chief Technology Officer at Meta, in an internal memo revealed by the Financial Times. He nevertheless called harassment in the metaverse an "existential threat" to his company’s future.
For now, the platforms have implemented systems already in place in online games such as League of Legends, like the ability to report inappropriate behaviour. Under a graduated-response approach, the offender first receives a warning; a repeat offence results in a temporary or permanent ban.
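As a rough illustration, such a graduated-response ladder amounts to a very small piece of state-keeping. The Python sketch below is minimal and hypothetical; the number of steps, the 72-hour ban and the user identifier are illustrative assumptions, not any platform’s documented policy.

```python
from dataclasses import dataclass

# Illustrative escalation ladder: warning -> temporary ban -> permanent ban.
# The steps and the 72-hour duration are assumptions made for this sketch,
# not any platform's documented policy.
ESCALATION = [
    ("warning", None),
    ("temporary_ban_hours", 72),
    ("permanent_ban", None),
]

@dataclass
class ModerationRecord:
    user_id: str
    confirmed_offences: int = 0

    def sanction_for_new_offence(self):
        """Return the next sanction on the ladder after a confirmed report."""
        step = min(self.confirmed_offences, len(ESCALATION) - 1)
        self.confirmed_offences += 1
        return ESCALATION[step]

record = ModerationRecord("avatar-42")
print(record.sanction_for_new_offence())  # ('warning', None)
print(record.sanction_for_new_offence())  # ('temporary_ban_hours', 72)
print(record.sanction_for_new_offence())  # ('permanent_ban', None)
```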
Other, less conventional protection systems are available to users, such as the ability to mute an aggressive avatar, to isolate themselves in a safe zone, or to create a space bubble around their avatar to prevent others from invading their personal space.
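To make the last mechanism concrete, a personal-space bubble ultimately comes down to a distance check on avatar positions. The following Python sketch is purely illustrative; the 1.2-metre radius is an assumed value, not the setting used by any particular platform.

```python
import math

# Hypothetical personal-space bubble: another avatar closer than the radius
# counts as an intrusion, and the client can push it back or hide it.
# The 1.2 m radius is an illustrative assumption.
BUBBLE_RADIUS_M = 1.2

def violates_bubble(own_pos, other_pos, radius=BUBBLE_RADIUS_M):
    """True if another avatar has entered the user's personal-space bubble."""
    return math.dist(own_pos, other_pos) < radius

print(violates_bubble((0, 0, 0), (0.5, 0, 0)))  # True: inside the bubble
print(violates_bubble((0, 0, 0), (2.0, 0, 0)))  # False: respectful distance
```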
"Currently, the moderation effort lies chiefly with the victim of toxic online behaviour," says Stella Jacob, metaverse moderation consultant and the author of a professional thesis on the subject. "We are falling into the same trap as in the real world, where the victim of aggression has to remove themselves from the public space."
Disney-style moderation for Meta
Since these are private companies, the moderation strategy differs depending on each platform’s philosophy. In the same memo revealed by the Financial Times, Andrew Bosworth says he wants Meta to offer a level of safety comparable to Disney’s. The risk is that American tech giants impose their Puritan-tinged worldview, in which works of art are censored for nudity.
"Other platforms like The Sandbox take more of an online game-style philosophy, with a freer, more collaborative approach," says Hervé Rigault, General Manager at Netino by Webhelp, a content moderation agency for online communities.
"In a co-built world, the idea is to bank on self-regulation and education to protect the user experience. We need to find the right balance between one’s own freedom and the freedom of others. Antagonising the community is out of the question. For the moderation policy to be enforceable, members must agree with it."
The platform must clearly state its position and its rules of good behaviour so that users can participate with full awareness. As with the PEGI system for video games, a rating system should define categories based on age and on whether content is potentially violent or offensive.
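By way of illustration, such a rating could be little more than a mapping from declared content descriptors to a minimum age. The Python sketch below is a hypothetical example loosely inspired by PEGI-style labels; the descriptor names and age thresholds are assumptions, not the official PEGI grid.

```python
# Hypothetical mapping from content descriptors to a minimum recommended age,
# loosely inspired by PEGI-style labels. The values are assumptions made for
# this sketch, not the official PEGI classification.
AGE_BY_DESCRIPTOR = {
    "fear": 7,
    "bad_language": 12,
    "violence": 16,
    "gambling": 18,
    "sexual_content": 18,
}

def minimum_age(descriptors):
    """Derive a minimum recommended age from a world's declared descriptors."""
    return max((AGE_BY_DESCRIPTOR[d] for d in descriptors), default=3)

print(minimum_age({"bad_language", "fear"}))  # 12
print(minimum_age(set()))                     # 3 (suitable for all ages)
```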
Ambassadors to look after new members
For The Sandbox, Netino by Webhelp uses human moderation by training "ambassadors". In the onboarding phase, they remind new members of the rules of good behaviour and help resolve their technical issues. "An avatar’s first steps in an immersive world can be complicated," says Hervé Rigault. "Ambassadors from the game or the help desk can ensure that the user experience is safe and positive."
Moderation cannot rely on human effort alone, however. Moderators will be greatly assisted by artificial intelligence models like those offered by US companies The Hive and Spectrum Labs or French company Bodyguard.ai. The Nice-based start-up raised €9 million in March to adapt its online moderation solution to the metaverse, and has given itself five years to meet this technical challenge.
"In the metaverse, filtering keywords is not enough," says founder and CEO Charles Cohen. "Not only do you need to be able to moderate audio and video content in quasi-real time and in multiple languages, but also behaviour in context, so as to minimise the number of false positives that could hurt the user experience."
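To see why keyword filtering falls short, consider a deliberately naive filter. The toy Python example below (the word list and messages are invented for illustration) both flags harmless banter and misses harassment that uses no blocked word, exactly the kind of false positive and false negative Cohen is describing.

```python
# A deliberately naive keyword filter of the kind described as insufficient:
# it flags any message containing a blocked term, regardless of language,
# tone or who is speaking to whom. Word list and messages are illustrative.
BLOCKLIST = {"idiot", "shut up"}

def naive_flag(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

print(naive_flag("Oh shut up, you were amazing!"))   # True: false positive between friends
print(naive_flag("Nobody wants you here. Leave."))   # False: harassment missed entirely
```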
AI at every stage vs a community-based approach
Bodyguard.ai will progress in stages, starting with moderating audio content. Initially, speech-to-text technologies will convert audio into text for analysis. "The next step will be to identify the speaker’s intonation and intent," says Charles Cohen. "Are they speaking in a joking or sarcastic way? Saying ‘shut up’ to a friend or to a stranger isn’t the same thing."
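A minimal sketch of the pipeline described here, transcription followed by context-aware scoring, might look like the Python below. Everything in it is a toy stand-in (the scoring heuristic, the tone labels, the friendship flag); it is not Bodyguard.ai’s actual architecture or API.

```python
from dataclasses import dataclass

@dataclass
class VoiceClip:
    audio: bytes        # raw audio in a real system
    tone: str           # e.g. "joking", "neutral", "aggressive", from prosody analysis
    are_friends: bool   # relationship context between speaker and addressee

def transcribe(clip: VoiceClip) -> str:
    # Stand-in for a speech-to-text model; here the bytes already hold the text.
    return clip.audio.decode("utf-8")

def toxicity_score(text: str, tone: str, are_friends: bool) -> float:
    # Toy context-aware scoring: the same words weigh less between friends
    # or when the prosody suggests a joke.
    score = 0.9 if "shut up" in text.lower() else 0.1
    if are_friends:
        score -= 0.4
    if tone == "joking":
        score -= 0.3
    return max(score, 0.0)

banter = VoiceClip(b"oh shut up, that was brilliant", "joking", True)
attack = VoiceClip(b"shut up and get out", "aggressive", False)
for clip in (banter, attack):
    print(toxicity_score(transcribe(clip), clip.tone, clip.are_friends))
# Prints a low score for the banter and a high score for the attack.
```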
Bodyguard.ai will then tackle video content using machine learning technologies. "That means training the algorithms on a large amount of video data, with context on the elements present in the image." Finally, the company will model criminal behaviour by analysing an avatar’s arm and leg motions and its movements within the metaverse.
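As a final illustration, a behavioural signal can already be derived from nothing more than positional traces, for instance how long one avatar stays inside another’s personal space. The Python sketch below is purely hypothetical; real models would draw on far richer motion data (limb animation, gaze, interaction history).

```python
import math

# Hypothetical behavioural signal from movement traces: cumulative time one
# avatar spends inside another's personal space. Radius, sampling rate and
# the example trajectories are illustrative assumptions.
def proximity_seconds(trace_a, trace_b, radius=1.2, tick=0.5):
    """trace_a, trace_b: lists of (x, y, z) positions sampled every `tick` seconds."""
    inside = sum(1 for a, b in zip(trace_a, trace_b) if math.dist(a, b) < radius)
    return inside * tick

follower = [(0.0, 0, 0), (1.0, 0, 0), (2.0, 0, 0), (3.0, 0, 0)]
target   = [(0.5, 0, 0), (1.5, 0, 0), (2.5, 0, 0), (3.5, 0, 0)]
print(proximity_seconds(follower, target))  # 2.0 seconds of sustained close proximity
```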
For Stella Jacob, artificial intelligence can help cut down on the work of human moderators. She does note, however, that AI can itself be biased and convey racist or discriminatory stereotypes. She also raises the question of an executive, legislative or judicial power that could supersede that of the platforms: European Commissioner for Competition Margrethe Vestager has said she is prepared to regulate the metaverse.
Stella Jacob argues for a community-based approach, with moderation done by volunteer users who are independent of the platform, as is the case on Discord servers. "However, we must first weigh volunteers’ profiles, their background, their political orientations, etc."
The consultant is also interested in experiments under way in the field of so-called transformative or restorative justice. Riot Games, the publisher of League of Legends, an online game known for its toxic exchanges, set up a community court that decides on the punishments to hand down in cases of toxic online behaviour. The person being judged receives the log of the incriminating conversation as proof. Something for them to reflect on…