As the impact of online platforms on our digital lives is increasingly questioned across Europe, Trust & Safety remains a field still under construction. Shani Benoualid, ambassador of the Trust & Safety Forum, analyses the strengths, blind spots and operational challenges facing digital actors, as well as the concrete role that young people are expected to play.

The Trust & Safety Forum highlights a field that is still emerging in Europe. How would you describe this domain and the challenges it raises today?

Trust & Safety encompasses all actions aimed at making digital environments safer: preventing abuse; combating illegal content, whether it involves child sexual abuse, non-consensual sharing of intimate images, sextortion, hate speech, discrimination, harassment or terrorist content; better protecting minors from inappropriate content; and safeguarding our democracies against disinformation and manipulation attempts. It is a proactive approach that integrates protection and risk-mitigation mechanisms from the design stage of online services.

It is still a young field in Europe, at the crossroads of technical, legal and societal questions, mobilising very different actors: platforms, public authorities and regulators, associations, researchers, lawyers, engineers… The challenge now is to structure it as a real discipline, with shared methods, clear responsibilities and a collective ability to respond to digital risks. Europe has set an ambitious framework with the Digital Services Act; the task now is to turn this ambition into operational practices and sustainable cooperation among all the actors involved.

How can we explain that Europe has such an advanced regulatory framework (DSA), while its Trust & Safety ecosystem remains relatively unstructured?

Because Europe created the regulatory framework before developing the operational capacity to implement it. The DSA sets ambitious obligations, especially for very large platforms, including systemic risk assessments, due diligence in handling manifestly illegal content, and heightened transparency, particularly regarding recommendation systems.

But the European Trust & Safety ecosystem is not yet built on a solid professional infrastructure: there is no recognised professional body, no competency standards, no network equivalent to the Trust & Safety Professional Association (TSPA), the international association that brings together practitioners and structures their methods, nor to an initiative like the Digital Trust & Safety Partnership (DTSP), which gathers companies around shared principles and assessments.

Without this foundation, public, private and nonprofit actors often operate in silos, without a shared reference framework or coordination mechanisms. This gap slows down effective implementation of the DSA and limits the sector’s ability to build expertise. Over the coming years, the priority may not be to produce more rules, but to structure the European Trust & Safety field. This is precisely the ambition of the Trust & Safety Forum: to offer a space for dialogue, networking and expertise-sharing among actors who have often worked separately until now.

Associations have become key players in Trust & Safety. In your view, what capacities and support do they still lack to fully assume these responsibilities, particularly regarding reporting?

Associations play a central role in the European ecosystem: they identify and report illegal content, support victims, run awareness and digital education campaigns, monitor emerging phenomena, and act as mediators and prevention actors. It is a young and very dynamic ecosystem, where new actors regularly emerge and where levels of structuring vary widely from one organisation to another.

Within this landscape, the reporting dimension requires particular attention. Under the DSA, trusted flaggers act as genuine sentinels and play a key role in the effective enforcement of the regulation. But to assume this responsibility, they must have the necessary capacity: trained teams, appropriate and robust tools, aligned methodologies, solid command of the applicable legal frameworks, and of course smoother cooperation with platforms.

These missions also require sustainable funding. Today, too many associations still rely on fragile models. Professionalising and stabilising this essential link means strengthening the entire European Trust & Safety chain.

Faced with the rise of generative AI, what structural changes are necessary for Trust & Safety approaches to be effective in Europe?

Generative AI already amplifies well-known risks: automated production of illegal content, large-scale manipulation, deepfakes, and increased exposure of vulnerable populations, particularly minors and people in psychologically fragile situations. These phenomena exceed the capacity of traditional moderation approaches, making it necessary to link AI and Trust & Safety much more closely.

Both the DSA and the AI Act have laid the foundations for a risk-based, transparent and accountable approach. The challenge now is to translate this into shared operational practices: integrating risk assessments from the model-design phase, developing more robust detection and oversight methods, and organising regular cooperation between platforms, public authorities and associations.

This alignment of innovation, risk management and operational capacity will allow Europe to avoid a permanent catch-up race and ensure a high level of user protection.

What meaning do you give to your role as ambassador of the Trust & Safety Forum, and what would you consider a useful contribution?

Serving as ambassador of the Trust & Safety Forum means contributing to a space that allows very different actors to come together around issues that are still recent and sometimes difficult to grasp. In my work, I have the opportunity to interact with administrations, platforms, law-enforcement agencies and associations: this diversity of perspectives is essential for understanding the reality of digital risks and how each actor responds to them.

In this role, I can bring insight from that experience: the needs of associations, operational constraints, the expectations of the most exposed populations, and what we observe concretely in how the European framework is applied by different actors. The idea is to help make these issues more intelligible and to facilitate exchanges between worlds that do not naturally interact.

If this role helps foster dialogue, highlight shared challenges and contribute to a collective dynamic around Trust & Safety in Europe, then it will have fulfilled its purpose.

How do your professional and personal commitments around digital issues fit together, and what would you say to young people who want to get involved?

My commitments intersect and complement one another, even if they operate at very different levels. In my institutional work, I focus on the strategic side: contributing to public-policy development in the fight against online hate, monitoring regulatory developments at the national and European levels, and supporting various nonprofit projects. My nonprofit work, meanwhile, is rooted in everyday life: understanding online dynamics, observing actual practices, and supporting people targeted by hate or harassment campaigns. This proximity to the field sheds light on issues that can seem abstract at the institutional level, while the strategic framework gives coherence and reach to the work carried out closer to users.

To young people who want to get involved, I would say that the digital sphere is a field where you can take action quickly and at your own scale. The Trust & Safety ecosystem is still being built and needs new perspectives—people who understand the practices, codes and rapid evolution of platforms, technologies and discourse. Engagement can start very simply: joining an association, participating in awareness campaigns, supporting someone targeted by abuse, or helping make online discussions more responsible. Often, these first steps open the door to more sustained forms of action.

Who is Shani Benoualid?

Shani Benoualid is adviser for digital affairs and the fight against online hate at DILCRAH (the Interministerial Delegation for the Fight against Racism, Anti-Semitism, Anti-LGBT Hate and Discrimination Linked to Origin). She monitors online hate trends, supports associations in their work, and coordinates projects involving administrations, digital platforms, law-enforcement agencies and civil society to strengthen prevention, improve reporting and promote digital citizenship.

She is also co-founder of #jesuislà, a citizen collective active on social networks, whose members organise to intervene in comment sections, support victims of cyberharassment, calm discussions and counter disinformation campaigns. The collective also conducts awareness-raising and advocacy efforts for a safer and more respectful digital space.

Committed to public debate, she has initiated collective op-eds and contributed to publications and reports dedicated to combating hate and reinforcing the responsibility of digital actors.
