Online regulation of minors: young and overlooked
If nothing is done, the global wave of regulations banning minors’ access to social media and AI companions will prevail – and in the short term. Having outlined the limitations and blind spots of these prohibition policies, and shed light on the brand-new and all too neglected global standard on age verification systems, let us now turn to the strangest and probably most underestimated set of initiatives: those that aim to give young people a voice in their use of digital technology.
With a threefold insight:
- blocking minors’ access is a false good idea (one thought to be good but which turns out to be bad in practice),
- age estimation is a good bad idea (one thought to be bad at first glance but which proves good in use), and
- the voice of young people is a false bad idea (one assumed to be bad, or at least purely cosmetic, when in fact it is intrinsically good).
Bans on minors are spreading so fast that a quick recap of recent developments is in order. Following Australia, a global pioneer in banning under-16s from social media (the regulation has been in force since last December), bans on minors came into force in March in Indonesia and in the Indian state of Karnataka (home to 40% of India’s IT industry, with its capital Bangalore). The wave is now reaching Europe: in France, Spain, Austria and the United Kingdom, draft legislation and consultations are well underway, to the point where one wonders which countries will not have their own ban in place by the end of the year.
Faced with this tsunami – driven no doubt by a sincere concern for young people’s welfare – who still dares to stand up and let young people have their say?
Civil society is beginning to organise, as seen in the European youth collective Ctrl + alt + reclaim and its legitimate protest piece in Le Monde on 15 January 2026, “The voice of young people is absent from the debate on digital regulation”, or in the educational initiative led by Internet Sans Crainte in the French Senate during Safer Internet Day 2026, which enabled young people to speak to public decision-makers about their use of digital technology.
Platforms and social networks ranging from Snapchat to Meta, via TikTok, Twitch and Discord, as well as Microsoft, have all launched “Youth Councils” in recent years. Are these merely a smokescreen aimed at politicians and the media (ourselves included), or a genuine tool of internal influence for Trust & Safety teams over product and sales teams?
We asked Snap, a pioneer in this field, to tell us more about their motivation. [A note for those who are suspicious of the industry in general and tech giants in particular: as surprising as it may seem, getting major players to open up about their motivations in this area is no easy feat. The explanation probably lies in the still-experimental nature of these programmes, and in the climate of intense tension between platforms on the one hand and the media, political and judicial spheres on the other.]

A key principle of Snap’s Teen Council Programme: not to use the discussions and views expressed by young people as a tool to rubber-stamp decisions that have already been taken internally, but rather to convey to management and cross-functional teams what young people say about their online lives and experiences – for example, the social norms at play in group chats, and concerns such as harassment, sextortion or drug dealing.

There is so little public documentation on these youth councils that it is worth taking a look at Snap’s January 2026 publication on the conclusion of the first edition of their youth council in the US, which was held in Washington D.C. and included a visit to the White House. In a short video featuring testimonials, a photo at the White House and a few paragraphs describing the activities and the people met, including a post by the First Lady, you will find plenty of interesting information, some of it subtle, such as the wearing of ties among Generation Z. Whatever you may think of it, this serves as a benchmark for the rest of the industry.
Are you still firmly sceptical about the sincerity and, in any case, the effectiveness of such an approach by the major tech players? Let’s speak to researcher Ioanna Noula, who is leading the first ‘regulatory sandbox’ project focused on young people’s use of social media and AI – a project whose aim is to enable young people to express their views on their digital lives.
The term ‘regulatory sandbox’ sounds a bit odd, and even more so when applied to young people: is it really a serious idea to put teenagers in a sandbox so they can speak freely to regulators and platforms? Yes, it is very serious! Andras Molnar is one of the leading experts on these regulatory sandboxes, drawing on his years of research at the OECD and now at the TUM Think Tank at the Technical University of Munich. Here is the definition he gives in his informative article published on 15 April, “Sandboxes: Tools for regulatory experimentation and learning”: “Regulatory sandboxes (sandboxes) are temporary, controlled, and supervised environments in which new technologies, business models, or regulatory approaches can be tested before being subject to the full force of existing rules.” If you are looking for a prime example of a “French-style” sandbox, the CNIL’s publication on AI and public services in spring 2025 is a good one.
Let’s return to Ioanna Noula and her COR Sandbox project. On 1 April 2026, she organised the first major workshop on regulatory sandboxes applied to young people and their relationship with AI companions, and it took place in France [Note to our international readers: the original version of this article is in French and was drafted primarily for a French audience] at the Trust & Safety Forum. In practical terms, she brought together young people from different countries and various youth councils – established by an NGO in Greece, by the regulator in Ireland, or even by TikTok. Ah, yes, I did say TikTok. Does it bother you that a platform like TikTok is involved? If so, you’re not the only one; it’s a criticism often levelled at Ioanna Noula. The big tech players are said to be too toxic, or at least too powerful, to be given a say. Ioanna Noula disagrees: her view is that there is no other way to break down organisational and cultural silos than to put everyone in the same room and get them talking.

On 1 April, there was an exceptional moment when a regulator asked a teenage girl: “But why do you need to talk to a chatbot?” The girl went on to share her experience, explaining that when you’re young and your parents project a negative image and guilt onto you, it is invaluable to be able to talk to a machine that doesn’t judge you. Young people may not have the expertise, but they have the experience. And throughout the day these young people stole the show from the leading experts in regulation and industry.
And now you might be thinking: honestly, bringing young people, platforms and regulators together in the same room to discuss the risks of digital technology – isn’t that just the basics? Think again: Snap has confirmed that such a workshop has never been held anywhere else in the world!
So yes, there is an urgent need to host ‘sandboxes’ in every country that seeks to regulate young people’s online lives. So that they can speak out with confidence; so that we stop telling them they don’t know what’s best for them, and stop seeing them as human beings in the making who need correcting. So that we listen to them as fully-fledged human beings, just as they are. And so that, in doing so, we put in place better solutions to protect them – solutions that are better founded, better justified, and therefore more effective.
Ioanna Noula, an idealist? Yes, that’s what people tell her. And why not?