A growing number of governments are moving to restrict children’s access to social media, and the pace is accelerating. New analysis suggests this is no longer a niche policy debate, but a global shift that could soon reshape how major platforms operate.

According to fresh analysis published by safety technology firm Privately SA, more than 40 countries worldwide have already introduced, proposed, or formally reviewed measures aimed at limiting how children use social media.

Australia, France, And The UK Lead The Debate

The latest momentum follows Australia's ban on social media use by under-16s, which took effect in December 2025. France is planning a similar law for under-15s later this year, while policymakers in the United Kingdom continue to debate whether to follow the same path.

While the rules vary widely between countries, the direction of travel is becoming clearer. Governments are no longer asking whether age checks should exist. The focus has shifted to how platforms can enforce age limits without creating new privacy risks for users.

Age Verification Is No Longer Optional

As regulation tightens, age assurance is becoming a legal requirement rather than a voluntary safeguard. Social media platforms face growing pressure to prove they can accurately verify a user’s age, particularly for younger audiences, while still protecting personal data.

“The debate has moved from ‘should platforms verify age’ to ‘how they do it’,” said Deepak Tewari, CEO of Privately SA. He added that many countries are now pushing for enforceable age controls that provide stronger privacy guarantees, rather than relying on self-reported ages or traditional ID uploads.

Trust In Platforms Remains Low

Alongside its global analysis, Privately SA commissioned consumer research in December 2025 to better understand public attitudes toward privacy and social media. The findings point to a major challenge for platforms.

Only 13 percent of adults surveyed said they trust online platforms to handle sensitive biometric data, such as facial images. That lack of trust matters more as age verification tools lean increasingly on biometric methods such as facial analysis.

However, opinions shift when privacy protections are strengthened. According to the research commissioned by Privately SA, acceptance of facial age estimation nearly triples to 39 percent when age checks are carried out entirely on-device, with no images leaving the user’s phone.

Why Privacy Is Becoming The Deciding Factor

Privacy concerns are emerging as the central battleground in the global push to protect children online. Parents, regulators, and users alike remain wary of systems that require IDs or store biometric data on central servers.

Privately SA says its on-device facial age estimation approach addresses these concerns by ensuring images are never uploaded or stored externally. The company states that its technology was used for more than five million age checks in 2025 and has already been deployed by three of the ten largest social media platforms in Australia, as well as across several major platforms in the UK.

According to Privately SA, this approach allows platforms to meet new legal obligations while minimising data collection and avoiding unnecessary barriers for legitimate users.
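To make the distinction concrete, here is a minimal sketch (in Python, purely illustrative and not Privately SA's actual SDK or API) of the general shape of an on-device check: the age estimate is computed locally, the image is discarded, and only a small pass/fail result would ever be shared with a platform.

```python
# Hypothetical sketch of an on-device age-assurance flow.
# All names here are illustrative assumptions, not a real vendor API.

from dataclasses import dataclass


@dataclass
class AgeCheckResult:
    """Only this minimal outcome leaves the device -- never the image itself."""
    meets_minimum_age: bool
    estimated_band: str  # e.g. "under 13", "13-17", "18+"


def estimate_age_on_device(image_bytes: bytes) -> int:
    """Placeholder for a local facial age estimation model.

    A real deployment would run an on-device ML model here; this stub
    raises to make clear no specific implementation is being claimed.
    """
    raise NotImplementedError("illustrative stub only")


def run_age_check(image_bytes: bytes, minimum_age: int = 16) -> AgeCheckResult:
    """Run the check locally; the raw image is never uploaded or stored."""
    estimated_age = estimate_age_on_device(image_bytes)
    band = ("18+" if estimated_age >= 18
            else "13-17" if estimated_age >= 13
            else "under 13")
    # Only this compact, non-biometric result would be sent to the platform.
    return AgeCheckResult(meets_minimum_age=estimated_age >= minimum_age,
                          estimated_band=band)
```

The design point the sketch is meant to illustrate is that only the small result object crosses the device boundary, which is the property the consumer research above associates with markedly higher acceptance of facial age estimation.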

A Global Trend, Not A Universal Ban

It is important to note that the “more than 40 countries” figure does not mean all of them have introduced full social media bans for children. Based on Privately SA’s analysis, the total includes countries with enacted laws, draft legislation, formal government proposals, parliamentary inquiries, regulator-led consultations, or enforceable age-verification or parental-consent requirements.

Taken together, the findings point to a clear global trend. Governments are signalling that child safety online is becoming a regulatory priority, and social media platforms will need to adapt as expectations rise.

As more countries weigh new rules, the way age verification is implemented could have a lasting impact on how young people access games, online communities, and social platforms in the years ahead.

What Does This Actually Mean For Social Media Users?

In simple terms, this story is not about a single global ban on social media. It reflects a broader shift in how governments are approaching children’s access to online platforms, based on analysis and research commissioned by Privately SA.

The claim that more than 40 countries are involved does not mean all of them have already banned social media for children. According to Privately SA’s analysis, the figure combines countries at very different stages, from active laws to early policy discussions. The common thread is direction, not uniform enforcement.

For social media companies, the message is becoming clearer. Age verification is shifting from a voluntary safety feature to a legal requirement in many regions. Platforms are under increasing pressure to verify a user’s age, especially for younger users, rather than relying on self-declared birthdays.

At the same time, research commissioned by Privately SA highlights a major challenge. Trust in platforms to handle sensitive data remains low, with only 13 percent of adults saying they trust social media companies with biometric information. Support rises significantly when age checks happen entirely on-device, with no personal data leaving the user’s phone.

For everyday users, including teens, parents, and gamers, this could mean more frequent age checks when signing up for or accessing social platforms. How intrusive those checks feel will depend on how platforms implement them and whether privacy-preserving approaches become the norm.

The key takeaway is straightforward. Governments around the world are taking children’s online safety more seriously, and social media platforms will have to adapt. As highlighted by Privately SA’s analysis, the future of social media is likely to be shaped by how well companies can balance legal compliance with user privacy.