The UK government, under Prime Minister Keir Starmer, has unveiled a robust set of measures aimed at bolstering children's online safety and holding tech platforms accountable for harmful content, positioning the nation to join global efforts restricting social media access for minors.

Key initiatives include closing legal loopholes to ensure all AI chatbot providers comply with the Online Safety Act. Regulators will gain explicit powers to swiftly address illegal or dangerous material. These steps lay the foundation for rapid implementation of proposals emerging from an upcoming children’s digital wellbeing consultation.

The consultation will explore critical options such as establishing a minimum age for social media use, curbing addictive features like infinite scrolling, limiting children’s access to AI chatbots and VPNs that bypass safety protections, and enhancing platforms' ability to prevent the sharing of harmful intimate images.

To assist families during the rollout of these reforms, the government is launching a public campaign to equip parents with tools to discuss online risks with their children and effectively utilize safety settings on platforms.

The new regulatory powers will enable ministers and regulators to respond promptly to evolving online behaviors without the delays of primary legislation. Additional protections include preserving vital digital data in cases of serious incidents, as part of a broader strategy to establish the UK as a global leader in online child safety.

As the UK advances these tougher laws, other nations have already implemented stringent frameworks. Australia leads with what is considered the most sweeping restriction: legislation passed in 2024 bans children under 16 from social media platforms, requiring age verification and imposing heavy fines for non-compliance, while targeting addictive design features and harmful content exposure.

France mandates a minimum age of 15 for social media, with compulsory parental consent for younger users and pressure on platforms to deploy effective age verification systems. Though enforcement remains challenging, the requirements are clearly set out in law.

In the United States, there is no nationwide social media ban for children, but the federal COPPA law restricts the collection of personal data from children under 13 without verifiable parental consent. States such as Florida and Arkansas have enacted or proposed measures demanding age verification or parental consent, reflecting a fragmented yet increasingly robust state-level approach.