Ofcom has confirmed it is referring 4chan for a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain. The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London on an American platform. This case is not an outlier. It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?
In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids’ Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025. The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible. Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.
The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.
The ACLU identified three specific constitutional failures. First, the speech targeted is protected. The Supreme Court has never permitted government to suppress legal speech simply because a legislature finds it unsuitable for children. In Brown v. Entertainment Merchants Association, the Court was unambiguous: “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Creating a “wholly new category of content-based regulation” permissible only for speech directed at children would be “unprecedented and mistaken”. Second, these regimes fail strict scrutiny because they are not premised on demonstrated causation. As the ACLU wrote, KOSA “is not premised on a direct causal link, but instead is based on correlation, not evidence of causation”. This is a decisive legal and moral point. In Brown, the Court struck down California’s video game restriction on exactly the same grounds: the state had produced only correlative data. A law that restricts the speech of millions of people must show that the restriction will actually prevent the harm it identifies. Neither KOSA nor the OSA can clear that bar. Third, these regimes are both under- and over-inclusive. They leave news media, books, music and magazines entirely unregulated while targeting social media platforms. And they will, inevitably, sweep up beneficial speech alongside harmful speech: 92% of parental control apps have been found to incorrectly block LGBTQ+ content and suicide-prevention resources alongside material that is genuinely harmful. Congress, the ACLU concluded, may not rely on unproven future technology to save the statute.
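To see why topic-based duties over-block in practice, consider a minimal sketch of a keyword filter of the kind these mandates push platforms toward. This is purely illustrative: the term list and sample posts are hypothetical, not drawn from any actual parental-control product or from anything the statutes prescribe.

```python
# Illustrative sketch only: a naive topic-based "harmful content" filter.
# The blocked-term list and sample posts below are hypothetical.

BLOCKED_TOPICS = {"suicide", "self-harm", "eating disorder", "depression"}

def is_blocked(text: str) -> bool:
    """Flag any post that mentions a listed topic, regardless of intent."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TOPICS)

posts = [
    "Tips for hiding an eating disorder from your parents",    # harmful
    "If you are thinking about suicide, call 988 for help",    # prevention resource
    "How I recovered from depression: a support-group diary",  # recovery story
]

for post in posts:
    print(is_blocked(post), "-", post)

# All three print True: the filter matches the topic, not the message,
# so it cannot distinguish promotion from prevention or recovery.
```

Because the filter keys on the topic rather than the message, a suicide-prevention hotline notice is indistinguishable from pro-suicide content: exactly the over-inclusiveness the ACLU describes, and more sophisticated classifiers shift the error rate without eliminating the category error.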
The empirical premise of both regimes is that social media causes mental illness in adolescents. This claim is contested by a substantial body of peer-reviewed research. In a widely noted book review in Nature, Candice L. Odgers, a psychologist specialising in adolescent mental health at UC Irvine, wrote that the graphs produced by Jonathan Haidt in his work The Anxious Generation, which align the rise in teen mental illness with smartphone adoption, “will be useful in teaching my students the fundamentals of causal inference, and how to avoid making up stories by simply looking at trend lines”. Hundreds of researchers, Odgers wrote, “have searched for the kind of large effects suggested by Haidt. Our efforts have produced a mix of no, small and mixed associations. Most data are correlative.” The direction of causality may run the other way: distressed and isolated adolescents gravitate toward online community; social media does not necessarily create the distress.
The practical implication is stark. Existing criminal law already covers the most serious harms comprehensively: child sexual abuse material (CSAM), terrorist content, incitement to violence and harassment are all criminal in both jurisdictions and all designated “priority illegal content” under the OSA’s Schedules 5-7. The genuinely novel element of both regimes is the duty to suppress legal speech about mental health, gender identity and emotional distress. That element is what fails both the First Amendment and basic proportionality analysis.
Source: SGT Report