Published Date: 9/5/2025
The ongoing debate over the UK’s Online Safety Act (OSA) provides a useful illustration of how two key pillars of a successful age assurance sector are only tangentially related to technology. The first and most fundamental pillar is trust. Some critics argue that the unintended consequences of the OSA show that the government’s age check policy cannot be trusted, which in turn affects the credibility of third-party providers. The second pillar is communication, and the age verification sector believes that part of the problem lies in how companies interpret the law.
The Information Technology & Innovation Foundation (ITIF) offers a detailed analysis of where the OSA has faltered and suggests that these flaws should serve as a cautionary tale for U.S. lawmakers. The ITIF points out that many underestimated the broad impact of the law, affecting not just major platforms like Spotify and Reddit, but also smaller online forums for hobbies and interests. Some services have even shut down completely out of concern for potential liability.
Vague language in the law regarding what constitutes “harmful” content and what is considered “highly effective” age verification is identified as a significant issue. While some content clearly falls into these categories, the ambiguity has led to platforms erring on the side of caution and removing more content than necessary to avoid penalties.
A surprising issue is that most adults did not realize the OSA would affect them. The assumption that age verification is only for keeping children out has led to a rude awakening for adult users, as platforms must verify the age of every user to determine eligibility.
The ITIF has three major recommendations for U.S. policymakers. First, proposals should balance children’s safety with adult privacy. Second, they should avoid collateral damage by narrowing the focus to truly harmful content, as laws that visibly impact benign online forums will face public backlash. Third, they should be cautious about extending restrictions to lawful content and ensure that regulators are clear about what types of content should be restricted.
The Age Verification Providers Association (AVPA) has issued statements in defense of age assurance and the OSA. In response to a brief from the Center for Democracy & Technology (CDT), AVPA emphasizes that while both organizations agree on many issues concerning transparency, user agency, privacy by design, and strict deletion policies, the current solutions already offer many of these safeguards. The AVPA highlights that the UK’s implementation of strong age checks for social media and pornography has been successful without major security incidents.
The cultural difference between the UK and the U.S. is a significant factor. The AVPA notes that the lack of a comprehensive federal privacy law in the U.S. makes it harder to win trust, as people rely on a patchwork of state rules and sector laws, leading to doubts about data handling.
The second statement from the AVPA aims to clarify the OSA’s coverage and address accusations of censorship. The OSA targets only illegal material, aligned with existing offline restrictions, while Section 22 introduces a statutory protection for freedom of expression. Early implementation hiccups, such as over-blocking of content, have fueled misconceptions about the law: platforms may have misread the OSA’s requirements or lacked the technology needed to target specific harmful content with precision.
Deliberate overreach by some platforms, possibly to provoke backlash, has also contributed to the criticism. However, Section 22 provides a legal route for organizations hosting lawful content facing restrictions, ensuring that platforms comply with free speech duties. Over-blocking lawful content risks enforcement action from Ofcom, reinforcing the Act’s commitment to expression.
The AVPA emphasizes that the OSA is not a censorship tool but a balanced framework that targets illegal content while embedding robust free speech protections. It is crucial for U.S. lawmakers to understand these nuances and learn from the UK’s experience to avoid similar pitfalls.
Q: What is the Online Safety Act (OSA) in the UK?
A: The Online Safety Act (OSA) is a UK law designed to protect children and adults from harmful online content. It requires platforms to implement age verification measures and remove harmful content, with penalties for non-compliance.
Q: What are the main issues with the OSA?
A: The main issues include broad impact on various online services, vague definitions of harmful content, and unintended consequences such as the shutdown of benign forums.
Q: How does the ITIF recommend U.S. lawmakers approach similar legislation?
A: The ITIF recommends balancing children’s safety with adult privacy, avoiding collateral damage to non-harmful content, and being clear about what types of lawful content should be restricted.
Q: What is the role of Section 22 in the OSA?
A: Section 22 of the OSA introduces statutory protection for freedom of expression, ensuring that platforms do not over-block lawful content and comply with free speech duties.
Q: How does the AVPA defend the OSA against accusations of censorship?
A: The AVPA clarifies that the OSA targets illegal content and includes robust free speech protections. Misinterpretations and early implementation issues have led to misunderstandings, but Section 22 ensures that lawful content is protected.