UK Online Safety Bill: Duty of Care Framework for Digital Platforms
The UK government publishes the Online Safety Bill, establishing a duty of care framework requiring platforms to protect users from harmful content. The legislation introduces risk-based requirements and grants Ofcom extensive enforcement powers.
On January 25, 2022, the UK government published the Online Safety Bill, introducing a comprehensive duty of care framework requiring digital platforms to protect users from harmful content and experiences. The legislation marked Britain's post-Brexit assertion of regulatory sovereignty in digital governance, establishing Ofcom as the online safety regulator with broad powers to set codes of practice and impose substantial penalties for non-compliance.
Duty of Care Framework and Risk-Based Regulation
The bill's central innovation established a duty of care owed by user-to-user services and search engines to their users, particularly children. Rather than prescribing specific content moderation outcomes, the framework required platforms to conduct risk assessments identifying potential harms, implement proportionate safety measures, and demonstrate the effectiveness of those measures to Ofcom. This risk-based approach aimed to create flexible obligations adaptable to evolving online risks.
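As a purely illustrative sketch, a platform's internal record of such an assessment might look like the following. The field names, scoring scale, and mitigation list are assumptions made for this example, not terms defined by the bill or by Ofcom.

```python
# Hypothetical sketch of an internal risk-assessment record; field names,
# the 1-5 scoring scale, and harm categories are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class HarmRiskAssessment:
    harm_category: str              # e.g. "illegal_content", "harmful_to_children"
    likelihood: int                 # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int                     # 1 (minor) to 5 (severe) -- assumed scale
    mitigations: list[str] = field(default_factory=list)

    def residual_risk(self) -> int:
        # Simple likelihood x impact score, reduced per mitigation; purely illustrative.
        return max(1, self.likelihood * self.impact - 2 * len(self.mitigations))


assessment = HarmRiskAssessment(
    harm_category="harmful_to_children",
    likelihood=4,
    impact=5,
    mitigations=["age_assurance", "default_safe_search", "reporting_tools"],
)
print(assessment.residual_risk())  # the kind of evidence a platform might retain for Ofcom
```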
Harm categories included illegal content (terrorism, child sexual abuse material, fraud), content harmful to children (pornography, cyberbullying, promotion of self-harm), and—most controversially—legal but harmful content for adults. The legal-but-harmful category generated intense debate, with critics warning of censorship risks and proponents arguing certain lawful content nonetheless warranted protective measures, particularly for vulnerable users.
Proportionality principles meant requirements varied by service type, size, and risk profile. Small platforms with limited child user bases faced lighter obligations than major social networks with hundreds of millions of users globally. Category 1 services—the largest, highest-risk platforms—faced the strictest requirements, including policies addressing content that is legal but harmful to adults, user empowerment features enabling content filtering, and transparency reporting on content moderation practices.
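A simplified sketch of that tiering logic follows. The thresholds and tier labels are invented for illustration; the bill left precise category thresholds to later secondary legislation.

```python
# Illustrative tiering logic only; thresholds and tier names are invented for
# this sketch and do not correspond to figures in the bill or secondary legislation.
def obligation_tier(monthly_uk_users: int, high_risk_functionality: bool) -> str:
    if monthly_uk_users >= 10_000_000 and high_risk_functionality:
        return "category_1"      # strictest duties: user empowerment, transparency reports
    if monthly_uk_users >= 1_000_000:
        return "category_2"      # intermediate duties
    return "baseline"            # core illegal-content and child-safety duties still apply


print(obligation_tier(45_000_000, True))   # -> "category_1"
print(obligation_tier(50_000, False))      # -> "baseline"
```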
Ofcom's Regulatory Powers and Enforcement
The bill designated Ofcom, the UK's communications regulator, as the online safety authority. Ofcom gained powers to require platforms to conduct risk assessments, issue codes of practice specifying expected safety systems and processes, assess compliance through information requests and audits, and impose enforcement actions ranging from improvement notices to business disruption orders and fines of up to £18 million or 10% of annual global turnover, whichever is greater.
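For illustration, the statutory penalty ceiling described above reduces to a one-line calculation; the turnover figures used below are arbitrary examples, not real company data.

```python
# Maximum penalty ceiling as described above: the greater of GBP 18 million
# or 10% of annual global turnover. Turnover inputs are arbitrary examples.
def max_penalty_gbp(annual_global_turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)


print(f"{max_penalty_gbp(120_000_000):,.0f}")     # mid-size platform -> 18,000,000
print(f"{max_penalty_gbp(85_000_000_000):,.0f}")  # global platform   -> 8,500,000,000
```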
Critically, Ofcom could require platforms to use accredited technology to identify and remove child sexual abuse material and terrorism content—including on encrypted services. This provision generated fierce controversy, with privacy advocates arguing it effectively mandated client-side scanning that fundamentally weakened encryption, while child safety campaigners emphasized the need for technical measures to combat the worst illegal content.
The regulator also gained powers to hold senior managers personally liable, enabling criminal prosecution of executives for intentional non-compliance with certain duties. This personal accountability mechanism aimed to ensure corporate leadership prioritized safety obligations rather than delegating them to lower levels without adequate oversight. The provision reflected frustration with perceived platform unresponsiveness to existing voluntary safety frameworks.
Content Moderation and Free Expression Balance
The bill attempted to balance safety objectives with free expression protections through several mechanisms. Platforms had to maintain clear, accessible terms of service, apply them consistently, and provide appeals processes for content moderation decisions. Ofcom's codes of practice had to consider free expression implications, and journalistic content and content of democratic importance received specific protections from removal.
However, critics across the political spectrum raised concerns. Some worried the legal-but-harmful category would lead to over-removal as platforms adopted risk-averse moderation to avoid regulatory penalties. Others argued the bill insufficiently protected children and gave platforms excessive discretion in defining harmful content. Civil society groups warned about disproportionate impacts on marginalized communities, whose expression often faces heightened moderation risk.
The bill also addressed recommender systems, requiring Category 1 services to give users the option of feeds not optimized purely for engagement. This provision acknowledged that algorithmic amplification constituted a separate layer of risk beyond hosted content—a platform might host little illegal content, yet its algorithms could still promote harmful material in ways that create systemic impacts.
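A minimal sketch of what a user-selectable ranking mode could look like follows. The scoring formula, field names, and the choice of a chronological alternative are invented for illustration, not taken from the bill.

```python
# Hypothetical user-selectable ranking mode; the engagement formula and the
# chronological fallback are illustrative assumptions only.
from datetime import datetime, timezone


def rank_feed(posts: list[dict], mode: str = "engagement") -> list[dict]:
    if mode == "chronological":
        # Alternative feed: no engagement weighting at all.
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)
    # Default feed: engagement-weighted score (illustrative formula).
    return sorted(posts, key=lambda p: p["likes"] + 3 * p["shares"], reverse=True)


posts = [
    {"id": 1, "likes": 900, "shares": 300, "created_at": datetime(2022, 1, 24, tzinfo=timezone.utc)},
    {"id": 2, "likes": 10, "shares": 1, "created_at": datetime(2022, 1, 25, tzinfo=timezone.utc)},
]
print([p["id"] for p in rank_feed(posts, "engagement")])     # [1, 2]
print([p["id"] for p in rank_feed(posts, "chronological")])  # [2, 1]
```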
Child Safety Provisions and Age Verification
Protecting children emerged as the bill's primary political driver, following high-profile cases in which young people encountered harmful content with tragic consequences. The legislation required platforms likely to be accessed by children to implement age verification or age assurance technologies preventing underage access to adult content, and to provide age-appropriate safety features for child users.
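The sketch below shows, in deliberately simplified form, how an age-assurance gate might sit in front of age-restricted content. The assurance levels, their ranking, and the threshold are assumptions for this sketch rather than definitions from the bill or Ofcom guidance.

```python
# Illustrative age-assurance gate; the confidence levels and the acceptance
# threshold are assumed for this sketch, not prescribed by the legislation.
ASSURANCE_RANK = {"self_declared": 0, "age_estimation": 1, "verified_document": 2}


def may_serve_age_restricted(content_rating: str, assurance_level: str) -> bool:
    if content_rating != "adult":
        return True
    # Adult content requires a signal stronger than unverified self-declaration.
    return ASSURANCE_RANK.get(assurance_level, 0) >= ASSURANCE_RANK["age_estimation"]


print(may_serve_age_restricted("adult", "self_declared"))       # False
print(may_serve_age_restricted("adult", "verified_document"))   # True
```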
Age verification requirements generated significant debate around privacy, effectiveness, and proportionality. Critics noted that effective age verification might require invasive identity checks, creating privacy risks and potentially excluding users without traditional identity documents. Technical experts doubted whether controls could withstand circumvention by determined minors. Industry questioned the feasibility of implementing robust age checks across diverse service types.
The bill also required prominent adult content sites to implement age verification, reviving provisions from Part 3 of the Digital Economy Act 2017 that were never brought into force. This reflected sustained political pressure to restrict minors' access to pornography, despite ongoing challenges in defining scope (which sites qualify?), enforcing compliance (how to address overseas operators?), and balancing with privacy rights (which verification methods are acceptable?).
Fraud and Economic Crime Provisions
Unlike earlier drafts, the published bill expanded its scope to include fraud prevention obligations. Platforms had to take proactive measures to identify and address fraudulent content, particularly paid-for fraud advertising. This addition reflected growing political concern about online fraud's economic and social costs, with elderly victims particularly targeted by sophisticated scams on social media and search engines.
The fraud provisions required platforms to implement systems that identify likely fraudulent content, to remove it expeditiously, and to prevent repeat offenders from posting again. For search engines, this meant obligations around paid advertising and search results ranking. The addition complicated the bill's overall framework—fraud prevention involved different trade-offs and technical approaches than illegal content or child safety duties.
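As a rough illustration of the "remove and prevent repeat offenders" idea, the sketch below tracks confirmed fraud reports against advertisers and blocks them past a strike limit. The threshold and data model are assumptions for this example, not requirements taken from the bill.

```python
# Sketch of repeat-offender handling for fraudulent advertising; the strike
# threshold and the data model are illustrative assumptions only.
from collections import defaultdict

STRIKE_LIMIT = 2                      # assumed threshold before an advertiser is blocked
fraud_strikes: dict[str, int] = defaultdict(int)
blocked_advertisers: set[str] = set()


def handle_fraud_report(advertiser_id: str, confirmed: bool) -> str:
    if advertiser_id in blocked_advertisers:
        return "already_blocked"
    if not confirmed:
        return "no_action"
    fraud_strikes[advertiser_id] += 1
    if fraud_strikes[advertiser_id] >= STRIKE_LIMIT:
        blocked_advertisers.add(advertiser_id)
        return "blocked"              # prevent repeat offenders from posting again
    return "ad_removed"               # remove the fraudulent content expeditiously


print(handle_fraud_report("acme-ads", True))   # ad_removed
print(handle_fraud_report("acme-ads", True))   # blocked
```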
Financial services regulators and fraud prevention agencies welcomed the provisions, arguing platforms had long evaded accountability for enabling fraud at scale. Platforms countered that fraud identification required sophisticated analysis, bad actors constantly evolved tactics, and holding platforms strictly liable for user fraud imposed unreasonable burdens given the vast volume of content posted daily.
International Coordination and Regulatory Fragmentation
The bill's extraterritorial application—covering services accessible in the UK regardless of establishment location—created potential for regulatory fragmentation as multiple jurisdictions imposed distinct platform obligations. The UK's approach differed from the EU's Digital Services Act in scope (broader harm categories), regulatory model (prescriptive vs principles-based elements), and enforcement (single regulator vs coordinated national authorities).
This divergence raised operational challenges for global platforms managing compliance across jurisdictions. A single feature change might satisfy UK requirements while violating EU rules, or vice versa. Platforms would need sophisticated regulatory intelligence, jurisdiction-specific implementations, or a policy of applying the strictest requirements globally—each approach carrying costs and trade-offs.
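One simplified way to picture the "strictest requirements globally" option is as a merge across per-jurisdiction control flags, as sketched below. The flags and their values are illustrative placeholders, not a legal analysis of either regime.

```python
# Simplified illustration of jurisdiction-specific compliance settings; the
# flag values are placeholders, not authoritative readings of either law.
JURISDICTION_RULES = {
    "UK_OSB": {"age_assurance": True, "fraud_ad_screening": True, "risk_assessment": True},
    "EU_DSA": {"age_assurance": False, "fraud_ad_screening": False, "risk_assessment": True},
}


def strictest_global_baseline(rules: dict[str, dict[str, bool]]) -> dict[str, bool]:
    # "Adopt the strictest requirement everywhere": enable a control if any regime demands it.
    baseline: dict[str, bool] = {}
    for regime in rules.values():
        for control, required in regime.items():
            baseline[control] = baseline.get(control, False) or required
    return baseline


print(strictest_global_baseline(JURISDICTION_RULES))
# {'age_assurance': True, 'fraud_ad_screening': True, 'risk_assessment': True}
```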
The bill reflected post-Brexit Britain's strategy of asserting a distinct regulatory identity while claiming global influence through standard-setting. Whether other democracies would converge on the UK approach or pursue divergent frameworks remained uncertain, with implications for the internet's structural architecture: would the result be a splinternet of jurisdiction-specific implementations, or the emergence of de facto global standards shaped by the major regulatory frameworks?
Implementation Timeline and Legislative Process
The bill's publication initiated parliamentary scrutiny involving committee review, multiple reading debates, and potential amendments before Royal Assent. Implementation would extend well beyond the legislation's passage—Ofcom needed time to establish regulatory frameworks, develop codes of practice through consultation, and build expertise in online safety domains.
For platforms, the extended timeline created planning challenges. Final obligations remained uncertain and subject to parliamentary amendment, yet substantial system changes would require significant development time. Forward-looking organizations began preparatory work—assessing existing safety systems against anticipated requirements, building data infrastructure for risk assessments and transparency reporting, and engaging with Ofcom's preliminary consultations.
The Online Safety Bill represented the UK's ambitious attempt to reshape platform governance, establishing comprehensive safety obligations while claiming to protect free expression. Its success would depend on implementation details—how Ofcom translated principles into practical requirements, whether enforcement proved effective, and whether the framework achieved safety improvements without unintended consequences for online discourse. These questions would unfold over years as the legislation moved from proposal to operational reality.