The Online Safety Bill: part of the Government's strategy for a thriving digital economy

The UK Government has published its response to the Online Harms White Paper, concluding a consultation that began in April 2019 on the Online Safety Bill – proposed new legislation to tackle harmful content on the internet. The objectives of the proposed new law are to make Britain "the safest place in the world to be online" and, in doing so, to enhance trust in technology so that the digital economy can thrive.

The timing of this consultation could not have been more apt: during this period, harmful content on the internet has come under greater scrutiny than ever before. There have been attempts to crack down on the spread of 'fake news' in relation to Brexit and Covid-19, the radicalisation of vulnerable people by terrorist groups, and cyberbullying and child grooming, as well as, most recently, calls to censor anti-vaccine misinformation. Some social media platforms have already implemented measures to monitor and take down harmful content, but these have largely been responses to social and political pressure and a desire to 'do the right thing', rather than to legal requirements.

Unlike broadcast content, for which there is a well-developed regulatory framework, internet content remains largely unregulated, and responsibility for content lies mainly with individual users. However, the clear direction of travel is towards technology companies taking responsibility for user content, leaving behind the idea that social media and other online platforms are a 'mere conduit' for information.

Who will the new law apply to?

The proposed new law will apply to companies that either:

  • host user-generated content accessible in the UK; or
  • facilitate online interaction between users in the UK.

This will catch companies that provide social media platforms, search engines, file hosting/transfer sites, chat rooms and messaging services – estimated to account for around 3% of UK businesses. The law will apply to companies anywhere in the world if they provide services to UK users.

These companies will have a duty of care towards their users and will have to take a 'safety by design' approach to create safer environments for users.

Defining 'harm' and the new duty of care

The proposed new law will use a general definition of harmful content and activity, based on whether it gives rise to a "reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals". The intention is that secondary legislation will then identify specific sub-categories that pose the greatest risk to users. Content does not need to be unlawful to be harmful, but different expectations will apply depending on whether the content or activity is:

  • illegal (e.g. for terrorist purposes or the sale of weapons);
  • legal but potentially harmful to children (e.g. content depicting violence or pornography); or
  • legal but potentially harmful to adults (e.g. content relating to eating disorders or self-harm).

Stakeholders responding to the consultation called for greater detail on the scope of harms caught by the new law, but there is also recognition that the nature of online harm changes quickly, and that a static list could soon become out of date. The Government has, however, clarified that data protection breaches, consumer law issues, intellectual property infringements and fraud will not be considered 'harm' for the purposes of the new law.

The new framework will create a duty of care for in-scope companies towards their users. Companies will be expected to take a risk-based approach and to implement appropriate systems and processes to improve user safety. The focus will be on the steps a company has taken to improve safety overall, rather than on enforcement against individual pieces of content. A set of principles will be given to help interpret the duty of care.

These are:

  • Improving user safety: taking a risk-based approach that considers harm to individuals
  • Protecting children: requiring higher levels of protection for services used by children
  • Transparency and accountability: increasing user awareness about incidence of and response to harms
  • Pro-innovation: supporting innovation and reducing the burden on business
  • Proportionality: acting in proportion to the severity of harm and the resources available
  • Protection of users' rights online: including freedom of expression and the right to privacy
  • Systems and processes: taking a system and processes approach rather than focusing on individual pieces of content

Ofcom's new role

Ofcom, the body responsible for regulating broadcast content, will be responsible for overseeing the new regulatory framework. Ofcom will issue codes of practice which will cover areas such as accessibility of content, transparency, communication with users, signposting and appeals.

Ofcom will have the power to issue fines of up to £18 million or 10% of global annual turnover, whichever is higher. Ofcom will also be able to take business disruption measures against non-compliant companies, anywhere in the world.

Criminal sanctions for senior managers are not currently part of the Government's proposals, but the Government has indicated that it could introduce such sanctions in the future if companies fail to cooperate with the regulator.

Competing interests: striking the right balance

There is, of course, a need for law makers and the regulator to strike the right balance between censoring harmful content to protect internet users, on the one hand, and preserving the fundamental right to freedom of expression, on the other. If the regime lacks 'teeth' it will not be effective in achieving its objectives – indeed, some critics have argued that the long-awaited proposals do not go far enough – but insufficient clarity, or draconian enforcement, could make platforms very risk averse.

The privacy of individuals is another competing interest which the new law will need to protect. Private messages are not exempt, and the regulator will be able to require companies to implement automated technology to identify child abuse and illegal terrorist content. However, companies will be expected to consider the impact on users' privacy and ensure users are well informed about the use of their personal data. In the wake of the GDPR, this duty will not be taken lightly.

Timeline for the new law

The Online Safety Bill is expected to be published in 2021.

If passed into law, the Bill will represent the world's first comprehensive framework for tackling online harms, and will undoubtedly bring about significant changes in the way online platforms are monitored and used.

Authored by Louise Crawford.