Update confirms the introduction of an active “duty of care” and a dedicated regulator, as part of a comprehensive new online regulatory regime.

By Alain Traill, Rachael Astin, Gail E. Crawford, and Patrick Mitchell

Following a wave of commentary from industry, the social sector, and other organisations, on 11 February 2020 the UK government set out preliminary details of a new regulatory regime to govern content posted on online platforms. The details were released in an initial response to last year’s online harms white paper, with a full response expected this spring. While some changes have been made to the white paper proposals, seemingly in response to concerns raised by industry and other stakeholders, the government has confirmed that it will introduce an active “duty of care” on organisations to prevent certain content from appearing on their platforms.

The proposed new regime mirrors similar steps taken in other jurisdictions, e.g., Australia, to protect against harmful content online. It is also in line with the direction of travel of platform regulation at a European level, taking into account, for example, the changes made to the Audiovisual Media Services Directive by Directive (EU) 2018/1808 (the AVMSD) to regulate video-sharing platform services (VSPs) in relation to the protection of minors and harmful content, and the planned EU Digital Services Act, which is likely to introduce changes to EU law regarding the liability of platform providers for content posted using their services.

Duty of Care

A key component of the UK government’s white paper was the introduction of a duty of care to take steps to keep users safe and tackle illegal and harmful activity. This represents a significant divergence from the approach under existing EU law, whereby (under the E-Commerce Directive) online intermediaries are often able to argue that they effectively have no general responsibility to actively sweep or police content. The government’s response confirms that a duty of care will indeed be introduced, although seemingly only in respect of illegal content and not in relation to a separate (and more subjective) category of harmful content. For the latter, the new regime will instead require organisations to be transparent about what non-illegal content they consider to be acceptable on their platforms and to have policies, procedures, and systems in place in order to enforce that determination effectively. This change from the white paper is likely to be of some relief to large-scale platform operators such as social media providers, which had been facing potential liability for their own judgments around which harmful content they decided to keep off their platforms.

The interplay between the UK government’s online harms regime and its implementation of the AVMSD is important to bear in mind. Of particular relevance are the AVMSD rules that require EU Member States to ensure that VSPs take appropriate measures to protect minors from harmful content and to protect the public from content containing incitement to violence, hatred, terrorism, and other criminal offences under EU law. The government originally planned to implement the majority of the requirements of the AVMSD pertaining to VSPs at the same time as (and as part of) the new online harms regime. However, the government has since confirmed that — as an interim approach and in time for the 19 September 2020 AVMSD transposition deadline — it will appoint Ofcom as the national regulatory authority under the AVMSD, prior to implementation of the online harms regime.

Which Organisations Are in Scope?

Of less comfort to industry than the apparent reduction in scope of the duty of care will be the level of doubt surrounding which organisations will be caught by the new regime. The UK government’s response confirms that the legislation will target operators of websites which include functionality for “user generated content” or “user interactions” (rather than specific sectors or types of organisation) and adds, “It would be the social media platform hosting the content that is in scope, not the business using its services to advertise or promote their company”. While the government estimates that less than 5% of UK businesses will be in scope, it is unclear on what basis it reached that conclusion (including the applicable jurisdictional test to be applied).

The government’s response provides some examples of activities which will not (of themselves) pull organisations into scope, such as having a social media page, and also confirms that business-to-business services will be out of scope. This suggests a move towards organisations which have a focus on user content or interaction, rather than those which provide such services incidentally, although the details remain unclear. The government has also reiterated that “private communications” (e.g., group messaging services) will be subject to a differentiated regime from “public communications” (e.g., public web forums), although it again remains unclear what that differentiation will look like in practice. The reference in the government’s response to “websites” rather than simply “online platforms” also arguably calls into question whether mobile applications will be in scope. In short, many organisations will be left wondering whether or not — and to what extent — they could be caught by the new regime.

Enforcement and Redress

One of the headline proposals in the white paper was the threat of substantial (i.e., potentially GDPR-level) fines and senior management responsibility for non-compliance. This would clearly bring the new regime straight to boardroom level, and, unsurprisingly, there has been pushback from industry. The government’s response steers clear of addressing whether (and to what extent) it will follow through on those proposals. It does, however, confirm that the government is “minded to appoint” Ofcom as the new regulator, suggesting that experience and an established presence have been favoured over the benefits of a new regulator. If confirmed, this decision means that there will be two significant, independent regulators in the UK online space, given the existing role of the Information Commissioner’s Office in relation to data protection enforcement.

In terms of redress mechanisms, the regulator will not adjudicate on individual complaints, instead leaving it to organisations to ensure that they have appropriate mechanisms in place. Many large-scale platform operators already have some form of complaints process in place. It is, however, unclear whether (and how) the proposed introduction of “super-complaints” (i.e., actions brought by organisations on behalf of one or more individuals) will work in practice. While the government’s response acknowledges the feedback received regarding “super-complaints”, the suggestion is that the government is still considering its position on this point.

Proactive Engagement

On a more proactive note, the government’s response notes the appetite within industry to explore innovative and collaborative approaches to addressing online harms. These include the potential use of common reference data sets (i.e., test data which organisations could use to model processes and procedures) and a “sandbox” approach similar to that rolled out in the financial services and data protection fields (i.e., a controlled environment within which potential solutions can be developed and tested, under the protection of a regulator).

Next Steps

The government has committed to publishing a full response this spring, which is expected to address key open issues, such as the position on fines and senior management responsibility. Legislation is being developed in parallel.

In the meantime, there are some clear steps that organisations potentially caught by the new regime can start to take in order to ensure they are prepared when the new legislation ultimately takes effect. These include:

  • Remaining abreast of (and ensuring compliance with) imminent codes of practice on content relating to terrorism and child abuse and exploitation — these will be presented by the government as voluntary, interim solutions but are likely to be in line with the substance of the legislation.
  • Preparing (or reviewing existing) policies, procedures, and systems regarding the designation and moderation of “acceptable” content on (and the removal of “unacceptable” content from) their platforms and the processes for enabling and addressing user complaints.