Maeve Walsh FRSA of Carnegie UK Trust guides us through the parliamentary developments surrounding the Online Harms White Paper of April 2019, as well as Carnegie’s work on a statutory duty of care and how it intersects with human rights and freedom of expression within the digital realm.
Since Karen Bradley first sought views in 2017’s Internet Safety Strategy Green Paper, four secretaries of state have grappled with “what to do” about internet regulation. Bradley’s successor, Matt Hancock, committed in May 2018 that regulation would be introduced to address online harms. His successor, Jeremy Wright, published the Online Harms White Paper in April 2019, putting forward the proposal for a statutory duty of care. Since then – well, Nicky (now Baroness) Morgan was appointed in July and not very much has happened as other political and parliamentary events have got in the way.
It appears that regulation remains on the new government’s agenda, though the Queen’s Speech commitment only to “develop” legislation in this parliamentary session means its introduction is still some way off. And the government’s proposals, when they do emerge, will undoubtedly be contested given the range of potential issues to be tackled: from illegal material, such as terrorist or child sexual exploitation content, to disinformation, to abuse targeted at minorities or public figures.
At the time of publication, the White Paper’s proposals appeared to many (including those who developed the original “duty of care” proposal) to point too heavily towards a notice-and-takedown regime, enforced via a series of content-specific codes – rather than a systemic, risk-based approach that would bite at design and business operations level, as envisaged by the Carnegie UK Trust proposal which I have been working on with Professor Lorna Woods (University of Essex) and William Perrin FRSA (Carnegie UK Trustee).
In short, a risk-based approach would put the same onus on social media companies as on those in any other regulated sector: to risk-assess the design and operation of their services so that any reasonably foreseeable risk of harm to users can be identified and mitigated. This is not about making companies responsible for every individual piece of content posted on their platforms, or turning them into arbiters of free speech – it is about holding them accountable for the system and design features that enable, indeed encourage, illegal or harmful content to spread or be shared.
The challenge of how to regulate companies operating in the online environment without impinging on fundamental freedoms, such as freedom of expression, has since been addressed in a comprehensive new paper by Professor Woods. The paper reviews the human rights framework from a British perspective and then considers how the various design techniques and business choices that may lead to online harm could be assessed from a rights-based perspective. Her analysis concludes that human rights concerns do not preclude the imposition of a duty of care, as envisaged by the Carnegie UK Trust work. We would very much welcome comments and feedback on this proposal from RSA Fellows and others.
In tandem with work to “fill in the gaps” on the operation and objectives of a statutory duty of care, there is a need to focus minds and speed up the timetable for regulation. Without it, the prevalence of online harms will continue to grow. To that end, Carnegie UK Trust have recently published a draft Online Harm Reduction Bill which sets out a definition of the duty of care; specifies who the duty applies to and the risk management steps a company should take; and tasks the appointed regulator (OFCOM) with working alongside industry, civil society, other regulators, the Secretary of State, and victims of harm in drawing up codes of practice.
With Carnegie UK support, Lord McNally has introduced a paving Bill into the House of Lords this month, which will require the government to appoint OFCOM as the Interim Regulator with a duty to prepare for the new regime and to start the consultative process of drawing up codes of practice to implement it. Parliament has asked broadcasting regulators over the years to work with TV and radio companies to ensure that they have systems to prevent harm, while working within human rights law and national customs; this is a modern version of that task, as set out by Baroness Grender in the Lords in November 2018. Bringing forward a paving Bill not only means that OFCOM can hit the ground running in preparing for the new regulatory regime, but also means that the legislature can get on with scrutinising more closely how the system will operate, which is crucial given the range of rights involved.
There is significant cross-party consensus on the need for action on online harms and a groundswell of support for the duty of care model from parliamentary committees and campaigners. But the real engagement will only get going when the detail is in the open, action is imminent and parliamentarians can see the purpose and design of the regulation. Four secretaries of state on from the government first floating the idea, perhaps 2020 is the year it will finally happen.
Maeve Walsh is an RSA Fellow, a Carnegie UK Trust Associate and a member of the advisory group for the RSA’s disinformation project. To find out more about the duty of care project, visit Carnegie’s website or email firstname.lastname@example.org.