Tackling harmful online content is a huge challenge. Jake Jooshandeh looks at what is still missing from the government’s proposals for internet regulation.
In recent years the UK government has finally begun to build its regulatory guiding principles for the digital economy and for digital culture. In 2019, the Online Harms White Paper set out the government's plan on how to regulate the digital world to protect users (especially children) from 'online harms', such as child sexual exploitation, terrorist activity, cyberbullying and disinformation.
The law is specifically aimed at websites built to host user-generated content, such as self-publishing or commenting (think Facebook, Twitter, Reddit, but also sites with comment sections). For websites that allow online harms to flourish, the government proposed first fining the site, then imposing personal liability on senior managers and, in extreme cases, blocking access to the site altogether.
After much discussion across the public, private and third sectors, the government has now released its initial response to the public consultation on this paper.
What's new? Ofcom given powers as an online regulator
The biggest news story in the government’s response is that Ofcom will likely be the regulator in charge. How this will work in detail, and how well it will work, remain to be seen, but it is a necessary early step.
The government stated that this regulatory role will build on Ofcom's expertise. While it is true that Ofcom conducts consumer research on attitudes (especially those of children and parents) and current uses of online media, it has no remit in the digital world beyond regulating the supply of broadband. Ensuring consumer safety when purchasing a broadband subscription does not equate to the technical expertise needed to understand in detail how websites and platforms work, and what is technologically feasible when regulating them. As is often the case, an asymmetry in knowledge and capability does not make for good regulation.
Next, the government has stepped back from regulating content that is harmful but not illegal, such as cyberbullying and disinformation. Citing the risk to freedom of expression, the government says it will now instead seek to ensure that online firms enforce their own standards more effectively and consistently. This, in effect, means Facebook will still self-regulate on huge problems such as political disinformation.
Lastly, we learnt that only approximately five percent of UK businesses will be affected by the legislation. This is welcome news for the majority of UK businesses, which were for a time worried that the legislation might affect anyone with an online presence. However, this statistic presumably hides the effect on the US online titans – such as Google, Twitter and Facebook – that are the primary targets of this regulation. And while the government’s proposal for enforcement is sensible, the prospect of fines (as Areeq Chowdhury writes) will not faze these behemoths.
What’s missing? Private chat regulation and digital literacy
While there is much still to be thought through after the response, and many important elements to discuss, there are two glaring gaps that are fundamental to protecting against online harms.
First, what to do about online private chats? Some of the greatest online harms happen in these private spaces: terrorists and child sex offenders do not use public forums to chat and plot their actions. Equally, as has been seen in countries such as India and Brazil, there is the potential for mass disinformation campaigns through encrypted chats such as WhatsApp. But these spaces are also difficult to regulate: many would argue that neither the government nor private firms have a right to intrude on innocent civilians' private conversations, and there is a danger that marginalised and at-risk groups may lose some of their few safe spaces.
The government has pulled away from regulating private chats (not least because of the technical difficulties) and has suggested instead that the firms themselves should owe a duty of care to their users. Again, how this works effectively without tech firms snooping on innocent people (eg, Facebook starting to store and analyse all WhatsApp chats) is not clear.
This issue hasn’t caught the headlines, partly because we simply do not know what happens in these chats, and partly because the kind of mass WhatsApp disinformation campaigns seen in India and Brazil have not yet happened in the UK. But the government’s indecision on private chats really just kicks the can further down the road.
Second, much more needs to be known about the government's digital literacy strategy. This will no doubt come with time, but a hugely important aspect of creating a safe digital world is an educated citizenry. There is only so much that governments and private firms can, and should be able to, do. We have a right to private online spaces. Beyond regulation, the only other way of making them safe is educating both young people (which has begun, but in an uncoordinated way) and adults (which has not happened at all).
What’s next? Our vision for good regulation
While those on the fringes might decry any form of regulation as akin to censorship, we will continue to argue for good regulation of the internet over the coming weeks, months and years. Without either a fundamental change in the design of the internet itself or smart and fair regulation, we will never see any improvement in the many social and political problems that have characterised the 2010s.
This week we will host our first workshop which will bring together experts and stakeholders from a diversity of fields to debate and create solutions to the problem of disinformation. Leave a comment below to share your thoughts on this topic, or get in touch if you’re interested in finding out more: firstname.lastname@example.org.