Online Safety

Facebook (who own Instagram and WhatsApp), Snapchat, Twitter and some other companies have been heavily criticised for causing serious harm to many of their users and others, including by

Follow this link for more detailed criticisms of Big Tech.

Very little, if any, of this sort of material is illegal in the sense that its disseminators break criminal law. Individuals and organisations are therefore currently free to disseminate such material within the UK.  It is noticeable, though, that such material almost never finds its way into the mainly self-regulated mainstream media in the UK, although 'shock jocks' and Fox News openly disseminate similar material via US media. There is therefore a considerable head of steam behind efforts to force Big Tech to accept that they have a duty of care to certain of their vulnerable users in the UK, which might be enforced by a regulator. Some, but not all, would extend this to a duty to eradicate fake news.

But there is also considerable concern that the state should not get involved in censoring either the activities or the content of what are essentially communications companies.  This web page summarises the recent history of this debate, focussing on the UK.

  1. I start with a further discussion of the tension between censorship and free speech.
  2. I then describe the two main campaigns which are encouraging the UK government to legislate and empower a regulator.
  3. I go on to summarise UK parliamentary and government reports etc.
  4. Finally, I mention some other comments and developments, including in the EU.

1.  Censorship and Free Speech

The right to free speech is not unlimited. No-one, it is often pointed out, has the right to shout 'Fire!' in a crowded cinema.  And the right to say what you like does not mean that you have the right to insist that others hear it, nor to insist that intermediaries repeat it.  'Freedom of speech' is not 'freedom of reach'.

Facebook, YouTube, Twitter and the rest are privately owned companies. They can choose what content they allow on their sites, just as can a newspaper or TV station.

Ex-President Obama summarised the issues very well in an interview with Jeffrey Goldberg in 2020:

Obama: I don’t hold the tech companies entirely responsible, because this predates social media. It was already there. But social media has turbocharged it. I know most of these folks. I’ve talked to them about it. The degree to which these companies are insisting that they are more like a phone company than they are like The Atlantic, I do not think is tenable. They are making editorial choices, whether they’ve buried them in algorithms or not. The First Amendment doesn’t require private companies to provide a platform for any view that is out there. At the end of the day, we’re going to have to find a combination of government regulations and corporate practices that address this, because it’s going to get worse. If you can perpetrate crazy lies and conspiracy theories just with texts, imagine what you can do when you can make it look like you or me saying anything on video. We’re pretty close to that now...

Goldberg: It’s that famous Steve Bannon strategy: flood the zone with shit.

Obama: If we do not have the capacity to distinguish what’s true from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our democracy doesn’t work. We are entering into an epistemological crisis.

The issue came to a bit of a head in 2018 when - after considerable delay - YouTube and other channels removed material featuring the revolting conspiracy theorist Alex Jones. Twitter was even slower to act but eventually did so. It was nevertheless sadly true that social media channels had enabled him to build a huge following. By the time YouTube reacted, his videos had been viewed 15 billion times. Then, in February 2019, Facebook banned Stephen Yaxley-Lennon (aka Tommy Robinson) - the founder of 'the English Defence League' - followed in April 2019 by a wider ban of a number of British far right leaders and groups.

This issue has a good way to run before it is resolved to most people's satisfaction, but we will need to take care not to over-react to hate speech, false news and the rest. It could be that - in the long term - most of us will learn to discount or ignore it. "Sticks and stones may break my bones but words will never hurt me"? On the other hand - in the long term, we are of course all dead.

Here is a link to a 2018 blog by Mike Masnick which summarises the issues very well. He made the following points particularly strongly:

2. The Two Campaigns

David Anderson QC asked "Who Governs the Internet?" in May 2018, noting that "subjecting tech colossi to the rule of law while defending expressive freedoms online is a formidable task legislators have hardly begun - but change is in the air". These and other comments encouraged the following campaigns - as well as concern that we should tread carefully before prohibiting or criminalising that which is currently legal.

Supported by the Carnegie UK Trust, William Perrin and Professor Lorna Woods argue that significant media platforms occupy a public space and should - as occupiers - be under a statutory duty of care to take reasonable steps to prevent foreseeable harm. They draw upon concepts in the Offences Against the Person Act 1861 and UK health and safety legislation. Such legislation was until relatively recently used only to prosecute and/or regulate activities which endangered physical health. But society, legislators and the courts nowadays recognise psychiatric injury caused by domestic violence, harassment and hate crimes. Extension to the harm done by material on social media would not appear to be a dramatic extension of these concepts.

Once a duty of care has been established, there is a wide range of ways in which it might be policed, including through suing for financial compensation in the courts. But Perrin and Woods believe that it would make sense to establish a regulator along the lines of the Health and Safety Executive. Such a regulator would need to be highly independent of government and probably funded by a targeted tax or levy. A new body would be ideal but - failing that - Ofcom might be a good choice, although this would risk over-burdening that already very stretched organisation.

Perrin and Woods' latest thinking (as of January 2019) is here.  They then published Whose Duty Is It Anyway? in August 2019, answering some common questions about their duty of care proposal.

The NSPCC published Taming the Wild West Web which builds on the Perrin/Woods proposals but focuses on tackling online grooming and child sexual abuse imagery.

And the Information Commissioner has published a draft Children's Code which aims not to protect children from the digital world, but instead to protect them within it by ensuring online services are better designed with children in mind.

Internet lawyer Graham Smith has published two excellent blogs commenting on the 'duty of care' concept. In his first "Take care with that social media duty of care" he pointed out that there is no duty on the occupier of a physical space (such as a pub) to prevent visitors making incorrect statements to each other, nor is such an occupier under any duty to draw attention to obvious risks. On the other hand, night club owners have a duty (and employ security staff who might search customers) to reduce violence and drug taking. So he agrees that the duty of care will vary with the circumstances including the type of harm and the persons at risk.

Graham Smith's second blog "A Lord Chamberlain for the internet? Thanks, but no thanks" was strongly opposed to asking a state entity such as a regulator to police the boundaries within which a social media platform might operate. He hates the apparently attractive idea of "a powerful regulator, operating flexibly within broadly stated policy goals". Such regulators are fine when asked to control the economic power of huge monopolies and oligopolies, but "discretion, nimbleness and flexibility are vices, not virtues where rules governing speech are concerned" - especially when such discretion is given to a "rule-maker, judge and enforcer all rolled into one".

Doteveryone is leading some interesting thinking about the fast-developing relationship between society and the internet. Their paper, Regulating for Responsible Technology, suggested the creation of a new body or bodies which would

Then, in February 2019, Doteveryone commented on the Perrin/Woods 'duty of care' proposal, saying that it "has merits as a pragmatic and collaborative approach. It rebalances the relationship between industry and the state by giving parliament and a regulator responsibility for setting the terms for what the UK, as a society, considers harmful and wants to eradicate. And it puts the onus on business to find the mechanisms to achieve these outcomes, with penalties if it fails to do so." However ... "It’s important to remember that a duty of care is only designed to address one small part of the current gaps in the regulatory landscape. As Doteveryone highlighted in Regulating for Responsible Technology, all regulators across all sectors are struggling to have the remit, capacity and evidence base to address the disruptive impacts of digital technologies. Without a coherent response to the underlying business models of technology, the algorithmic processes and design patterns within technology and the impacts of technology on social infrastructure, a duty of care can only be a symptomatic treatment of the consequences of tech on one aspect of life.

And there’s a danger that duty of care sucks up all the available political capacity for regulation and leaves the broader landscape untouched. Doteveryone would encourage policymakers to think beyond the noisy headlines and ensure they address the fundamental changes needed to regulate in a digital age."

3. Parliamentary and Government Reports

Here are some of the interventions made in advance of the publication of the government's proposals:

The Government published its Online Harms White Paper in April 2019, seeking comments by the end of June. It focussed on countering child sexual exploitation and terrorism, with a somewhat softer section on cyber-bullying. But much of the detailed definition of 'harm' was to be left to the future publication of various codes of practice.

The White Paper was welcomed by the 'duty of care' campaigners mentioned in '2' above but was met with some concern by those worried about censorship.

The Information Commissioner's Office subsequently published a consultation document Age Appropriate Design: A Code of Practice for Online Services setting out the standards expected of those responsible for designing, developing or providing online services likely to be accessed by children, when they process their personal data. The draft code set out 16 standards of age appropriate design for online services like apps, connected toys, social media platforms, online games, educational websites and streaming services, when they process children’s personal data. It was not restricted to services specifically directed at children, which led to suggestions that its coverage was impracticably wide.

And then - around the end of 2019 - legislation began to appear much more likely, hugely assisted by the realisation that there would probably be no need for anything approaching censorship.  The state of play, and the debate, as of early January 2020 is summarised in this article.

The Government's Proposals ...

... were published in December 2020, very much along the lines foreshadowed in January.  Companies would be required to prevent the proliferation of illegal content and activity online, and ensure that children who use their services are not exposed to harmful content.  The largest tech companies would be held to account for what they say they are doing to tackle activity and content that is harmful to adults using their services.

Duty of Care:-   To meet the duty of care, companies in scope would need to understand the risk of harm to individuals on their services and put in place appropriate systems and processes to improve user safety.  Ofcom will oversee and enforce companies’ compliance with the duty of care.  Companies and the regulator will need to act in line with a set of guiding principles. These include improving user safety, protecting children and ensuring proportionality.

Differentiated Expectations:-  The new regulatory framework will take a tiered approach.

Disinformation:-  The duty of care will apply to content or activity which could cause significant physical or psychological harm to an individual, including disinformation and misinformation. Where disinformation is unlikely to cause this type of harm it will not fall in scope of regulation. Ofcom should not be involved in decisions relating to political opinions or campaigning, shared by domestic actors within the law.

Comment:  The above proposals seemed welcome, but contained two snags. 

4. Other Comments and Activity

Should children be taught to be more careful?  It is occasionally argued that regulation would be unnecessary if children were taught how to navigate the internet safely, just as they are taught to cross busy roads.  But this ignores the fact that the road rule is essentially simple:  "This is a kerb.  Don't cross it unless holding my hand / Take great care crossing it."  The internet kerb is invisible, as are the dangers beyond.  So it is much harder to teach a child to avoid the dangers hiding in social media.

The Cairncross Review - A sustainable future for journalism - published in early 2019, focussed on the diversion of advertising away from mainstream journalism and suggested, inter alia, that online platforms should have a ‘news quality obligation’ to improve trust in news they host, overseen by a regulator. The government responded by asking the Competition and Markets Authority to look into possible abuses within the advertising market, but it is hard to imagine anything happening that will reverse the decline in non-internet advertising spend. Separately, mainstream media is increasing its parallel presence on the web, partly funded by advertising and sometimes also behind pay-walls.

The European Commission fired a shot across Facebook's and Twitter's bows in September 2017 when it issued a proclamation that the companies must do more to remove 'illegal content inciting hatred, violence and terrorism', threatening additional measures as necessary. Bird & Bird's Graham Smith, however, pointed out that the EU-preferred systems relied upon 'trusted flaggers' of illegal etc. content but did not include provisions to ensure that those flaggers were making good decisions and/or should be trusted with such censorship power.

Maybe an international convention is the way forward, as suggested by The Good European:

The notion that the internet is 'beyond jurisdictions' has passed its sell-by date. It was always a nonsense and by giving it credence a quarter of a century's worth of legislators, especially in America, have done the potential of the world wide web a great disservice. That's twenty five wasted years and twenty five years of damage to what should have been one of the great inventions, now too much a tainted and lawless badlands instead of being the wholesome resource of promise. What is needed is for every jurisdiction across the planet to sign up to and jointly police a basic convention. To those who say that would be impossible, I offer exhibit A: International Maritime Law. It's been around in one form or another since at least Roman times and is how we regulate both inshore waters and the open oceans. These days it's overseen by the International Maritime Organization, a sub-division of the United Nations headquartered in London, UK. Every jurisdiction has a seat at the table.

 

Martin Stanley
