Facebook (which owns Instagram and WhatsApp), Snapchat, Twitter and some other companies have been heavily criticised for causing serious harm to many of their users and others, including by
- hosting material which encourages hatred, violence, terrorism, suicide, self-harm etc., and deploying algorithms which draw attention to it;
- hosting 'fake news' and other deliberately untruthful material supplied by political actors, including hostile states; and
- hosting lies and scientific falsehoods supplied by well-meaning but dangerous campaigners such as anti-vaxxers.
Follow this link for more detailed criticisms of Big Tech.
Very little, if any, of this sort of material is illegal in the sense that its disseminators break criminal law. Individuals and organisations are therefore currently free to disseminate such material within the UK. It is noticeable, though, that such material almost never finds its way into the mainly self-regulated mainstream media in the UK, although 'shock jocks' and Fox News openly disseminate similar material via US media. There is therefore a considerable head of steam behind efforts to force Big Tech to accept that they have a duty of care to certain of their vulnerable users in the UK, which might be enforced by a regulator. Some, but not all, would extend this to a duty to eradicate fake news.
But there is also considerable concern that the state should not get involved in censoring either the activities or the content of what are essentially communications companies. This web page summarises the recent history of this debate, focussing on the UK.
- I start with a further discussion of the tension between censorship and free speech.
- I then introduce the two main campaigns which are encouraging the UK government to legislate and empower a regulator.
- I go on to summarise UK parliamentary and government reports etc.
- Finally, I mention some other comments and developments, including in the EU.
1. Censorship and Free Speech
The right to free speech is not unlimited. No-one, it is often pointed out, has the right to shout 'FIRE!' in a crowded cinema. And the right to say what you like does not mean that you have the right to insist that others hear it, nor insist that intermediaries repeat it. 'Freedom of speech' is not 'freedom of reach'.
Facebook, YouTube, Twitter and the rest are privately owned companies. They can choose what content they allow on their sites, just as can a newspaper or TV station.
Ex-President Obama summarised the issues very well in an interview with Jeffrey Goldberg in 2020:
Obama: I don’t hold the tech companies entirely responsible, because this predates social media. It was already there. But social media has turbocharged it. I know most of these folks. I’ve talked to them about it. The degree to which these companies are insisting that they are more like a phone company than they are like The Atlantic, I do not think is tenable. They are making editorial choices, whether they’ve buried them in algorithms or not. The First Amendment doesn’t require private companies to provide a platform for any view that is out there. At the end of the day, we’re going to have to find a combination of government regulations and corporate practices that address this, because it’s going to get worse. If you can perpetrate crazy lies and conspiracy theories just with texts, imagine what you can do when you can make it look like you or me saying anything on video. We’re pretty close to that now...
Goldberg: It’s that famous Steve Bannon strategy: flood the zone with shit.
Obama: If we do not have the capacity to distinguish what’s true from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our democracy doesn’t work. We are entering into an epistemological crisis.
The issue came to a bit of a head in 2018 when - after considerable delay - YouTube and other channels removed material featuring the revolting conspiracy theorist Alex Jones. Twitter was even slower to act but eventually did so. It was nevertheless sadly true that social media channels had enabled him to build a huge following. By the time YouTube reacted, his videos had been viewed 15 billion times. Then, in February 2019, Facebook banned Stephen Yaxley-Lennon (aka Tommy Robinson) - the founder of 'the English Defence League' - followed in April 2019 by a wider ban of a number of British far right leaders and groups.
This issue has a good way to run before it is resolved to most people's satisfaction, but we will need to take care not to over-react to hate speech, false news and the rest. It could be that - in the long term - most of us will learn to discount or ignore it. "Sticks and stones may break my bones but words will never hurt me"? On the other hand - in the long term, we are of course all dead.
Here is a link to a 2018 blog by Mike Masnick which summarises the issues very well. He made the following points particularly strongly:
- Platforms have a wide range of options open to them short of taking down objectionable content.
- They could, for instance, minimise its visibility, add warning flags, or allow the use of user-created filters.
- It's not good if online mobs can 'demand' that something be removed - especially if their success would likely lead to another mob decrying censorship.
- It's odd that many seem to want Facebook to cut off 'hate speech', so granting the power to define hate speech to a company that most of them seem to hate.
- Counter-programming might be a good way forward - adding links to alternative views, for instance, to pages which deny the Holocaust.
- And/or the underlying content could be left untouched, but Facebook's feed could be programmed to favour high quality content (a toy sketch of this idea follows this list).
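To make that last point more concrete, here is a minimal, purely hypothetical sketch of 'demote rather than delete'. The Post fields, the quality signal and the weighting rule are my own assumptions for illustration, not a description of any real platform's ranking system: the point is simply that a feed can be re-ordered so that low-quality items sink without being removed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # predicted likes/shares, normalised to 0..1 (assumed signal)
    quality: float     # e.g. source-credibility or fact-check score, 0..1 (assumed signal)

def rank_feed(posts, quality_weight=2.0):
    """Order posts so that engagement alone no longer dominates:
    content stays available, but low-quality items sink in the feed."""
    return sorted(
        posts,
        key=lambda p: p.engagement * (p.quality ** quality_weight),
        reverse=True,
    )

if __name__ == "__main__":
    feed = [
        Post("Sensational but misleading claim", engagement=0.9, quality=0.2),
        Post("Well-sourced news report", engagement=0.6, quality=0.9),
        Post("Ordinary personal update", engagement=0.4, quality=0.8),
    ]
    for post in rank_feed(feed):
        print(post.text)
```

With these illustrative numbers, the sensational but low-quality item drops to the bottom of the feed even though it has the highest predicted engagement: nothing is censored, it simply stops being amplified.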
2. The Two Campaigns
David Anderson QC asked "Who Governs the Internet?" in May 2018, noting that "subjecting tech colossi to the rule of law while defending expressive freedoms online is a formidable task legislators have hardly begun - but change is in the air". These and other comments encouraged the following campaigns - as well as concern that we should tread carefully before prohibiting or criminalising that which is currently legal.
Supported by the Carnegie UK Trust, William Perrin and Professor Lorna Woods argue that significant media platforms occupy a public space and should - as occupiers - be under a statutory duty of care to take reasonable steps to prevent foreseeable harm. They draw upon concepts in the 1861 Offences Against The Person Act and UK health and safety legislation. Such legislation was until relatively recently used only to prosecute and/or regulate activities which endangered physical health. But society, legislators and the courts nowadays recognise psychiatric injury caused by domestic violence, harassment and hate crimes. Extending these concepts to the harm done by material on social media would not appear to be a dramatic step.
Once a duty of care has been established, there is a wide range of ways in which it might be policed, including through suing for financial compensation in the courts. But Perrin and Woods believe that it would make sense to establish a regulator along the lines of the Health and Safety Executive. Such a regulator would need to be highly independent of government and probably funded by a targeted tax or levy. A new body would be ideal but - failing that - Ofcom might be a good choice, although this would risk over-burdening that already very stretched organisation.
Perrin and Woods' latest thinking (as of January 2019) is here. They then published Whose Duty Is It Anyway? in August 2019, answering some common questions about their duty of care proposal.
The NSPCC published Taming the Wild West Web, which builds on the Perrin/Woods proposals but focuses on tackling online grooming and child sexual abuse imagery.
And the Information Commissioner has published a draft Children's Code which aims not to protect children from the digital world, but instead to protect them within it by ensuring online services are better designed with children in mind.
Internet lawyer Graham Smith has published two excellent blogs commenting on the 'duty of care' concept. In his first "Take care with that social media duty of care" he pointed out that there is no duty on the occupier of a physical space (such as a pub) to prevent visitors making incorrect statements to each other, nor is such an occupier under any duty to draw attention to obvious risks. On the other hand, night club owners have a duty (and employ security staff who might search customers) to reduce violence and drug taking. So he agrees that the duty of care will vary with the circumstances including the type of harm and the persons at risk.
Graham Smith's second blog "A Lord Chamberlain for the internet? Thanks, but no thanks" was strongly opposed to asking a state entity such as a regulator to police the boundaries within which a social media platform might operate. He hates the apparently attractive idea of "a powerful regulator, operating flexibly within broadly stated policy goals". Such regulators are fine when asked to control the economic power of huge monopolies and oligopolies but "discretion, nimbleness and flexibility are vices, not virtues where rules governing speech are concerned" - especially when such discretion is given to a "rule-maker, judge and enforcer all rolled into one".
Doteveryone is leading some interesting thinking about the fast-developing relationship between society and the internet. Their paper, Regulating for Responsible Technology, suggested the creation of a new body or bodies which would:
- give regulators the capacity to hold technology to account;
- inform the public and policymakers with robust evidence on the impacts of technology; and
- support people to seek redress from technology-driven harms.
Then, in February 2019, Doteveryone commented on the Perrin/Woods 'duty of care' proposal, saying that it "has merits as a pragmatic and collaborative approach. It rebalances the relationship between industry and the state by giving parliament and a regulator responsibility for setting the terms for what the UK, as a society, considers harmful and wants to eradicate. And it puts the onus on business to find the mechanisms to achieve these outcomes, with penalties if it fails to do so. " However ... "It’s important to remember that a duty of care is only designed to address one small part of the current gaps in the regulatory landscape. As Doteveryone highlighted in Regulating for Responsible Technology, all regulators across all sectors are struggling to have the remit, capacity and evidence base to address the disruptive impacts of digital technologies. Without a coherent response to the underlying business models of technology, the algorithmic processes and design patterns within technology and the impacts of technology on social infrastructure, a duty of care can only be a symptomatic treatment of the consequences of tech on one aspect of life.
And there’s a danger that duty of care sucks up all the available political capacity for regulation and leaves the broader landscape untouched. Doteveryone would encourage policymakers to think beyond the noisy headlines and ensure they address the fundamental changes needed to regulate in a digital age."
3. Parliamentary and Government Reports
Here are some interventions that happened in advance of the publication of the government's proposals:
- The Home Affairs Select Committee published a report on Hate Crime in early 2017. The committee strongly criticised social media companies for failing to take down illegal content and for failing to take it sufficiently seriously – saying they were "shamefully far" from taking sufficient action to tackle hate and dangerous content on their sites. The Committee recommended that the Government should assess whether failure to remove illegal material is in itself a crime and, if not, how the law should be strengthened. They recommended that the Government should also consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict time frame.
- The 2018 interim report Disinformation and ‘fake news’ by the House of Commons Digital, Culture, Media and Sport Committee also contained numerous recommendations which involve various forms of regulation. Their final report was then published in early 2019.
- There were reports in September 2018 that the government was developing plans for an internet regulator, responsible for policing 'online social harm', probably focussing on penalising companies if they fail quickly to remove certain content which has been reported to them, possibly copying elements of German legislation.
- And Ofcom signalled, also in September 2018, that it strongly supported future regulation of Facebook, YouTube and Twitter. They should be forced to remove inappropriate content "quickly and effectively". Ofcom's Chief Executive Sharon White said that the UK had a "standards lottery" that allows social media platforms to take advantage of lax regulation while traditional broadcasters have to follow tough rules on protecting audiences - for example, children under the age of 18.
- The Internet Association chipped in in February 2019 with a letter to key government Ministers. But the letter contained mere generalities ("Regulatory Principles") and made no attempt seriously to address the concerns of those calling for greater regulation, nor to suggest a possible way forward.
- The House of Lords Select Committee on Communications published its report Regulating in a digital world in March 2019.
The Government published its Online Harms White Paper in April 2019, seeking comments by the end of June. It focussed on countering child sexual exploitation and terrorism, with a somewhat softer section on cyber-bullying. But much of the detailed definition of 'harm' was to be left to the future publication of various codes of practice.
The White Paper was welcomed by the 'duty of care' campaigners mentioned in '2' above but was met with some concern by those worried about censorship.
The Information Commissioner's Office subsequently published a consultation document Age Appropriate Design: A Code of Practice for Online Services, setting out the standards expected of those responsible for designing, developing or providing online services likely to be accessed by children, when those services process children’s personal data. The draft code set out 16 standards of age-appropriate design for online services such as apps, connected toys, social media platforms, online games, educational websites and streaming services. It was not restricted to services specifically directed at children, which led to suggestions that its coverage was impracticably wide.
And then - around the end of 2019 - legislation began to appear much more likely, hugely assisted by the realisation that there would probably be no need for anything approaching censorship. The state of play, and the debate, as of early January 2020 is summarised in this article.
The Government's Proposals ...
... were published in December 2020, very much along the lines foreshadowed in January. Companies would be required to prevent the proliferation of illegal content and activity online, and ensure that children who use their services are not exposed to harmful content. The largest tech companies would be held to account for what they say they are doing to tackle activity and content that is harmful to adults using their services.
Duty of Care:- To meet the duty of care, companies in scope would need to understand the risk of harm to individuals on their services and put in place appropriate systems and processes to improve user safety. Ofcom will oversee and enforce companies’ compliance with the duty of care. Companies and the regulator will need to act in line with a set of guiding principles. These include improving user safety, protecting children and ensuring proportionality.
Differentiated Expectations:- The new regulatory framework will take a tiered approach.
- All companies within the scope of the legislation (Categories 1 and 2) will be required to take action with regard to relevant illegal content and activity.
- And all such companies will be required to assess the likelihood of children accessing their services. If they assess that children are likely to access their services, they will be required to provide additional protections for children using them.
- A small group of high-risk, high-reach services will be designated as ‘Category 1 services’. They will be required to take action with regard to legal but harmful content and activity accessed by adults.
- (This is because services offering extensive functions for sharing content and interacting with large numbers of users pose a significantly increased risk of harm from legal but harmful content. The approach will protect freedom of expression and mitigate the risk of disproportionate burdens on small businesses. It will also address the current mismatch between companies’ stated safety policies and many users’ experiences online which, due to their scale, is a particular challenge on the largest social media services.)
Disinformation:- The duty of care will apply to content or activity which could cause significant physical or psychological harm to an individual, including disinformation and misinformation. Where disinformation is unlikely to cause this type of harm, it will not fall within the scope of regulation. Ofcom should not be involved in decisions relating to political opinions or campaigning shared by domestic actors within the law.
- The vast majority of disinformation and misinformation is legal, but only potentially harmful, and need not be removed. As an example, this would include content which suggests that users should go against established medical advice, such as avoiding vaccinations.
- There may also be some cases where disinformation is illegal and could cause significant harm to individuals – for example, disinformation which directly incites harm against individuals. In these cases, companies would be expected to remove such content.
- Some types of legal but harmful disinformation and misinformation are likely to be proposed in secondary legislation as categories of priority harm that companies must address in their terms and conditions.
- Companies must also risk assess for categories of emerging harm. As with other legal but harmful content, companies providing Category 1 services will need to make clear what is acceptable on their services for such content in their terms and conditions and will be required to enforce this. Companies whose services are likely to be accessed by children will also need to take steps to protect children from disinformation and misinformation which could be harmful to them.
- As the pandemic has demonstrated, there may be instances when urgent action is required to address disinformation and misinformation during emergency situations. Where disinformation and misinformation presents a significant threat to public safety, public health or national security, the regulator will have the power to act. In such situations, Ofcom will be able to take steps to build users’ awareness and resilience to disinformation and misinformation, or require companies to report on steps they are taking in light of such a situation.
Comment: The above proposals seemed welcome, but contained two snags.
- Companies were to be expected to prevent children from accessing age-inappropriate content, but it remained far from clear whether an effective age-verification system could be devised.
- It was also unclear how companies such as WhatsApp, offering end-to-end encryption, could be expected to ensure compliance with the new rules.
4. Other Comments and Activity
Should children be taught to be more careful? It is occasionally argued that regulation would be unnecessary if children were taught how to navigate the internet safely, just as they are taught to cross busy roads. But this ignores the fact that the road rule is essentially simple: "This is a kerb. Don't cross it unless holding my hand/Take great care crossing it." The internet kerb is invisible, as are the dangers beyond. So it is much harder to teach a child to avoid the dangers hiding in social media.
The Cairncross Review - A sustainable future for journalism - published in early 2019, focussed on the diversion of advertising away from mainstream journalism and suggested, inter alia, that online platforms should have a ‘news quality obligation’ to improve trust in news they host, overseen by a regulator. The government responded by asking the Competition and Markets Authority to look into possible abuses within the advertising market, but it is hard to imagine anything happening that will reverse the decline in non-internet advertising spend. Separately, mainstream media is increasing its parallel presence on the web, partly funded by advertising and sometimes also behind pay-walls.
The European Commission fired a shot across Facebook's and Twitter's bows in September 2017 when it issued a proclamation that the companies must do more to remove 'illegal content inciting hatred, violence and terrorism', and threatened additional measures as necessary. But Bird & Bird's Graham Smith pointed out that the EU-preferred systems relied upon 'trusted flaggers' of illegal etc. content, yet did not include provisions to ensure that the trusted flaggers were making good decisions and/or should be trusted with such censorship power.
Maybe an international convention is the way forward, as suggested by The Good European:
The notion that the internet is 'beyond jurisdictions' has passed its sell-by date. It was always a nonsense and by giving it credence a quarter of a century's worth of legislators, especially in America, have done the potential of the world wide web a great disservice. That's twenty five wasted years and twenty five years of damage to what should have been one of the great inventions, now too much a tainted and lawless badlands instead of being the wholesome resource of promise. What is needed is for every jurisdiction across the planet to sign up to and jointly police a basic convention. To those who say that would be impossible, I offer exhibit A: International Maritime Law. It's been around in one form or another since at least Roman times and is how we regulate both inshore waters and the open oceans. These days it's overseen by the International Maritime Organization, a subdivision of the United Nations and headquartered in London, UK. Every jurisdiction has a seat at the table.