Regulation - Responding to Innovation

It is beginning to feel as though the world may be entering another significant Industrial Revolution, bridging the gap between the physical, biological and digital worlds through new technologies like machine learning, big data, robotics and gene editing. Here are some initial notes on associated regulatory issues that are beginning to attract attention.

In general, regulatory frameworks need to be pro-competitive so that innovators can test their ideas. It is a mistake to have such a restrictive framework that everything is challenged before it can be put into practice. There was a relevant Policy Exchange roundtable in July 2017 which considered how regulation might keep up with disruptive innovation. Follow this link to download a note of the discussion.

Please send me further information and articles etc. which might help other readers keep track of interesting regulatory developments in these or other areas. If and when the notes get too long, I will create separate pages to carry the detail. This has happened already for the discussion of the regulation of the Technology Giants such as Google and Facebook.

Gene Editing

CRISPR technology is now widely available. It allows researchers to tweak individual letters of genetic code, so it takes just hours to adjust what evolution has fashioned over billions of years.

A Mississippi dog breeder has already been given permission to use gene editing to fix a mutation that makes Dalmatians prone to kidney disease. But future biohackers may have less acceptable objectives, including terrorism.

Self-Driving Vehicles

Lots of interesting issues here. Autonomous vehicles seem certain to be much safer (on average) than those controlled by humans. But will we hold them to higher standards? For instance:

The government announced in November 2017 that self-driving cars would be in use in the UK by 2021, and that insurers would be required to cover injuries to all parties whether or not a human driver had intervened before his or her vehicle was involved in a collision.

Follow this link to read about the psychology involved in our attitude to Risk and Regulation.

Bitcoin and other Crypto-Currencies

Decentralised digital currencies, which use blockchain technology, feel like only a small and attractive step from where we are now.

Apart from my share in our house and car, all my significant assets are represented by bits and bytes in the IT systems of various financial institutions. I trust those institutions, of course, partly because they are so heavily regulated, and backed by the Government in the form of the Financial Services Compensation Scheme. But then I think about the financial crisis, and the way in which the true value of my financial assets is affected by interest rates and inflation, over which I have zero control. I remember all too clearly how the value of my financial assets fell by nearly one-fifth following the Brexit referendum.

Crypto-currencies, too, are no more than bits and bytes, but they are registered in a peer-to-peer database that is controlled by no one and with which no one can meddle. That can't be bad. On the other hand, their value, too, currently fluctuates wildly in response to real-world events such as Brexit.
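The 'no one can meddle' property comes from hash-chaining: each block commits to the hash of the block before it, so altering any past entry invalidates everything that follows. Here is a much-simplified Python sketch of that one idea (real blockchains add distributed consensus, digital signatures and proof-of-work on top):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so every block commits to the entire history before it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(transactions):
    chain = []
    prev = "0" * 64  # genesis marker
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    # Re-walk the chain, recomputing each hash; any edit to an earlier
    # block breaks the link recorded in the next one.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
assert verify(chain)

# Meddling with an early transaction breaks every later link.
chain[0]["tx"] = "Alice pays Mallory 500"
assert not verify(chain)
```

In a real system the chain is replicated across thousands of peers, so a meddler would also have to out-vote the honest majority, not just recompute some hashes.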

The key difference, I guess, is that Bitcoin and the rest are truly international. Unlike Sterling, the Dollar or the Renminbi, they are not linked to any one country or influenced by any one government. If their use continues to grow, will governments seek to regulate them or their users? And could they succeed? Many say not.

The Technology Giants

Companies such as Google/YouTube, Facebook, Amazon, Airbnb and Uber benefit from strong network effects - the phenomenon through which their services become increasingly attractive as more people use them, to the extent that it becomes near impossible for any other company to compete - or for governments to challenge them. They also share the American 'see you in court' approach to regulation. The companies' huge size and international reach certainly make it near impossible for any individual regulator to tackle them with any prospect of success, partly because their businesses are so complex and partly because their resources enable them to out-gun all but the most persistent and well-funded regulators. But their critics are becoming more vociferous. Nick Srnicek has described data as the modern equivalent of oil - essential to the modern economy and maybe needing something like the 1911 anti-trust break-up of Rockefeller's Standard Oil.
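Network effects are often formalised, loosely, via Metcalfe's law: a network's value is taken to grow with the number of possible connections between its n users, n(n-1)/2. The figures below are purely illustrative, but they show why doubling a user base roughly quadruples that number, and hence why an incumbent's lead is so hard to erode:

```python
def possible_connections(n):
    # Each of n users can connect with every other user once,
    # giving n * (n - 1) / 2 pairs -- roughly quadratic growth.
    return n * (n - 1) // 2

for users in (1_000, 2_000, 4_000):
    print(f"{users:>5} users -> {possible_connections(users):>9,} possible connections")
```

Metcalfe's law is a rule of thumb rather than an empirical result, but the qualitative point survives any reasonable refinement: value grows much faster than user numbers.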

There is a related issue in that Google/YouTube, Facebook, Twitter etc. claim to be mere platforms, passively hosting content that they are unwilling to assess. In practice, their algorithms to some extent choose what their readers see, and the companies are financed by advertising, much like traditional media companies. But they are very concerned that active moderation, as distinct from responsive moderation, will expose them to substantial legal liabilities. They also deploy the strong argument that any restriction of their behaviour threatens their customers' right to free speech. Others wonder whether this really does justify their possibly encouraging a man to kill his daughter and then himself whilst live-streaming his actions on Facebook Live.

There is also the danger that algorithmic news poses a risk to democracy as 1.2 billion daily Facebook users, for instance, mainly listen to louder echoes of their own voices - the so-called filter bubble. And the Home Affairs Select Committee has strongly criticised social media companies for failing to take down illegal content, and to take it sufficiently seriously. It was interesting, therefore, to find that, in the summer of 2017, the Giants suddenly found that they could indeed censor their content when forced to do so (a) by advertisers who didn't want their advertisements placed alongside content which promoted hate speech, and (b) by the Premier League, which insisted that illegal streaming and video clips should be removed from websites.

All these issues are explored in more depth here.

The Gig Economy

Information technology is facilitating new ways of ordering goods and services to be delivered to the door, including books (and much more) from Amazon, taxis (in particular from Uber), food etc. (from supermarkets), and meals (Deliveroo etc.). This is to be welcomed. (See for instance 'The Ridesharing Revolution'.) But it is also facilitating new ways of employing those who prepare and deliver those goods and services. They can be required to enter into contracts under which they are treated as self-employed contractors rather than employees or workers.

These arrangements can be tax efficient for both 'employer' and 'employee' and they suit many individuals very well. But they can also be exploitative, leaving workers without essential protections. It is far from clear that one-sided contracts are in the long term interests of the individuals or society. Numerous cases are working their way through the courts as lawyers seek to define the boundary between being a 'worker' and being truly self-employed.

The government's attempts to remove the artificial tax benefits also catch the genuinely self-employed, who arguably do deserve the tax breaks.

See also my web page commenting on weaknesses in HMRC.

There is a separate issue concerning the companies' willingness to adjust to local culture and regulation. The BBC commented in September 2017 that "Throughout its short, tempestuous life, Uber has clashed with regulators around the world - and more often than not it has come out on top. Its tactic has often been to arrive in a city, break a few rules, and then apologise when it's rapped over the knuckles. Some regulators have backed down, others have run the company out of town."

Algorithms & AI

There is a bit of a theme running through some of the above issues. Modern technologies, including Artificial Intelligence (AI) and algorithms, cut costs and facilitate activities (such as internet searches and autonomous driving) which would otherwise be impossible. But they remove human involvement from the decision-making. For algorithms, all decisions are binary: a big contrast (in the UK at least) with our tradition of having law enforcement moderated by human police officers, jury members and judges. Katia Moskvitch commented, with some force, that 'our society is built on a bit of contrition here, a bit of discretion there'. Follow this link for a further discussion of this subject.

And then there is the related issue that algorithms are written by humans, who will almost certainly (though accidentally) import their own false assumptions, generalisations, biases and preconceptions. How easy is it to challenge decisions made by such algorithms? Does it matter, for instance, that recruitment decisions (including to the civil service) are nowadays often made by algorithms whose logic is held in a 'black box' inaccessible to anyone other than its designer - and maybe not to the client?

One interesting (worrying?) example is Durham Police's use of an AI system to help their officers decide whether to hold suspects in custody or release them on bail. Inputs into the decision-making include gender and postcode. The force stresses that the decision is still taken by an officer, albeit 'assisted by' the AI, but the Law Society has expressed concern that custody sergeants will in practice delegate responsibility to the algorithm, and face questions from senior officers if they choose to go against it.
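Durham's actual model (the Harm Assessment Risk Tool, HART) is proprietary, so the following is a purely hypothetical toy score, not the real system. It merely illustrates the proxy problem: when postcode is an input, two suspects with identical records can receive different risk ratings simply because of where they live.

```python
# Hypothetical, deliberately crude risk score -- NOT Durham's model.
# The postcode weights stand in for patterns "learned" from historical
# arrest data, which is how bias gets imported accidentally.

POSTCODE_WEIGHT = {"DH1": 0.1, "DH9": 0.4}  # illustrative values only

def risk_score(prior_offences, postcode):
    # Base risk from criminal history, capped at 0.6 ...
    base = min(prior_offences * 0.15, 0.6)
    # ... then adjusted by where the suspect lives.
    return round(base + POSTCODE_WEIGHT.get(postcode, 0.2), 2)

a = risk_score(prior_offences=2, postcode="DH1")
b = risk_score(prior_offences=2, postcode="DH9")
print(a, b)  # same record, different scores
```

A custody sergeant shown only the final number has no way of seeing that the difference is driven entirely by postcode - which is precisely the 'black box' concern.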

In the US, a federal judge ruled that a 'black box' performance algorithm violated Houston teachers' civil rights. But Eric Loomis, in Wisconsin, failed to persuade a judge that it was unfair that he was given a hefty prison sentence partly because the COMPAS algorithm judged him to be at high risk of re-offending. This was despite his lawyer arguing that such a secret algorithm was analogous to evidence offered by an anonymous expert whom one cannot cross-examine.

The ability of algorithms and AI to work together to the disadvantage of consumers is also beginning to cause concern. There is more detail in the discussion on my Cartels web page.

AI now dominates modern financial markets. A JP Morgan analyst has estimated that a mere 10 per cent of US equity market trading is now conducted by discretionary human traders; the rest is driven by various rules-based automatic investment systems, ranging from exchange traded funds to computerised high-speed trading programs. The FT's Gillian Tett argues that we are seeing the rise of self-driving investment vehicles, matching the auto world. But while the sight of driverless cars on the roads has sparked public debate and scrutiny, that has not occurred with self-driving finance.
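One of the simplest rules-based systems of the kind described is a moving-average crossover rule. The sketch below is a generic textbook example, not any real fund's strategy: it buys when the short-run average price rises above the long-run average, and sells when it falls below, with no human judgement anywhere in the loop.

```python
def moving_average(prices, window):
    # Average of the most recent `window` prices.
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    # Classic crossover rule: buy when the short-run average rises
    # above the long-run average, sell when it falls below.
    if len(prices) < long:
        return "hold"  # not enough history yet
    fast = moving_average(prices, short)
    slow = moving_average(prices, long)
    if fast > slow:
        return "buy"
    if fast < slow:
        return "sell"
    return "hold"

prices = [100, 101, 103, 102, 104, 107, 110]
print(signal(prices))
```

Real systems layer risk limits, execution logic and far more elaborate models on top, but the essential character is the same: a fixed rule applied automatically, at machine speed, to whatever the data says.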

Karen Yeung offers an interesting academic review of Algorithmic Regulation and Intelligent Enforcement on pp 50- of CARR's 2016 discussion paper Regulation scholarship in crisis?. She notes AI's 'three claimed advantages. Firstly, by replacing the need for human monitors and overseers with ubiquitous, networked digital sensors, algorithmic systems enable the monitoring of performance against targets at massively reduced cost and human effort. Secondly, it operates dynamically in real-time, allowing immediate adjustment of behaviour in response to data feedback thereby avoiding problems arising from out-of-date performance data. Thirdly, it appears to provide objective, verifiable evidence because knowledge of system performance is provided by data emitted directly from a multitude of behavioural sensors embedded into the environment, thereby holding out the prospect of 'game proof' design.' But 'All these claims ... warrant further scrutiny' which she proceeds to offer.

Above all, though, it is important to remember Stephen Cave's warning that our "biggest misapprehension about AIs is that they will be something like human intelligence. The way they work is nothing like the human brain. In their goals, capacities and limitations they will actually be profoundly different to us large-brained apes." An emerging class of algorithms makes judgments on the basis of inputs that most people would not think of as data. One example is a Skype-based job-interviewing algorithm that assesses candidates' body language and tone of voice via a video camera. Another algorithm has been shown to predict with 80% accuracy which married couples will stay together - better than any therapist - after analysing the acoustic properties of their conversation.

And we may never understand AIs. No-one in Google, for instance, can tell you exactly why AlphaGo made the moves that it did when it started beating the best Go players in the world.

The Digital Poorhouse?

Increased unregulated use of AI may also have profound social consequences. Virginia Eubanks argues that 'We all live under this new regime of data analytics, but we don’t all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.'

Her full Harpers article is here.

Neuro-Technology


We have all grown up believing that, although our physical behaviour can easily be constrained and dominated by others, our minds, thoughts, beliefs and convictions are to a great extent beyond external constraint. As John Milton said, "Thou canst not touch the freedom of my mind". But advances in neural engineering, brain imaging and neuro-technology mean that the mind may soon not be such an unassailable fortress. Elon Musk and others are developing tools such as implantable brain-computer interfaces.

This suggests that we will, at the very least, require improvements to laws around data analysis and collection. But some scientists argue that human rights law will need to be updated to take into account the ability of governments not only to peer into people's minds but also alter them.

Protecting Key Infrastructure

Remember the stories about the Russian hacking of Western databases, and the Stuxnet attack on Iranian nuclear industry centrifuges? Much Western infrastructure is nowadays in private hands, so whose responsibility is it to defend it? Government is understandably reluctant to take on such a massive task, but industry is understandably unwilling to foot the bill. The answer, in the UK at least, is that the owners of designated Critical National Infrastructure have a legal duty to safeguard it, advised and monitored by the Centre for the Protection of National Infrastructure or the National Cyber Security Centre.

Sex Robots

The Foundation for Responsible Robotics published an interesting report, Our Sexual Future with Robots, in July 2017. The report discussed whether increasingly lifelike robots, such as Sophia, might:

The pace of change in this area certainly seems likely to require some form of regulatory response before too long.

Data Protection

New EU rules (the General Data Protection Regulation) come into force in May 2018, but one wonders whether any regulations can adequately protect the interests of consumers faced with the increasing monetisation of personal data. The following extracts from a letter to the FT summarised the concerns very well:

... The consumer will never own the data or the algorithms. ... Every moment, your data relating to browsing, calling, online, social media, location tracking and so on is being churned through a multiverse of data warehouses. If you have been browsing about a certain medicine, correlating to a call to an oncologist and a search for a nearby pharmacy, this can consequently be packaged as a data intelligence report and sold to your medical insurance company. This is just one of the myriad ways monetisation is being unleashed on unsuspecting consumers across the world.

The data protection regulations, although a step in the right direction, are usually still heavily tilted in favour of the corporate giants, and still more focused on cross-border transfers than on the real risks of monetisation. The fines imposed on the Silicon Valley giants are minuscule compared with the money they have made from data monetisation efforts. And this is all achieved in the age that is still a forerunner to the era of artificial intelligence and quantum computing.

The very concept of data privacy is archaic and academic. The tech giants are moving faster than this philosophical debate about data privacy. All the sound-bites from the tech giants are mere smoke and mirrors. Unless we revisit our concepts of what is data privacy for this new age of data monetisation, we will never really grapple with the real challenges and how to enforce meaningful regulation that really sets out to protect the consumer.

Syed Wajahat Ali

Martin Stanley