Modern technologies, including Artificial Intelligence (AI) and algorithms, cut business costs and enable activities (such as internet search and autonomous driving) which would otherwise be impossible. But they also remove human involvement from decision-making.
Their abilities are also often misunderstood - or at least mis-characterised. Algorithms are not intelligent. They don't know what they are doing. AI applications are merely statistical machine-learning algorithms. They can produce novel outputs, but it would be a stretch to call them creative as they themselves have no idea of the artistic or other value of their output.
It is, I hope, ridiculous (or at least far too soon) to regard AI as an ideology: "at heart a suite of technologies, designed by a small technical elite, [which] should become autonomous from and eventually replace ... not just individual humans but much of humanity" (an essay in Wired, 2020). But it is one of a number of technologies that cannot be understood by most people, leading to an increasing sense of alienation. And John Tasioulas was surely right when he said that "AI has a transformative potential for many parts of life, from medicine to law to democracy. It raises deep ethical questions – about matters such as privacy, discrimination, and the place of automated decision-making in a fulfilling human life – that we inevitably have to confront both as individuals and societies".
For algorithms, all decisions are essentially binary. Deployed in law enforcement, they would therefore represent a big change (in the UK at least) from our tradition of having law enforcement moderated by human police officers, jury members and judges. Katia Moskvitch commented, with some force, that 'our society is built on a bit of contrition here, a bit of discretion there'. Follow this link for a further discussion of this subject.
And then there is the related issue that algorithms are written by humans, who will almost certainly (though accidentally) import their own false assumptions, generalisations, biases and preconceptions. How easy is it to challenge decisions made by such algorithms? Does it matter, for instance, that recruitment decisions (including to the civil service) are nowadays often made by algorithms whose logic is held in a 'black box' inaccessible to anyone other than its designer - and maybe not even to the client?
Even theoretically unbiased software - such as face recognition - can easily (and perhaps inevitably) misidentify too many 'suspects'. It has an inherent weakness in that, unlike us, it doesn't look at an image as a whole: it essentially performs a statistical analysis of the pixels so as to identify features, so it can easily be fooled. And even if it is 99% accurate, it will flag up around 1,000 innocent people in a crowd of 100,000. Is that a price worth paying if one criminal is arrested? And it seems strange - to use a mild word - that the identities of 117 million Americans are held in face recognition databases, whilst Britain (as of 2018) has 5.9 million CCTV cameras, including 500,000 in London alone.
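The arithmetic behind that claim is worth spelling out. Here is a minimal sketch using only the illustrative figures quoted above (treating "99% accurate" as a 1% false positive rate and assuming a single genuine suspect in the crowd):

```python
# Illustrative arithmetic only, using the figures quoted above: a system
# that is "99% accurate" (a 1% false positive rate) scanning a crowd of
# 100,000 people that contains a single genuine suspect.

crowd_size = 100_000
genuine_suspects = 1
false_positive_rate = 0.01

innocent_people = crowd_size - genuine_suspects
expected_false_alarms = innocent_people * false_positive_rate

print(f"People wrongly flagged: {expected_false_alarms:,.0f}")          # roughly 1,000
print(f"Flags raised per genuine suspect: about {expected_false_alarms:,.0f} to 1")
```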
Self-learning AI can be just as dangerous. I was struck by this report in The Times in October 2018:
Amazon inadvertently built itself a sexist recruitment assistant in a failed experiment that demonstrated why artificial intelligence does not necessarily lead to artificial wisdom. The company set out to create a virtual hiring tool that would sift thousands of job applications far more efficiently than people can. Unfortunately, the AI algorithm taught itself to discriminate against women based on the fact that many more men had applied for and got jobs in the past. The new system began to penalise CVs that included the word “women’s”, as in “women’s chess club captain”. It downgraded applications sent by graduates of two all-female universities and prioritised applications that featured verbs more commonly found in male engineers’ CVs, such as “executed” and “captured”.
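Amazon has not published how its experimental tool worked, but the underlying mechanism - a model learning from historically skewed decisions - is easy to reproduce in miniature. The sketch below uses entirely invented CVs and labels; the point is only that a word associated with the under-hired group ends up with a negative weight:

```python
# Toy illustration (invented data): how a model trained on historically
# male-dominated hiring decisions can learn to penalise gendered words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic "CV" corpus: label 1 = hired, 0 = rejected.
cvs = [
    "executed project captured requirements chess club captain",   # hired
    "executed migration led team captured market data",            # hired
    "captained debating society executed fundraising plan",        # hired
    "women's chess club captain organised coding society",         # rejected
    "women's engineering society president built robotics demo",   # rejected
    "organised hackathon built web application for charity",       # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women" (from "women's").
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':   ", round(weights["women"], 3))     # negative
print("weight for 'executed':", round(weights["executed"], 3))  # positive
```

The model is never told anyone's gender; it simply learns that a word correlated with past rejections predicts rejection.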
Another report in late 2019 noted that US commercial facial recognition systems falsely identified 10 to 100 times more African-American and Asian faces than Caucasian faces.
These are but part of a much wider problem: it is impossible to tell what is happening within a neural (self-teaching) network. Such AI typically identifies lots of small clues that help it reach decisions which are apparently correct - or at least as likely to be correct as similar decisions taken by humans. But some of those clues may be inadvertently prejudicial or discriminatory.
AI now dominates modern financial markets. A JP Morgan analyst has estimated that a mere 10 per cent of US equity market trading is now conducted by discretionary human traders; the rest is driven by various rules-based automatic investment systems, ranging from exchange traded funds to computerised high-speed trading programs. The FT's Gillian Tett argues that we are seeing the rise of self-driving investment vehicles, mirroring developments in the auto world. But while the sight of driverless cars on the roads has sparked public debate and scrutiny, the same has not occurred with self-driving finance.
And it is important to remember - when wondering what can go wrong - that artificial intelligence applies software to data sets. Either or both can be faulty.
There are reports, for instance, that over-stretched social services teams are looking to algorithms to help them make those terrible decisions about whether 'at risk' children should be removed from their parents. The rate of child abuse in the general population is so low that false positives (wrongly removing a child) are inevitable. Is any evidence base strong enough - and free enough of crude stereotypes - to support automated decisions? If it were your child that was being taken into care, would you prefer the decision to be taken by an algorithm or a human?
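The base-rate problem here is the same as with face recognition, and can be made explicit. The prevalence and accuracy figures below are assumptions chosen purely for illustration, not real child-protection statistics:

```python
# Illustrative only: positive predictive value when the base rate is low.
# The prevalence and accuracy figures are assumptions for the sake of the
# arithmetic, not real child-protection statistics.

prevalence = 0.005          # assume 0.5% of assessed families involve abuse
sensitivity = 0.90          # assume the model catches 90% of true cases
false_positive_rate = 0.10  # and wrongly flags 10% of the rest

families = 100_000
true_cases = families * prevalence
non_cases = families - true_cases

flagged_true = true_cases * sensitivity
flagged_false = non_cases * false_positive_rate

share_correct = flagged_true / (flagged_true + flagged_false)
print(f"Families wrongly flagged: {flagged_false:,.0f}")
print(f"Share of flagged families actually at risk: {share_correct:.1%}")  # ~4%
```

Even with these generous assumptions about accuracy, the great majority of families flagged would be false positives.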
And older doctors are concerned that their younger colleagues may be placing too much reliance on technology, and not enough on patient-reported symptoms. A senior surgeon, Mr Skidmore, wrote this to the FT in 2018:
Time and again, the errors being made by a younger generation of medical specialists in many disciplines are due to excessive reliance on scans and other images when these reports are at variance with the history given by the patient, and to increasingly cursory clinical examinations. There remains no substitute whatsoever for a doctor who has been trained by good teachers carrying out a meticulous bedside assessment of a patient and thereby constructing a provisional diagnostic matrix. ... Can AI assess abdominal rigidity in a patient with peritonitis? AI cannot smell odours that accompany disease. AI cannot assimilate or validate pain on an analogue scale. ... Disease processes do not change. Meticulous assessment of a patient’s symptoms and signs remain just as relevant today. AI can be used to confirm the clinical diagnosis but should never be allowed to refute it. Unfortunately, with errors in communication and failure of continuity of care responsibility, excessive and unquestioning reliance on AI can lead to clinical delay in patient management, with disastrous consequences.
Competition authorities are also concerned that algorithms can harm competition, for example by reducing the choice in shopping and travel search results or allowing companies to promote their own products above those of rivals. Complex algorithms can also aid collusion between businesses without companies directly sharing information. The CMA has called for advice from academics and industry experts in the field.
Tasneem Azad offered the following more detailed comment:
Behavioural insights are increasingly being harnessed to inform Artificial Intelligence systems across a wide range of uses. For example, pricing algorithms are being programmed to improve based on the search and purchase behaviour of customers. As these systems ‘learn by doing’, not only do they meet better the needs of consumers but they begin to provide critical insights that can improve the efficiency of production systems and help streamline complex supply chains.
A challenge in this world, however, is where AI systems of this nature can themselves appear to act anti-competitively. For example, where price comparison sites observe or track other price comparison sites, they may over-estimate demand for certain products or services, in turn leading to the AI raising prices. Where these AI systems raise prices at the same time, it may raise similar concerns to those around tacit co-ordination. The manner in which competition authorities choose to address such machine learning will be an area to watch with close interest.
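The dynamic being described can be illustrated with a deliberately over-simplified sketch: two pricing algorithms that never communicate, but each follow a plausible (invented) rule of reacting to the other's observed price:

```python
# A deliberately over-simplified sketch of two pricing algorithms that
# never communicate, but each react to the other's observed price.
# Rule (assumed for illustration): if the rival's price is at least mine,
# nudge mine up slightly; otherwise match the rival.

def next_price(my_price: float, rival_price: float) -> float:
    if rival_price >= my_price:
        return my_price * 1.02   # rival isn't undercutting, so edge upwards
    return rival_price           # rival undercut, so match them

price_a, price_b = 10.0, 10.0
for period in range(1, 11):
    price_a, price_b = next_price(price_a, price_b), next_price(price_b, price_a)
    print(f"period {period}: A = {price_a:.2f}, B = {price_b:.2f}")

# Without any agreement, both prices ratchet upwards in lockstep -
# the kind of outcome that looks like tacit co-ordination.
```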
Last, and certainly not least, Kenneth Payne argues that AI swings the logic of warfare towards attack rather than deterrence. AI's pattern recognition makes it easier to spot defensive vulnerabilities and allows more precise targeting. Its distributed swarms of robots/drones can concentrate rapidly on critical weaknesses, so a well-planned, well-coordinated first strike could defeat all of a defender's retaliatory forces. Superior AI capabilities could thus increase the temptation to strike quickly and decisively at North Korea's small nuclear arsenal, for instance.
The Different Systems
Alice Irving's 2019 paper Rise of the algorithms includes a useful description of the various types of algorithm, and their limitations.
Some are rule-based and merely apply a pre-programmed (and maybe complex) decision tree. In contrast, machine learning involves a computer system training itself to spot patterns and correlations, and to make predictions accordingly.
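That contrast can be made concrete. The sketch below uses invented lending rules and data: in the first case the decision logic is written down and can be audited line by line; in the second it is inferred from past outcomes, whatever patterns they happen to contain:

```python
# Illustrative contrast (invented rules and data): a hand-written decision
# tree versus a model that learns its own rule from past examples.
from sklearn.tree import DecisionTreeClassifier

# 1. Rule-based: the decision logic is written out by a person and can
#    be read, audited and challenged.
def rule_based_decision(income: float, existing_debt: float) -> bool:
    if income < 20_000:
        return False
    if existing_debt > 0.5 * income:
        return False
    return True

# 2. Machine learning: the system infers its own thresholds from past
#    decisions, which may encode whatever patterns (or biases) they contain.
past_cases = [[15_000, 2_000], [40_000, 5_000], [60_000, 45_000], [30_000, 4_000]]
past_outcomes = [0, 1, 0, 1]   # 1 = approved, 0 = refused
learned_model = DecisionTreeClassifier().fit(past_cases, past_outcomes)

applicant = [[25_000, 3_000]]
print("Rule-based decision:", rule_based_decision(25_000, 3_000))
print("Learned decision:   ", bool(learned_model.predict(applicant)[0]))
```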
Some are fully automated. Others have a human in the loop, although the human can default into merely rubber-stamping the algorithm's suggested decision.
And the decision making can be (and usually is) opaque in one of three ways:
- Intentional opacity - to protect intellectual property
- Illiterate opacity - where the workings can be understood only by those with technical expertise
- Intrinsic opacity - where the algorithm is so complex that even a tech expert cannot understand its workings; a black box in other words.
Predictor Values - or Prejudices?
Durham Police are using an AI system to help their officers decide whether to hold suspects in custody or release them on bail. Inputs into the decision-making include gender and postcode. The force stresses that the decision is still taken by an officer, albeit 'assisted by' the AI, but the Law Society has expressed concern that custody sergeants will in practice delegate responsibility to the algorithm, and will face questions from senior officers if they choose to go against it. One problem, for instance, might be that the system uses postcode data as one of its 'predictor values' - but postcodes can also be indicators of deprivation, thus possibly creating a sort of feedback loop as officers increasingly focus on deprived areas.
Kent Police are using an Evidence Based Investigation Tool to forecast the probability that certain crimes will be solved. As a result, they now investigate only 40% of crimes (previously 75%) and yet the number of charges and cautions has remained the same. There is concern, however, that - as the technology bases its predictions on past investigations - previous biases may be reinforced. But the researchers behind the technology have at least incorporated the automatic inclusion of some low-probability cases into each day's investigations, so allowing a blind test of the algorithm's effectiveness. One wonders, however, how long such checks and balances would survive a general roll-out of the technology.
There are already some 'no go' areas for algorithms. It would not be acceptable (in the UK) for Durham Police - or anyone else - to use ethnicity as one of their predictor values. And insurance companies are not allowed to offer cheaper car insurance to women, even though they are on average much safer drivers than men. But why then should other predictor values be acceptable? Age and gender are, for instance, used by Durham Police.
In the US, a federal judge ruled that a 'black box' performance algorithm violated Houston teachers' civil rights. But Eric Loomis, in Wisconsin, failed to persuade a judge that it was unfair that he was given a hefty prison sentence partly because the COMPAS algorithm judged him to be at high risk of re-offending. This was despite his lawyer arguing that such a secret algorithm was analogous to evidence offered by an anonymous expert whom one cannot cross-examine - and one analysis of the system had suggested that it was twice as likely to incorrectly predict that a black person would re-offend, than a white person.
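The disparity reported in that analysis is a difference in false positive rates between groups - a simple metric to compute, even for a black-box system, so long as outcomes can later be observed. The figures below are invented purely to show the calculation, and are not COMPAS data:

```python
# Illustrative only: comparing false positive rates across two groups.
# The numbers are invented to show the metric, not real COMPAS data.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [p for p, r in zip(predicted_high_risk, reoffended) if not r]
    if not non_reoffenders:
        return 0.0
    return sum(non_reoffenders) / len(non_reoffenders)

# Predicted high risk (1/0) and actual reoffending (1/0), per person,
# for two invented groups of ten people each.
group_a_pred = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
group_a_true = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]
group_b_pred = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
group_b_true = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# With these invented figures Group A's rate is roughly double Group B's.
print("Group A false positive rate:", round(false_positive_rate(group_a_pred, group_a_true), 2))
print("Group B false positive rate:", round(false_positive_rate(group_b_pred, group_b_true), 2))
```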
What About Morality?
There will presumably come a time when AI systems are required to take moral decisions - that is, to choose moral rather than immoral courses of action. Indeed, that time might be quite near in the case of autonomous vehicles, which might need to choose whom to harm if involved in a collision - their own passengers or a third party, for instance.
But it would be unwise to assume that there is a 'right answer' for every moral decision. We can start by trying to maximise the greatest happiness of the greatest number, or by doing unto others as we would be done by. But it would be easy enough for those two rules to conflict, even before we add in moral imperatives to safeguard children, keep promises and so on. And that is before we try to allow for individuals' differing temperaments, characters and upbringings.
Are We Entitled to Explanations?
Onora O'Neill stresses the perhaps obvious point that we shouldn't trust in general: we should trust the trustworthy. So we need to be able to ascertain which algorithms we can trust, and algorithms that affect our rights - particularly those used by the public sector, the courts and so on - need to be open and testable.
The EU's General Data Protection Regulation (GDPR) requires organisations to tell us when automated decisions affect our lives, and allows us to challenge those outcomes by requiring companies to give meaningful information about the decision, including an explanation of the logic of the decision-making. But it is not yet clear how informative, in practice, such explanations will be. The companies that design the algorithms will in particular want to defend their intellectual property, and some AI systems learn from experience, so even their designers may be unable to explain what has happened.
Stephen Cave warns that our "biggest misapprehension about AIs is that they will be something like human intelligence. The way they work is nothing like the human brain. In their goals, capacities and limitations they will actually be profoundly different to us large-brained apes." An emerging class of algorithms makes judgments on the basis of inputs that most people would not think of as data. One example is a Skype-based job-interviewing algorithm that assesses candidates' body language and tone of voice via a video camera. Another algorithm has been shown to predict with 80% accuracy which married couples will stay together - better than any therapist - after analysing the acoustic properties of their conversations.
US expert David Gunning adds that the best performing systems may be the least explainable. This is because machines can create far more complex models of the world than most humans can comprehend. Counterfactual testing - such as changing the ethnicity of a subject to see whether the decision changes - will not work when there is a complex stream of data feeding the decision-making.
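Counterfactual testing itself is simple to describe: change one attribute and see whether the decision changes. The sketch below uses a hypothetical stand-in for a black-box model and a postcode attribute (both invented); real systems consume far richer data streams, which is exactly why the technique can break down:

```python
# Minimal sketch of counterfactual testing: change one attribute and see
# whether a black-box model's decision changes. The model here is a
# hypothetical stand-in we can only query, not a real system.

def black_box_model(applicant: dict) -> bool:
    """Stand-in for an opaque decision system."""
    score = applicant["income"] / 10_000 + applicant["years_employed"]
    if applicant["postcode_area"] in {"X1", "X2"}:   # a proxy feature
        score -= 3
    return score >= 6

def counterfactual_test(applicant: dict, attribute: str, alternative) -> bool:
    """Return True if flipping one attribute changes the decision."""
    original = black_box_model(applicant)
    changed = black_box_model({**applicant, attribute: alternative})
    return original != changed

applicant = {"income": 50_000, "years_employed": 2, "postcode_area": "X1"}
print("Decision changes if postcode changes:",
      counterfactual_test(applicant, "postcode_area", "Y9"))
```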
The UK's Information Commissioner's Office plans to require businesses and other organisations to explain decisions made by AI. Its proposals were put out to consultation in December 2019.
But we may never fully understand how some AI systems learn and work. No-one in Google, for instance, can tell you exactly why AlphaGo made the moves that it did when it started beating the best Go players in the world.
Further Reading
The ability of algorithms and AI to work together to the disadvantage of consumers is also beginning to cause concern. There is more detail in the discussion on my cartels web page.
Karen Yeung offers an interesting academic review of Algorithmic Regulation and Intelligent Enforcement on pp 50- of CARR's 2016 discussion paper Regulation scholarship in crisis?. She notes AI's 'three claimed advantages. Firstly, by replacing the need for human monitors and overseers with ubiquitous, networked digital sensors, algorithmic systems enable the monitoring of performance against targets at massively reduced cost and human effort. Secondly, it operates dynamically in real-time, allowing immediate adjustment of behaviour in response to data feedback thereby avoiding problems arising from out-of-date performance data. Thirdly, it appears to provide objective, verifiable evidence because knowledge of system performance is provided by data emitted directly from a multitude of behavioural sensors embedded into the environment, thereby holding out the prospect of 'game proof' design.' But 'All these claims ... warrant further scrutiny' which she proceeds to offer.
There is much to ponder in Joanna Bryson's IPR blog Tomorrow comes today: How policymakers should approach AI. She says, for instance, that:
- AI is already the core technology of the richest corporations on both sides of the great firewall of China.
- Already [AI is] far better at predicting individuals' behaviour than individuals are happy to know, and therefore than companies are happy to publicly reveal.
- The government's present policy of outlawing adequate encryption is a severe threat to the UK on many levels, but particularly with respect to AI.
- AI and ICT more generally have become sufficiently central to every aspect of our wellbeing that they require dedicated regulatory bodies just as we have for drugs and the environment.
- This is not the same as saying that AI cannot have proprietary intellectual property or must all be open source. Medicine is full of intellectual property, yet it is well regulated.
Dr Bryson's blog also says interesting things about regulation of the Technology Giants.
The House of Lords published a detailed report in 2018 AI in the UK: ready, willing and able? which included some interesting regulatory recommendations such as:
- The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
- The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also to encourage greater diversity in the training and recruitment of AI specialists.
- Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
- It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
Parliament's Science and Technology Committee published a thorough report - Algorithms in Decision Making - in 2018, in particular pressing the government to require algorithm operators to be transparent. The government's response was also pretty thorough, although at that time it had no plans to introduce legally-binding measures to allow challenges to the outcome of decisions made using algorithms.
Increased unregulated use of AI may also have profound social consequences. Virginia Eubanks talks of 'the Digital Poorhouse' and argues that
'We all live under this new regime of data analytics, but we don’t all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.'
Her full Harper's article is here.
[Other lively regulatory issues - especially in response to innovation - are summarised here.]