The Explainability Turn
I want to explore what is at stake when we ask: what are the effects of the disruptive technologies, networks and values that are hidden within the black box of a computational system? In this paper I aim to show how, when we ask this question – which calls for an explanation – we inadvertently highlight the contradictions within the historically specific form of computation that emerges in late capitalism. These contradictions are continually suppressed in computational societies but generate systemic problems born of the need for the political economy of software to be obscured, so that its functions, its operations and the value they generate are hidden from public knowledge. Why should this fundamental computational political economy be concealed? One reason is that an information society requires a form of public justification in order to legitimate itself as an accumulation regime and, further, to maintain trust. Trust is a fundamental requirement of any system, and has to be stabilised through the generation of norms and practices that create justifications for the way things are. This is required, in part, because of computation’s rapid, out-of-control growth into a central aspect of a nation’s economy, real or imaginary. We might also note the way in which computation destabilises the moral economy of capitalism, creating vast profits from exchange and production processes that might be considered pre-capitalistic or obscenely inegalitarian, such as intensive micro-work or fragmented labour in the gig economy. Further, many of the sectors affected by computation are increasingly predicated on the illegal manipulation or monopolisation of markets, or are heavily data extractive. These effects threaten individual liberty, undermining a sense of individual autonomy and destroying even that bulwark of the neoliberal system, consumer sovereignty. Profit from computation also often appears to require the mobilisation of persuasive technologies that cynically but very successfully manipulate addictive human behaviour. We might therefore need to rephrase our question and ask: how much computation can society withstand?
Google, at least at one point, internally understood this distinction in terms of what it called a “creepy line”. Within the line, public acceptance of computation generates huge profits (good computation); outside it, computation is able to create effects that would be politically or economically problematic, or even socially destructive, but which might generate even larger profits (bad computation). The founders of Google, Larry Page and Sergey Brin, gestured towards this in their famous paper from 1998, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” where they warned that if the “search engine were ever to leave the ‘academic realm’ and become a business, it would be corrupted. It would become ‘a black art’ and ‘be advertising oriented.’” As Carr describes,
That’s exactly what happened — not just to Google but to the internet as a whole. The white-robed wizards of Silicon Valley now ply the black arts of algorithmic witchcraft for power and money. They wanted most of all to be Gandalf, but they became Saruman (Carr 2019).
Peter Thiel, a PayPal co-founder and chairman of Palantir, described a similar process when he identified the importance of software companies securing a monopoly, which he termed a movement from zero to one. The “one”, of course, represents the successful monopolisation of a niche or sector of the economy. Whilst this is not necessarily a surprise, the candour with which the Silicon Valley elite advocate for these economic structures, which are contrary to neoliberalism, let alone social democracy, should give us pause for thought. Indeed, Thiel goes so far as to argue that he “no longer believe[s] that freedom and democracy are compatible” (Thiel 2009). The exploitative organisation of a capitalist economy is not new, of course, but what is new is that these processes are now intensified to a degree not seen before, and at all levels of society. It is no longer just workers who are subject to processes of automation but also the owners of capital themselves and, inevitably, their private lives. That the millionaires and billionaires of the technology industry should feel the need to protect their own families from the worst aspects of computation, with Steve Jobs famously withholding computers from his children and Larry Page managing to keep his personal life and even his children’s names secret, is ironic given Google’s mission “to organise the world’s information and make it universally accessible and useful.” Unfortunately, this disconnectionism is not an option available to the majority of the population across the world, even as it becomes a bourgeois aspiration through digital detox camps and how-to-disconnect guides in national newspapers.
The contradictions generated by this new system can be observed in discourse. Concepts carry over from the computational industries and spread as explanatory ideas across society. This is perhaps most clearly seen in the way in which computation is described simultaneously as both transparent and opaque, open and closed, augmentation and automation, creating freedom and subjugation, resistance and hegemonic power, the future of the economy and its destruction. Indeed, we see principles from software engineering offered up for social engineering, with open source identified as an exemplary principle of organisation, platforms as future models for governance, calculation substituted for thought, and social media networks replacing community.
If we focus on the important difference between two of these discursive categories, augmentation and automation, we can see how they are used to orient and justify further computation. As far back as 1981, Apple co-founder Steve Jobs famously called computers "Bicycles for the Mind", implying that they augmented the cognitive capacities of the user, making them faster, sharper and more knowledgeable. He argued that when humans "created the bicycle, [they] created a tool that amplified an inherent ability.... The Apple personal computer is a 21st century bicycle if you will, because it's a tool that can amplify a certain part of our inherent intelligence.... [It] can distribute intelligence to where it's needed." This vision has been extremely compelling for technologists and their apologists, who omitted to explain that these capacities might be reliant on wide-scale surveillance technology. But whilst this vision of bicycles for the mind might have been true in the 1980s, changes in the subsequent political economy of our societies mean that computers are increasingly no longer augmenting our abilities but may instead be automating them. Algorithms then become Weberian "iron cages" in which citizens are trapped and monitored by software, with code that executes faster than humans can think, overtaking their capacity for thought. This distinction between (i) augmentation, which extends our capacity to do things, and (ii) automation, which replaces that capacity, provides the key legitimating concepts for understanding this struggle for the future of society.
Information economies are founded on an attempt to make thought subject to property rights – principles of reasoning, mathematical calculation, logical operations and formal principles likewise become owned and controlled. But these forms of thought also become recast as the only legitimate forms of reason, feeding back into a new image of thought. Data is increasingly associated with wealth and power, linked explicitly with the computational resources to submit it to rapid computation and pattern-matching algorithms through machine learning and related techniques. Humans can now purchase thinking capacity, whether through pattern-matching algorithms or the augmentation possibilities of personal devices. Information processing is now so fast that it can be performed in the blink of an eye, and the results used to augment, if you can afford it, or else to persuade and potentially manipulate those who cannot. Depending on the price one is willing to pay, digital technologies can either increase or undermine reasoning capacities, substituting artificial analytic capacities that bypass the function of reason. Some can literally buy better algorithms, better technologies, better capacities for thought. For the rest, algorithms overtake human cognitive faculties by shortcutting individual decisions with a digital "suggestion" or "nudge". When it comes to cognitive functions, such as thinking, reasoning and understanding, the process of computational automation may replace important human capacities in those who cannot afford to defend themselves against it. A new inequality thus emerges, a moment of neuro-diversity created by augmenting or automating thought itself, potentially undermining democratic and public values.
It will not come as a surprise to anyone here that the actually existing informational economy is built increasingly on software that has steered capitalism towards a data-intensive form of extractive economy – what Zuboff has termed surveillance capitalism and Stiegler has identified as the Automatic Society. This has been achieved through surveillance, arbitrage and the manipulation of markets, but also, crucially, through facilitating monopolies of knowledge – whether through digital rights management, copyright or patents. But the contradictions at the heart of computational capital cannot be continually kept in check without the mobilisation of a set of justificatory discourses and an ideology.
One way in which this has happened is through what have come to be seen as two aspects of knowing computation. The first is represented by an epistemology of computation that fetishises the surface: knowledge in and through the interface of a computer, in effect the computed results of computation, which may be represented visually, aurally or through haptics, and which become commonly accepted as the computational. The definitive representation of computation has become the network, which obscures as much as it reveals. We might understand the network as an “apparatus of the dark”, comparable to the lightning which Emily Dickinson memorably described as generating ignorance of what lies behind in “mansions never quite disclosed”. In response to the poverty of the network, attempts to understand the mechanisms of computation have signalled a turn to stacks, infrastructure, materiality, code, software and algorithms, in order to uncover aspects of the computational that have been hidden. However, I argue that the illegibility of the information society’s systems is necessary for it to function and must be generally accepted as a doxa of modern society – even as a desirable outcome. It has certainly justified the proprietary structure of copyright, and it functions through notions of object-oriented design, in which knowledge must be kept obscured or hidden in software through a technical division between source code and execution. I argue that these two aspects of knowing computation are a result of this underlying political economy, which generates a surface and mechanism split – a fundamental bifurcation in knowledge.
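To make this technical division concrete, consider a minimal sketch of information hiding in object-oriented design. The class, its weights and the "recommendation" scenario are entirely hypothetical, invented here only to illustrate how an interface exposes a result while concealing its mechanism:

```python
class RecommendationEngine:
    """A toy 'black box': the interface is public, the mechanism is not."""

    def __init__(self):
        # Internal weights are an implementation detail; the leading underscore
        # marks them, by convention, as hidden from users of the class.
        self._weights = {"watch_time": 0.7, "click_rate": 0.3}

    def recommend(self, signals: dict) -> float:
        # The caller receives only a single 'smart' score (the surface),
        # not an account of how it was produced (the mechanism).
        return sum(self._weights.get(k, 0.0) * v for k, v in signals.items())


engine = RecommendationEngine()
print(engine.recommend({"watch_time": 0.9, "click_rate": 0.2}))  # roughly 0.69
```

Even in this trivial case, the user of the object can act on the score without ever being in a position to ask how it was arrived at; compiled and distributed software generalises this division between source code and execution.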
This division in knowledge is often justified through concepts of simplicity, ease of use or convenience – most notably by the technology industries, especially the so-called FAANG companies (Facebook, Apple, Amazon, Netflix, Google). I argue that one of the outcomes of this is the turn to “smartness” as a justificatory discourse through “operational functionality”, that is, that “smart” results justify the opacity of the dark aspect of this epistemology. Smartness and opacity are therefore directly linked through an epistemological framework that establishes a causal link between data and “truth”, but not through a veracity that requires the material links in the chain of computation to be enumerated or understood. In other words, ignorance of computational processes is, under this epistemology, celebrated as a means to the ends of smartness. One result is to locate data as the foundation of computational inequities or computational power. Injustice is strongly linked to data problems, which can then be addressed by more data, ethical data or democratising data sources. Much social, political and technical effort is then spent on minimising bias in data, or on presenting data results in a manner that takes care of the data. We can therefore summarise this way of thinking through the notion of bad data in, bad data out, or, as commonly understood in technology circles, garbage in, garbage out. As a result, it is generally difficult for a user to verify or question the results that computers generate, even as we increasingly rely on them for facts, news and information. This confusion affects our understanding not just of an individual computer or software package, but also of results generated by networks of computers, and networks of networks. Thus the black box is compounded into a black network, a system of opacity which nonetheless increasingly regulates and maintains everyday life, the economy and the media systems of the contemporary milieu.
One response has been to link computation intimately to the user through the computed results, through the presentation of information painted onto their screens. Computers and smartphones are not just information providers but increasingly also windows into marketplaces for purchasing goods, newspapers and magazines, entertainment centres, maps and personal assistants. This has intensified into an intimate relationship between ourselves and our device, our screen, our network. But the way in which the personal interface of the smartphone or computer flattens the informational landscape also has the potential to create confusion between different functions and information sources.
Secondly, there is also a temptation for the makers of these automated decision systems to use the calculative power of the device to persuade people to do things, whether buying a new bottle of wine, selecting a particular politician, or voting in a referendum or election. Whilst the contribution of data science, marketing data and persuasive technologies to the Trump election and the Brexit referendum remains to be fully explicated, even on a more mundane level computers are active in shaping the way we think. The most obvious example of this is Google Autocomplete on the search bar, which attempts to predict what we are searching for, but similar techniques have been incorporated into many aspects of computer interfaces through design practices that persuade or nudge particular behavioural outcomes.
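As a minimal sketch of the kind of prediction at work in such interfaces, consider simple prefix matching over an invented query log; this is not Google's actual system, which combines far more signals, but it illustrates the principle:

```python
from collections import Counter

# Invented query log for illustration: query text -> how often it was searched.
query_log = Counter({
    "best restaurants near me": 220,
    "brexit referendum result": 120,
    "brexit news": 95,
    "best red wine under 10 pounds": 40,
})

def autocomplete(prefix: str, k: int = 3) -> list:
    """Suggest the k most frequent past queries starting with the typed prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix.lower())]
    return [q for q, _ in sorted(matches, key=lambda item: -item[1])[:k]]

print(autocomplete("brex"))  # ['brexit referendum result', 'brexit news']
print(autocomplete("best"))  # ['best restaurants near me', 'best red wine under 10 pounds']
```

Even this trivial ranking embodies a design decision, popularity, that shapes what the user is invited to think of next.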
Thirdly, the large quantity of data collected and the ease with which it is amassed, combined with new systems of computation, mean that new forms of surveillance are beginning to emerge which are relatively unchecked. When this is combined with their seductive predictive abilities, real potentials for misuse or mistakes are magnified. For example, in Kortrijk, Belgium, and Marbella, Spain, the local police deployed "body recognition" technology, which uses the walking style or clothing of individuals to track them, and across the European Union at least ten countries have a police force that uses face recognition. Even with 99% accuracy in face recognition systems, the number of images in police databases makes false positives inevitable: a 1% error rate means that 100 people will be wrongly flagged as wanted for every 10,000 innocent citizens scanned. In the Netherlands, the police have access to a database of pictures of 1.3 million persons, many of whom were never charged with a crime; in France, the national police can match CCTV footage against a file of 8 million people; and in Hungary, a recent law allows police to use face recognition in ID checks. The lack of transparency in these systems and the algorithms they use is a growing social problem.
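The arithmetic behind this claim is worth making explicit. The following back-of-the-envelope sketch assumes, purely for illustration, a uniform false positive rate per person scanned and ignores match thresholds and repeat scans:

```python
def expected_false_positives(people_scanned: int, false_positive_rate: float) -> float:
    """Expected number of innocent people wrongly flagged, under a uniform error model."""
    return people_scanned * false_positive_rate

print(expected_false_positives(10_000, 0.01))     # 100.0, as in the example above
print(expected_false_positives(1_000_000, 0.01))  # 10000.0 if a whole city were scanned
```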
Fourthly, we see the emergence of systems of intelligence through technologies of machine learning and artificial intelligence. These systems do not merely automate production and distribution processes but have the capacity also to automate consumption. The full implications of this are not only to proletarianise labour but to proletarianise the cognitive workers of a society, making many formerly white-collar jobs redundant; these systems also directly undermine and overtake the human capacity for reason. We see this too in the monopolisation of both the vertical and horizontal dimensions of the market, which elsewhere I have explored through the notion of infrasomatization – the creation of cognitive infrastructures that automate value-chains, cognitive labour, networks and logistics into new highly profitable assemblages built on intensive data capture.
These technologies mobilise processes of selecting and directing activity, often through the automation of stereotypes, clichés and simple answers (Malabou 2019: 52, Noble 2018: 50). But the underlying processes that calculate the results, and the explanation of how this was done, are hidden from the user, whether they are, for example, denied a loan, insurance cover or welfare benefits. This explanatory deficit is a growing problem in our societies as the reliance on algorithms, some poorly programmed, creates situations that are inequitable and unfair, with little means of redress for citizens. Unless addressed, this will not only be a growing source of discontent in society but will also serve to delegitimate political and administrative systems, which will appear increasingly remote, unchecked and inexplicable to members of society.
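A deliberately simplified sketch of what such a decision looks like from the data subject's side may be useful here. The features, weights and threshold are all invented, and real systems are far more complex, but the asymmetry is the same:

```python
# A toy automated loan decision: the applicant receives only the outcome,
# while the weights, threshold and features that drive it remain hidden.
WEIGHTS = {"income": 0.4, "years_at_address": 0.25, "postcode_risk": -0.35}  # hidden
THRESHOLD = 0.5  # hidden

def decide(applicant: dict) -> str:
    score = sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return "approved" if score >= THRESHOLD else "denied"

# The applicant is never told that 'postcode_risk', a proxy that may encode
# existing inequalities, is what tipped the outcome.
print(decide({"income": 0.9, "years_at_address": 0.8, "postcode_risk": 0.7}))  # denied
```

The explanatory deficit is precisely the gap between the single word returned and the calculation that produced it.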
Understanding the way in which the computational generates and magnifies uncertainty and a feeling of rising social risk and instability is also, to my mind, connected to a social desire for tethering knowledge, for grounding it in some way. We see tendencies generated by the liquidation of information modalities in "fake news", conspiracy theories, social media virality, a rising distrust of science and expertise, and the rise of relativism. This is also to be connected to new forms of nationalism, populism, and the turn to traditional knowledge to provide a new, albeit misplaced, ground for social epistemology. I also think this is linked to the temptation to seek explanations in new metaphysics of the computational, which rely on formalism and mathematics in an attempt to understand computation through axioms, mathematical or computational notations, and rules far removed from concrete experience. This new search for ground or foundations, whether through identity, tradition, formalism or metaphysics, is, to my mind, symptomatic of the difficulty of understanding and connecting computation and its effects across the scales of individual and social life. If these problems are not addressed as they become politicised and matters of concern for the wider public, computation will begin to suffer a legitimacy crisis.
As a result of these new forms of obscurity in the use of artificial intelligence, automated decision systems and machine learning, a new explanatory demand has emerged, resulting in a fascinating constellation which we might understand as the social right to explanation. This has come to be called explainability.
The European Union General Data Protection Regulation 2016/679, known as the GDPR, is key to understanding this. The GDPR creates a new kind of subject, the "data subject", to whom a right to explanation (amongst other data protection and privacy rights) is given, and it provides a legal definition of processing through a computer algorithm (GDPR 2016, Art. 4). In consequence, this has given rise to a notion of explainability, which creates the right "to obtain an explanation of [a] decision reached after such assessment and to challenge the decision" (GDPR 2016, Recital 71; Goodman and Flaxman 2016). When instantiated in national legislation, such as the Data Protection Act 2018 in the UK, it creates what we might call the social right to explanation. Thus, I argue that explainability is not just an issue of legal rights; it has also created a normative potential for this social right to explanation. Explainability can challenge algorithms and their social norms and hierarchies, and it has the potential to contest the platforms that deploy them.
In the context of computational systems, the first important question we need to consider is: what counts as an explanation? Explanations are assumed to tell us how things work, thereby giving us the power to change our environment in order to meet our own ends. Thus, explainability and the underlying explanation are linked to the question of justification. I call this the "Explainability Turn".
Hempel and Oppenheim (1988) argue that an explanation seeks to "exhibit and to clarify in a more rigorous manner" with reference to general laws. Among the examples they give is a mercury thermometer, whose behaviour can be explained using the physical properties of glass and mercury (Hempel and Oppenheim 1988: 10). Similarly, they present the example of an observer of a rowing boat for whom the part of the oar submerged under water appears to be bent upwards (Hempel and Oppenheim 1988: 10). An explanation, on this account, explains with reference to general laws. As Mill argues, "an individual fact is said to be explained by pointing out its cause, that is, by stating the law or laws of causation, of which its production is an instance" (Mill 1858). Similarly, Ducasse argued in 1925 that "explanation essentially consists in the offering of a hypothesis of fact, standing to the fact to be explained as case of antecedent to case of consequent of some already known law of connection" (Ducasse 2015: 37). Hempel and Oppenheim therefore divide an explanation into its two constituent parts, the explanandum and the explanans (Hempel and Oppenheim 1988: 10). The explanandum is a logical consequence of the explanans, and the explanans itself must have empirical content, which creates the conditions for testability. In this sense of explanation, science is often supposed to be the best means of generating explanations (Pitt 1988: 7).
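Their deductive-nomological schema can be set out as follows; this is my rendering of the standard presentation, in which the explanandum is deduced from general laws together with statements of antecedent conditions:

```latex
% Deductive-nomological schema (after Hempel and Oppenheim); requires amsmath.
% The explanandum E follows deductively from the explanans.
\[
  \underbrace{C_1, C_2, \ldots, C_k,\; L_1, L_2, \ldots, L_r}_{\text{explanans: antecedent conditions } C_i \text{ and general laws } L_j}
  \;\vdash\;
  \underbrace{E}_{\text{explanandum}}
\]
```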
However, this causal mode of explanation is inadequate in fields concerned with purposive behaviour, as in computational systems, where the goals sought by the system are required in order to provide an explanation (Ruben 2016). We should also remember that an explanation is an act as well as a product, of which we might ask: How long did it take? Was it interrupted at any point? Who gave it? When? Where? What were the exact words used? For whose benefit was it given? It also makes sense to ask: Who created it first? Is it very complicated? (Ruben 2016: 6).
In the case of computational systems, it has become more common for purposive behaviour, such as so-called machine behaviour, to be described in terms of "motivations", and therefore for explanation to be teleological rather than causal: the goals sought by the system are required in order to provide an explanation. Teleological approaches to explanation may also make us feel that we really understand a phenomenon because it is accounted for in terms of purposes, with which we are familiar from our own experience of purposive behaviour. One can, therefore, see a great temptation to use teleological explanation in relation to AI systems, particularly by creating a sense of an empathetic understanding of the "personalities of the agents." In relation to explanation, therefore, explainability needs to provide an answer to the question "why?" in order to close the gap in understanding. This raises a new potential for critique.
Crucially, this connection between an explanatory product and the legal regime that enforces it has forced system designers and programmers to look for explanatory models that are sufficient to provide legal cover, but that are also presentable to the user or data subject. This of course leads to the temptation to create persuasive rather than transparent explanations, or merely "good enough" explanations. The concept of explainability, and the related practices of designing and building explainable systems, also rest on an underlying theory of general explainability and a theory of the human mind. These two theories are rarely explicitly articulated in the literature, and there is an urgent need to bring them together in order to show why explainability cannot be a mere technical response to the contemporary problem of automated decision systems. We need, therefore, to place explainability within a historical and conceptual milieu through a deeper understanding of the political economy of the information society. Many current discussions of explainability tend to be chiefly interested in an explanatory product, whereas I argue that a historical understanding of the explanatory process will have greater impact on education, society and politics.
I argue that the concept of explainability can help to critique a particular form of thought which is justified through the universalisation of a historically specific form of capitalist computation: one in which thought becomes computation. We must continually remind ourselves that the current information economy is historical. It owes its success and profitability to a legislative assignment of creative rights to the automatic operation of computers that emerges in capitalism (as automation). This legal structure is required to make the information economy profitable through copyright and, to a lesser extent, patents. Other computations are possible, and different assemblages of computation and law might generate economic alternatives that mitigate or remove the current negative, disruptive effects of computation in society. It is crucial to recognise that there is no “pure” or metaphysical computation. Indeed, through the mobilisation of concepts such as explainability, the underlying contradictions of computational capitalism can be made manifest and, more importantly, challenged and changed. This suggests that computation needs to be rethought and moved away from its current tendencies, away from what I have called neo-computationalism, or right computationalism, which is geared towards some of the worst excesses of capitalism, and towards a new conception of left computationalism.[1] This would need to be developed through education and the capacity-building of explanatory publics which, in using explainability as a critical concept, create the conditions for greater democratic thought and practice in computational capitalism.
These are uncorrected notes of a paper given at International, Internation, Nations, Transitions: Penser les Localités dans la Mondialisation, ENMI 2019, l’Institut de Recherche et d’Innovation, Centre Pompidou, Grande Salle, Paris, France, 17-18 Dec 2019.
Notes
Selected Bibliography
Carr, N. (2019) Larry and Sergey: a valediction, http://www.roughtype.com/?p=8661
Darpa (n.d.) Explainable Artificial Intelligence (XAI), https://www.darpa.mil/program/explainable-artificial-intelligence
Ducasse, C. J. (2015) Explanation, Mechanism and Teleology, in Truth, Knowledge and Causation, Routledge.
GDPR (2016) General Data Protection Regulation, Regulation (EU) 2016/679 of the European Parliament.
Goodman, B. & Flaxman, S. (2016) European Union regulations on algorithmic decision-making and a "right to explanation", https://arxiv.org/abs/1606.08813
Hempel, C. G. and Oppenheim, P. (1988) Studies in the Logic of Explanation, in Pitt, J.C. (ed.) Theories of Explanation, OUP.
Kuang, C. (2017) Can A.I. Be Taught to Explain Itself?, The New York Times, https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html
Mill, J. S. (1858) Of the Explanation of the Laws of Nature, in A System of Logic, New York.
Noble, S. U. (2018) Algorithms of Oppression, NYU Press.
Pitt, J.C. (1988) Theories of Explanation, OUP.
Ruben, D. H. (2016) Explaining Explanation, Routledge.
Sample, I. (2017) Computer says no, The Guardian, https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial
Thiel, P. (2009) The Education of a Libertarian, Cato Unbound, https://www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian