Judging Computation: Calculation and Judgement in Artificial Intelligence

Artificial intelligence (AI) is now seen as one of the most important global challenges of the 21st century. The rising policy importance of this challenge is demonstrated by President Biden’s Executive Order on AI and the UK government’s AI Summit at Bletchley Park, both in 2023. These moments raise important issues that I explore here through the foundational concepts of “judgement” and “calculation”, concepts that ground the idea of artificial intelligence and machine learning from their early formation. The question I want to examine is not only how we can better understand the historical and theoretical ideas of artificial intelligence, but also how we can respond to an implicit notion of a decision-making capacity that is often deployed, even today, in explanations of artificial intelligence. 

The importance and timeliness of these questions are highlighted by the rapid deployment of AI systems and by public concern with bias, transparency, responsibility and trust in relation to these systems. However, little work has been done to place these notions within the context of the debates begun in early AI research programmes in the 1960s. This article examines some of these lacunae in the literature and in wider public debate, and begins the work of engaging closely with this aporia. 

To understand these contrasting ideas of how AI becomes a specific project to create a capacity for calculable (or “computational”) “decisions”, I want to suggest that it is crucial that we map the constellations of concepts that were articulated historically by different theorists, programmers and projects within the history of AI. Key to this project is the idea within AI and machine learning that it is possible to translate “judgement” into classifications that can be either programmed or “learned” by a machine learning system. Indeed, AI was greatly influenced by notions of calculability in fields such as cybernetics, statistics, cognitive science and operations research, particularly in relation to the problematic of pattern recognition and machine learning (Simon, 1982). I want to suggest that by focusing on key early AI thinkers we are better able to understand how a particular distinction between judgement and calculation emerges within a technical imaginary. 

This will enable us to trace what I call “AI anxiety” in individuals and society, and how it is often implicitly based on a sedimented notion of human judgement. By working through a number of case studies, my aim is to trace the connections between judgement and decision-making and the ways in which they connect digital processes, representations and techniques through their operationalisation. What I hope will emerge is an account of AI that extends recent work on the relation between artificial intelligence, humans and values to examine how AI becomes an experimental field for the distribution of ideas of “judgement” and “calculation”, particularly in recent debates about “AI alignment”, and the methods, theories, objects and affects that stabilise them. 

The key starting point for this research is the work and scattered writings of Joseph Weizenbaum. Working at MIT in the 1960s, Weizenbaum became famous for the creation of the ELIZA chatbot software in 1966 (see our book project, Finding Eliza, 2024). The reaction of users, who believed that the computer was intelligent, greatly concerned Weizenbaum and was later described as the “ELIZA effect” (Turkle, 1997). Weizenbaum described being treated as a “heretic” in AI research as he began to question its direction and its problematic theorisation of humans, exemplified by Marvin Minsky describing the brain as a “meat machine” (Long, 1985, p. 47). Weizenbaum wrote, “the knowledge of behaviour of German academics during the Hitler time weighed on me very heavily. I was born in Germany, I couldn't relax and sit by and watch [MIT] behaving in the same way” (ben-Aaron, 1985, p. 2). 
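The mechanics behind this reaction were strikingly simple. As a purely illustrative sketch (written here in Python rather than the MAD-SLIP in which ELIZA was originally implemented, and using hypothetical rules rather than Weizenbaum’s actual DOCTOR script), the following shows the kind of keyword-matching and pronoun-reflection trick on which ELIZA’s apparent understanding rested:

```python
import re

# Reflect first-person words back at the speaker ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs: hypothetical, much-simplified rules;
# the real ELIZA script used ranked keywords and decomposition rules.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment reads from the program's side."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching rule's response, else a contentless prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback that keeps the conversation moving

print(respond("I feel anxious about my work"))
# -> "Why do you feel anxious about your work?"
```

The point, for Weizenbaum, was precisely that such shallow pattern substitution, containing no model of meaning at all, was nonetheless mistaken by users for understanding. 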

Unusually for a computer scientist, Weizenbaum was awarded a fellowship at Stanford, where he wrote Computer Power and Human Reason: From Judgment to Calculation. In the book, Weizenbaum (1976) noted his exposure to the ideas of writers such as Mumford (“authoritarian technics”), Horkheimer (“instrumental reason”), Arendt (“calculation”), Dreyfus (“whole/part”), Jonas (“responsibility”), Anders (“obsolescence”), Chomsky (“understanding”) and Freud (“psychoanalysis”). He linked the notion of “calculability” or “probability” (what Weizenbaum called technological ideology) to a scientific mode of knowing, in contrast to judgement, which leads to morally obligatory ways of living one’s life. As Weizenbaum wrote,

“The scientific man has above all things to strive at self-elimination in his judgments,” wrote Karl Pearson in 1892. Of the many scientists I know, only a very few would disagree with that statement. Yet it must be acknowledged that it urges man to strive to become a disembodied intelligence, to himself become an instrument, a machine. So far has man's initially so innocent liaison with prostheses and pointer readings brought him. And upon a culture so fashioned burst the computer (Weizenbaum 1976: 25–26).

As Weizenbaum explained, “computers can perform impressive feats of calculating... what we have to fear is that inherently human problems – for example, social and political problems – will increasingly be turned over to computers for solutions” (Long, 1985, p. 50). By beginning with Weizenbaum's work, tracing backwards to his influences (such as his archival personal letters to Mumford and others), and then moving forward to the intense debate his work generated (e.g. with McCarthy and Minsky), I aim to reconstruct this important theoretical dichotomy. This will enable the interrogation and critique both of the binary structure of this distinction (judgement vs calculability) and of its deep sedimentation in contemporary discourse around AI, particularly recent machine learning systems such as large language models (LLMs) like ChatGPT. 

The fear that “judgement” mimicked by a computer program could become normalised, whether through the tricks that Weizenbaum described in relation to ELIZA or by re-conceptualising judgement as a calculation or a series of calculations, is a key concern for Weizenbaum and for my own research. As Weizenbaum wrote, describing computers,

"judg­ment" is not the proper word, for their decision would be reached by the application of logic only. It would, in effect, be nothing more than a determined calculation, a logical process which could have only one outcome (Weizenbaum 1976: 44).

Indeed, artificial intelligence has been described as “not a monolithic paradigm of rationality but a spurious architecture made of adapting techniques and tricks” (Pasquinelli and Joler, 2021). Weizenbaum, who was obsessed with tricks and games, particularly con games, would probably agree. Part of the unease with which thinkers such as Weizenbaum approached the problem of judgement was the possibility of using computers to make decisions in scenarios where this was not appropriate because the decision problem itself was not understood (Berry 2023b). As he wrote,

Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not be given such tasks. They may even be able to arrive at “correct” decisions in some cases – but always and necessarily on bases no human being should be willing to accept... the relevant issues are neither technological nor even mathematical; they are ethical.... The limits of the applicability of computers are ultimately statable only in terms of oughts. What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom (Weizenbaum 1976: 227).

Even in contemporary AI systems such as ChatGPT, the chain of reasoning of the system is often opaque, partly because of the hidden data on which such systems were created, but also because of the difficulty of following the processing within LLMs; explainability has been offered as a possible solution (Berry 2023a). Even with this opacity, however, it is clear that the system is not exercising judgement in the form that we think of as human judgement; rather, it produces a computed decision, the result of a cascading flow of combinatorial logical functions flowing through digital code.[1]

In my work I find it helpful to think about the distinction between calculation and judgement within the context of social philosophy, and to connect this to the social problem of the computable. In this sense I argue that “the computable” should be understood in the critical sense of asking what kinds of problems are appropriate for computation, that is, what ought to be computed, rather than in the more technical or mathematical sense of the limits of computation, that is, what can or cannot be computed by a computer. It might therefore be helpful to think of this as the question of the “uncomputable”, and to understand the limits of computation not as a technical question but as an ethical one: a set of human or social issues that society deems inappropriate for computation and thus ethically, even if not strictly technically, uncomputable. 

Footnotes

1. The view that human judgement should be replaced by computer calculation is related to what I call the two dogmas of computation. The first is a belief that computation is universally applicable, that is, that it can be applied to anything: a theory of the generalisability of computation. This is the idea that computation can represent, or can be used as a mediator or controller of, all processes and activities. The second dogma is based on a reductionism, namely that ultimately everything is computable: a theory of the computability of the actual.

Bibliography

ben-Aaron, D. (1985) ‘Weizenbaum examines computers and society’, The Tech, 17 February.

Berry, D.M. (2023a) ‘The Explainability Turn’, Digital Humanities Quarterly, 17(2).

Berry, D.M. (2023b) ‘The Limits of Computation: Joseph Weizenbaum and the ELIZA Chatbot’, Weizenbaum Journal of the Digital Society, 3(3).

Finding Eliza (2024) Home Page, https://findingeliza.org/.

Long, M. (1985) ‘Turncoat of the Computer Revolution’, New Age Journal, December, pp. 47–51.

Pasquinelli, M. and Joler, V. (2021) ‘The Nooscope manifested: AI as instrument of knowledge extractivism’, AI & Society, 36(4), pp. 1263–1280.

Simon, H.A. (1982) Models of bounded rationality. MIT Press.

Turkle, S. (1997) Life on the screen: identity in the age of the Internet. Touchstone.

Weizenbaum, J. (1976) Computer power and human reason: from judgment to calculation. Freeman. 
