Computational Determinism
The idea of “computational determinism” presents an interesting parallel to technological determinism, yet with crucial distinctions that deserve careful examination. Whilst technological determinism argues that technology drives historical and social development, computational determinism suggests that the mathematical foundations of computer science inexorably shape how computers operate, both in theory and in the wider social world. This perspective, however, often fails to account for the complex interplay between computational theory, materiality and social context, particularly the political economic factors that are crucial for understanding social phenomena (see Berry, 2014).
I argue that at the heart of computational determinism lies a set of foundational theories in computer science that have been generalised far beyond their original scope. The seductive simplicity of these models has led some theorists to make a problematic leap: from mathematical abstraction to ontological claim. They suggest that all phenomena, including human cognition and social processes, can be reduced to computational operations. This reductionist view aligns with what I call mathematical romanticism, a fusion of mathematical axiomatics with a developmental or organic unfolding of processualism (Berry, 2023). Mathematical romanticism also sometimes becomes transformed into its computational parallel as a form of computational romanticism, often derived from Gödelian incompleteness (see below). These kinds of theories often rely upon the Church-Turing thesis – a cornerstone of computer science that posits the equivalence of various models of computation. This rests on Alan Turing's concept of the Turing machine, which provided a universal model of computation that could theoretically perform any algorithmic task, combined with Alonzo Church's lambda calculus. However, these kinds of theories ignore the crucial distinction between the abstract world of mathematical models and material reality. Such approaches often fail to examine how computation actually functions and tend to lack an understanding of computation's history, mediations, and qualifications. This results in a theoretical tendency to view computation as an autonomous force, divorced from its human origins and societal context.
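To see what is meant by a “universal model of computation,” it may help to note how little machinery the Turing machine actually requires: a finite table of rules and an unbounded tape. The following is an illustrative sketch only (the function names and example rule table are invented for this example, not drawn from Turing's papers), showing a minimal rule-table interpreter of the kind the Church-Turing thesis generalises from:

```python
# A minimal Turing machine simulator: a finite rule table plus an
# unbounded tape suffices to express any algorithmic procedure --
# this is the formal core that the Church-Turing thesis generalises.
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example rule table: invert every bit of a binary string, then halt.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(invert, "1011"))  # -> 0100
```

The point of the abstraction is precisely its generality: any rule table can be run by the same interpreter. It is the further leap, from this mathematical universality to claims about cognition and society, that the argument above contests.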
Marx's insistence on the primacy of political economy in understanding social phenomena is particularly relevant here. Just as Marx argued that one cannot understand the workings of society without examining the underlying economic structures and power relations, we cannot fully grasp the role of computation in society without considering its political economic context. The development and deployment of computational systems are deeply embedded in capitalist modes of production and exchange, shaping and being shaped by existing power structures (Berry, 2014).
The determinism implicit in various forms of digital theory is further complicated by the inherent limitations of computational systems, as demonstrated by Turing's halting problem and Gödel's incompleteness theorems (Berry, 2024). Turing proved that no general algorithm can determine whether an arbitrary program will halt or run forever, while Gödel showed that within any consistent formal system powerful enough to encode arithmetic, there are statements that are true but unprovable within the system. Paradoxically, these limitations are often invoked to support computational determinism. The undecidability of the halting problem, for instance, is sometimes cited as evidence of the inherent indeterminacy of computation. Similarly, Gödel's theorems are used to argue for the transcendence of human intelligence over formal systems. However, these interpretations conflate mathematical undecidability with real-world unpredictability, ignoring the crucial role of human decision-making and social context in the development and deployment of computational systems. This conflation reveals a contradiction within computationalism – we can describe this as a simultaneous embrace of rational foundationalism combined with a need for processual explanation. The dual nature of computation, as both a fixed logical structure and a dynamic process, complicates these reductionist deterministic explanations.
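Turing's proof proceeds by a diagonal, self-referential construction, and its shape can be sketched informally in executable form. This is an illustrative sketch only, in Python rather than Turing's original formalism, and the names (`paradox`, `troublemaker`) are invented for the example:

```python
# The diagonal argument behind the halting problem: suppose a total
# decider halts(prog, arg) existed that answered whether prog(arg)
# terminates. The program built below inverts whatever that decider
# says about it, so no such decider can be correct on every input.
def paradox(halts):
    def troublemaker():
        if halts(troublemaker, None):  # decider claims "it halts"...
            while True:                # ...so loop forever instead
                pass
        return "halted"                # decider claims "it loops" -> halt
    return troublemaker

# Any claimed decider fails on its own troublemaker. For instance, a
# decider that always answers "never halts" is refuted immediately:
claimed_decider = lambda prog, arg: False
t = paradox(claimed_decider)
print(t())  # -> halted (contradicting the decider's answer)
```

Note that the undecidability shown here is a statement about arbitrary programs in general, not about any particular deployed system, which is exactly why invoking it as evidence for the “indeterminacy” of actually existing computational systems conflates mathematical undecidability with real-world unpredictability.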
The danger of computational determinism extends beyond theoretical considerations. In practice, it can lead to the valorisation of the mathematisation of thought, where the formalisation of knowledge through computation is seen not just as one approach to thinking, but as the exemplary one (Berry, 2014). This perspective risks reducing complex social phenomena to computational models, potentially distorting our understanding and limiting our ability to address these phenomena effectively. Additionally, computational determinism can serve to legitimise and entrench existing power structures. By presenting computational outcomes as inevitable or “natural,” it obscures the human decisions and value judgments embedded in algorithmic systems. This is particularly concerning in an era of increasing algorithmic governance, where computational systems play a growing role in shaping social, economic, and political realities.
Here, we see a clear parallel with Marx's critique of commodity fetishism. Just as Marx argued that the apparent “natural” laws of the market obscure the social relations of production, computational determinism can mask the human labour and decision-making processes embedded in algorithmic systems. This fetishisation of computation can lead to a reification of computational labour, where the social relations involved in producing and maintaining computational systems are hidden behind a veil of technical neutrality (Berry, 2014).
The ideological value of computational determinism becomes particularly apparent when we consider how it is often used to rationalise and justify self-serving practices, especially in the context of Silicon Valley and Big Tech. By presenting their products and services as the inevitable outcome of technological progress, tech companies can often deflect criticism and avoid taking responsibility for the social consequences of their actions. A deterministic narrative serves to naturalise the monopoly concentration of power and wealth in the hands of a few tech giants, presenting it as the unavoidable result of computational logic rather than the outcome of specific business practices and policy decisions.
For instance, the idea that artificial intelligence will inevitably surpass human intelligence in all domains (often referred to as the “singularity” or Artificial General Intelligence (AGI)) is frequently used to justify massive investments in AI research and development, regardless of the potential social costs or ethical implications. Similarly, the notion that social media algorithms simply reflect user preferences, rather than actively shaping them, is used to absolve platforms of responsibility for the spread of misinformation or the erosion of privacy. These narratives not only oversimplify the technical challenges and limitations of AI but also obscure the human labour, decision-making, and social factors that shape AI development and deployment. They can present AI as an unstoppable force of nature rather than a human-created tool embedded in specific social, economic, and political contexts.
Similarly, in the field of machine learning (ML), computational determinism often manifests in the belief that ML algorithms can objectively extract “truth” from data, free from human bias or influence. This perspective ignores the ways in which human decisions – from data collection and cleaning to algorithm design and interpretation of results – fundamentally shape ML outcomes. It also overlooks the potential for ML systems to reinforce and amplify existing societal biases and inequalities (Noble, 2018; Amoore et al., 2024; Benjamin, 2024). The deterministic view of AI and ML can have serious consequences. It can lead to uncritical acceptance of algorithmic decision-making in sensitive areas such as criminal justice, healthcare, and finance, without adequate consideration of potential biases or errors. It can also foster a sense of technological fatalism, discouraging critical engagement with these technologies and dampening efforts to shape their development in ways that align with societal values and needs (Berry, 2014; Sample, 2017; Suleyman and Bhaskar, 2023).
Moreover, computational determinism in AI and ML often serves the interests of Big Tech companies and other powerful actors in the tech industry. By presenting their AI and ML products as inevitable outgrowths of computational progress, these companies can deflect responsibility for negative outcomes and resist calls for regulation or public oversight. This narrative also helps to justify massive investments in AI and ML research and development, regardless of potential social costs or ethical implications (Sadowski, 2020).
To understand this tension, we can look at the distinction sometimes made between idiographic and nomothetic approaches to knowledge, which further illuminates the limitations of computational determinism.
Nomothetic approaches seek to establish general laws or principles. These align closely with the formalisms of computer science and the quest for universal computational models. This tendency towards nomothetic thinking is evident in the way computational formalism attempts to create general laws for computation, often framing these laws as contingent yet formal explanations for the effects of computation in the world. However, this approach is fundamentally deficient, particularly from the perspective of critical theory. It fails to account for the idiographic aspects of computational systems – the unique, contextual, and historically situated nature of their development and deployment.
We could describe this as a nomothetic bias in computational thinking. As Adorno and Horkheimer argued in their critique of instrumental reason, the drive to subsume particular phenomena under general laws often leads to a flattening of social reality and a blindness to the specific historical and material conditions that shape human experience (Adorno and Horkheimer, 2016). In the context of computation, this nomothetic tendency can lead to oversimplified models that ignore the complex interplay between technology and society, the role of human agency, and the power dynamics embedded in computational systems.
A critical approach, instead, would insist on examining the particular manifestations of computational systems, their specific societal impacts, and the concrete power relations they embody or reinforce. This is an idiographic approach which is crucial for understanding computation not as an abstract, universal force, but as a set of practices and technologies deeply embedded in and shaped by human societies.
The debate between idiographic and nomothetic approaches has deep roots in the philosophy of science, going back to Wilhelm Windelband's 1894 Rectorial address where he distinguished between these two modes of inquiry (Windelband, 1980). Windelband argued that while natural sciences typically employ nomothetic methods to uncover general laws, the human sciences often require idiographic approaches to understand unique, non-repeatable events. This distinction has profound implications for a “digital theory” that claims that formal approaches to the study of computation can yield true knowledge. The nomothetic bias in computer science, inherited from its mathematical foundations, often leads to a privileging of formal, generalisable models over contextual understanding. As mentioned above, this tendency aligns with the mathematisation of thought, where formalisation through computation is seen as the exemplary mode of knowledge production. However, this approach falls short when confronted with the complexities of socio-technical systems.
Adorno and Horkheimer warned against the dominance of instrumental reason, which they saw as reducing knowledge to a set of technical operations divorced from substantive human concerns. In the case of digital theory, purely formal approaches to computation, while powerful within their disciplinary sphere, cannot capture the full range of computational phenomena as they exist out in the world. The insistence on formal methods as the primary or sole path to true knowledge about computation risks obscuring the social, cultural, and political dimensions of digital technologies. A critical digital theory would therefore need to interrogate nomothetic insights from computer science with idiographic investigations into the specific manifestations and effects of computational systems in various contexts. This dual approach would recognise that while formal models can provide some valuable insights, they must be complemented by critical, contextual analyses that situate computation within broader societal frameworks (Polanyi, 2009).
By utilising ideology critique, we can see why Big Tech and Silicon Valley would find nomothetic styles of formalist explanation for computation appealing, even when addressing contingency. The embrace of nomothetic, formalist explanations for computation by Big Tech and Silicon Valley serves as a powerful ideological tool that aligns with their economic interests and power structures. This approach, which seeks to establish general laws and principles for computation, including explanations for contingency, provides these tech giants with a veneer of scientific objectivity and inevitability that obscures their agency and responsibility in shaping digital technologies and their societal impacts.
By favouring nomothetic styles of explanation, Big Tech companies can present their products, services, and the very logic of their business models as the natural and inevitable outcomes of computational processes. This narrative conveniently aligns with reification of computational labour where the social relations involved in producing and maintaining computational systems are hidden behind a veil of technical neutrality. The complex human labour, decision-making processes, and power dynamics that underpin the development and deployment of technologies are thus rendered invisible, replaced by seemingly objective computational laws (Berry, 2014).
This ideological stance serves multiple purposes for Silicon Valley. Firstly, it deflects criticism and accountability. If the effects of their technologies – be they privacy violations, the spread of misinformation, or the exacerbation of social inequalities – can be framed as the result of general computational principles rather than specific design choices or business strategies, then the companies can position themselves as mere conduits of an unstoppable technological progress rather than active participants in the digital world. Ultimately, the nomothetic, formalist approach to explaining computation, even when accounting for contingency, can easily serve to naturalise and legitimise the power structures in the tech industry. It presents the dominance of a handful of tech giants as the logical outcome of computational principles rather than the result of specific business practices, regulatory environments, and political economic conditions.
Secondly, the nomothetic approach reinforces the tech industry's claims to expertise and authority. By presenting computation in terms of abstract, generalisable principles, Big Tech positions itself as the interpreter and arbiter of these principles, further consolidating its power and influence. Marx identified the tendency of the ruling class to present its interests as universal, natural, and rational – and in this case, the interests of Big Tech are often conflated with the supposed universal laws of computation (Marx, 1981).
Moreover, the formalist explanation for contingency in computation can serve a particularly problematic ideological function. By acknowledging and incorporating contingency into their nomothetic frameworks, tech companies can present themselves as having accounted for the unpredictability and complexity of real-world applications of their technologies. This creates an illusion of comprehensiveness and responsibility, while still maintaining the fundamental determinism that underpins their narrative. For example, Silicon Valley often has a “move fast and break things” ethos. If contingency is framed as an inherent and formally explicable aspect of computation, then the unintended consequences of rapidly deployed technologies can be rationalised as inevitable side effects rather than the results of inadequate testing, foresight, or ethical consideration.
This ideology critique reveals how computational determinism, manifested through nomothetic explanations, functions as a form of what Marx would call false consciousness (Marx, 1981). It obscures the real social relations and power dynamics at play in the digital economy, presenting the interests of Big Tech as aligned with the inexorable logic of computation itself (Berry, 2014). Recognising and challenging this ideological construct is crucial for developing a more critical, nuanced, and socially aware approach to understanding and shaping our digital future.
To counter the pitfalls of computational determinism, we must develop what Berry calls a critical digital humanities (Berry, 2023). This approach calls for a historicisation of the digital and computation, focusing on their materiality, specificity, and political economy. It requires us to examine not just the technical aspects of computational systems, but also their social, economic, and historical contexts. Furthermore, a critical approach to computation demands that we remain attentive to the ways in which computational concepts and methods may lead to new forms of control and the privileging of computational rationality. We must not only map these challenges but also propose new ways of reconfiguring research, teaching, and knowledge representation to safeguard critical and rational thought in a digital age.
The theories of Turing, Church, and Gödel, while groundbreaking in their own right, have been misappropriated to support a deterministic view of computation that fails to account for the complexities of human society and the political economic context in which computation operates. By critically examining the assumptions and implications of computational determinism, and by recognising its ideological function in justifying existing power structures, we can develop a more nuanced understanding of the role of computation in society. This understanding should recognise both the power and the limitations of computational approaches, while always situating them within their broader social, economic, and historical contexts. As can be seen above, whilst computational determinism shares some features with technological determinism, it presents unique challenges drawn as it is from the mathematical foundations of computer science, which can have implications for the politics, economics and culture of an information society.
David M. Berry
Bibliography
Adorno, T. and Horkheimer, M. (2016) Dialectic of Enlightenment. Verso Books.
Amoore, L. et al. (2024) ‘A world model: On the political logics of generative AI’, Political Geography, 113, p. 103134. Available at: https://doi.org/10.1016/j.polgeo.2024.103134.
Benjamin, R. (2024) The New Artificial Intelligentsia, Los Angeles Review of Books. Available at: https://lareviewofbooks.org/article/the-new-artificial-intelligentsia.
Berry, D.M. (2014) Critical Theory and the Digital. Available at: https://www.bloomsbury.com/uk/critical-theory-and-the-digital-9781441166395/ (Accessed: 11 April 2024).
Berry, D.M. (2023) ‘Critical Digital Humanities’, in J. O’Sullivan (ed.) The Bloomsbury Handbook to the Digital Humanities. London: Bloomsbury Publishing Plc, pp. 125–135. Available at: https://www.bloomsbury.com/uk/bloomsbury-handbook-to-the-digital-humanities-9781350232112/ (Accessed: 31 October 2022).
Berry, D.M. (2024) ‘Critical Digital Theory’, Stunlaw. Available at: https://stunlaw.blogspot.com/2024/10/critical-digital-theory.html
Marx, K. (1981) Capital: A Critique of Political Economy. Penguin.
Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Polanyi, M. (2009) The Tacit Dimension. Revised edition. Chicago: University of Chicago Press.
Sadowski, J. (2020) Too Smart: How Digital Capitalism is Extracting Data, Controlling Our Lives, and Taking Over the World. Cambridge, Massachusetts: MIT Press.
Sample, I. (2017) ‘Computer says no: why making AIs fair, accountable and transparent is crucial’, The Guardian, 5 November. Available at: https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial (Accessed: 9 January 2023).
Suleyman, M. and Bhaskar, M. (2023) The Coming Wave: AI, Power and the Twenty-First Century’s Greatest Dilemma. Bodley Head.
Windelband, W. (1980) ‘History and Natural Science’, History and Theory, 19(2), pp. 169–185. Available at: https://doi.org/10.2307/2504798.