The Third Man Argument and Computational Metaphysics: From Formalist Abstraction to Materialist Critique

This article examines foundationalist explanatory claims about computation. I argue that much of our thinking about digital systems remains wedded to a form of computational Platonism. This position, which I term computational formalism, seeks to ground computation in abstract mathematical forms and idealised models, separating it from its material and social conditions of possibility. Drawing on the classical critique of Platonic forms known as the Third Man Argument, I show how computational formalism generates intractable philosophical difficulties and fails to account for the complex realities of software development and deployment. In response, I propose a computational materialism that understands computation as always already embedded in concrete practices and power relations. Examining how this approach reframes our understanding of key issues in artificial intelligence, software engineering, and interface design, I argue for a critical theory of the digital that eschews idealisation in favour of immanent critique.

First, I examine how computational formalism is latent in contemporary discourses around artificial intelligence, unpacking its Platonic resonances and idealising tendencies. Second, I propose the Third Man Argument as a philosophical tool for critiquing this formalist paradigm, demonstrating how it generates infinite regresses and lacks explanatory power. Finally, I develop computational materialism as an alternative theoretical framework, showing how it might transform our understanding of key issues in software design and deployment.

Computational Formalism

Today there is a growing interest in metaphysical formulations of the digital that seek to ground computation in mathematical or formal foundations (cf. Golumbia 2009; Berry 2014). This approach, which I have elsewhere termed computational formalism (Berry 2023a), bears striking similarities to Platonic metaphysics in its attempt to establish eternal, unchanging forms as the basis for understanding computational systems.[1]

I use the term metaphysical here to refer to attempts to ground computation in abstract, universal principles that transcend specific material implementations and historical conditions. When I describe computational formalism as metaphysical, I point to its tendency to posit eternal, unchanging foundations for computational systems, divorced from their concrete social and technical conditions of possibility. This metaphysical language is used in various ways: through attempts to reduce computation to pure mathematical forms, through claims about universal properties of algorithmic systems, or through appeals to abstract notions of intelligence or reasoning that supposedly exist independently of their material instantiations. Such metaphysical thinking represents what I have elsewhere termed computational ideology, where the concrete achievements of technical labour and social practice are mystified into seemingly natural or eternal forms. This tendency towards metaphysical abstraction is particularly evident in AI, where complex sociotechnical systems are frequently presented as universal truths rather than contingent technical achievements. 

Contemporary artificial intelligence research offers striking examples of computational formalism's continued influence. We can see this, for example, in recent work on large language models, where researchers frequently appeal to capabilities said to emerge through mathematical scaling laws (Kaplan et al. 2020). Companies like OpenAI, Anthropic and DeepMind routinely describe their models through abstract mathematical properties, suggesting that increasing parameters and computational resources leads inevitably to new cognitive capacities emerging from the underlying mathematical forms. I argue that this represents a contemporary version of computational Platonism, where abstract mathematical properties are presumed to generate concrete capabilities.
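To see what is being claimed, it is worth setting out the canonical form of such a scaling law. The following is a minimal sketch of the parameter scaling relation reported by Kaplan et al. (2020), where the exponent and constant are empirically fitted values (the figures given here are approximate) rather than quantities derived from first principles:

```latex
% Sketch of the parameter scaling law reported by Kaplan et al. (2020).
% L is test loss and N the number of non-embedding parameters; \alpha_N and N_c
% are empirically fitted constants (roughly 0.076 and 8.8 x 10^13), not
% mathematically necessary quantities.
\begin{equation*}
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
  \qquad \alpha_N \approx 0.076, \qquad N_c \approx 8.8 \times 10^{13}
\end{equation*}
```

The formalist reading treats such a regularity as evidence of an underlying mathematical form in which concrete models participate; the materialist point developed below is that the exponent and constant are fitted to particular datasets, architectures and training regimes, and carry no guarantee beyond them.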

Similarly, current research into neural scaling laws exemplifies computational formalism's metaphysical tendencies. Researchers argue that model behaviour can be understood through power laws and mathematical regularities that transcend specific implementations (Wei et al. 2022). This approach treats these mathematical relationships as abstract forms that explain the behaviour of actual AI systems, rather than examining how such regularities emerge from specific material practices and technical conditions. The notion of "emergence" in particular often functions as a contemporary version of Platonic participation, where concrete computational systems are said to somehow participate in abstract mathematical forms through scaling relationships.

These tendencies are particularly visible in recent work on artificial intelligence (AI), where researchers frequently appeal to abstract notions of "intelligence" or "reasoning" that their systems are meant to approximate or instantiate. The debate over whether large language models truly "understand" or merely "pattern match" demonstrates how contemporary AI research often recapitulates classical philosophical problems about forms and particulars (Marcus and Davis 2020). Here again we see the metaphysical problem emerge: if understanding is treated as an abstract form that AI systems might instantiate, we face an infinite regress in explaining how concrete implementations relate to these abstract capabilities.

The Third Man Argument

However, just as Plato's theory of forms encountered the serious critique known as the Third Man Argument (TMA), I argue that if we apply the TMA to these kinds of computational formalisms, they face analogous philosophical difficulties that reveal the limitations of purely formal approaches to understanding computation. The Third Man Argument, originally formulated by Plato in the Parmenides dialogue, demonstrates a fatal regress in the theory of forms (Cohen 1971: 448).[2] If we claim that particular instances of something (say, computational processes) participate in or resemble an ideal form (computation itself), we must then account for the similarity between the particulars and the form. This similarity would seem to require a higher form to explain it, leading to an infinite regress (Fine 2004: 203-238, cf. Pelletier and Zalta 2000).[3] In the case of computation, we can see this problem manifesting in attempts to ground computational systems in formal mathematical models or abstract notions of universal computation (see Chalmers 1996: 40, 50).
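The structure of the regress can be set out schematically. What follows is a compressed sketch of the standard reconstruction associated with Vlastos (1954) and Fine (2004), using the conventional premise labels from that literature (One-over-Many, Self-Predication, Non-Identity) rather than Plato's own vocabulary:

```latex
% Schematic reconstruction of the Third Man regress (after Vlastos 1954; Fine 2004).
% OM = One-over-Many, SP = Self-Predication, NI = Non-Identity.
\begin{align*}
  &\text{(OM)} && \text{If } a_1, \dots, a_n \text{ are all } F,
    \text{ there is a form } F_1 \text{ in virtue of which they are } F.\\
  &\text{(SP)} && F_1 \text{ is itself } F.\\
  &\text{(NI)} && \text{Nothing is } F \text{ in virtue of itself.}\\
  &\text{Hence} && a_1, \dots, a_n, F_1 \text{ are all } F,
    \text{ so by (OM) there is a further form } F_2 \neq F_1, \text{ and so on ad infinitum.}
\end{align*}
```

Read with "computes the function f" in place of "is F", the schema shows why formalist accounts of implementation invite the same structure: the concrete implementation, the abstract function, and whatever is supposed to explain their similarity each seem to call for a further form.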

I have argued previously (see Berry 2014, 2023a) that when media theorists attempt to explain the relationship between actual computational systems and idealised mathematical models of computation, they often implicitly rely on a kind of Platonic metaphysics that posits computation as an eternal, unchanging form (see Piccinini 2015: 50, 105). However, this raises the question of how concrete computational systems participate in or instantiate this ideal form. Indeed, the relationship between the particular and the universal in computation cannot be explained without introducing a third term, analogous to the "third man" in Plato's argument (Vlastos 1954). Similarly, computational formalists often attempt to explain the relationship between a particular algorithm implemented in code and the abstract mathematical function it computes (Turing 1950). They argue, in effect, that the concrete implementation participates in or instantiates the ideal mathematical form. But this raises the question of how we understand the similarity between the implementation and the mathematical model (Berry 2014, 2023a; Berry and Marino 2024). This similarity would seem to require its own explanation, leading us to posit a higher level form to account for it. But then we face the same problem at this higher level, generating an infinite regress.

The Third Man Argument's relevance to computation becomes particularly clear when we examine contemporary developments in artificial intelligence. While traditional computational systems already posed challenges for formal metaphysical accounts, the emergence of machine learning systems introduces new layers of complexity to the problem of computational forms (Berry 2023b). These systems do not simply implement predefined mathematical models but rather generate their own internal representations through training processes. This creates a peculiar doubling of the Third Man problem: not only must we account for the relationship between concrete implementations and abstract forms, but we must also explain how machine-generated representations relate to both their training data and their mathematical foundations. This multiplication of representational layers makes the regress identified by the Third Man Argument not merely a philosophical concern but an urgent practical problem for understanding contemporary computational systems.

This difficulty becomes particularly acute when we consider machine learning systems that generate their own internal representations and models (Mackenzie 2017). These systems operate at multiple levels of abstraction simultaneously: the physical hardware, the implemented algorithms, the learned representations, and the abstract mathematical models describing their operation. Attempting to ground this complexity in abstract mathematical forms leads to precisely the kind of regress that the Third Man Argument identifies. The problem stems from treating computation as a purely formal, abstract entity divorced from its material and social conditions of possibility (Berry 2011, 2014; Dreyfus 1974, 2012). Just as Plato's forms existed in an eternal realm separate from the material world, computational formalists often treat computation as something that transcends its concrete implementations. But this metaphysical move generates intractable philosophical problems (see Kittler 1997; Chun 2011).

I argue instead that a more productive approach would be to understand computation as always already embedded in material practices and social relations (see Stiegler 2016; Dreyfus 2012). Rather than seeking abstract mathematical foundations, we should examine how computational systems emerge from and transform concrete historical conditions. This requires attention to both the material specificities of computational systems and the social practices through which they are developed and deployed. Consider how source code functions in actual programming practice (Berry 2011; Marino 2020). While it may seem to represent abstract computational processes, code is always written in specific programming languages with their own historical development and embedded assumptions. The relationship between code and computation cannot be reduced to simple participation in mathematical forms, but must be understood through the complex mediations of languages, tools, context and practices (Marino 2020).
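A small illustration of this mediation, assuming nothing beyond the behaviour of standard Python and IEEE 754 floating point, is the way the "same" abstract operation of addition yields different results depending on the number representations a language and its history make available:

```python
# A minimal sketch of how an "abstract" mathematical operation is mediated by
# concrete representational choices (standard Python, IEEE 754 floats).
from decimal import Decimal
from fractions import Fraction

# As exact rationals, 1/10 + 2/10 = 3/10 holds without qualification.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

# Under binary floating point the "same" sum fails, because 0.1, 0.2 and 0.3
# have no exact binary representation.
print(0.1 + 0.2 == 0.3)                                       # False
print(0.1 + 0.2)                                              # 0.30000000000000004

# Decimal arithmetic, standardised largely for financial practice, restores
# the equality by building in different representational conventions.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))      # True
```

Which of these counts as the sum is settled not by participation in a mathematical form but by the accumulated decisions of hardware designers, standards bodies and application domains that are sedimented in each representation.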

Similarly, when we take the time to carefully examine how algorithms operate in real computational systems, we find they are not pure mathematical entities but are actually shaped by engineering constraints, optimisation requirements, and social needs (Berry 2014; Kitchin 2016). By using the TMA, we can see how attempts to ground these systems in abstract forms inevitably lead to serious explanatory difficulties.

The Third Man Argument thus reveals fundamental problems in how contemporary technical practices attempt to ground themselves in formal abstractions. For example, software engineering methodologies such as "clean code" or "design patterns" seek to identify universal principles of good programming that transcend specific implementations. These attempts to establish platonic ideals of software development inevitably encounter the same regress identified by the Third Man Argument: how do we explain the relationship between these abstract principles and concrete coding practices? As can be seen above, contemporary artificial intelligence research frequently appeals to scaling laws and architectural principles as if they represented eternal mathematical truths, distracting attention from the contingent technical achievements that emerge from specific material conditions and labour practices. As I have shown, the attempt to ground machine learning in pure mathematical abstractions faces the same philosophical difficulties as Plato's theory of forms. When OpenAI claims that increased model scale inevitably leads to emergent capabilities in ChatGPT, they treat these capabilities as abstract forms in which systems can participate through mathematical scaling relationships, without explaining how concrete implementations actually relate to these supposed universal properties. What is needed instead is an approach that begins from the material specificity of computational systems and examines how abstractions emerge from concrete technical practices rather than existing as eternal mathematical forms. This suggests the need for computational materialism as an alternative theoretical framework.

Computational Materialism

This critique suggests the need for an alternative approach that I call computational materialism. This would examine computation in terms of its concrete material implementations and social relations rather than seeking metaphysical foundations (Berry 2014, 2023a). This approach would attend to how computational systems are actually built, maintained, and transformed in practice. Computational materialism should examine the specific ways that computational systems mediate social relations and transform material practices. Rather than treating computation as an abstract formal system, I argue it should investigate how computational technologies reshape human experience and social organisation. This includes examining how computational systems encode and enforce particular social relations and forms of rationality (see Noble 2018).

The implications extend beyond philosophical critique to practical questions of how we design and deploy computational systems (see Fuller 2003). Rather than seeking mathematical purity or formal elegance, I argue that we should examine how computational systems function within concrete social and material contexts. This suggests the need for new approaches to understanding software and code that attend to social and material specificity rather than abstract formalism. Rather than treating algorithms as pure mathematical objects, we should examine how they encode particular forms of social rationality and power relations (Berry 2014, 2023a). This includes investigating how computational systems embody specific historical forms of instrumental reason and how they transform human cognitive capacities (Hayles 2012). For instance, the practice of pair programming, where two developers work together at a single computer, represents an implicit acknowledgement that software development is an inherently social rather than purely technical practice. Similarly, the growing adoption of "documentation as code" practices, where system documentation is treated as integral to rather than separate from development, reflects how software meaning is produced through concrete practices rather than abstract specifications (see Berry and Marino 2024).

Similarly, when considering machine learning systems, I argue that we should attend to how they encode particular forms of statistical reasoning and how these shape social practices (see Eubanks 2018). Rather than treating them as implementations of abstract mathematical models, we should examine how they transform human experience and social organisation in concrete ways. This materialist approach also suggests new ways of thinking about programming languages and software development practices (see Marino 2020 for one example of this). Rather than treating them as formal systems for implementing mathematical specifications, we should examine how they encode particular forms of social rationality and shape human cognitive practices. For example, computational materialism could examine how a machine learning system's abstractions are produced through specific material practices: the construction of training datasets, the development of model architectures, the technical constraints of hardware, and the social practices of software development.

The materialist approach reveals that what appears as an abstract mathematical form in computational systems is actually a concrete achievement of social and technical labour. The apparent universality of computational forms is produced through standardisation practices, technical protocols, and shared development methodologies rather than existing as platonic ideals. This understanding dissolves the Third Man problem by showing that there is no mysterious relationship between concrete and abstract that needs explaining through appeal to higher forms.

It shows how the very distinction between concrete implementation and abstract model is itself a product of specific historical developments in computer science and software engineering. The layered architectures of contemporary computing systems, with their separation of hardware, operating systems, programming languages, and applications, represent not metaphysical necessity but practical solutions to engineering problems (Berry 2011). Understanding this historical specificity helps avoid the reification of these distinctions into eternal forms requiring philosophical explanation.

This approach particularly illuminates how machine learning systems, rather than representing a special challenge for materialist analysis, actually demonstrate its explanatory power. The apparent mystery of how neural networks learn representations becomes comprehensible when examined through the lens of the specific technical practices, data infrastructures, and social relations that make machine learning possible. The materialist approach reveals these systems not as implementations of platonic ideals but as concrete assemblages of hardware, software, data, and human labour whose capabilities and limitations reflect their material and social conditions of production. For example, rather than focusing solely on model architecture and parameters, a materialist approach examines the entire sociotechnical apparatus required for their development: the labour of data labellers and content moderators, the material infrastructure of data centres, the environmental costs of computation, and the social practices of prompt engineering. This reveals how apparently abstract capabilities emerge from specific configurations of human labour, technical systems and social relations. 

Conclusions

As I have shown above, although the Third Man Argument is useful for showing fundamental problems with attempting to ground computation in abstract forms (following Vlastos 1954; Cohen 1971), more is needed to understand computation. Just as Plato's theory of forms generates an infinite regress when trying to explain the relationship between particulars and universals, computational formalism faces similar difficulties in explaining how concrete computational systems relate to abstract mathematical models. Moving beyond computational formalism requires developing new theoretical approaches that can account for the material and social specificity of computational systems (Berry 2014, 2023a). Indeed, Hubert Dreyfus, writing in 1974, articulated this clearly when he argued,

the belief in the possibility of AI, given present computers, is the belief that all that is essential to human intelligence can be formalised. This formalist aim has dominated philosophy since Plato, who set the goal by limiting the real to the intelligible and the intelligible to that which could be made fully explicit so as to be grasped by any rational being. Leibniz pushed this position one step further by conceiving of a universal logical language capable of expressing everything in explicit terms which would permit thinking to achieve its goal of becoming pure manipulation of this formalism. Digital computers and information theory have given us the hardware and the conceptual tools to implement Leibniz's vision. We are now witnessing the last act wherein this conception of [humans] as essentially rational – and rationality as essentially calculation – will either triumph or else reveal its inherent inadequacies (Dreyfus 1974: 23).

This suggests the need for careful attention to how computation actually functions in practice rather than attempting to ground it in abstract mathematical forms.

Critical theory of the digital offers one potential path forward (Berry 2014, 2023a). By examining how computational systems emerge from and transform concrete material practices and social relations, it avoids the metaphysical difficulties identified by the Third Man Argument while opening new avenues for understanding and critiquing contemporary computational systems.

However, this critique of computational formalism represents more than a purely theoretical intervention. Under conditions of algorithmic capitalism, the abstraction of computation from its material conditions serves to obscure both the labour that produces computational systems and the power relations they encode. When major technology corporations like Google, Amazon and Facebook present their systems as implementations of pure mathematical models, they systematically conceal the vast apparatus of human labour, technical infrastructure and environmental resources required for their operation. Moving beyond both the claims of computational formalism and the computational ideology that draws its support from it therefore becomes an urgent political task.

Computational materialism offers concrete possibilities for resistance and transformation. By revealing how computational systems emerge from specific configurations of labour, infrastructure and social relations, it creates opportunities for democratic contestation and control. This includes developing new forms of algorithmic literacy that enable citizens to understand and challenge automated systems, creating alternative infrastructures that prioritise alternative forms of life over profit maximisation, and fostering new forms of collective organisation among technical workers. The recent emergence of organised resistance within technology companies, from protests against military contracts to unionisation among content moderators, demonstrates how materialist critique connects to practical transformation.

The political stakes are particularly acute given the growing deployment of artificial intelligence systems across social life. These systems do not simply implement abstract mathematical models but actively reshape cognitive capacities and social relations. Understanding them through computational materialism reveals possibilities for intervention and transformation that remain obscured by formalist accounts. This includes developing what I elsewhere call explainable forms of life that enable democratic participation in shaping technical systems (Berry 2024). It also creates the opportunity to develop alternative infrastructures that support rather than undermine critical thinking, and to foster new forms of solidarity among those whose labour and lives are increasingly mediated by computation.



Blogpost by David M. Berry

** Headline image generated using DALL-E in November 2024. The prompt used was: "Create an image to represent The Third Man Argument and Computational Metaphysics. Use paintings that use classical Athens motifs to inform the style."

Notes

[1] As Plato explains, "We are in the habit of positing a single Form for each plurality of things to which we give the same name" (Rep. 596a); see also "There are certain forms, whose names these other things have through getting a share of them – as, for instance, they come to be like by getting a share of likeness, large by getting a share of largeness, and just and beautiful by getting a share of justice and beauty" (Parmenides 130e-131a).

[2] See also Vlastos, G. (1954) The Third Man Argument in the Parmenides, Philosophical Review, 63, 319-349 and Sellars, W. (1955) Vlastos and the Third Man, Philosophical Review, 64 (1955) 405-437.

[3] As Fine (2004) explains, "Plato describes two regress arguments, each of which has been called a Third Man Argument, although Plato himself never so calls them. Plato's regress arguments are so called because Aristotle in various places mentions an argument that he calls the Third Man" (Fine 2004: 203). She goes on to explain "Plato believes that knowledge is possible and that knowledge... requires explanation. Plato believes that one can know that something is F only if one knows the form of F, which involves explaining its nature. Since, by SP [self-predication], any form of F is F, explaining the nature of a form of F involves explaining why it is F. Suppose that it is F in virtue of a further form of F, and so on ad infinitum. Plato seems to think that in that case, we could never know that anything is F; in order to know that something is F, there must, in his view, be something that is self-explanatorily F, F in virtue of itself. But if the TMA is sound, nothing is F in virtue of itself. The TMA thus challenges not only U [uniqueness assumption] but also the possibility of knowledge" (Fine 2004: 204). Fine also argues that the "P-TMA [Plato's Third Man Argument from Parmenides 132a1-b2] and the Resemblance Regress [from Parmenides 132d1-133a3] are logically the same argument" (Fine 2004: 215), so I will not outline the Resemblance Regress in full here but see Fine (2004: 211-215).

Bibliography

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, Bloomsbury.

Berry, D. M. (2023a) Critical Digital Humanities, in J. O’Sullivan (ed.) The Bloomsbury Handbook to the Digital Humanities. Bloomsbury, pp. 125–135. https://www.bloomsbury.com/uk/bloomsbury-handbook-to-the-digital-humanities-9781350232112/

Berry, D. M. (2023b) The Explainability Turn, Digital Humanities Quarterly, 17(2). http://www.digitalhumanities.org/dhq/vol/17/2/000685/000685.html

Berry, D. M. (2024) Algorithm and code: explainability, interpretability and policy, in Handbook on Public Policy and Artificial Intelligence. Edward Elgar, pp. 134-146.

Berry, D.M. and Marino, M.C. (2024) Reading ELIZA: Critical Code Studies in Action, Electronic Book Review. https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/

Chalmers, D. J. (1996) The Conscious Mind: In Search of a Fundamental Theory, Oxford University Press.

Chun, W. H. K. (2011) Programmed Visions: Software and Memory, MIT Press.

Cohen, S. M. (1971) The Logic of the Third Man, The Philosophical Review, 80(4), 448-475.

Dreyfus, H.L. (1974) ‘Artificial Intelligence’, The Annals of the American Academy of Political and Social Science, 412, pp. 21–33.

Dreyfus, H.L. (2012) ‘A History of First Step Fallacies’, Minds and Machines, 22(2), pp. 87–99. Available at: https://doi.org/10.1007/s11023-012-9276-0.

Eubanks, V. (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,  St. Martin's Press.

Fine, G. (2004) On Ideas: Aristotle's Criticism of Plato's Theory of Forms, Clarendon Press.

Fuller, M. (2003) Behind the Blip: Essays on the Culture of Software. Autonomedia.

Golumbia, D. (2009) The Cultural Logic of Computation, Harvard University Press.

Hayles, N. K. (2012) How We Think: Digital Media and Contemporary Technogenesis, University of Chicago Press.

Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020) Scaling Laws for Neural Language Models. https://arxiv.org/abs/2001.08361

Kitchin, R. (2016) Thinking critically about and researching algorithms, Information, Communication & Society, 20(1), 14-29. https://doi.org/10.1080/1369118X.2016.1154087 

Kittler, F. (1997) Literature, Media, Information Systems, Routledge.

Mackenzie, A. (2017) Machine Learners: Archaeology of a Data Practice, MIT Press. https://doi.org/10.7551/mitpress/10302.001.0001

Marcus, G., & Davis, E. (2020) Rebooting AI: Building Artificial Intelligence We Can Trust, Knopf Doubleday Publishing.

Marino, M. C. (2020) Critical Code Studies, MIT Press.

Noble, S. U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, NYU Press.

Pelletier, Francis Jeffry & Zalta, Edward N. (2000) How to say goodbye to the third man, Noûs 34 (2):165–202. https://philpapers.org/rec/PELHTS 

Piccinini, G. (2015) Physical Computation: A Mechanistic Account, Oxford University Press.

Stiegler, B. (2016) Automatic Society, Volume 1: The Future of Work, Polity.

Turing, A. M. (1950) Computing Machinery and Intelligence, Mind, 59(236), 433-460.

Vlastos, G. (1954) The Third Man Argument in the Parmenides, The Philosophical Review, 63(3), 319-349.

Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., et al. (2022) Emergent Abilities of Large Language Models, Transactions on Machine Learning Research (TMLR). https://arxiv.org/abs/2206.07682
