Involution and Artificial Intelligence
David M. Berry
the primary effect of pattern is ... to check development, or at least to limit it. As soon as the pattern form is reached further change is inhibited by the tenacity of the pattern... But there are also instances where pattern merely sets a limit, a frame ... within which further change is permitted if not invited. Take, for instance, the decorative art of the Maori, distinguished by its complexity, elaborateness, and the extent to which the entire decorated object is pervaded by the decoration. On analysis the unit elements of the design are found to be few in number; in some instances, in fact, the complex design is brought about through a multiplicity of spatial arrangements of one and the same unit. What we have here is pattern plus continued development. The pattern precludes the use of another unit or units, but it is not inimical to play within the unit or units. The inevitable result is progressive complication, a variety within uniformity, virtuosity within monotony. This is involution. A parallel instance ... is provided by what is called ornateness in art, as in the late Gothic. The basic forms of art have reached finality, the structural features are fixed beyond variation, inventive originality is exhausted. Still, development goes on. Being hemmed in on all sides by a crystallized pattern, it takes the function of elaborateness. Expansive creativeness having dried up at the source, a special kind of virtuosity takes its place, a sort of technical hairsplitting... (Geertz 1963: 81).
This pattern of elaborate ornamentation within fixed constraints seems remarkably relevant for understanding AI development.[1] This involution effect can be seen across the AI sector. For example, whilst the progression from GPT to GPT-4 represented a qualitative leap in LLM capability, subsequent developments, including GPT-4 Turbo, Claude 3.5 and Gemini Pro, demonstrate only incremental improvements on previous versions that paradoxically require exponentially greater material resources. This appears to reproduce the involutional pattern of intensification without transformation, where competitive pressures drive ever-more elaborate technical arrangements whilst fundamental limitations remain unaddressed.
The competitive dynamics driving this kind of technological involution might be said increasingly to resemble what Chinese social theory calls neijuan (内卷). The concept possibly derives from Geertz's agricultural analysis but has been applied in the Chinese context to educational and workplace competition, where increased effort yields diminishing returns whilst trapping participants in zero-sum dynamics (Wakabayashi 2025).
It’s the circle of life in China’s business world. A promising technology or product emerges. Chinese manufacturers, by the dozens or sometimes the hundreds, storm into that nascent sector. They ramp up production and drive down costs. As the overall market grows, the competition becomes increasingly cutthroat, with rival companies undercutting one another and enduring razor-thin profit margins or even losses in the hope of outlasting the field... While most governments encourage vigorous competition and low prices, China is going in the opposite direction. It is trying to rein in “involution,” a sociological phrase widely used in China to describe a self-defeating cycle of excessive competition and damaging deflation (Wakabayashi 2025).
I am interested in whether the AI sector's current trajectory also reflects a form of neijuan as companies pursue ever-larger models requiring exponentially greater computational resources whilst achieving limited improvements in the user experience (Liu 2021). This parallel proves particularly apt given the central role of Chinese companies in driving current AI competition. The pursuit of artificial general intelligence (AGI) through scaled models seems to share the intensive development patterns that characterise this competitive spiral: endless optimisation within constrained interfaces rather than exploration of genuinely alternative, or even disruptive, approaches. But understanding AI's involutional forces requires examining not merely the scale of computational investment, but the interface paradigms that channel and constrain this development.
The current chatbot conversational interface might help us to see the deeper structural implications of this constraint as an involutional effect. Despite the revolutionary potential of large language models, interaction remains mediated through what Suchman might recognise as a profoundly impoverished form of situated action, manifested in the reduction of human-machine interaction to linear text exchanges within chat windows (Suchman 2007). This chatbot interface paradigm, whilst initially democratising access to complex LLM technologies, might constrain AI development and limit the kinds of qualitative leaps that are needed to move AI forward, a kind of innovator's dilemma for the AI age (Christensen 2024).
I argue that the chatbot format creates multiple involutional moments. For example, it helps make AI capabilities more legible through familiar metrics (such as response quality, user engagement and conversation length) and therefore more amenable to venture capital and other forms of investment. Indeed, it might also help maintain the illusion of human-like intelligence, which makes the technology appear cutting-edge and exciting, whilst avoiding more challenging questions about alternative forms of LLM processing. Perhaps most importantly, it may channel development efforts into scaling competitions over who has the most GPUs or the largest data centre, rather than fundamental architectural innovation. This can be seen in recent announcements made by Mark Zuckerberg about Meta's Hyperion data centre project, whose footprint will be large enough to cover most of Manhattan and which "expects to supply its new AI lab with five gigawatts (GW) of computational power" (Zeff 2025). Indeed, it also appears to be the case in the massive investments over the past couple of years in capital projects involving GPU-based data centres and cloud computing capacity, such as xAI's Colossus data centre for its Grok chatbot (Evanson 2025).
I argue that this might be due to the assumption that all meaningful intelligence must be expressible through linguistic exchange and displayed through text windows, which then causes an infrastructural frenzy of development. The chatbot interface forces sophisticated computational systems into a kind of cognitive straitjacket that serialises the LLM's parallel processing capabilities into linear text streams, creating a potential mismatch between the computational infrastructure platform and the chatbot interface paradigm.[2]
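To make this serialisation concrete, here is a minimal, schematic sketch in Python of greedy autoregressive decoding. It assumes a hypothetical `model` callable standing in for the network (this is not any vendor's API): at each step the model computes scores over an entire vocabulary in parallel, yet the chat paradigm retains only a single token, appended to a linear transcript.

```python
# A schematic sketch (not any vendor's API) of how chatbot-style decoding
# serialises a model's parallel computation into a linear text stream.
from typing import Callable, Dict, List

def chat_decode(
    model: Callable[[List[str]], Dict[str, float]],  # hypothetical stand-in for an LLM
    prompt_tokens: List[str],
    max_new_tokens: int = 50,
    stop_token: str = "<eos>",
) -> List[str]:
    """Greedy autoregressive decoding: one token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Parallel step: the model scores every token in the vocabulary at once.
        scores = model(tokens)
        # Serialising step: the interface keeps a single token and discards
        # the rest of that high-dimensional state.
        next_token = max(scores, key=scores.get)
        if next_token == stop_token:
            break
        tokens.append(next_token)  # the chat window only ever shows this stream
    return tokens
```

Everything here is illustrative; production systems typically sample from the distribution rather than take the argmax, but the structural point stands: the interface surfaces only the serial stream of text.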
The chatbot's persistence, despite its obvious limitations, also reveals a kind of technological retention where new technologies tend to be shaped by the constraints of their predecessors rather than their own possibilities (Stiegler 1998). The conversational paradigm embedded within browser and app textual interfaces assumes the primacy of linguistic communication and the adequacy of this form for representing complex, multidimensional token-generating or diffusion processes. It also echoes the work of Joseph Weizenbaum on the ELIZA chatbot from 1966, now within a 21st-century political economy (see Berry and Marino 2024).[3]
ELIZA was named for George Bernard Shaw’s Eliza Doolittle from his sociolinguistic satire Pygmalion, though more directly from the cinematic musical adaptation My Fair Lady (released in 1964). Like its namesake, the ELIZA system can be instructed (or encoded) to speak after the fashion of all sorts of interactors... Most people know ELIZA through (and as) its most famous persona, DOCTOR, with which the system is often conflated. DOCTOR is a script that “runs” on ELIZA and which performs a simplified version of Rogerian psychoanalysis, asking questions and reflecting back answers in a clever pattern matching and transformation system. A script creates a conversational persona on the ELIZA system (Berry and Marino 2024).
As Suchman explains, "the design of the DOCTOR program... exploited the natural inclination of people to deploy what Karl Mannheim first termed the documentary method of interpretation to find the sense of actions that are assumed to be purposeful or meaningful... Very simply, the documentary method refers to the observation that people take appearances as evidence for, or the document of, an ascribed underlying reality, while taking the reality so ascribed as a resource for the interpretation of the appearance." (Suchman 2007: 48). Indeed, the concept of misunderstanding seems crucial to the development of AIs that use these chatbot interfaces to access the underlying LLM systems, although I don't have space here to outline this further (see Suchman 2007: 50). We should also be aware of the importance of concealment and misrepresentation in allowing these systems to give the impression not just of their capabilities but also their "intelligence."[4] As Weizenbaum notes,
ELIZA in its use so far has had as one of its principal objectives the concealment of its lack of understanding. But to encourage its conversational partner to offer inputs from which it can select remedial information, it must reveal its misunderstanding. A switch of objectives from the concealment to the revelation of misunderstanding is seen as a precondition to making an ELIZA-like program the basis for an effective natural language man-machine communication system (Weizenbaum quoted in Suchman 2007: 49-50).
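Weizenbaum's pattern matching and transformation mechanism can be made concrete in a few lines. The following is a minimal Python sketch in the spirit of a DOCTOR-like script; the rules, phrasings and names are illustrative assumptions rather than Weizenbaum's original MAD-SLIP code:

```python
import random
import re

# Pronoun reflections transform the user's words back at them,
# e.g. "I need my family" becomes "you need your family".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# A tiny DOCTOR-like script: each rule pairs a pattern with response
# templates; {0} is filled with the reflected captured fragment.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)\bmother\b(.*)", re.I),
     ["Tell me more about your family."]),
]

# Content-free continuers used when no rule matches, concealing
# the absence of understanding.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person terms in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Match the input against the script and transform it into a reply."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```

Even this toy version exploits the documentary method that Suchman describes: the interactor supplies the underlying reality that the pattern-matched surface appears to document.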
This historical precedent reveals how interface paradigms can become technological limits, a dynamic that appears increasingly relevant to recent AI development. Yet understanding ELIZA's legacy requires recognising how these historical forms can also serve contemporary economic imperatives. Of course, this interface could be said to serve crucial economic functions within the current AI development paradigm. It maintains compatibility with existing desktop metaphors, requires minimal user hardware innovation, and produces interaction patterns that can be easily monitored, captured and potentially monetised. The chatbot interface thus functions as a reterritorialising mechanism, allowing the controlled release of new AI technological capabilities whilst preventing genuine transformations that might prove extremely difficult to manage or use within existing interfaces.[5]
I think Suchman's analysis of plans and situated actions is helpful for thinking about how the chatbot interface embodies precisely the kind of rigidity that involution theory suggests inhibits genuine development (Suchman 2007). As she explains,
by situated actions I mean simply actions taken in the context of particular, concrete circumstances... As a consequence our actions, although systematic, are never planned in the strong sense that cognitive science would have it. Rather, plans are best viewed as a weak resource for what is primarily ad hoc activity... Reconstructed in retrospect, plans systematically filter out precisely the particularity of detail that characterises situated actions, in favour of those aspects of the actions that can be seen to accord with the plan (Suchman 2007: 26).
Where Geertz identified ornamentation within fixed constraints, Suchman shows how technological interfaces can impose inflexible structures that filter out the contextual richness of actual interaction. The chatbot's conversational format represents exactly this kind of pre-structured constraint, one that, in Suchman's terms, reduces situated actions to predetermined conversational patterns. Such limiting structures potentially create an involutional cycle where increasingly sophisticated AI capabilities are forced into progressively elaborate variations of the same impoverished interface paradigm. Building on this analysis, she further argues, "the commitment to situated action orients us, however, always to the question of just how, and for whom, culturally and historically recognisable formations take on their relevance to the moment at hand" (Suchman 2007: 16).[6]

These interface constraints reflect deeper cultural assumptions about the nature of intelligence and communication itself. The chatbot interface could be said to embody a Western, rationalist assumption about human-machine interaction: that complex cognitive collaboration can and should be reduced to explicit linguistic instruction-following, or conversation via textual interchange. This approach tends to ignore the embodied, contextual, and often non-verbal dimensions of human communication whilst forcing AI systems into similarly impoverished representational frameworks.
These issues are suggestive as they extend beyond technical optimisation towards questions about technological possibility under late capitalism and a paradigm of computing forming around AI. If current AI development has indeed entered an involutional phase, escaping this trap may require more than technical innovation. Indeed, it may demand new political economic and social arrangements capable of sustaining forms of artificial intelligence oriented towards genuine social benefit rather than continued intensification within historical interface paradigms.[7]
** Headline image generated using DALL-E 3 in June 2025. The prompt used was: "An oil painting on canvas captures three individuals from diverse backgrounds seated against a warm, ochre-toned wall, with the word 'INVOLUTION' boldly painted behind them. The soft, diffused lighting highlights their focused expressions and the texture of their surroundings, while subtle shadows add depth to their features and devices. To the left, an East Asian man holds a smartphone, absorbed in the screen, his wrinkled shirt and slightly disheveled hair illuminated by the light; the woman in the center, with voluminous curly hair, gazes at her tablet, dressed in a burnt-orange sweater; and on the right, a Caucasian man checks his smartwatch while working on a laptop, his brown blazer and unbuttoned shirt providing quiet contrast to the textured background." Due to the probabilistic way in which these images are generated, future images generated using this prompt are unlikely to be the same as this version.
Notes
[1] The term "involution" derives from the Latin involūtiō, meaning "rolling up" or a "spiral," formed from the verb involvere ("to roll into, envelop, surround"), itself composed of the prefix in- ("into") and volvere ("to roll"). First recorded in English in the late 14th century, the word originally described a condition of being "twisted or coiled; a fold or entanglement." Arnold Toynbee (1889-1975), a British historian, employed the term involution in his historical work to describe civilisations that turn inward and become increasingly complex without further creative growth. He was likely influenced by Henri Bergson's (1859-1941) use of the concept in his philosophy.
[2] The concept of "involution" carries multiple resonances beyond Geertz's agricultural analysis (see Liu 2021). In literary and philosophical contexts, particularly through the work of Deleuze and Guattari (1980), involution describes creative processes of "becoming" that operate through intensive difference rather than extensive development. This is what Gontarski terms a "creative involution" involving multiplicities, deterritorialisation, and the generation of new worlds through immanent forces rather than transcendent progression (Gontarski 2015). As Deleuze and Guattari argue, "the term we would prefer for this form of evolution between heterogeneous terms is 'involution,' on the condition that involution is in no way confused with regression. Becoming is involutionary, involution is creative. To regress is to move in the direction of something less differentiated. But to involve is to form a block that runs its own line 'between' the terms in play and beneath assignable relations" (Deleuze and Guattari 1980: 238-239). This, of course, raises interesting questions in relation to involution as a cycle of ornamentation (or excessive investment and capital expense) that I refer to in the body of this article. Indeed, Gontarski's analysis is interesting in how he argues that literary modernism embodies "a fearsome involution calling us toward unheard-of becomings" (Gontarski 2015: 240). This literary understanding, although outside the scope of this article, is notable as it highlights how apparent regression or constraint can paradoxically generate new forms of creative possibility, even as current AI development appears trapped within increasingly elaborate variations of constrained conversational paradigms. N. Katherine Hayles's work on computational media similarly provides some suggestive theoretical ways to think about this, particularly her analysis of how digital technologies create new forms of "technogenesis" that reshape human cognitive capabilities through recursive feedback loops between technological systems and embodied cognition (see Hayles 1999, 2017). Many thanks to Michael Jonik for his helpful suggestion to look at the work of Gontarski for literary notions of involution.
[3] We could describe this as the serialisation of AI capabilities into linear text streams. This raises interesting questions about literary quality and aesthetic judgement that resonate with longstanding debates in modernist criticism. Gontarski's analysis of Beckett's creative involution is interesting in thinking about how genuine literary innovation often emerges precisely through resistance to conventional representational frameworks. He terms this the "impossibility of representation" that drives artistic creation toward new expressive possibilities (Gontarski 2015: 133-4). Hayles's work on "machine reading" versus "human reading" is particularly interesting here, as she argues that computational text processing operates through fundamentally different aesthetic and cognitive principles than human literary engagement (Hayles 2012). The chatbot interface's privileging of conversational adequacy over literary quality thus represents a potential error of design, forcing systems capable of what we might term "machinic aesthetics" into frameworks designed for human linguistic competence. It is an interesting question as to whether we should be developing new criteria for evaluating machine-generated text that acknowledge rather than suppress the fundamental alterity of computational creativity, moving beyond conversational interfaces which limit the possibilities of LLM textual production. See Fabula-NET: Computational Research of Literary Fiction and Narratives for an interesting project on literary quality, https://chc.au.dk/research/fabula-net
[4] Also see https://findingeliza.org/
[5] The recent emergence of new browsers from AI companies, such as Comet by Perplexity, perhaps signals the difficulties AI companies are finding in creating or developing new interface paradigms for artificial intelligence systems (see also Vaughan-Nichols 2025 for a discussion related to OpenAI's rumoured new browser).
[6] See Marres's (2020) interesting proposal for a "situational analytics" that offers one potential direction for a response to these involutional problems. Marres argues that computational social science faces a methodological problem in that as researchers turn to computational settings to analyse social life, the social processes they study are affected by the computational architectures in which they occur (see Berry 2011, 2023 for a similar discussion). A kind of double hermeneutic problematic emerges in computational analysis and suggests the need, therefore, for reflexivity in research (Berry 2014; Connolly 2020). Her approach therefore extends Adele Clarke's qualitative situational analysis to computation, making the heterogeneously composed situation the unit of analysis. Situational analytics therefore focuses on surfacing which actants "make a difference" in a situation, even as computation might problematise the very notion of context itself. Many thanks to Michael Dieter for pointing me towards this paper and its argument.
[7] Some examples might include publicly funded AI research oriented towards social utility rather than towards competitive advantage, interface paradigms that embrace rather than constrain AI's computational alterity, and regulatory frameworks that prevent the concentration of AI capabilities within proprietary chatbot systems or platforms.
Bibliography
Berry, D.M. (2011) ‘The Computational Turn: Thinking About the Digital Humanities’, Culture Machine, 12. Available at: https://culturemachine.net/the-digital-humanities-beyond-computing/ (Accessed: 28 October 2023).
Berry, D.M. (2014) Critical Theory and the Digital. 1st edn. New York: Bloomsbury Publishing Plc. Available at: https://doi.org/10.5040/9781501302114.
Berry, D.M. (2023) ‘Critical Digital Humanities’, in J. O’Sullivan (ed.) The Bloomsbury Handbook to the Digital Humanities. London: Bloomsbury Publishing Plc, pp. 125–135. Available at: https://www.bloomsbury.com/uk/bloomsbury-handbook-to-the-digital-humanities-9781350232112/ (Accessed: 31 October 2022).
Berry, D.M. and Marino, M.C. (2024) ‘Reading ELIZA: Critical Code Studies in Action’, Electronic Book Review [Preprint]. Available at: https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/ (Accessed: 4 November 2024).
Christensen, C.M. (2024) The Innovator’s Dilemma, with a New Foreword: When New Technologies Cause Great Firms to Fail. La Vergne: Harvard Business Review Press.
Connolly, R. (2020) Why Computing Belongs Within the Social Sciences. Available at: https://cacm.acm.org/magazines/2020/8/246368-why-computing-belongs-within-the-social-sciences/fulltext (Accessed: 11 September 2022).
Deleuze, G. and Guattari, F. (1980) A Thousand Plateaus: Capitalism and Schizophrenia. University of Minnesota Press.
Evanson, N. (2025) ‘Musk’s Colossus data center for Grok is at the centre of an environmental row over air quality in South Memphis’, PC Gamer, 12 May. Available at: https://www.pcgamer.com/software/ai/musks-colossus-data-center-for-grok-is-at-the-centre-of-an-environmental-row-over-air-quality-in-south-memphis/ (Accessed: 22 July 2025).
Geertz, C. (1963) Agricultural involution: The process of ecological change in Indonesia. University of California Press.
Gontarski, S.E. (2015) Creative Involution: Bergson, Beckett, Deleuze. Edinburgh University Press.
Hayles, N. K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.
Hayles, N. K. (2012) How We Think: Digital Media and Contemporary Technogenesis. University of Chicago Press.
Hayles, N. K. (2017) Unthought: The Power of the Cognitive Nonconscious. University of Chicago Press.
Liu, Y.-L. (2021) ‘China’s “Involuted” Generation’, The New Yorker, 14 May. Available at: https://www.newyorker.com/culture/cultural-comment/chinas-involuted-generation (Accessed: 22 July 2025).
Marres, N. (2020) ‘For a situational analytics: An interpretative methodology for the study of situations in computational settings’, Big Data & Society, 7(2), p. 2053951720949571. Available at: https://doi.org/10.1177/2053951720949571.
Stiegler, B. (1998) Technics and time, 1: The fault of Epimetheus. Stanford University Press.
Suchman, L. A. (2007) Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
Wakabayashi, D. (2025) ‘China’s Problem With Competition: There’s Too Much of It’, The New York Times, 22 July. Available at: https://www.nytimes.com/2025/07/22/business/china-involution-competition-deflation.html (Accessed: 22 July 2025).