The Inversion and the Algorithmic Condition

What happens when the culture we experience and the forms of life that are predicated on this cultural context are no longer produced purely by human labour but become overwhelmed by the production of synthetic media from AI systems? This question has moved from the realm of science fiction to a pressing cultural and philosophical concern. As artificial intelligence systems like ChatGPT and DALL-E generate increasingly convincing texts, images, and other cultural artefacts at an unprecedented scale, we potentially confront a profound transformation in how meaning and human experience are constituted in societies (see Waddell 2019; Laas 2023; Ball 2024; Harris 2024; Placido 2024).[1] At stake is not simply the automation of cultural production, but a fundamental transformation in our relationship to reality itself, one that challenges traditional notions of human creativity, experience, and the very formation of social consciousness.[2] This article explores what I term the Inversion, a critical concept that helps us understand how AI systems are restructuring both our ways of seeing and our forms of life. Drawing on critical theory and the political economy of digital culture, I argue that we are witnessing not just technological change but a qualitative shift in how human experience and knowledge are produced and circulated in algorithmic capitalism.[3]

This scenario characterises what I call the algorithmic condition: a state where human cultural experience becomes increasingly mediated through and constituted by autonomous computational processes. Unlike previous forms of mechanical or digital mediation, the algorithmic condition is marked by systems that don't simply transmit or reproduce culture but actively generate and transform it through computational processes operating at scales and speeds beyond human perception. This condition is characterised by three key features: first, the progressive automation of cultural production through generative AI systems; second, the increasing difficulty in distinguishing between human-generated and machine-generated content; and third, the emergence of new forms of life structured around algorithmic mediation.[4] This represents a qualitative shift from what Stiegler termed “grammatisation,” the technical recording and reproduction of human gesture and cognition, to what I call “algorithmic genesis,” where computational systems don't just capture and reproduce human cultural production but actively generate new cultural forms through autonomous processes.[5] He writes,

in the course of the nineteenth century, technologies for grammatising audiovisual perception appear, through which the flows of the sensory organs are discretised. All noetic, psychomotor and aesthetic functions then find themselves transformed by grammatisation processes. Considered in terms of political economy, this amounts to the fact that it is the functions of conception, production and consumption which are grammatised—and which are thereby incorporated into an apparatus devoted to the production of tertiary retentions controlled by retentional systems (Stiegler 2010: 11).

Under the algorithmic condition, this grammatisation process is itself inverted: no longer simply discretising existing human gestures and cognition, it generates new forms of cultural expression that have no prior human moment. The Inversion becomes not just a technical possibility but an everyday reality, as our cultural experience is increasingly shaped by synthetic media whose origins become ever more difficult to ascertain and verify.

This notion of the “Inversion” has recently emerged in technical circles and can, perhaps, help us begin to think about this. Originally identified by YouTube engineers in 2013, when AI bot traffic reached parity with human traffic, the Inversion represented a critical threshold where automated systems might begin treating algorithmic behaviour as “real” and human behaviour as “fake” (Read 2018). As Keller (2018) described, “YouTube had as much traffic from bots masquerading as people as it did from real human visitors, according to the company. Some employees feared this would cause the fraud detection system to flip,” a situation they described as an “inversion.” Beyond this technical description, however, the Inversion can be understood as a broader philosophical and cultural phenomenon, one that can be used to diagnose what might be a fundamental shift in how reality and media operate in computational societies.

Figure 1: An example of chord inversions in music theory. This dominant seventh is an interestingly unstable chord in all inversions because in terms of tonal listening it tends towards resolution, that is, towards a stable triad, in this example towards an F major tonic triad.


The concept of musical inversion also offers a productive metaphorical framework for understanding the notion of algorithmic Inversion in digital culture. Musical inversion maintains mathematical relationships while transforming the original material into new forms; similarly, algorithmic systems preserve certain structural features of cultural production while fundamentally reorganising their relationships. In chord inversion, for example, changing the bass note reconstructs the listener's experience of the harmony while maintaining the chord's identity; we might argue that algorithmic inversion likewise maintains recognisable cultural forms while fundamentally reorganising their mode of production and reception. More striking is the parallel with contrapuntal inversion, where multiple voices maintain their relationship while being turned upside down, mirroring how the Inversion maintains the appearance of cultural production while inverting the relationship between human and machine agency. But perhaps most relevant is melodic inversion, where intervals are systematically transformed into their opposites. This systematic transformation echoes how algorithmic systems invert the traditional relationship between original and copy, genuine and simulated, creator and audience. Just as a melodic inversion creates what musicians call derived material, simultaneously the same and different, algorithmic systems generate cultural forms that maintain recognisable patterns while fundamentally transforming their ontological status. This suggests that the mathematical logic of musical inversion, what Schoenberg (1984) termed developing variation, might provide insights into how algorithmic systems transform cultural production while maintaining certain structural invariants.[6]
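Melodic inversion's systematic transformation of intervals can be made concrete computationally. The following is a minimal illustrative sketch, not part of the argument itself; the MIDI note-number representation and the convention of mirroring around the first note are assumptions made for illustration:

```python
def melodic_inversion(pitches, axis=None):
    """Invert a melody around an axis pitch: every interval above the
    axis becomes the same interval below it, and vice versa.
    Pitches are given as MIDI note numbers."""
    if axis is None:
        axis = pitches[0]  # a common convention: mirror around the first note
    return [2 * axis - p for p in pitches]

# A rising figure C4-D4-E4-C4 (MIDI 60, 62, 64, 60)...
melody = [60, 62, 64, 60]
# ...inverts to a falling figure C4-Bb3-Ab3-C4 (60, 58, 56, 60)
print(melodic_inversion(melody))
```

The inverted line is, in Schoenberg's sense, derived material: recognisably related to the original, yet systematically transformed.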

This systematic transformation of relationships, visible in both musical and algorithmic inversions, points toward a broader reconfiguration of cultural experience under computational capitalism. Where musical inversion demonstrates how structural relationships can be maintained even as their constituent elements are radically reorganised, the algorithmic Inversion suggests a similar but more profound transformation of cultural production and reception. The key difference is that while musical inversion operates within an established system of cultural meaning, algorithmic inversion transforms the very system through which meaning is produced and authenticated. The mathematical approach of musical inversion thus helps us grasp the systematic nature of this broader cultural transformation, while its limitations as a metaphor, such as its circumscription within existing musical structures, highlight the more radical nature of the algorithmic Inversion, which gestures to a fundamental transformation of how cultural elements are generated, circulated, and understood.

I therefore argue that the Inversion can be understood as marking both a technical and societal regime change, a change in ways of seeing, in which “everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real” (Read 2018). This dialectical movement suggests how artificial intelligence may well have begun to restructure our very understanding of how reality functions.

The Inversion manifests what I have previously called “computational ideology,” the tendency to see computation as an independent force shaping social life rather than as human-created infrastructure (Berry 2014: 4). This reification of computational processes leads to what Marx identified as commodity fetishism, where social relations between humans appear as autonomous relations between things.[7] In the case of AI systems, this fetishism operates through a form of “mathematical romanticism.”[8] That is, an unstable fusion of formal mathematical logic with organic, developmental theories about machine intelligence (Berry 2023). This romantic conception masks the material conditions of AI production while simultaneously attributing to it an almost mystical generative power.[9] It echoes the German Romantic movement's attempt to reconcile mechanical materialism with organic vitalism, particularly visible in Novalis's search for a universal magical idealism. Just as the Romantics sought to infuse mechanical nature with spiritual life, contemporary discourse around AI attributes quasi-organic properties to mathematical systems, speaking of neural networks that “learn,” language models that “understand,” and algorithms that “create.” This form of romantic computation can be understood as the dream of a computer science that could capture both mechanical precision and organic becoming. The key difference is that where the Romantics sought to spiritualise mathematics (see Jahnke 1991), contemporary romantic computation computationalises spirit itself, inverting the relationship between the mechanical and the organic.[10]

This inversional logic bears striking similarities to Marx's concept of false consciousness, though operating at a deeper infrastructural level. Where false consciousness describes how dominant ideologies obscure the real conditions of social relations, the Inversion appears to generate an “algorithmic consciousness” that not only mystifies these conditions but actively reconstructs them. Drawing on Jaeggi's (2018) critical theory of forms of life, understood as bundles of practices, patterns of action and interpretations, we can see how the Inversion penetrates the very fabric of everyday existence (see Berry 2024).[11] The implications of this inversional logic extend beyond technical systems into what I call computational forms of life, that is, the digital patterns of social practice and interpretation that structure human experience. Under traditional false consciousness, workers misrecognise their real interests through ideological mystification; under the Inversion, the very capacity to distinguish between real and synthetic experience becomes compromised, and the patterns of interpretation and action that constitute our forms of life become algorithmically mediated and reconstructed.

The methodological challenges for ideology critique under conditions of the Inversion are considerable, requiring what I call an “inversional critique” that operates simultaneously at technical, cultural, and infrastructural levels. Traditional ideology critique sought to unveil the real conditions beneath ideological mystification, but when the very processes of cultural production and interpretation are algorithmically mediated, the distinction between surface appearance and underlying reality becomes increasingly difficult to maintain. An effective critique must therefore operate both symptomatically, examining the traces of algorithmic mediation in cultural artefacts, and systematically, analysing the computational infrastructures that generate these artefacts. This requires developing new critical methods that can account for what I call the double articulation of algorithmic ideology. That is, both the traditional ideological content of AI-generated cultural products and the ideological implications of their algorithmic mode of production itself. Moreover, such critique must be reflexive about its own conditions of possibility, asking how we can be certain that our own critical faculties have not been shaped by the very inversional logic we seek to analyse.

This suggests the need for what Benjamin termed constellational analysis, a method of philosophical enquiry that is by turns fragmentary and constellational (Benjamin 2002). This is an approach that examines multiple intersecting moments of technical, social, and cultural production simultaneously, while remaining attentive to how the Inversion might be restructuring our very capacity for critical thought. The aim would be not simply to unmask ideology but to trace the complex ways in which algorithmic systems are reconstructing the very grounds upon which ideology critique has traditionally operated.

When OpenAI's ChatGPT can generate academic papers about itself or DALL-E can create infinite variations of artistic styles, we confront not simply a misrecognition of reality but its algorithmic reconstruction at the level of social practice (see Thunström, 2022). The Inversion thus represents an intensification of false consciousness, what I call post-consciousness. Where false consciousness involved the mystification of real social relations and algorithmic consciousness described the mediation of experience through computational systems, post-consciousness marks a qualitatively new state where the very distinction between individual and synthetic consciousness becomes untenable. This is not simply a matter of being unable to distinguish between human and machine-generated content, but rather a fundamental transformation in how consciousness itself is constituted under algorithmic conditions. Under post-consciousness, the subject doesn't merely misrecognise reality (false consciousness) or have their perception mediated by algorithms (algorithmic consciousness), but experiences a form of consciousness that is itself partly synthetic, shaped by continuous interaction with and exposure to algorithmically-generated cultural forms. This represents what is, perhaps, a third-order simulation, over and beyond Baudrillard's (1994) simulacra, where not just reality but consciousness itself becomes subject to algorithmic generation and manipulation. This form of consciousness doesn't just obscure social relations but fundamentally reconstitutes our forms of life through algorithmic processes that blur the boundaries between human and machine experience.

These AI systems do not, therefore, simply mimic human cultural production; they invert the traditional relationship between original and copy, authentic and simulated. This transformation recalls Benjamin's concept of the “aura,” the unique aesthetic authority of an original artwork, which he argued was diminished through mechanical reproduction. AI pushes beyond this, as these systems generate works that have no clear original, no singular moment of creation from which copies might derive. As I have argued elsewhere (Berry 2024a), they operate through vector representations and diffusion models that enable them to blend input context with internal representations, creating variations that can only with difficulty be distinguished from intentional human cultural production.[12] The Inversion suggests we are entering an era where the default assumption may be that cultural artefacts are computationally generated unless proven otherwise. This might represent a systemic inversion where being AI-generated is more profitable than being human-created.

Here I argue that critical theory can help us understand this not just as a technical phenomenon but as part of what I call computational capitalism (Berry 2023). This is an accumulation regime where value extraction increasingly operates through the capture and manipulation of human cognitive and creative capacities and the circulation of culture in new value chains (via data extraction circuits). This dynamic produces a form of “synthetic audience,” an inversion of Dallas Smythe's concept of the audience commodity, where automated systems might generate, circulate and consume content in vast algorithmic feedback loops, often with minimal human involvement. The Inversion thus marks a qualitative shift in how capitalism subsumes human cultural production under computational logics.

An example of this is the emergence of what I term automimetric production on streaming platforms.[13] These are systems of “artificial streaming” where both cultural production and consumption are algorithmically automated, creating closed circuits of value extraction. Musicians and entrepreneurs have begun exploiting the political economy of streaming by creating autonomous systems that both generate and consume music, exemplifying a short-circuit of cultural transindividuation where human aesthetic experience becomes incidental to the process of value creation. To build these revenue-generating feedback loops, they deploy bots to create endless variations of ambient or functional music through algorithmic composition, then use networks of automated listeners, essentially fake users, to stream these tracks continuously, generating micropayments from platforms like Spotify. In 2023, it was estimated that up to 10% of all streams might be generated by such “artificial” listening patterns (Kingley 2023). This is a concrete example of the synthetic audience: the music is algorithmically generated and “listened to” by bots, with the entire system designed to extract value from the platform's payment infrastructure rather than serve any human aesthetic experience. What makes this case particularly revealing is how it inverts traditional models of cultural production and consumption: the “audience” is synthetic, the “creator” is increasingly algorithmic, and actual human listeners become almost incidental to the value extraction process. This automated creation-consumption cycle demonstrates how the Inversion might restructure entire cultural industries, creating algorithmic value circuits that operate seemingly independently of human cultural experience while still generating real economic value.[14]
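The economics of such a closed loop can be sketched with a toy calculation. The figures below are purely illustrative assumptions (per-stream payouts vary widely by platform, market, and rights arrangements) and are not drawn from the article or from any platform's actual data:

```python
def artificial_streaming_revenue(n_tracks, streams_per_track_per_day,
                                 payout_per_stream=0.003, days=30):
    """Toy model of an automimetric loop: bot-generated tracks streamed
    by bot 'listeners'. All parameters are hypothetical illustrations."""
    total_streams = n_tracks * streams_per_track_per_day * days
    return total_streams * payout_per_stream

# e.g. 1,000 generated ambient tracks, each bot-streamed 500 times a day
monthly = artificial_streaming_revenue(1000, 500)
print(f"illustrative monthly extraction: ${monthly:,.2f}")
```

The point of the sketch is structural rather than empirical: revenue scales linearly with automated output and automated listening, and no human aesthetic experience appears anywhere in the circuit.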

However, it is important to avoid seeing the Inversion as either purely technological determinism or as simple simulation of reality. Instead, it represents a “double aspect” requiring both technical and social analysis (Berry, 2024). The challenge is to develop approaches in what elsewhere I have called critical digital theory to map these inversional processes while maintaining space for human agency and critique. This again reinforces the need for new forms of critique and practice that can engage with computational systems while maintaining human flourishing as a central concern.[15]

Under conditions of the Inversion, constellational analysis would need to operate across multiple registers simultaneously: examining the technical architecture of AI systems, their cultural outputs, the political economy of their production, and the forms of life they engender. This might involve, for example, analysing an AI image generator across various interconnected moments: from the training data and its historical biases, through the mathematical models and their embedded assumptions, the interface design and its behavioural implications, and the labour relations it transforms, to the aesthetic forms it produces. The key methodological insight is that none of these moments alone captures the ideological operation of the system. It is only in their constellation that the full scope of the Inversion becomes visible. Moreover, this approach must be sensitive to algorithmic temporality, the way AI systems collapse historical time into a computational space through their training data, creating a dialectical image of cultural production under algorithmic capitalism.[16] This suggests that inversional critique requires new tools for mapping the complex relationships between technical systems, cultural forms, and social practices: digital methods and tools that can capture both the synchronic operation of algorithmic systems and their diachronic transformation of cultural production.

This also suggests the need for new theoretical frameworks that can account for how the Inversion restructures fundamental categories of human experience and knowledge production, including its tendency toward synthetic cognition.[17] This points to the need for “explainable forms of life” (Berry 2024a). For example, these could be interpretative frameworks that, like a well-constructed musical counterpoint, allow us to hear both the individual voices and their systematic transformation. The challenge ahead lies in developing critical reflexivity that can engage with these inversional processes while maintaining space for human agency and democratic participation in shaping technological futures.[18] We need new theoretical and practical approaches that can work with and through the Inversion without being subsumed by its logic, maintaining “the possibility of making them otherwise” (Berry 2024a).[19]




Blogpost by David M. Berry, 2024.




Notes

[1] The idea of “Dead Internet Theory” anticipates the concept of the Inversion by suggesting that much of the internet is now populated by bots, automated scripts, and AI-generated content rather than genuine human interaction (see Ball 2024; Placido 2024). It claims “that the internet has been almost entirely taken over by artificial intelligence” (Tiffany 2021). This idea that the internet died in 2016 or early 2017 began on an imageboard called WizardChan, but today its proponents argue that an increasing portion of “content is mainly produced by bots and LLMs such as ChatGTP are curated by algorithms to 'manipulate the population and minimize organic human activity'” (Lovink 2024; also see Keller 2018; Tiffany 2021). Interestingly, Dead Internet Theory tends to argue that the cyclical nature of repetitive content each year confirms the idea that the internet feels stale and dead.
[2] The question of mediation becomes particularly complex under conditions of the Inversion. While Silverstone (1999) conceptualised mediation as a fundamentally dialectical process where media technologies both shape and are shaped by social practices, the rise of AI-generated culture introduces a form of algorithmic mediation. This form of mediation operates through what Silverstone termed double articulation, where media are both material objects and symbolic messages, but with a crucial difference: the mediating processes themselves are now increasingly autonomous and generative. Unlike traditional media which primarily circulated human-produced content, AI systems actively generate and transform cultural materials through computational processes that may operate independently of human intention or interpretation. This suggests we need to revise Silverstone's notion to account for a possible “triple articulation,” where media are simultaneously (1) material infrastructure, (2) symbolic content, and (3) generative computational processes. Such a reconceptualisation might help us understand how the Inversion fundamentally transforms the mediating role of digital technologies in contemporary culture.
[3] The problem of the Inversion connects interestingly to debates around algorithmic explainability and interpretability. Just as explainability attempts to bridge the gap between opaque computational systems and human understanding (Berry, 2024), the Inversion points to a broader crisis in our ability to interpret and authenticate cultural production in an age of synthetic media. The push for "explainable AI" might thus be read as a technical response to the broader philosophical problem raised by the Inversion, that is, how to maintain critical human agency and understanding in the face of increasingly autonomous computational systems. However, as I argue elsewhere, purely technical approaches to explainability may be insufficient without considering the broader political economic context and forms of life in which these systems operate (Berry 2014, 2024). We might therefore need infrastructural or social explanations that can account for how the Inversion restructures not just technical systems but entire patterns of cultural production and interpretation.
[4] The seeds of inversional logic were anticipated in the work of Joseph Weizenbaum (1976) whose reflections on ELIZA, one of the first natural language processing programs, led him to a cultural critique of artificial intelligence. Weizenbaum was disturbed to find users attributing understanding and emotional capacity to his simple pattern-matching program, leading him to warn that we risk transforming our society in such a way that it would be hospitable primarily to computer systems and only secondarily to human purposes. What Weizenbaum identified as a psychological tendency to anthropomorphise computational systems has, under conditions of the Inversion, become a structural feature of algorithmic culture. Weizenbaum's crucial distinction between decision and choice becomes relevant here, where decision can be reduced to computational calculation but choice requires judgment and wisdom. Under the Inversion, this distinction collapses as algorithmic systems increasingly make what appear to be choices about cultural production, while human choice becomes increasingly circumscribed by algorithmic decision-making. For more information on ELIZA see https://findingeliza.org/ 
[5] The transformation of creative labour under the algorithmic condition can be starkly observed in the graphic design and illustration sectors. With the emergence of AI image generation systems like Midjourney, DALL-E 2, and Stable Diffusion, work that previously required significant human skill, training, and creative labour can now be partially automated through text prompts. This represents more than mere automation; it fundamentally transforms the nature of creative practice itself. Professional illustrators and designers increasingly find themselves in a position where their labour is not simply aided by computational tools (as with earlier digital design software) but potentially displaced by generative systems that can produce countless variations of images in minutes. The consequences are already visible: design agencies incorporating AI generation into their workflows, clients requesting AI-generated concept art, and traditionally secure creative positions being restructured around prompt engineering rather than direct image creation. This shift exemplifies what I am calling here the Inversion: the traditionally human-centred processes of conceptualisation and creative execution become inverted as designers increasingly focus on curating and refining machine outputs rather than generating original works. Moreover, this transformation suggests broader implications for creative labour under algorithmic capitalism, where human creativity becomes increasingly embedded within and mediated through autonomous computational processes. The professional identity of the designer or illustrator thus shifts from primary creator to algorithmic curator, a position that potentially deskills traditional creative practices.
[6] Simple triadic inversion, particularly what is called second inversion, is very interesting in relation to this idea. For example, a standard C major triad, C-E-G, inverts in second inversion to G-C-E. The second inversion puts the dominant (the fifth note of the scale) as the bass note of the chord. This creates a powerful sense of tension and instability, which can be thought of metaphorically as relating to the instability generated by AI. Music theorists have long argued that the second inversion is in reality, structurally, a dominant chord. This means that qualitatively there may be more differences in the structural experience of triadic inversions than in the seventh chords given in Figure 1. Many thanks to Prof. Ed Hughes for his advice and comments on this paragraph.
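The rotation of chord tones described here can be sketched computationally. This is an illustrative aside (MIDI note numbers are assumed), not part of the note itself:

```python
def invert_chord(notes, inversion):
    """Invert a chord by repeatedly moving the lowest note up an octave
    (12 semitones), so the next chord tone becomes the bass."""
    notes = list(notes)
    for _ in range(inversion):
        notes.append(notes.pop(0) + 12)
    return notes

c_major = [60, 64, 67]  # C4-E4-G4, root position
second = invert_chord(c_major, 2)
print(second)  # G4-C5-E5: the dominant (G) is now the bass note
```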
[7] Lovink (2024) discusses the concept of the “platopticon,” combining the notion of Plato's cave with that of Bentham's panopticon, which extends this analysis of reified social relations into the era of platform capitalism. Just as commodity fetishism masks social relations behind seemingly autonomous objects, we could say the platopticon masks algorithmic governance behind the appearance of voluntary social connectivity. When Lovink analyses how “content is mainly produced by bots and LLMs such as ChatGTP are curated by algorithms to manipulate the population and minimize organic human activity,” we see a contemporary example of computational ideology at work. The platform's surveillance infrastructure doesn't just observe but actively shapes social relations through algorithmic mediation, a form of platform fetishism, where social relations appear not just as relations between things but as relations between algorithmic processes.
[8] I develop the concepts of mathematical romanticism or computational romanticism (see Berry 2023) to capture a distinctive ideological formation in computational culture that fuses two seemingly contradictory tendencies. On one hand, it embraces the formal rationality and axiomatic certainty of mathematical thinking, particularly in its approach to knowledge representation and algorithmic processing. On the other, it exhibits distinctly romantic characteristics in its attribution of organic, emergent, and even vital qualities to computational systems. This unstable synthesis can be seen in how AI systems are discussed, simultaneously as precise mathematical models and as quasi-organic entities that can learn, create, and evolve. This paradoxical fusion bears striking similarities to the German Romantic movement's attempt to reconcile mechanical materialism with organic vitalism, but now manifested in computational form.
[9] Mathematical romanticism functions ideologically by obscuring the material infrastructure and human labour that underlies computational systems while promoting a mystified view of algorithmic agency. Under the Inversion, this ideology becomes particularly powerful as it helps normalise the increasing autonomy of computational systems in cultural production. When developers describe language models as “emergent” or having “creativity,” they exemplify this mathematical romantic ideology that simultaneously emphasises technical precision while attributing almost magical generative powers to algorithms.
[10] The notion of romantic computation helps us understand a key aspect of the Inversion. Where Novalis and the German Romantics sought to discover the spiritual in the mathematical through what has been called “romantic mathematics,” romantic computation inverts this relationship by attempting to discover (or perhaps produce) the mathematical in the spiritual. This is visible in how contemporary AI discourse oscillates between technical precision (“transformer models,” “attention mechanisms,” and “parameter spaces”) and organic metaphors (“neural networks,” “deep learning,” and “artificial intelligence”). This instability isn't merely linguistic but might reflect a deeper inversional logic where computation is simultaneously presented as both purely mathematical and mysteriously vital. The shift from romantic mathematics to romantic computation thus marks not just a technical development but a fundamental transformation in how we conceptualise the relationship between mechanism and organism, calculation and creation, machine and spirit.
[11] Jaeggi's (2018) concept of forms of life offers a crucial theoretical framework for understanding how the Inversion operates at the level of everyday practice. For Jaeggi, forms of life are not merely cultural patterns or ways of living, but inertial bundles of social practices that represent historically developed solutions to societal problems. They are, in her terms, both normative and transformable, that is, resistant to change yet subject to rational critique and revision through immanent criticism. This approach is particularly valuable for analysing the Inversion because it helps us understand how algorithmic systems don't simply produce cultural artefacts but potentially transform the very practices through which we solve societal problems and make our world intelligible. When Jaeggi argues that forms of life are instances of problem-solving that can succeed or fail, she provides a way to critically evaluate how the algorithmic reconstruction of culture might enable or constrain human flourishing. Moreover, the emphasis on the bundles of practices that constitute forms of life helps us see how the Inversion operates not just at the level of individual artefacts or experiences but through the transformation of entire patterns of social practice and interpretation.
[12] Diffusion models represent a significant technical development in AI-generated media. They work through a two-stage process: first, they progressively add random noise to training images until they become pure noise, then they learn to reverse this process, reconstructing images from noise. When generating new images, they start with random noise and progressively “denoise” it according to text prompts or other inputs, guided by what they learned during training. This technical process itself becomes a striking metaphor for the Inversion as the original image is first destroyed through mathematisation (transformed into pure noise/data) before being reconstructed synthetically. Systems like DALL-E 2 and Stable Diffusion use this technique to generate highly realistic images.
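The two-stage process described in this note can be sketched numerically. The following is a minimal toy illustration of the principle only: the linear noise schedule, step count, array sizes, and the zero-returning stand-in for the trained denoising network are illustrative assumptions, not the actual parameters of DALL-E 2 or Stable Diffusion.

```python
import numpy as np

T = 1000                               # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)        # cumulative signal retention per step

def forward_noise(x0, t, rng):
    """Stage one: mix the image with Gaussian noise up to step t.
    As t approaches T, the signal term vanishes and only noise remains."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def reverse_denoise(xT, predict_noise, rng):
    """Stage two: start from pure noise and step by step remove the noise
    that a (trained) model estimates, reconstructing an image."""
    x = xT
    for t in reversed(range(T)):
        eps_hat = predict_noise(x, t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                      # inject fresh noise except at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
image = np.ones((8, 8))                        # stand-in for a training image
noised = forward_noise(image, T - 1, rng)      # after stage one: statistically pure noise
synthetic = reverse_denoise(
    rng.standard_normal((8, 8)),
    lambda x, t: np.zeros_like(x),             # placeholder where the trained network would go
    rng,
)
```

The metaphorical point of the note is visible in the code itself: `forward_noise` destroys the image by mathematising it into noise, and `reverse_denoise` reconstructs an image synthetically from noise alone.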
[13] The phenomenon of automimetric production is perhaps most visible in contemporary social media ecosystems, particularly in algorithmic content farms. In automated Instagram or TikTok networks, AI systems generate synthetic influencer content ranging from computer-generated personalities like Miquela to automated accounts that endlessly recombine trending formats. Meta itself disclosed that it took down 1.6 billion fake accounts in just the first three months of 2022, many of which were part of automated content generation and engagement networks (Meta 2022). These synthetic systems deploy machine learning to analyse engagement patterns, automatically generate content variations, and then use bot networks to create simulated engagement metrics, forming a closed circuit of algorithmic content production and consumption. The content these systems produce often exhibits an uncanny quality: neither entirely human nor entirely artificial, it exists in a liminal space characteristic of the Inversion. What is crucial here is that these systems don't simply simulate human cultural production but create new forms of cultural circulation that operate according to their own algorithmic logic, generating real economic value through advertising revenue and data extraction while potentially drowning out human-generated content in the platform economy.
[14] The emergence of automimetric production suggests new forms of alienation that perhaps go beyond Marx's classical formulation of workers' estrangement from their labour and its products. Under conditions of algorithmic capitalism, we experience a form of recursive alienation in which human cultural experience becomes alienated not just from the products of labour, but from the very processes of cultural production and circulation themselves. This differs from traditional forms of cultural alienation in several ways. First, where earlier forms of mass media created what Adorno and Horkheimer (2016) termed pseudo-individuation, automimetric production removes the need for even this pretence of individual experience; indeed, these systems operate with little human engagement at all. Second, the alienation becomes infrastructural rather than merely superstructural, as it is built into the very technical systems through which culture circulates. Drawing on Jaeggi's (2014) reconceptualisation of alienation as a “relation of relationlessness,” we might say that automimetric production creates a situation where humans become structurally disconnected from cultural circulation, producing a kind of second-order alienation: alienation from the very possibility of non-alienated cultural experience. This suggests that any contemporary theory of alienation must account for how algorithmic systems don't just mediate human relations but potentially replace them entirely with synthetic processes of cultural production and consumption.
[15] The work of Ben Potter is particularly suggestive here, notably his development of the concept of “Synthetic Mediations.” He describes these as “a way of conceptualising our media flows when the creative and interpretive acts of meaningful self-formation happen autonomously in computers,” and argues that “synthetic mediations designate a type of media flow where actionable knowledge and language are generated via synthetic a priori calculations.”
[16] The idea of algorithmic temporality I introduce here differs from Benjamin's notion of the dialectical image. Where Benjamin saw the dialectical image as a moment of historical awakening that could illuminate the present, algorithmic temporality tends to flatten historical difference into computational similarity. For example, when DALL-E or Stable Diffusion generate images “in the style of” various historical periods, they don't preserve what Benjamin called the historical index of images, that is, their specific relationship to a particular time, but rather abstract stylistic features into ahistorical parameters. This creates a synthetic temporality where historical time becomes merely another dimension in the latent space of the model. The challenge for inversional critique is thus to develop methods that can recognise how algorithmic systems simultaneously preserve and transform historical materials, creating dialectical images at a standstill, that is, moments where the relationship between human historical consciousness and algorithmic temporal processing becomes visible.
[17] The Inversion suggests a fundamental transformation in forms of social rationality, where instrumental reason is not merely delegated to technical systems but becomes constitutive of social consciousness itself. A new algorithmic stratification is created which operates through cognitive infrastructures that proletarianise mental capacities in profoundly uneven ways (see Berry 2014: 176). The wealthy can afford premium algorithmic services that augment rather than replace human cognitive capabilities, while maintaining the option to opt out of pervasive computational mediation. Meanwhile, those with fewer resources become subject to rationalised systems that substitute automated computation for human judgment. This creates not merely a digital divide but what could be understood as a new form of class consciousness, one structured through differential access to cognitive augmentation and computational agency. Those with means receive personalised AI interactions designed to enhance human capacity, while others face standardised interfaces optimised for behavioural modification and data extraction. When connected to existing educational and economic inequalities, this threatens to create what Schecter identifies as new forms of social pathology where instrumental rationality becomes the organising principle of consciousness itself (Schecter 2010). The Inversion thus intensifies what I have previously termed infrasomatisation, where cognitive infrastructures become constitutive of class position through control over not just the means of production but the very capacity for non-instrumental thought itself (Berry 2019).
[18] The geopolitical dimensions of the Inversion also require careful attention, particularly in relation to algorithmic sovereignty. While US-China competition currently dominates AI development, the former chiefly through corporate-state partnerships (OpenAI, Microsoft, Google), the latter through direct state control (although see Ernie), the European Union offers an alternative vision through its pursuit of digital humanism. This regulatory approach, exemplified by the AI Act 2024 and the GDPR, attempts to assert a digital sovereignty that prioritises human rights and democratic values. However, all three approaches, US corporate dominance, Chinese state control, and European regulation, still operate within a framework of what I call computational realpolitik, where control over AI capabilities is increasingly understood within the state as critical for global competitiveness and power.
[19] The role of universities under this algorithmic condition requires careful theoretical elaboration. Following the implications of the Inversion, universities face the crucial task of developing critical digital reflexivity: the capacity to understand and contest how artificial intelligence systems reshape human cognitive capacities and cultural production. This connects to what I have called explanatory publics (Berry 2021), where institutions must facilitate the development of interpretative frameworks that enable critical engagement with algorithmic systems. The university's traditional role in fostering critical thought thus takes on new urgency as we confront synthetic epistemologies and inversional material that blurs the boundaries between human and synthetic cultural production. The challenge for universities lies not merely in teaching about these systems but in maintaining space for critical reflexivity, that is, the ability to examine how our own cognitive capacities and critical faculties are shaped by algorithmic mediation, and in attending to computationality and the formation of computational subjects through education. Universities must therefore position themselves as sites of resistance to computational reification, fostering what I have described as explainable forms of life (Berry 2024a), critical practices that maintain human agency while engaging with these systems. The university's traditional emphasis on contemplation and critical distance thus becomes crucial. These considerations suggest universities must move beyond traditional disciplinary boundaries to develop interdisciplinary digital studies that address the relationship between computation and critique, and the possibility of maintaining critical reason and cultural critique.
Universities can and should position themselves as crucial sites of critique and resistance, stepping back from the frenetic pace of AI innovation to develop reasoned critique – as the University of Sussex's motto has it, “to be still and know” (Berry 2024b).


Bibliography

Adorno, T. W. and Horkheimer, M. (2016) Dialectic of Enlightenment. Verso.

Baudrillard, J. (1994) Simulacra and Simulation. University of Michigan Press.

Benjamin, W. (1969) The Work of Art in the Age of Mechanical Reproduction, in Illuminations: Essays and Reflections. Hannah Arendt (ed.), Schocken Books, pp. 217-251.

Benjamin, W. (2002) The Arcades Project, Harvard University Press.

Berry, D. M. (2014) Critical Theory and the Digital. Bloomsbury. 

Berry, D.M. (2019) Against infrasomatization: Towards a critical theory of algorithms, in Data Politics, Routledge. https://www.taylorfrancis.com/reader/read-online/5e1e0ce1-5b49-445e-b23e-91e80a6c340a/chapter/pdf?context=ubx 

Berry, D.M. (2021) ‘Explanatory publics: explainability and democratic thought’, in B. Balaskas and C. Rito (eds) Fabricating Publics: The Dissemination of Culture in the Post-truth Era. London: Open Humanities Press, pp. 211–232. http://www.openhumanitiespress.org/books/titles/fabricating-publics/ 

Berry, D. M. (2023) AI, Ethics, and Digital Humanities, in J. O'Sullivan (ed.) The Bloomsbury Handbook to the Digital Humanities. Bloomsbury, pp. 445–457.

Berry, D.M. (2023) Critical Digital Humanities, in J. O’Sullivan (ed.) The Bloomsbury Handbook to the Digital Humanities. Bloomsbury, pp. 125–135. https://www.bloomsbury.com/uk/bloomsbury-handbook-to-the-digital-humanities-9781350232112/

Berry, D. M. (2024a) Algorithm and code: explainability, interpretability and policy, in Handbook on Public Policy and Artificial Intelligence. Edward Elgar, pp. 134-146.

Berry, D. M. (2024b) A History of the Concept of the University of Sussex: From Balliol-by-the-Sea to Plate Glass University, in R. Darwall-Smith and M. Feingold (eds) History of Universities, Volume XXXVII. Oxford University Press.

Harris, K.R. (2024) Synthetic Media Detection, the Wheel, and the Burden of Proof, Philosophy & Technology, 37(4), p. 131. https://doi.org/10.1007/s13347-024-00821-0.

Jaeggi, R. (2014) Alienation. Columbia University Press. 

Jaeggi, R. (2018) Critique of Forms of Life. Belknap Press.

Jahnke, H.N. (1991) Mathematics and Culture: The Case of Novalis, Science in Context, 4(2), pp. 279–295. https://doi.org/10.1017/S0269889700000971 

Keller, M.H. (2018) The Flourishing Business of Fake YouTube Views, New York Times, 11 August. https://web.archive.org/web/20230619053320/https://www.nytimes.com/interactive/2018/08/11/technology/youtube-fake-view-sellers.html

King, A. (2023) Are 10% of Spotify Streams Really “Fake”?, Digital Music News, 12 September. https://www.digitalmusicnews.com/2023/09/11/are-10-percent-of-spotify-streams-really-fake/   

Laas, O. (2023) ‘Deepfakes and trust in technology’, Synthese, 202(5), p. 132. https://doi.org/10.1007/s11229-023-04363-4.

Lovink, G. (2024) What’s Social Networking Today?, Institute of Network Cultures. https://networkcultures.org/geert/2024/11/03/whats-social-networking-today/  

Marx, K. and Engels, F. (1970) The German Ideology. International Publishers.

Marx, K. (1982) Capital: A Critique of Political Economy, Vol. 1. London: Penguin Books.

Meta (2022) Community Standards Enforcement Report, Meta Transparency Center. https://transparency.fb.com/data/community-standards-enforcement/

Placido, D.D. (2024) The Dead Internet Theory, Explained, Forbes. https://www.forbes.com/sites/danidiplacido/2024/01/16/the-dead-internet-theory-explained/ 

Read, M. (2018) How Much of the Internet Is Fake?, New York Magazine, December 26, https://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html.

Schecter, D. (2010) The Critique of Instrumental Reason from Weber to Habermas, Continuum.

Schoenberg, A. (1984) Style and Idea: Selected Writings of Arnold Schoenberg, University of California Press.

Silverstone, R. (1999) Why Study the Media? Sage Publications.

Silverstone, R. (2005) The sociology of mediation and communication, in The SAGE Handbook of Sociology. SAGE Publications, pp. 188-207. https://doi.org/10.4135/9781848608115

Stiegler, B. (2010) For a New Critique of Political Economy. Polity Press.

Tiffany, K. (2021) Maybe You Missed It, but the Internet “Died” Five Years Ago, The Atlantic, 31 August. https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/ 

Thunström, A. O. (2022) We Asked GPT-3 to Write an Academic Paper about Itself - Then We Tried to Get It Published, Scientific American. https://www.scientificamerican.com/article/we-asked-gpt-3-to-write-an-academic-paper-about-itself-mdash-then-we-tried-to-get-it-published/

Waddell, K. (2019) Welcome to our new synthetic realities, Axios,  https://www.axios.com/2019/09/14/synthetic-realities-fiction-stories-fact-misinformation 

Weizenbaum, J. (1976) Computer Power and Human Reason: From Judgment to Calculation, W.H. Freeman.

