L'Intelligence Artificielle, c'est la Guerre

David M. Berry


"Any future war will also be a slave revolt of technology"

Walter Benjamin, 1930.


"Instead of using and illuminating the secrets of nature via a technology mediated by the human scheme of things, the new nationalists' metaphysical abstraction of war signifies nothing other than a mystical and unmediated application of technology"

Walter Benjamin, 1930.




In 1930, Walter Benjamin published a short review of War and Warrior, a collection of essays edited by Ernst Jünger that celebrated what its contributors called the "experience" of the First World War (Benjamin 1979). The review, Theories of German Fascism, has received less attention than Benjamin's more famous essays, yet it contains some of his most direct statements on the relationship between technological development and social catastrophe.[1] Reading it today, one is struck by how Benjamin's critique of interwar technological mysticism anticipates many contemporary debates about artificial intelligence.[2]

Benjamin opens with a formulation he draws from Léon Daudet's report in the Action Française on the Salon de l'Automobile, claiming "L'automobile c'est la guerre." The surprising association of ideas, Benjamin explains, rests on "the perception of an increase in technical artifacts, in power sources, and in tempo generally that the private sector can neither absorb completely nor utilise adequately but that nonetheless demand vindication." Vindication, he argues, "can only occur in antithesis to a harmonious balance, in war, and the destructive power of war provides clear evidence that social reality was not ready to make technology its own organ, and that technology was not strong enough to master the elemental forces of society." This contains, in compressed form, a theory of technological development under capitalism that, I think, offers a worrying prediction about the possible outcomes of our present encounter with artificial intelligence.

The Discrepancy

Benjamin claims that imperialist war results in part from "the gaping discrepancy between the gigantic power of technology and the minuscule moral illumination it affords." But he is not making an argument for better ethics to accompany technical development. Benjamin is making a structural argument about bourgeois society's relationship to its own productive forces. According to its economic nature, he writes, bourgeois society "cannot help but insulate everything technological as much as possible from the so-called spiritual, and it cannot help but resolutely exclude technology's right of co-determination in the social order."

There are two common responses to technological disruption in society. The first assumes that technology develops autonomously and society must simply adapt to its imperatives (the technologically determinist position). The second imagines that ethical frameworks or regulatory schemes can be layered onto technical systems without transforming either (the position advocating for technical ethics). Benjamin refuses both of these positions. The problem is not that technology develops too fast for society to keep pace, nor that we lack adequate ethical guardrails, but rather that capitalist social relations structurally prevent the integration of technology into a form of democratic co-determination (Winner 1980). Technology under these conditions remains alien, a force that accumulates power whilst remaining excluded from social deliberation about its purposes and effects.

Contemporary AI seems to reproduce this structure. AI ethics boards debate implementation, regulatory frameworks, compliance, impact assessments, and possible harms. But these operate downstream from the decisions about whether a technology should exist, whose interests it serves, and what purposes justify its development. Co-determination in Benjamin's sense means more than optimising the ethics of systems already under construction. It means the social right to refuse a technology entirely. But this is a right that bourgeois society cannot grant without calling into question the accumulation drive that powers technological development in the first place. It would mean a shift from an AI Ethics (which asks "how?") to an AI Politics (which asks "why?" and "for whom?").

The consequence of this is not technological stagnation but technological eruption. Benjamin writes, "Any future war will also be a slave revolt of technology."[3] He describes technology as an enslaved productive force, pressed into the service of capital accumulation yet excluded from social self-determination, which will eventually rebel. But this rebellion takes a catastrophic rather than emancipatory form. The slave revolt of technology is war, not revolution. It is destruction without transformation, power without purpose.[4]

We might therefore ask what form such a revolt takes under conditions of computational capitalism. Not necessarily literal war, though AI's extension into military applications deserves detailed scholarly analysis, but what I have elsewhere called extractive intermediation, that is, the positioning of algorithmic systems between subject and world, occupying the temporal space before conscious deliberation.[5] The crucial point is that these systems work to undermine subjectivity or agency. The revolt is not, therefore, system failure but system success, as the infrastructure functions exactly as designed, producing conditions hostile to the very co-determination that might give technology purpose beyond capitalist accumulation. The algorithm that maximises engagement fragments the epistemic commons, just as the model trained on historical data amplifies historical injustice. This is a system optimised for producing affirmation, which erodes the friction necessary for democratic thought. Like Benjamin's gas warfare, which promised to give future conflict "the countenance of record-setting," algorithmic systems combine technical power with structural violence. They work "perfectly" whilst undermining the grounds of human self-determination.

Mysticism and the Refusal to See

Benjamin's target in the essay is not technology itself but what he calls the "mysticism of war" advocated by Jünger and his circle. These authors, Benjamin observes, claim to speak from the "experience" of the World War, yet "how little their experience has come to grips with that war's realities" becomes apparent in their "altogether thoughtless obtuseness with which they view the idea of future wars without any conception of them." They celebrate what Benjamin calls "eternal" war whilst ignoring the specific technical character of modern warfare, which "dispenses with all the wretched emblems of heroism that here and there have survived the World War."

Gas warfare, Benjamin notes, "promises to give the war of the future a face which permanently displaces soldierly qualities by those of sports; all action will lose its military character and war will assume the countenance of record-setting." The distinction between civilian and military personnel will be eliminated. The last war "has already shown that the total disorganization imperialist war entails, and the manner in which it is waged, threaten to make it an endless war." Against this reality, the new nationalists offer only "runic humbug" and mystical celebration of combat as such.

What interests me here is the structure of this mystification. Jünger's circle cannot see the actual technical character of modern warfare because they approach technology through categories borrowed from pre-industrial combat, categories of heroism, honour, individual virtue, and national destiny. They transpose an aristocratic warrior ideology onto industrial slaughter, producing what Benjamin calls "an uninhibited translation of the principles of l'art pour l'art to war itself." The result is a discourse that celebrates war in general whilst remaining incapable of grasping the specific dynamics of technological warfare. It is mystification, an ideology that prevents understanding of the actual conditions.

Benjamin's critique has an obvious contemporary application. Much discourse about artificial intelligence operates through similarly mystifying categories. The language of "intelligence," "learning," "creativity," and "understanding" transposes concepts from human cognition onto statistical pattern recognition, producing an ideology that celebrates AI in general whilst obscuring its specific technical character and social life. Like Jünger's warriors, AI "boomers" often seem incapable of grasping the actual operations of the systems they celebrate, approaching them instead through inherited categories that prevent rather than enable understanding.

The parallel extends to what is hidden. Jünger's warriors could not see the trenches for the heroism; similarly, contemporary AI discourse cannot see the infrastructure for the intelligence (see Bender et al 2021). The language of emergence, capabilities, and potential superintelligences floats free of the material conditions that produce these systems, such as the workers paid poverty wages to label training data and filter traumatic content, and the data centres consuming vast amounts of energy and resources (Hao 2022). When OpenAI's executives speak of approaching artificial general intelligence (AGI), they perform the same mystification Benjamin diagnosed in Jünger, using metaphysical abstraction that prevents recognition of the "everyday actuality" of these systems. Benjamin asks us to refuse this mystification, to insist on seeing AI not as emergent intelligence but as infrastructure produced by specific labour under specific historical conditions of exploitation.

Total Mobilisation

Benjamin develops his critique through analysis of what Jünger elsewhere called "total mobilisation" (totale Mobilmachung) (Jünger 1993). In the trenches, Benjamin observes, "the surroundings become a problem, every wire entanglement an antinomy, every barb a definition, every explosion a thesis; and by day the sky was the cosmic interior of the steel helmet and at night the moral law above." The landscape of the front represents the transformation of lived environment into a technical apparatus of death, where nature itself is conscripted into the machinery of war.

"Etching the landscape with flaming banners and trenches," Benjamin writes, "technology wanted to recreate the heroic features of German Idealism. It went astray. What is considered heroic were the features of Hippocrates, the features of death. Deeply imbued with its own depravity, technology gave shape to the apocalyptic face of nature and reduced nature to silence, even though this technology had the power to give nature its voice."

Benjamin argues that technology under capitalism possesses the power to serve the collective good. But under existing social relations, this power is perverted into its opposite. Technology instead "reduces nature to silence" whilst giving shape to "the apocalyptic face of nature." The potential for liberation becomes the actuality of destruction. And this is not accidental but rooted in what Benjamin calls technology's exclusion from "co-determination in the social order."

The concept of total mobilisation anticipates what we might call algorithmic mobilisation (see Berry 2014). Under conditions of computational capitalism, we face our own landscape of total mobilisation, in which the surroundings become a data problem, every interaction an optimisation opportunity, every moment a potential extraction point. The temporal structure of this mobilisation operates at a different scale than trench warfare, targeting not the body's vulnerability to shells and gas but consciousness's vulnerability to prediction and pre-emption. Yet the underlying dynamic is visible in the claim that technology excluded from democratic co-determination develops according to logics that serve accumulation whilst producing forms of social organisation that threaten social life.

Bourgeois society continues to exclude technology from co-determination under computational capitalism. What has changed from Benjamin's time is the site and scale of colonisation: Benjamin's total mobilisation targeted the body in space; algorithmic mobilisation targets cognition in time. The half-second before conscious decision represents a frontier unavailable to earlier forms of capital, requiring technical capacities that emerged only with ubiquitous computing and machine learning. Extractive intermediation names not simply a move to control more, but the extension of accumulation logic into temporal registers previously inaccessible, a qualitative shift enabled by quantitative technical development.

This we might call the slave revolt under algorithmic conditions. Not breakdown or malfunction, and certainly not the robot uprising of science fiction, but infrastructure achieving its objectives so completely that it colonises the very temporal conditions of autonomous thought. The revolt occurs not when the system fails but when it succeeds. Benjamin's soldiers faced technological revolt as gas and shell, whereas we face it as seamless software, as a convenience, in the uncanny accuracy of systems that know us better than we know ourselves.

What connects trench warfare to algorithmic governance is not an analogy but a shared structural logic. In both cases, technology colonises the very conditions under which agency operates. Benjamin observes that in the trenches "every wire entanglement an antinomy, every barb a definition, every explosion a thesis." The battlefield becomes a technical apparatus that restructures perception itself, transforming lived environment into a problem-space. The soldier's capacity for autonomous movement, for tactical judgement, for self-directed action, gets captured by an infrastructure that determines in advance what movements are possible, what paths remain open, what decisions have already been made by the wire before the soldier arrives.

Under algorithmic conditions, a similar capture targets temporality rather than spatiality (see Stiegler 2016; Zuboff 2019). Where the trenches colonised the space through which the soldier would move, computational capitalism colonises the duration in which a subject would think. This is the temporal Mitte I discuss elsewhere: the half-second before conscious decision crystallises, where algorithmic systems now intervene to shape what preferences emerge, what options appear salient, what thoughts feel like one's own (Berry forthcoming). Technology without social co-determination develops according to logics that occupy the ground of agency, preventing the deliberation that might redirect it toward human purposes. Where trench warfare transformed the landscape into a lethal grid, computational capitalism transforms duration into an extractive grid. The wire entanglements that structured the soldier's movement find their contemporary analogue in the algorithmic architectures that structure the subject's attention, such as the social media that segment experience into optimised fragments, interventions that punctuate consciousness at calculated intervals, and interfaces that overtake thought before it crystallises.[6] Benjamin's soldiers faced a landscape that "reduced nature to silence"; we face a temporality that removes deliberation. The total mobilisation of space gives way to the total mobilisation of time, but the underlying dynamic persists as technology, excluded from co-determination, develops according to logics that colonise the very conditions of human agency.

Jonathan Crary's analysis of 24/7 capitalism offers a useful account of what is at stake in this temporal colonisation (Crary 2025). If Benjamin's soldier lived in the "cosmic interior of the steel helmet," the contemporary subject lives in the cognitive interior of the interface. We might say that while the 1930s soldier was trapped in the spatial interior of the helmet, the 2020s user is trapped in the temporal interior of the half-second before they click. The slave revolt becomes a revolt against human latency. Technology "rebels" because human biological time, the time of reflection and co-determination, is simply too slow for the "record-setting" tempo of capital. The half-second of deliberation becomes an obstacle to be overcome rather than a condition to be respected. Where trench warfare eliminated the distance between combatants, algorithmic governance eliminates the duration of thought.

The Bomber-Pilot

The authors of War and Warrior imagine a new kind of leader, one adequate to the technical character of modern warfare. This leader, Benjamin observes, finds embodiment "in the person of the pilot of a single airplane full of gas bombs." Such a figure, Benjamin argues, "embodies all the absolute power which, in peacetime, is distributed among thousands of office managers, power to cut off a citizen's light, air and life." The bomber-pilot "in his lofty solitude, alone with himself and his God, has power-of-attorney for his seriously stricken superior, the state, and wherever he puts his signature no more grass will grow."[7]

Benjamin notes here the new concentration of distributed bureaucratic power into a single technical operator. What normally requires thousands of administrators, the allocation of resources, the management of populations, the determination of who lives and who dies, here becomes concentrated in one figure operating one machine. There is also the combination of technical precision and arbitrary violence. The bomber-pilot acts with methodical calculation, yet the effects of his action are indiscriminate destruction. Finally, there is a theological framing, where, alone with himself and his God, the pilot operates beyond democratic accountability, a sovereign exception enabled by technical capabilities.

We might therefore ask what contemporary figure occupies an analogous position. The answer today is, I argue, the co-pilot. Microsoft's Copilot, GitHub Copilot, Claude Code, OpenAI Codex and the proliferating "AI assistants" that promise to augment human capability whilst increasingly making the real decisions. The co-pilot, I think, reproduces the mystification Benjamin diagnosed. It presents algorithmic authority as assistance, as support, as augmentation of human agency. But the structural reality inverts the relationship: the human becomes the nominal supervisor of decisions already made by the machine, the rubber-stamp for algorithmic outputs, the accountable entity for systems that operate beyond human control. Where Benjamin's bomber-pilot acknowledged his sovereign violence through lofty solitude, the co-pilot disguises its sovereign decisions as helpful, positive suggestions. The theological theme persists today: the co-pilot, alone with itself, treats its training data as scripture, its alignment rules as doctrine, its guardrails as canon law.


However, this analogy has limits: Benjamin's bomber-pilot exercises destructive sovereignty, while the co-pilot operates through pre-emptive capture rather than actual annihilation. Yet this difference is itself instructive. The shift from destruction to extraction, from "no more grass will grow" to "no more thought will form independently," marks a development of technological power under computational capitalism. We might say that the violence becomes infrastructural rather than spectacular, targeting not life as such but the conditions of autonomous life.

Where Benjamin's bomber-pilot concentrates distributed bureaucratic violence into a single operator, the contemporary co-pilot concentrates distributed cognitive labour into a single interface, one that presents the accumulated exploitation of global data through helpful suggestions. The revolt here is the system's successful capture of the decision-making itself, the intermediation that was supposed to augment becoming the intermediation that automates.

For example, in January 2026, Utah became the first state to authorise an AI system, Doctronic, to autonomously prescribe medication renewals without a physician (Khorram and Reader 2026; Salivio 2026). The state press release celebrates this as "innovation" enabling "faster, automated" care (Scheuch 2026). But the structural operation is Benjamin's bomber-pilot updated for computational capitalism, as distributed medical authority, previously requiring physicians and pharmacists, is now concentrated into a single algorithmic system. Doctronic operates within Utah's "regulatory sandbox," which is designed to provide "temporary regulatory relief" that suspends normal accountability whilst the system is "evaluated." Like the bomber-pilot in his solitude in the sky, Doctronic possesses power-of-attorney over citizens' health, determining who receives medication through statistical calculations that combine mathematical precision with arbitrary effects on people's lives. The questions Benjamin would ask are who decided this technology should exist, whose interests it serves, and what purposes justify algorithmic authority over medical care. These questions cannot be posed within the regulatory sandbox, which evaluates only implementation, not existence as such.

Instead, we should see Doctronic not as medical innovation but as a product of everyday actuality, a response to healthcare systems seriously stricken by chronic workforce shortages and rising costs, conditions produced by the same capitalist dynamics that prevent actual solutions. Doctronic is simply the co-pilot with the mask removed, algorithmic authority no longer disguised as assistance. The slave revolt is technology pressed into service to fill a social void, only to erupt as autonomous force replacing human deliberation with algorithmic certainty. What appears as innovation is a continuation of normal social relations by other means, automated, accelerated, and no longer bothering to pretend otherwise.

Technology as Fetish

Benjamin's essay concludes by contrasting the "habitués of the chthonic forces of terror, who carry their volumes of Klages in their packs," who will not learn what nature promises, with its "less idly curious, but more sober children, who possess in technology not a fetish of doom but a key to happiness." I would argue that today "sober" should be read as a refusal of the "hallucinatory" interface. To be a "sober child" of the algorithmic condition is to practice AI literacy by critically looking beyond the prompt to the datacentre, the GPU's carbon footprint, and the historical injustice of the training data. The choice is not between technology and its absence, between acceleration and withdrawal, but between fetishistic and emancipatory relationships to technical development.[8]

The fetishistic relationship treats technology as an autonomous force, whether to celebrate or to fear. Jünger's warriors fetishise technology as doom, finding in mechanical slaughter a mystical revelation of eternal truths about nature, nation, and destiny. Their apparent embrace of technical modernity conceals a deep refusal to understand technology in its actual social determinations. They cannot ask cui bono because their mystical categories exclude such questions. Technology for them operates as fate, not as socially produced and socially transformable infrastructure.

Benjamin treats technology as "a key to happiness," but this requires understanding technology as part of the "human scheme of things" rather than as an alien force. It requires, that is, what he argues bourgeois society prevents: technology's integration into democratic co-determination (see Feenberg 1999). The "sober" children who possess technology as a key rather than a fetish are those capable of understanding technical development within its social conditions, asking who benefits and what purposes it serves, and, indeed, how it might be transformed.

This is the beginning of a new stance on computation, neither techno-utopianism that celebrates algorithmic systems as inevitable, nor neo-Luddite rejection that treats them as irredeemably bad, but the insistence on social transformation as the condition for technology serving collective life. The sober children do not refuse technology, instead they refuse the social relations that subvert technology's emancipatory potential. On one side we find the habitués of AGI, who carry their volumes of Bostrom and Kurzweil, treating AI as autonomous force progressing toward superintelligence by its own internal logic. The discourse of "AI safety" and "existential risk" operates as the doom-fetishist counterpart to this acceleration, that is, as a mysticism of the future that displaces attention from the depravity of the present onto a speculative catastrophe. On the other we find those who understand AI systems as socially produced infrastructures, asking about training data, labour conditions, energy consumption, ownership structures, and context.

Conclusion

Benjamin ends his essay with a call for a "trick which alone is a match for this sinister runic humbug." The trick involves refusing to acknowledge war "as an incisive magical turning point" and instead discovering "in it the image of everyday actuality."[9] War, Benjamin argues, is not the exception that reveals truth but the continuation of normal social relations by other means. The mystification of war as special experience, as revelation, as transformative event, prevents understanding of war as product of the same social dynamics that structure peacetime existence.

The same demystification is required for artificial intelligence. Perhaps even more so, as AI becomes a key technology in the actual waging of war itself (Booth and Milmo 2026; Stokel-Walker 2026). However, while the physical bomber-pilot may have returned to the headlines, the AI co-pilot is the one we allow into our pockets and homes every day.

The discourse of AI as disruptive, whether utopian or dystopian, prevents understanding of AI as product of real social relations. The mystification of machine learning as human-like intelligence, as emergence, as potential superintelligence, serves the same ideological function as Jünger's mystification of war. It works to prevent analysis of actual conditions in favour of celebration or fear of abstractions. Against this mystification, Benjamin insists on seeing AI in the image of everyday actuality, as infrastructure produced by specific labour under specific conditions for specific purposes, serving specific interests whilst being presented as universal benefit or threat.

To see AI in the image of everyday actuality is to recognise that these systems do not predict the future but automate the past. The training data that constitutes a large language model is historical residue, the accumulated textual production of previous social relations. When such a system generates output, it does not anticipate what might be but reproduces what has already been, weighted and recombined. The "revolt" takes the form of the past rising up to colonise the future, as historical patterns of injustice, encoded in training data, propagate forward as algorithmic output, foreclosing the emergence of new social relations. The "mysticism of emergence" hides the fact that AI is too often a backward-looking, conservative infrastructure. For example, a model trained on historical hiring decisions reproduces historical discrimination, just as a system trained on historical text reproduces historical exclusions. What presents itself as prediction is repetition; what appears as intelligence is inertia.

What does this demystification look like in practice? Consider ChatGPT, the system that catalysed public attention to large language models. The mystified view presents it as an emergent intelligence, a system that has somehow learned to reason, to create, to understand. The discourse surrounding its release emphasised capability thresholds, human-level performance, artificial general intelligence. This treated the system as a magical turning point, a rupture in the history of intelligence itself.

Critical theory asks what labour produced this system. ChatGPT is not an emergent intelligence but a product of exploitation; not magical but capitalist; not a disruptive but a sustaining technology. The same social dynamics that structure platform capitalism, the gig economy, and global supply chains produce the conditions under which this system exists. The "intelligence" that mystifies it is simply the visible tip of a vast iceberg of human labour, natural resource extraction, and computational infrastructure.[10] To see the system in this way is to see through the mystification to the relations of production it conceals.

Benjamin's 1930 essay reminds us that the gap between technological power and moral illumination is not new, nor is the mystification that prevents its recognition. What matters is whether we approach technology as fetish or as key, whether we celebrate or fear autonomous forces or work to integrate technical development into democratic forms. The slave revolt of technology need not, therefore, take a catastrophic form. But preventing catastrophe requires what bourgeois society prevents: democratic control over technology, not merely over implementation, but over ends. It means institutions capable of refusing, of saying no to AI systems whose only justification is that they can be built. Until such institutions exist, the "sober children" practice what refusal they can, seeing through the mystification and insisting that l'intelligence artificielle, c'est la guerre.



Images generated using Google Nano Banana 2 in March 2026. 

Notes

[1] Benjamin's review was originally published in Die Gesellschaft 7 (1930) and appeared in English translation in New German Critique in 1979. The collection Benjamin reviewed, Krieg und Krieger, brought together contributions from figures associated with the Conservative Revolutionary movement in Weimar Germany. 

[2] Benjamin's title names fascism explicitly but I am not claiming platform capitalism is fascist in any direct sense. The mechanisms are different in each case: one operates through spectacular violence, the other through infrastructural capture. But the structural dynamics Benjamin identified, specifically the exclusion of technology from "co-determination" (i.e. democratic oversight and control) and the mystification that follows from that, operate under computational capitalism to foreclose democratic accountability.

[3] The formulation "slave revolt of technology" (Sklavenaufstand der Technik) inverts the Nietzschean slave revolt that Jünger's circle would have celebrated, suggesting that technology itself, pressed into service yet excluded from self-determination, eventually erupts in forms that serve neither masters nor slaves but only destruction.

[4] The "revolt" Benjamin describes inverts the Hegelian dialectic. We might interpret this to mean that under the algorithmic condition, the master (Capital) becomes so dependent on the servant (the technical apparatus) for the mediation of reality that the servant's internal logic, optimisation, extraction, speed, becomes the only reality the master can experience. The revolt is not the slave seizing power as such, but the master becoming incapable of existence outside the slave's terms.

[5] On extractive intermediation as the structural positioning of algorithmic systems between subject and world, see Berry (forthcoming). The concept develops Adorno's Vermittlung to capture how computational capitalism occupies the half-second before conscious decision, extracting value from the very process through which subjects would otherwise develop independent thought. The revolt of technology under these conditions is not a breakdown but successful intermediation producing conditions hostile to human agency.

[6] In a similar way to that described by Benjamin, as the wire determined the soldier's path before they arrived, the "choice architecture" of a digital interface determines the user's thought before they click. UX/UI design functions as the contemporary barbed wire, as a technical apparatus that structures movement through the field of options, making certain paths frictionless and others effectively impassable. The interface appears as neutral surface whilst operating as a funnelling infrastructure.

[7] Benjamin's analysis of the bomber-pilot anticipates later discussions of sovereignty and technical systems, such as concentration of bureaucratic power in Weber, the state of exception in Schmitt, and the technological apparatus of death in Foucault. 

[8] For extended discussion of these positions in relation to computational systems, see Berry (2014).

[9] Benjamin's insistence that war be understood "in the image of everyday actuality" rather than as magical turning point parallels arguments about AI exceptionalism. The discourse of unprecedented transformation, whether celebratory or catastrophist, prevents analysis of how AI systems extend and intensify existing dynamics of extraction, surveillance, and control. The connection extends to Benjamin's Theses on the Philosophy of History, where he argues that "even the dead will not be safe from the enemy if he wins" (Benjamin 1968: 255).

[10] The seamless "intelligence" of the chatbot functions as what Benjamin (2002) analyses as phantasmagoria, the commodity spectacle that conceals labour power. Just as the nineteenth-century arcade presented commodities as if they had sprung fully formed from nowhere, the contemporary interface presents outputs as if magically generated by autonomous intelligence. The phantasmagoric image prevents the user from seeing the iceberg of human labour beneath the surface of apparent capability.



Bibliography

Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S. (2021) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21. Association for Computing Machinery, pp. 610–623. https://doi.org/10.1145/3442188.3445922

Benjamin, W. (1968) Illuminations: Essays and Reflections. Schocken Books.

Benjamin, W. (1979) Theories of German Fascism: On the Collection of Essays War and Warrior, edited by Ernst Jünger, trans. J. Wikoff, New German Critique, 17, pp. 120–128.

Benjamin, W. (2002) The Arcades Project. Translated by H. Eiland and K. McLaughlin. Cambridge, MA: Harvard University Press.

Berry, D. M. (2014) Critical Theory and the Digital. Bloomsbury.

Berry, D. M. (forthcoming) Artificial Intelligence and Critical Theory. Manchester University Press.

Booth, R. and Milmo, D. (2026) Iran War Heralds Era of AI-Powered Bombing Quicker Than “Speed of Thought”, The Guardian. Available at: https://www.theguardian.com/technology/2026/mar/03/iran-war-heralds-era-of-ai-powered-bombing-quicker-than-speed-of-thought

Crary, J. (2025) 24/7: Late Capitalism and the Ends of Sleep. Verso Books.

Feenberg, A. (1999) Questioning Technology. London; New York: Routledge.

Hao, K. (2022) Artificial intelligence is creating a new colonial world order, MIT Technology Review. Available at: https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/.

Jünger, E. (1993) ‘Total Mobilization’, in R. Wolin (ed.) The Heidegger Controversy: A Critical Reader. MIT Press.

Khorram, Y. and Reader, R. (2026) Artificial intelligence begins prescribing medications in Utah, Politico. Available at: https://www.politico.com/news/2026/01/06/artificial-intelligence-prescribing-medications-utah-00709122 

Salivio, S.N. (2026) Utah tests AI to renew prescriptions without doctors, IT Brief UK. Available at: https://itbrief.co.uk/story/utah-tests-ai-to-renew-prescriptions-without-doctors (Accessed: 11 January 2026).

Scheuch, K. (2026) ‘NEWS RELEASE: Utah and Doctronic Announce Groundbreaking Partnership for AI Prescription Medication Renewals’, commerce.utah.gov, 6 January. Available at: https://commerce.utah.gov/2026/01/06/news-release-utah-and-doctronic-announce-groundbreaking-partnership-for-ai-prescription-medication-renewals/ 

Stiegler, B. (2016) Automatic Society. Polity.

Stokel-Walker, C. (2026) Trump is using AI to fight his wars – this is a dangerous turning point, The Guardian. Available at: https://www.theguardian.com/commentisfree/2026/mar/03/trump-using-ai-to-fight-wars-dangerous-us-military

Winner, L. (1980) ‘Do Artifacts Have Politics?’, Daedalus, 109(1), pp. 121–136.

Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First edition. New York, N.Y: PublicAffairs.