Provenance Anxiety: Death of the Author in the Age of Large Language Models

David M. Berry


 

In 1950, it may have been possible to choose other paths. In the third decade of the new millennium, however, our reliance on cognitive assemblages and computational media has progressed so far that there is no going back 

N. Katherine Hayles, 2025.

 

Though the sway of the Author remains powerful... it goes without saying that certain writers have long since attempted to loosen it 

Roland Barthes, 1977.

 

"What difference does it make who is speaking?" 

Michel Foucault, 1979.


Academics are currently debating what is being called the "provenance problem" in relation to Large Language Models (LLMs) (see Earp et al 2025). In these discussions there seems to be an underlying anxiety that AI will cause some sort of breakdown in the chains of scholarly citation and acknowledgement. This, they argue, means that academics and writers might unknowingly use ideas from sources they have never encountered, with the original authors receiving no credit. This represents, we are told, a new ethical challenge that existing frameworks of citation and attribution are ill-equipped to address (Earp et al 2025).

Earp et al describe a scenario in their paper in which a researcher prompts an LLM with fragments of an argument they are trying to make and receives a coherent paragraph in response. This fictional writer asks the LLM to make "light edits", and the LLM produces a text that, unknown to the writer, closely mirrors the substance and flow of "Smith (1975)". Earp et al treat this as a problem of failed attribution ("epistemic fairness").[1] However, it is notable that Earp et al do not distinguish between automation and augmentation in their discussion of LLM use in academic writing.[2]

At the end of their piece they reveal in the acknowledgements that they used LLMs in writing the article themselves. This follows from an earlier paper suggesting an ethical requirement to declare LLM use (Porsdam Mann et al 2024). This raises a curious question: why single out this particular computational tool for disclosure? LLMs are software, yet we do not require scholars to declare their use of word processors (which increasingly incorporate LLM features in obscure ways), spell-checkers and grammar-checkers (which have long suggested stylistic changes), digital databases (which algorithmically retrieve and rank sources), search engines (which shape what literature scholars encounter), or bibliographic management software like Zotero (which automates citation formatting and suggests related works). The demand to declare LLM use whilst treating other computational tools as transparent instruments seems to misunderstand the extent to which scholarly work has been softwarised since the 1990s. Every stage of contemporary academic writing, from literature discovery through algorithmic search, to note-taking in computational environments, to drafting in software that autocorrects and suggests, to citation management through automated tools, is already mediated by computational processes that shape intellectual labour in ways that remain largely unexamined and undeclared. The selective anxiety about LLMs thus appears less like a principled ethical stance and more like a reaction to a threshold at which the computational mediation of thought becomes uncomfortably visible.[3]

This selective anxiety points to a deeper question: is the provenance problem actually a new crisis, or is it the materialisation and acceleration of something that was always true? That is, that academic writing has always been a kind of bricolage, authorship has always been a construction, and the fantasy of individual academic creation has never corresponded to the actual practices of scholarly production. This would mean that LLMs do not introduce a radical break in how academic knowledge is produced; rather, they reveal and intensify processes that were always operative but remained partially obscured by romantic ideologies of authorship and creativity.[4]

I would argue that the anxiety surrounding LLMs is best understood as a crisis in the mythology of academic authorship rather than in its actual practices. Their belief that LLMs encourage "'cryptomnesia', in which [their fictional academics] reproduce ideas that they have encountered previously but mistakenly believe to be original" seems to me to be problematic (Earp et al 2025). In contrast, I suggest that the LLM makes visible what was always the case: that scholarly writing proceeds largely through assemblage and recombination, and that complete attribution is largely impossible (e.g. the bibliography for every article and book one has ever read or consulted would be larger than the article itself). Thus a scholar's "own" ideas are already assemblages of half-remembered readings, classroom discussions, conference conversations, and theoretical frameworks absorbed and internalised. The notion of originality, I suggest, serves only to police boundaries and distribute academic capital rather than to describe the processes of knowledge production.

I think this relates interestingly to what I have elsewhere termed the Inversion. This I describe as a critical threshold where machine-generated content becomes not just indistinguishable from human writing but actively reshapes our understanding of it. The provenance problem is therefore a controversy that lets us see the clash between traditional modes of attribution (predicated on stable authorial origins) and what I call diffusionisation, a process through which knowledge and cultural production become subject to probabilistic dissolution and reconstitution via computational processes.

What becomes visible under these conditions of diffusionisation? Italo Calvino's essay Cybernetics and Ghosts (1986), in which he explores literature as a combinatorial system, offers a suggestive metaphor. Calvino's reading of the prisoner Edmond Dantès in The Count of Monte Cristo provides an interesting analogy with our situation under the Inversion. Calvino describes Dantès's attempts to imagine the perfect prison from which escape is impossible, reasoning that if he succeeds, either he has perfectly modelled his actual prison (and must accept his fate), or he has imagined a prison even more secure than his actual one, suggesting his real prison has a flaw enabling his escape. Just as Dantès uses his model to identify points of difference from reality, we might use our encounters with LLMs to identify what cannot be captured by statistical recombination, that is, what resists diffusionisation. The provenance problem becomes not simply an ethical failure but an epistemological opportunity, as the very untraceability of "Smith (1975)" in the LLM's output reveals something about the nature of scholarly production that academic citation practices obscure.

Calvino speculates about "a writing machine that would bring to the page all those things that we are accustomed to consider as the most jealously guarded attributes of our psychological life" (Calvino 1986: 10). LLMs represent such machines, automating existing cultural processes and dissolving the boundary between human and machine authorship through probabilistic media. What Calvino understood was that the challenge facing us is not technological but epistemological. We need to examine how to maintain critical reflexivity when "the more enlightened our houses are, the more their walls ooze ghosts" (Calvino 1986: 25).

What Is an Author?

What Barthes identifies as the historical emergence of the "Author" is helpful for understanding this argument (see also Hayles 2025: 147). He argues that the Author "is a modern figure, produced no doubt by our society insofar as, at the end of the middle ages, with English empiricism, French rationalism and the personal faith of the Reformation, it discovered the prestige of the individual" (Barthes 1977: 142-3). The author thus coincides with the rise of capitalist social relations and their investment in individual ownership and property. He states that "succeeding the Author, the writer no longer contains within himself passions, humors, sentiments, impressions, but that enormous dictionary, from which he derives a writing which can know no end or halt" (Barthes 1977: 146). The LLM is in this sense a dictionary made computational.

For Michel Foucault (1979), the author is not a natural category but a particular function that emerged historically to perform specific work within discourse. He calls it the author-function, and argues that it serves to limit the proliferation of meanings, to classify and group discourses, to establish ownership, and crucially, to distribute responsibility and credit within systems of knowledge production.

When academics cite material, they are not simply acknowledging influence but participating in a complex economy of intellectual credit that determines hiring, promotion, funding, and scholarly status. The citation serves to contain the dissemination of ideas within disciplinary channels, to ensure that knowledge can be attributed to identifiable subjects who can be held accountable and rewarded. Academic citation practices can, therefore, be understood as an apparatus for maintaining and policing these author-functions. 

When an LLM generates text, it does so by interpolating across huge numbers of articles, books and documents that have been turned into a model in its training. As Earp et al note, "the contribution of any specific text is often untraceable" because "LLM training distributes the influence of its source across billions of interdependent parameters". The provenance problem, they suggest, threatens the academic citation system by making the author-function irrelevant to generative writing processes. 
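
To see why such influence becomes untraceable, consider a deliberately toy sketch in Python (invented numbers, two "sources", two parameters, with ordinary gradient descent standing in for LLM training at minuscule scale): every parameter update sums contributions from every source, so after training the parameters blend the sources inseparably.

    # Toy sketch: fitting a two-parameter model to two "source texts".
    # Each gradient step mixes contributions from every source into every
    # parameter, so no single source's influence is separately recoverable.
    # All numbers are invented for illustration.

    texts = {"Smith (1975)": (1.0, 2.0), "Jones (1983)": (3.0, 1.0)}  # (x, y)
    w, b = 0.0, 0.0
    lr = 0.1

    for _ in range(1000):
        grad_w = grad_b = 0.0
        for x, y in texts.values():
            err = (w * x + b) - y   # prediction error on this source
            grad_w += err * x       # every source contributes to every gradient
            grad_b += err
        w -= lr * grad_w / len(texts)
        b -= lr * grad_b / len(texts)

    print(w, b)  # a blend to which both sources contributed at every step

An LLM does the same across billions of parameters and millions of sources: attribution fails not because records were lost but because training never stored the sources as separable entities in the first place.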

What is key is that this breakdown in attribution is not a failure of the technology but reveals something about textuality itself. LLMs show in their operation that ideas circulate, combine, and recombine in ways that exceed the capacity of any attribution system to capture. The fantasy that we could maintain complete chains of intellectual attribution, that every idea could be traced back to its "rightful" creator, was always a myth serving particular institutional and economic functions within the university.

Information Loses Its Body

The LLM is a computational instantiation of this authorial reality. When it generates text, it treats all of its training data as informational patterns, stripped of the material and historical contexts that originally gave those texts meaning. "Smith (1975)" (the example given by Earp et al 2025) becomes not a specific argument made by a particular person at a particular time but a pattern distributed across billions of parameters, available for recombination with countless other patterns.

This process of what I am calling diffusionisation operates through mathematical abstractions that allow LLMs to blend, morph, and generate new textual forms through probability distributions rather than deterministic rules or simple reproduction (this applies to both autoregressive and diffusion forms of generative AI). This marks a shift from discretisation and encoding towards the generation of synthetic variations that might have no original drawn from human writing.[5]
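
To make the probabilistic character of this process concrete, here is a minimal sketch of next-token sampling (toy vocabulary and invented scores; real models do this over tens of thousands of tokens at every step):

    import math
    import random

    # Toy next-token sampling: raw scores ("logits") over an invented
    # vocabulary are turned into a probability distribution, and the next
    # token is sampled rather than retrieved from any stored source text.

    vocabulary = ["author", "function", "text", "pattern", "citation"]
    logits = [2.1, 1.3, 0.7, 1.9, 0.2]  # invented scores for the next token

    def softmax(scores, temperature=1.0):
        """Convert scores to probabilities; higher temperature flattens them."""
        exps = [math.exp(s / temperature) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits, temperature=0.8)
    print({t: round(p, 2) for t, p in zip(vocabulary, probs)})

    # Sampling the same distribution repeatedly yields different
    # continuations: the output "flickers" rather than reproducing an original.
    for _ in range(3):
        print(random.choices(vocabulary, weights=probs, k=1)[0])

The same mechanism underlies the "flickering" discussed below: because generation is sampling, no two runs need agree, and no output is a retrieval of a stored text.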

This is what makes the provenance problem intractable. We cannot trace the influence of "Smith (1975)" in LLM-generated text because the text does not contain Smith's ideas in any recoverable form. It contains patterns that, when recombined with other patterns under certain probabilistic constraints, produce outputs that may resemble Smith's arguments.

This author-function is, for Foucault, about control. He writes, "The author is... the ideological figure by which one marks the manner in which we fear the proliferation of meaning" (Foucault 1979: 159). The LLM represents such a proliferation, a generation of meaning that cannot be contained within existing regimes of authorship and attribution. I argue that the Inversion marks the moment when this proliferation becomes systematic and when the default assumption shifts from human authorship to computational generation. We see this already in practice with academics proclaiming on social media their rightful ownership of the em-dash or the word "delve", seeking to defend their human originality against the onslaught of generative writing. But as time goes by, these technologies will undoubtedly improve until a moment when being AI-generated is considered more "real" than being human-created; then the AIs will, perhaps, really own the em-dash. I would argue, therefore, that the anxiety this produces is not ethical but political, as it threatens the mechanisms through which academics and universities distribute resources and recognition.[6]

We might note that academic citation practices extend this logic into the realm of ideas as they function as a kind of intellectual property regime, ensuring that concepts can be owned, that their circulation can be tracked and controlled, and that credit can be accumulated as a form of symbolic or social capital. For Barthes, writing is not the expression of a singular authorial subject but rather "a space of many dimensions, in which are wedded and contested various kinds of writing, no one of which is original" (Barthes 1977: 146). The text, he argues, "is a tissue of citations, resulting from the thousand sources of culture" rather than a line of words releasing a single meaning emanating from a kind of Author-God (Barthes 1977: 146).

What Barthes described metaphorically as the condition of all textuality, the LLM generates computationally. When we prompt an LLM to help draft a section of scholarly prose, what we receive back is precisely this assemblage: fragments and patterns extracted from thousands of texts, recombined according to statistical probabilities into something that appears novel and coherent. However, this computational generation is itself uneven, revealing important limits to statistical recombination.


Figure 1: Comparison of human (blue) vs. AI (red) "jagged" intelligence profiles.
Source: https://substack.com/@tomaspueyo/note/c-182052822


Karpathy describes the "jagged frontier" of LLM capabilities, which I think is helpful in showing that these systems are both "a genius polymath and a confused and cognitively challenged grade schooler" (Karpathy 2025). They excel in domains where patterns are densely represented in training data whilst failing at tasks that seem trivial to humans (see figures 1 and 2). This jaggedness is not a temporary limitation to be overcome through better training but rather reveals something about how AIs currently operate. The unevenness shows us that aspects of scholarly writing might be reduced to statistical pattern-matching (far more than we might wish to admit) but that other elements will resist this reduction.[7]

Figure 2: Comparison of human (blue) vs. AI intelligence (red) profiles,
showing jagged capabilities across different domains. (Karpathy 2025)

Nonetheless, LLMs are disrupting this symbolic economy by making the very notion of discrete intellectual property incoherent. When ideas have been distributed across the billions of parameters that make up the model, and the text is generated through statistical recombination rather than individual creation, the question of "who owns this idea?" becomes difficult to answer (although cases are currently making their way through the courts trying to settle this question). This is what causes anxiety for academics and problems ahead for the institutional structures that they depend on.

Flickering Signifiers

We can look at Hayles's concept of "flickering signifiers", which extends Barthes's work, to help connect these issues to the specific materiality of LLM-generated text. Where traditional literary theory, following Derrida, focused on "floating signifiers" within a dialectic of presence and absence, Hayles argues that digital textuality operates according to a different logic: "Foregrounding pattern and randomness, information technologies operate within a realm in which the signifier is opened to a rich internal play of difference" (Hayles 1999: 31). Each time we prompt the model, we receive a different configuration of patterns, a different assemblage of a thousand sources of culture. The text, as it were, flickers between states, never settling into the kind of material fixity that traditional citation practices rely on. The LLM produces what Hayles calls "transient patterns" that "evoke and embody" previous texts without ever being identical to them (Hayles 1999: 47).

This flickering quality is not incidental but essential to how LLMs operate. The model does not contain "Smith (1975)" as a stable text. Instead, it contains patterns distilled from Smith's work that, when activated under certain conditions, produce outputs that may resemble Smith's arguments. But these patterns are perpetually in flux, recombining with other patterns, producing endless variations.

I argue that the process of diffusionisation operates through this shift from presence/absence to pattern/randomness. Through vector representation and latent space manipulation, diffusionisation dissolves the stable relationships between texts and their origins that grammatisation maintained. Knowledge and cultural production become subject to probabilistic dissolution where Smith's arguments are not simply encoded or reproduced but transformed into mathematical abstractions that can generate infinite synthetic variations (Berry 2025).

The provenance problem arises because we continue to treat this output as though it should conform to a model of authorship that Barthes declared dead half a century ago and that the Inversion is rendering confused and confusing. We still expect to be able to trace lines of influence, to identify origins, to assign credit to authorial subjects. But the LLM operates according to a different logic entirely.

Hayles's analysis of cognition helps us to understand this point. The posthuman subject, she argues, is characterised by "a distributed cognition located in disparate parts that may be in only tenuous communication with one another" (Hayles 1999: 3-4). When we use an LLM to draft text, we are enacting this kind of distributed cognition. The boundaries between our own thinking and the model's output become porous and unstable. We might not, therefore, be able to distinguish which ideas originated with us and which were suggested by the model, precisely because both are drawing on overlapping bodies of textual patterns.

More recently, Hayles has described distributed cognition in terms of cognitive assemblages which are "collectivities comprising humans, computational media, and electromechanical systems, through which information, interpretations, and meanings circulate" (Hayles 2025: 6). We might say that when a scholar uses an LLM to draft a paper, they participate in a cognitive assemblage where agency is distributed across multiple actors, not all of them human. The text that emerges is created by the assemblage.

The anxiety around LLMs might then be understood as resistance to what Barthes calls "the removal of the Author" which "utterly transforms the modern text" (Barthes 1977: 146). If we accept that the text is always already a tissue of quotations, that complete attribution is impossible, and that authorship is a construct rather than a reality, then the provenance problem vanishes. What remains is not an ethical crisis but a question of recognition: how do we reorganise systems of academic credit once the mythology of individual authorship can no longer be sustained and the Inversion has made synthetic generation the default condition?

Barthes offers a radical solution, arguing that "Once the Author is gone, the claim to 'decipher' a text becomes quite futile" (Barthes 1977: 147). The entire apparatus of academic criticism, which seeks to discover the Author beneath the work, to explain the text through recourse to its creator's intentions, biography, or psychology, becomes obsolete. The LLM produces text that has no Author, a text that cannot be deciphered by reference to an intentional subject. However, I would argue that this does not make such text meaningless but rather opens it to different modes of engagement. As Barthes writes, "In the multiplicity of writing, everything is to be disentangled, nothing deciphered" (Barthes 1977: 147). The question becomes not "what did the Author mean?" but "what can be done with this text?"

Hayles terms this moment technosymbiosis, "the deep symbiosis with computational media" (Hayles 2025: 14). She emphasises that "cognitive assemblages operate in contemporary society" in ways that "refuse the assumption that humans are primary in these arrangements" (Hayles 2025: 39). Agency is distributed throughout these assemblages, and the assumption of autonomous human authorship becomes increasingly difficult to maintain. She argues that this transformation is irreversible and "our reliance on cognitive assemblages and computational media has progressed so far that there is no going back. The only feasible options are to go forward from where we are now" (Hayles 2025: 8). Indeed, treating provenance anxiety as a temporary aberration to be corrected through better tools or stricter policies misdiagnoses our situation. We are not facing a momentary disruption of normal scholarly practices but what I think is a permanent transformation in how knowledge is produced under conditions of technosymbiosis. LLMs simply make visible what was always the case, that thought is distributed, that writing is collaborative, that the boundaries between self and other, human and machine, are permeable and constantly renegotiated. 

Conclusion

The provenance problem identified in relation to LLM use in scholarly writing is real, but it is not primarily a problem of ethics or attribution. Rather, it represents a crisis in the mythology of academic authorship, a controversy generated by the clash of romantic ideologies of individual creativity with the material realities of how knowledge has always been produced. I have argued above that LLMs do not introduce a break in scholarly practices but rather reveal and accelerate processes that were always operative: the citational nature of all writing, the construction of authorship, the assemblage-like character of intellectual production, and the ways in which information has lost its body to become patterns available for endless recombination.

The anxiety surrounding LLMs is therefore understandable but misplaced. We are not facing the corruption of practices of academic attribution but the exposure of attribution's limits and functions. The elaborate system of academic citation has never, in reality, provided complete chains of intellectual genealogy. What LLMs show is that this claim can no longer be sustained in its current form.

Barthes shows that we might move from asking "who originated this idea?" to asking "what can be done with this text?", from obsessing over "ownership" of ideas to exploring destinations, from treating texts as stable presences to understanding them as flickering signifiers in continual circulation and recombination. However, these changes raise important questions. The university depends on mechanisms for distributing recognition and resources that presuppose identifiable individual authors. The labour market for academics, the structures of peer review, and the systems of research assessment are predicated on the author-function that the LLM renders difficult, if not impossible, to sustain. Moving beyond this would require not simply new ethical guidelines for LLM disclosure but a rethinking of how knowledge is valued, how intellectual labour is recognised, and how academic careers are built. We might not be ready for this but, as Barthes argued, to give writing its future, "the birth of the reader must be ransomed by the death of the Author" (Barthes 1977: 148).

What seems clear is that treating the provenance problem as a technical issue to be solved through better attribution tools or stricter disclosure requirements misunderstands what is at stake. The LLM has not created a new problem so much as it has made an old problem impossible to ignore. The question is not whether we can return to a world where information had a body and authors had stable identities, but what kind of postdigital scholarship we will build in the world where both have irrevocably flickered into pattern, and where diffusionisation has become the dominant mode of textual production through probabilistic media (Berry 2015). We will need to think critically about how cultural forms emerge through statistical possibility, acknowledging that each scholarly text produced with or by LLMs exists as one materialisation from a vast field of potential configurations. Provenance anxiety thus reveals itself not as an ethical failure to be corrected but as an ontological transformation to be understood and engaged.


** Headline image generated using Google Gemini Pro. December 2025. The prompt used was: "Create a high-quality, realistic, and colorful 3D infographic diagram illustrating the "Jagged Frontier" of AI capabilities, featuring a left-to-right progression where "Tasks of a human job" are depicted as irregular, jagged blue polygons to represent human variability, while "Tasks an AI can do" are rendered as vibrant coral pink shapes. The visual narrative should evolve from a small pink form inside a blue polygon (labeled "The AI is a fun toy") to a growing overlapping shape ("The AI is helping me"), culminating in a large, highly irregular, and spiky "jagged frontier" that drastically overlaps the human shape with distinct peaks and valleys ("We are here"), and finally expanding into a massive form nearly swallowing the blue shape ("AGI"), all presented in a modern, professional style with depth, soft drop shadows, and clean typography" Due to the probabilistic way in which these images are generated, future images generated using this prompt are unlikely to be the same as this version. 

Notes

[1] Ironically, the proposal Earp et al (2025) give for a public declaration of LLM use in the acknowledgements (at the end of their paper) does not solve this provenance anxiety (see also Porsdam Mann et al 2024). As they state, "During the drafting of this paper, GPT-5 and Claude Sonnet 4.5 were used to help edit and shorten a longer draft written by the authors. The authors then further edited and refined the text by hand. Such use and this acknowledgement adhere to proposed ethical guidelines for generative AI use and acknowledgement in academic research. Each author has made a substantial contribution to the work, which has been thoroughly vetted for accuracy, and assumes responsibility for the integrity of their contributions". Curiously, they advocate the use of LLMs (or other AI-powered tools like Scite or Elicit) to "check whether AI-generated passages bear substantial similarity to existing scholarship" (Earp et al 2025). Using AI to verify AI demonstrates the possible infinite regression at the heart of the provenance problem as it seems to be LLMs all the way down, a rather counterintuitive way to address their provenance anxiety.

[2] Automation implies the wholesale generation of text by LLMs, whilst augmentation suggests that LLMs are used as tools that enhance human capabilities. Each has implications for authorship, agency, and intellectual labour. More significantly, Earp et al's (2025) focus on LLMs for text generation overlooks what may be the most transformative aspect of AI in academia: the use of AI methods for conducting research itself. AI techniques for pattern recognition, large-scale text analysis, data processing, and hypothesis generation represent methodological interventions that could reshape how knowledge is produced, not merely how it is written up. By framing the problem solely in terms of writing and attribution, Earp et al may miss the more profound epistemological shifts occurring when AI becomes key to research methods rather than simply a writing assistant.

[3] To be fair to Earp et al (2025), in their conclusion they do note, "Perhaps we should be moving toward a view of scholarship that is more collaborative and diffuse by default, involving complex assemblages of humans and machines". But this conclusion sits uneasily with their paper's main focus on disclosure and attribution, suggesting a tension in their argument.

[4] A pragmatic response to these issues might be to develop threshold-based attribution. Rather than requiring disclosure of every computational tool (creating unnecessary writing bureaucracy) or treating all LLM use identically, we might distinguish significant from trivial computational mediation through thresholds. A 30% threshold could mark tools requiring a note (notable augmentation), whilst a 60% threshold would require a declaration (major augmentation). Tools below 30% (word processors' autocorrect, citation managers, brief LLM queries or brainstorming) would be part of standard computational infrastructure requiring no individual declaration. This might be implemented with a nested attribution scheme that also locates individual authors within departments and institutions, for example:

AUTHOR(S): Sarah Sein, Marcus Weber

DEPARTMENT: Digital Studies, University of Sussex

COMPUTATIONAL TOOLS:

  NOTABLE (>30%):
    Python 3.11: NLP analysis
    Gemini Pro: initial brainstorming

  MAJOR (>60%):
    N/A

This helps make visible the infrastructural conditions enabling scholarship whilst pragmatically preserving the author-function, where needed, for career progression and intellectual accountability. I intend to explore this idea further in a forthcoming article.
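
As a minimal sketch, such a record could also be expressed in machine-readable form; the field names and share estimates below are illustrative assumptions rather than a proposed standard:

    import json

    # Hypothetical machine-readable rendering of the threshold-based
    # attribution sketch above. Field names and the 30%/60% thresholds
    # are illustrative assumptions, not a proposed standard.

    NOTABLE_THRESHOLD = 0.30  # above this, a note is required
    MAJOR_THRESHOLD = 0.60    # above this, a declaration is required

    attribution = {
        "authors": ["Sarah Sein", "Marcus Weber"],
        "department": "Digital Studies, University of Sussex",
        "computational_tools": [
            {"tool": "Python 3.11", "use": "NLP analysis", "share": 0.35},
            {"tool": "Gemini Pro", "use": "initial brainstorming", "share": 0.32},
            {"tool": "Zotero", "use": "citation management", "share": 0.05},
        ],
    }

    def classify(share):
        """Map a tool's estimated share of mediation onto a disclosure tier."""
        if share >= MAJOR_THRESHOLD:
            return "MAJOR (declaration required)"
        if share >= NOTABLE_THRESHOLD:
            return "NOTABLE (note required)"
        return "infrastructure (no individual declaration)"

    for entry in attribution["computational_tools"]:
        print(entry["tool"], "->", classify(entry["share"]))

    print(json.dumps(attribution, indent=2))

One advantage of a structured record is that journals or repositories could aggregate and compare such disclosures rather than relying on free-text acknowledgements.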

[5] As Hayles (2025: 148) notes, "In addition, as Rita Raley (2022) has pointed out, the productions of GPT-3 are unrepeatable and hence unverifiable. If the same prompt is repeated for GPT-3, it will generate a response different from the one it did the first time. Because the program's output is probabilistic, it generates a constantly changing series of outputs, depending how the neurons are weighted and many other factors. Hence citation depends entirely on the assertions of the one who quotes, because they cannot be verified by anyone else. The resulting uncertainties destabilize the whole enterprise of literary criticism, which traditionally has treated exact quotation and citation as the sine qua non for acceptable work".

[6] As Foucault writes, "Although, since the eighteenth century, the author has played the role of the regulator of the fictive, a role quite characteristic of our era of industrial and bourgeois society, of individualism and private property, still, given the historical modifications that are taking place, it does not seem necessary that the author-function remain constant in form, complexity, and even in existence. I think that, as our society changes, at the very moment when it is in the process of changing, the author-function will disappear, and in such a manner that fiction and its polysemic texts will once again function according to another mode, but still with a system of constraint—one which will no longer be the author, but which will have to be determined or, perhaps, experienced" (Foucault 1979: 159-160).

[7] We might also see a radical shift in writing practice in a transition from prompt engineering to context engineering. Early LLM use focused on crafting the perfect prompt: the right question, the right framing, the right constraints within a single written interaction or request. Increasingly, however, sophisticated use involves engineering the context itself. This means providing an LLM with background materials, drafts, stylistic examples, and research materials that shape its probabilistic space before generation begins (see the sketch below). This is not merely a technical development but, I would argue, a step-change in the writer-LLM relationship. Context engineering reorders the boundary between "writing" and "prompting", creating what Hayles would recognise as a cognitive assemblage operating through continuous environmental manipulation rather than prompts. The author no longer simply asks the LLM to write but constructs an informational milieu within which writing takes place. This represents a further challenge to the author-function, and moves the question from "who wrote this?" to "who designed the context from which this emerged?" Under the conditions of the Inversion, I think it is highly likely that this form of contextual authorship may become the primary mode of writing. Its implications for scholarly writing will be equally profound.
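
As a minimal sketch of the difference (all names, materials, and the generate() stand-in are hypothetical; real interfaces vary):

    # Sketch of context engineering: rather than crafting one perfect prompt,
    # the writer assembles an informational milieu (background materials,
    # stylistic examples, a draft) that shapes the model's probabilistic
    # space before generation. generate() stands in for whatever LLM
    # interface is actually used and is not implemented here.

    def build_context(background, style_examples, draft, instruction):
        """Assemble a context window from curated materials plus a brief instruction."""
        sections = [
            "## Background materials",
            *background,
            "## Stylistic examples",
            *style_examples,
            "## Current draft",
            draft,
            "## Instruction",
            instruction,
        ]
        return "\n\n".join(sections)

    context = build_context(
        background=["Notes on Barthes, 'The Death of the Author' (1977)."],
        style_examples=["A paragraph written in the author's own voice."],
        draft="LLMs do not introduce a radical break in scholarly writing...",
        instruction="Extend the draft by one paragraph in the same register.",
    )

    # generate(context) would be called here: the authorial work has shifted
    # from the instruction itself to the construction of the context.
    print(context)

Note that the prompt (the "Instruction" section) is now the smallest part of the input; the bulk of the authorial decision-making lies in what is included, ordered, and excluded from the context.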


Bibliography

Barthes, R. (1977) The Death of the Author, In Image-Music-Text, Fontana Press, pp. 142-148.

Berry, D.M. (2015) ‘The Postdigital Constellation’, in D.M. Berry and M. Dieter (eds) Postdigital Aesthetics: Art, Computation and Design. Palgrave Macmillan, pp. 44–57. https://doi.org/10.1057/9781137437204_4.

Berry, D.M. (2025) Synthetic Media and Computational Capitalism: Towards a Critical Theory of Artificial Intelligence, AI & Society, 40, pp. 5257-5269.

Berry, D.M. (2025) 'Probabilistic Media and the Inversion', stunlaw, 23 March. Available at: https://stunlaw.blogspot.com/2025/03/probabilistic-media-and-inversion.html 

Calvino, I. (1986) Cybernetics and Ghosts, in The Uses of Literature. Harcourt Brace, pp. 3-27.

Earp, B.D., Yuan, H., Koplin, J. and Porsdam Mann, S. (2025) ‘LLM use in scholarly writing poses a provenance problem’, Nature Machine Intelligence, pp. 1–2. https://doi.org/10.1038/s42256-025-01159-8.

Foucault, M. (1979) What is an Author?, in Harari, J. (ed.) Textual Strategies: Perspectives in Post-Structuralist Criticism, Cornell University Press, pp. 141-160.

Hayles, N.K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.

Hayles, N.K. (2025) Bacteria to AI: Human Futures with Our Nonhuman Symbionts. University of Chicago Press.

Karpathy, A. (2025) Year in Review 2025, bearblog, https://karpathy.bearblog.dev/year-in-review-2025/

Porsdam Mann, S., Vazirani, A.A., Aboy, M., Earp, B.D., Minssen, T., Cohen, I.G. and Savulescu, J. (2024) Guidelines for ethical use and acknowledgement of large language models in academic writing, Nature Machine Intelligence, 6, pp. 1272–1274. https://doi.org/10.1038/s42256-024-00922-7

