The Vector Medium
David M. Berry
In The Philosophy of Software (Berry 2011: 10), drawing on Kittler's (1999) argument that the digital causes an implosion of previously distinct media forms, I argued that code functions as a super-medium, a medium that does not merely contain the fragmented media of the twentieth century but radically reshapes and transforms them into a new unity. Manovich made a similar argument: software simulates all prior media whilst adding properties native to computation itself, constituting what he calls a metamedium (Manovich 2013).[1] This followed from Negroponte's claim that we were in the middle of a transition from atoms to bits (Negroponte 1996).
The vector medium operates through statistical compression.[4] Cultural artefacts, whether they are texts, images, or sounds, are compressed into dense vector representations, their media-specific properties stripped away. The operation is not encoding and decoding but compression and generation. What survives is not the signal but its statistical regularities, the patterns the training process has determined to be important. You cannot reconstruct the original from the embedding. This is the process I have called diffusionisation, the dissolution of cultural forms into statistical distributions from which no original can be extracted (Berry 2025, 2026a).[5]
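The irreversibility this describes can be sketched numerically. The following is a deliberately crude illustration of my own, not a description of any actual embedding model: a random linear projection stands in for the embedding, compressing a 1,000-dimensional "artefact" into 8 dimensions, and even the best linear reconstruction from those 8 numbers recovers almost none of the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical cultural artefact as a high-dimensional signal
# (a stand-in for a text or image; purely illustrative).
artefact = rng.normal(size=1000)

# A random projection stands in for an embedding model:
# 1,000 dimensions compressed to 8.
W = rng.normal(size=(8, 1000)) / np.sqrt(1000)
embedding = W @ artefact

# The best linear "reconstruction" from the embedding (via the
# Moore-Penrose pseudo-inverse) recovers only a small fraction
# of the original signal; the rest is irretrievably discarded.
reconstruction = np.linalg.pinv(W) @ embedding
residual = np.linalg.norm(artefact - reconstruction) / np.linalg.norm(artefact)
print(f"relative reconstruction error: {residual:.2f}")
```

What survives in the 8 coordinates is whatever structure the projection happened to preserve, which is the toy analogue of the statistical regularities preserved by training.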
If we look at the level of vector space, the vector medium can be understood as an epistemological transformation. It dissolves media specificity by converting all cultural forms into geometric positions within a shared high-dimensional space. What was once a photograph, a recording, or a text becomes a coordinate, and the coordinate's relationship to other coordinates is all that remains of what it once meant. The vector medium, at this level, replaces a form's definition with location and proximity. For example, if you were to embed "Hegelian freedom" and "Marxian freedom", they would register as neighbouring coordinates despite being philosophical antagonists. The cosine distance is small, but the conceptual distance is enormous. We could say that the vector medium encodes the topic whilst flattening the argument that constituted the topic's meaning.
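The point about cosine distance can be made concrete with a toy calculation. The vectors below are hypothetical, hand-made stand-ins for embeddings, not the outputs of any real model: they share their "topical" dimensions and differ only in one "argumentative" dimension, yet register as near-identical by cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy embeddings: the first three dimensions encode
# the shared topic ("freedom"), the last dimension the opposed
# philosophical positions.
hegelian_freedom = np.array([0.9, 0.8, 0.7,  0.1])
marxian_freedom  = np.array([0.9, 0.8, 0.7, -0.1])

sim = cosine_similarity(hegelian_freedom, marxian_freedom)
print(f"cosine similarity: {sim:.3f}")
```

The similarity comes out close to 1: the topical overlap dominates the metric, and the dimension carrying the antagonism contributes almost nothing. That is the flattening in miniature.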
Turning to the level of the manifold, the vector medium can be understood as an infrastructure. The manifold is not an abstract mathematical space but a material computational object, produced by specific labour, owned by particular corporations, shaped by specific interests, and therefore accessed as a commodity. It has a geography: dense regions mark where investment and data extraction have concentrated, sparse regions where capital has not found it profitable to encode. The vector medium, at this level, is not merely a transformation of meaning but a made object with an ownership structure and a labour history. In this sense, the unevenness of the manifold maps the unevenness of capital.
Finally, at the level of theory space, the vector medium can be understood as a regime of selection. For example, the dynamics of training, such as gradient descent on a loss landscape, determine which features of the cultural material survive statistical compression and which are suppressed as noise. This is where the mechanism of the medium's ideological operation can become visible. Why does the vector medium dissolve some forms of knowledge more completely than others? Because the dynamics of relevance that govern training treat certain patterns as signal and others as noise, and that determination is contingent on the training data, the loss function, and the reward model. A health worker's clinical observations, an activist's ecological knowledge, or an oral tradition that was never digitised are suppressed by a lossy compression whose criteria of relevance were never subjected to critical scrutiny. Theory space is the level at which critique is able to examine the specific mechanisms through which the vector medium produces its inclusions and exclusions.[6]
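This selective dynamic can be illustrated with a deliberately minimal sketch, a toy of my own rather than a description of any production system: a one-parameter model trained by gradient descent on a corpus in which five per cent of the examples associate the same feature with the opposite target. The loss function, averaged over the whole corpus, treats the majority pattern as signal and the minority pattern as noise.

```python
import numpy as np

# Toy corpus: 950 "majority" examples and 50 "minority" examples
# that associate the same feature with the opposite target.
n_major, n_minor = 950, 50
x = np.ones(n_major + n_minor)
y = np.concatenate([np.ones(n_major), -np.ones(n_minor)])

# One-parameter model y_hat = w * x, trained by gradient descent
# on the mean squared error over the whole corpus.
w = 0.0
for _ in range(500):
    grad = 2.0 * np.mean((w * x - y) * x)
    w -= 0.1 * grad

# The learned parameter settles at the corpus mean (0.9): the
# majority pattern is encoded, the minority pattern erased.
majority_error = float(np.mean((w * x[:n_major] - y[:n_major]) ** 2))
minority_error = float(np.mean((w * x[n_major:] - y[n_major:]) ** 2))
print(f"w = {w:.2f}, majority error = {majority_error:.2f}, "
      f"minority error = {minority_error:.2f}")
```

Nothing in the training procedure is malicious; the exclusion is simply what minimising average loss over an uneven corpus produces, which is why the criteria of relevance themselves need critical scrutiny.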
The Open and the Closed
Binary encoding, TCP/IP, HTML, and Unicode were common protocols, accessible to anyone with the hardware to implement them. The encoding layer can be understood as a commons. Although ownership operated at the levels of content (i.e. through copyright and patents), infrastructure (i.e. networks, servers), and tools (i.e. software licences), the digital medium itself, the capacity to represent information as discrete numerical values, was not proprietary. Anyone could encode bits. The digital medium was built largely on open standards.
In contrast, the vector medium is owned and controlled, usually by large tech companies. The manifold, the trained weights, the embedding space are produced by corporations at extremely high cost (training runs are now measured in billions of dollars), owned as trade secrets or distributed under restrictive licences, and accessed as commodities through per-token pricing. The vector medium itself is proprietary. This is not ownership of what passes through the medium but ownership of the medium as such.
The step from the digital medium to the vector medium is therefore the step from a commons at the encoding level to enclosure at the encoding level. Not just the content, not just the infrastructure, but the substrate of meaning itself becomes a commodity. When I argued in The Philosophy of Software that code functions as a super-medium, I was describing a transformation of form. Code reshaped media but the reshaping operated through open standards that capital could exploit without owning. The vector medium represents something different, the medium as commodity.
The encoding substrate is no longer infrastructure through which commodities pass. It is itself the commodity. Marx distinguished between formal subsumption, in which capital subordinates existing labour processes to valorisation whilst leaving their technical character intact, and real subsumption, in which capital transforms the labour process itself. The vector medium does both. The scraping of the internet for training data could be said to be its formal subsumption, as existing cultural production is appropriated as it stands. The embedding of that material into a manifold is an example of real subsumption, where language and image are reconstituted at the level of their internal organisation, from sequential symbolic expression into geometric coordinates within a proprietary vector space. This is where real subsumption becomes geometric (Berry 2026a).[9]
The manifold contradiction I referred to above follows directly from this. The vector medium depends on the continued capture of human meaning, text, images, cultural forms, to maintain and extend its geometric coverage. But it degrades the conditions under which that meaning is produced, by displacing creative labour and by dissolving the temporal density of cultural work into instantaneous generation. It pre-empts the interval in which critical reflection forms by consuming the epistemic commons. I think the question is whether the vector medium feeds on what it destroys. The trajectory is ecological as much as economic. Shumailov et al. (2024) have demonstrated that models trained on their own outputs can undergo model collapse, a progressive narrowing of the distribution that eventually produces broken AI systems. It does seem that the manifold needs fresh human meaning-production; without it, the manifold begins to consume itself.[10] This is, I think, not a bug in the implementation but a structural feature of a proprietary medium whose economic logic requires continuous expansion of the manifold through scaling laws whilst undermining the human practices that populate it with anything worth encoding.
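The recursive dynamic that Shumailov et al. (2024) describe can be caricatured in a few lines. The following is a toy simulation of my own, far simpler than their experiments: a Gaussian repeatedly refitted to small samples of its own output loses almost all of its variance within a couple of hundred generations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data with unit variance.
data = rng.normal(loc=0.0, scale=1.0, size=10)
stds = []

# Each generation fits a Gaussian to the previous generation's
# output and then samples from that fit: a model trained
# recursively on its own generated data.
for generation in range(200):
    mu, sigma = data.mean(), data.std()
    stds.append(sigma)
    data = rng.normal(loc=mu, scale=sigma, size=10)

# The estimated spread collapses towards zero over generations.
print(f"std, generation 0:   {stds[0]:.4f}")
print(f"std, generation 199: {stds[-1]:.6f}")
```

Each round of fitting loses a little of the tails, and the losses compound: the distribution narrows until almost nothing of the original variety remains, which is the caricatured form of the manifold consuming itself.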
Conclusion
The vector medium names what has changed. Not that we have new kinds of media, though we do, but that we have a new kind of medium, one that is proprietary and owned. The central question is not what the vector medium does to different media forms, though those are important questions, but what it does as a form of intermediation. Whose labour produces it? Whose capital owns it? What temporal, epistemic, and phenomenological consequences flow from its operation? I believe this signals that we are moving from platform capitalism to vector capitalism.
Images generated using Google Nano Banana 2 in March 2026.
Notes
[1] Manovich (2013) extends this idea through what he calls "media hybridisation," the capacity of software to combine formerly discrete media operations. This was an important step beyond simple convergence. But hybridisation still operates on recognisable media types. The vector medium does something more radical: it dissolves the types themselves into a common geometry.
[2] I am grateful to Leo Impett and Fabian Offert for sending me a pre-publication copy of their forthcoming Vector Media (Impett and Offert 2026) after they read my vector theory article. Their book offers a similar diagnosis, overlapping with the one I develop here, of how vectors transform the way computation handles media and of why existing frameworks may not be adequate to it. Their intellectual history of embedding, from Grassmann through Barlow to CLIP, is important, and their identification of a structural homology between Hinton's distributed representations and Marx's real abstraction is a particularly striking find.
[3] Impett and Offert (2026) note the incompatibility of different foundation models' embedding spaces. If GPT-4, Claude, and Llama produce geometries in which the same concept occupies different coordinates, it might seem that there are competing media forms. However, I argue that the plurality of incompatible embedding spaces does not undermine the claim I am making for a vector medium. Incompatible digital formats, such as JPEG against PNG or MP3 against FLAC, did not make the digital medium plural. The vector medium names the computational regime, not the coordinate system. All these models operate through the same operations, e.g. embedding, attention, cosine similarity, gradient descent, and their outputs are commensurable even when their geometries are not. The incompatibility, I argue, is a feature of the political economy, not evidence of a plurality of media forms as such. If anything, the proliferation of incompatible manifolds makes the political situation worse, not better, because it forecloses even the possibility of adjudicating between competing geometries of meaning. There may be no meta-space from which to compare them.
[4] Statistical compression as I use it here is a theoretical abstraction, not a technical description. Actual neural network architectures complicate it considerably. In diffusion models the U-Net includes skip connections that pass information around any bottleneck, and generation is not simply the decoding of a stable point in a latent space but a thousand-step denoising trajectory, a point becoming a path (see Schaerf et al. 2025). The discussion in this article operates at the level of semantic commensurability rather than of specific computational functions, because these architectures still operate through vectorial alignment in a shared geometric space. The vector medium names this commensurability as a media-theoretical fact, not the specific computational pathway by which any given architecture achieves it. I am grateful to Leo Impett for comments on this point.
[5] The process should be distinguished from the technical term "diffusion models" used in image generation. I am naming something broader, a process of ontological thinning in which cultural forms lose their material specificity and their embeddedness in particular contexts of production and reception, as they are converted into statistical patterns distributed over the weights of a manifold. This concept extends Stiegler's (2010) grammatisation into the vectorial, describing the dissolution of these media forms into statistical distributions rather than the discretisation of experience into the digital.
[6] The three-level framework (vector space / manifold / theory space) and the taxonomy of silence will be outlined further in a forthcoming article. The concept of theory space draws on renormalisation group (RG) theory from physics. The contingency of training, and the fact that different training regimes navigate theory space differently, is, I argue, what opens the space for political contestation.
[7] The vector medium transforms meaning temporally, by dissolving the sedimented past into a geometric present (diffusionisation as a temporal operation), compressing the interval between question and answer so that the duration in which critical thought forms is pre-empted, and eliminating the living friction of learning, the productive difficulty through which cognitive capacity is built. Every reification is a forgetting, as Adorno and Horkheimer argued, and the vector medium's forgetting is, at its deepest level, a forgetting of time. We might say that the manifold converts temporal depth into spatial position, history into geometry.
[8] I identify four dimensions of this contradiction in labour, meaning, extraction, and a reproductive crisis.
[9] The idea that real subsumption becomes geometric is developed in Berry (2026a). The aim is to show that this is not metaphorical: the transformation of symbolic expression into vector coordinates is a literal reorganisation of the material structure of language, not merely a change in how it is accessed or distributed.
[10] The degrading is not just economic but structural as the manifold cannot sustain its own conditions of production.
Bibliography
Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, Palgrave Macmillan.
Berry, D. M. (2014) Critical Theory and the Digital. Bloomsbury.
Berry, D. M. (2025) 'Synthetic media and computational capitalism: towards a critical theory of artificial intelligence', AI & Society. Available at: https://doi.org/10.1007/s00146-025-02265-2.
Berry, D. M. (2026a) 'Vector Theory', Stunlaw. Available at: https://stunlaw.blogspot.com/2026/02/vector-theory.html.
Berry, D. M. (2026b) 'Generation Vector', Stunlaw. Available at: https://stunlaw.blogspot.com/2026/02/generation-vector.html.
Impett, L. and Offert, F. (2026) Vector Media. University of Minnesota Press.
Kittler, F. (1999) Gramophone, Film, Typewriter. Stanford University Press.
Manovich, L. (2013) Software Takes Command. Bloomsbury.
Mehta, P. and Schwab, D. J. (2014) 'An exact mapping between the variational renormalization group and deep learning'. Available at: https://arxiv.org/abs/1410.3831.
Negroponte, N. (1996) Being Digital. Vintage.
Schaerf, L., Alfarano, A., Silvestri, F. and Impett, L. (2025) 'Training-Free Style and Content Transfer by Leveraging U-Net Skip Connections in Stable Diffusion'. Available at: https://doi.org/10.48550/arXiv.2501.14524.
Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N. and Anderson, R. (2024) 'AI models collapse when trained on recursively generated data', Nature, 631, pp. 755-759.
Stiegler, B. (2010) For a New Critique of Political Economy. Polity.