What is Vector Space?
David M. Berry
"The abstraction ruling over thought is not thought's own doing but is the effect of the society in which it takes place."
Alfred Sohn-Rethel
*Figure 1: The vector space*
Technical accounts of artificial intelligence sometimes describe vector space as though it were a mathematical a priori, already existent, like a ready-made object. It is often described as a high-dimensional space, complete, continuous, equipped with an inner product that defines angles and distances between any two points. This description is borrowed from linear algebra, where vector spaces are abstract structures. In this register, vector space is a space of real numbers, which is to say, a space of infinite precision. Any point can be specified exactly. Any distance can be measured without error. The space is mathematically smooth, continuous and homogeneous, the same everywhere, with no grain, no texture, no material substrate. This matters because the gap between the mathematics and the material is where ideology enters.
In reality, of course, this abstract space does not exist unless it is created. The vector spaces in which language models actually operate are not infinite; they are limited by their materiality. The standard training format, bfloat16, uses 16 bits to represent each number: one sign bit, 8 bits for the exponent (which gives the number its range) and 7 bits for the mantissa (which gives it precision). Seven bits of mantissa give roughly two to three decimal digits of precision. The space is not smooth at all: it is a grid, and the grid is coarse (see figure 1).[1]
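The coarseness of this grid can be seen directly. The following sketch is my own illustration, not drawn from any AI codebase: it emulates bfloat16 by truncating a float32 to its top 16 bits, and shows two numbers that differ in the third decimal place collapsing onto the same grid point.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate bfloat16 by truncating a float32 to its top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits).
    Real hardware rounds to nearest; truncation is a simplification."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# Two numbers that differ in the third decimal place land on the
# same grid point: their difference is below the grid step near 1.0.
print(to_bfloat16(1.0))                         # 1.0
print(to_bfloat16(1.0039))                      # 1.0
print(to_bfloat16(1.0) == to_bfloat16(1.0039))  # True
```

Around 1.0 the grid step is 2**-7, roughly 0.008, so only about two to three decimal digits survive, as noted above.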
The gap between the mathematical object and its material instantiation is also important. The manifold's internal geometry has never used the precision that the mathematical formalism implies (Berry 2026a).[2] In this article I ask what follows from this for the concept of vector space itself. If vector space is the first level of the framework I have outlined, the mathematical form of the transformation, the epistemological regime in which meaning becomes position, then the properties of that mathematical form matter (Berry 2026c). Current implementations of vector spaces solve an important problem, how to separate representations in high-dimensional space (what we might call their extensive value), while tending to neglect another, how finely those representations can be distinguished within it (what we might correspondingly call their intensive value). Indeed, perhaps the most fascinating property of the material instantiation of vector space, what I call the manifold, is that it is productively imprecise.[3]
Another Dimension, Another Dimension
The implementation of high dimensionality in vector spaces is not an arbitrary design choice. It solves a specific problem, the separation of entities that are "represented" in the space. In low-dimensional spaces, concepts that are distinct are forced into too close a proximity because there are not enough "directions" in which to place them. For example, a two-dimensional map of the world's languages would collapse entire families onto each other. In contrast, a 768-dimensional space, or a 4096-dimensional one, provides enough room for concepts to occupy positions that do not interfere with each other. The key, however, is not to set so many dimensions that the space becomes so vast that representations cease to interrelate at all. In other words, the vector space is a bit like the story of Goldilocks and the three bears: it cannot be too small, nor too large (i.e. sparse, in AI circles), rather it must be just right for the training to be embedded within it.
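The claim that added dimensions buy separation can be sketched numerically. This is an illustrative experiment of my own, not a description of any production system: it samples random unit vectors and measures the largest cosine similarity between any pair, a rough proxy for how crowded the space is.

```python
import math, random

def max_offdiag_cosine(dim: int, n: int = 50, seed: int = 0) -> float:
    """Largest |cosine similarity| among n random unit vectors in
    `dim` dimensions: a rough proxy for how crowded the space is."""
    rng = random.Random(seed)
    vecs = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        vecs.append([x / norm for x in v])
    return max(
        abs(sum(x * y for x, y in zip(u, w)))
        for i, u in enumerate(vecs)
        for w in vecs[i + 1:]
    )

# Random directions crowd each other in 2 dimensions but are
# nearly orthogonal in hundreds of dimensions:
for d in (2, 64, 768):
    print(d, round(max_offdiag_cosine(d), 2))
```

In two dimensions some pair of the fifty vectors is nearly collinear; in hundreds of dimensions random directions are close to orthogonal, leaving room for representations that do not interfere.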
Greater dimensionality (i.e. more directions in the space) creates a more "expressive" global topology. In other words, the overall shape of the larger space allows for the separation of major clusters and the capacity to keep individual things distinct. Current AI architectures exploit this design aggressively. They over-provision dimensions relative to the intrinsic dimensionality of the data; in other words, a larger space matters more than a more precise space for providing the degrees of freedom needed to describe the structure of language. The gap between this intrinsic and the "ambient" (i.e. empty) space consequently creates representational redundancy. It is thought that this may help with robustness, in the same way that biological neural systems use distributed, redundant coding, but it is geometrically wasteful.[4]
*Figure 2: The manifold sitting in vector space*
But dimensionality also has a cost when the vector space scaling becomes very large. In very high-dimensional spaces neighbourhoods become unintuitive and the notion of meaningful distance starts to break down, because almost everything is approximately equidistant from everything else (a problem that normalisation and cosine similarity mitigate but do not eliminate). The functions of interpolation and generalisation, which depend on the vector space having local structure, become unreliable when the vector space is mostly empty. So vector space provides a room, as it were, but it does not provide a structure; that is instead provided by a manifold which is instantiated into the vector space (see figure 2).
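The distance-concentration effect mentioned above can also be sketched numerically (again an illustration of my own): as dimensionality grows, the spread of pairwise distances between random points shrinks relative to their mean, which is what makes "nearest neighbour" progressively less informative.

```python
import math, random

def distance_spread(dim: int, n: int = 100, seed: int = 0) -> float:
    """Spread of pairwise distances among n random Gaussian points,
    measured as (max - min) / mean."""
    rng = random.Random(seed)
    pts = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    dists = [
        math.dist(p, q)
        for i, p in enumerate(pts)
        for q in pts[i + 1:]
    ]
    mean = sum(dists) / len(dists)
    return (max(dists) - min(dists)) / mean

# The relative spread of distances shrinks as dimensionality grows,
# i.e. everything becomes approximately equidistant:
for d in (2, 64, 1024):
    print(d, round(distance_spread(d), 2))
```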
*Figure 3: An example of how proximity in the manifold encodes semantic relatedness in vector space*
There is a further consequence that connects the dimensionality of vector space to an epistemological problem. In the terminology I use (see Berry 2026c), vector space names the mathematical form of the empty and to-be-filled space. But the mathematical form has properties, dimensionality chief among them, and those properties shape how the vector space functions. For example, a 768-dimensional space and a 4096-dimensional space are different epistemological regimes, not different sizes of the same container (or receptacle). A larger space (i.e. with more dimensions) can hold more non-interfering representations, which means it can contain a more complex manifold, which in turn means it can encode more of the structure of the data that it is trained on.
But it also means the space is more empty, as the manifold within it is thinner relative to the ambient volume, and that the geometry of meaning is more isolated within a geometry of emptiness. Think of an ocean with islands in it: as the ocean grows, the islands become harder to find and harder to travel between. The larger vector space also means that the manifold, thinner because it is spread over more dimensions, becomes more fragile. Additionally, the coarser the "grain" of the vector space (which is dictated by the numerical representation of the vector space, such as bfloat16), the more a vector's position may be perturbed toward regions of the geometry where the learned structure is sparse or unreliable (see figure 2).
One way to understand this is to imagine the manifold as a thin surface of meaningful locations, like a galaxy in the sky, with the ambient space surrounding it, which has no "learned" structure, as the distant blackness between galaxies. Rounding and numerical noise, which are caused by less precision in the underlying number format, push representations away from where training placed them, stretching out the galaxy, to continue the metaphor. The thinner the manifold relative to the ambient volume, the shorter the distance from any point to a region where the geometry offers no structure as such. So if one were to travel around in this "galaxy" of meaning, there is more chance of hitting empty space than actual stored locations. This is where something like semantic collapse occurs, not necessarily a dramatic ejection from the manifold but a drift into territory where the model's representations become unreliable, where it produces not a wrong answer but what we might call a geometrically unmoored one. When a vector is rounded, it does not simply move to a slightly different meaning. It risks moving into a region where the learned geometry has no coherent meaning at all. Not so much McLuhan's (1962) Gutenberg Galaxy, but more like an ultra-diffuse galaxy, like a podcast network, or a social media thread.
Meaning in these vector space based systems is therefore not a solid block. It is closer to a thin ribbon suspended in a vacuum, and we can think of the imprecision as the vibration that shakes representations loose from it. The coarser the "grain", the more the drift toward unreliable regions. The thinner the manifold relative to the empty ambient space, the less room there is for imprecision before a representation moves beyond the meaning that the geometry contains. As can be seen, materiality matters for AI systems, and this needs to be understood in order to grasp the wider implications for a vector medium. The questions of dimensionality and precision are key to understanding the way in which the medium operates and how it stores information. The room, in this sense, defines how easily meaning can be lost inside the vector space, and it is through this dimensionality and imprecision that the manifold is able to hold meaning and provide the conditions of possibility for generative systems.
*Figure 4: An example of how meaning collapses out of the manifold structure into the wider ambient space*
Another way to put this is that dimensionality gives you distinction but not discrimination. Dimensions are an extensive property of the space, a matter of the space available in the room, as it were, the volume and the capacity to house representations without them being located on top of each other. For example, you can keep "justice" and "revenge" apart in 768 dimensions, but you cannot necessarily tell them apart at their boundary. The more dimensions the space provides, the thinner the manifold becomes relative to the ambient space, and the more each representation depends on the intensive grain of the substrate, the local precision that holds it in place. If this feels complicated, that's because it is, but the key elements to hold in mind are the difference between the number of dimensions and the precision of the digital number format used to house them. Each has a different effect on the capabilities of the manifold, and their capaciousness is not, therefore, purely a function of their digitality.
We might say that extensity produces the conditions under which intensity becomes critical: the vast room (i.e. a large vector space) that enables this important separation of things also creates a fragility that demands finer discrimination at the boundary, which only a higher-precision number format can supply. Another way to put this is that resolution is the intensive property of vector space. Designing systems is a matter of balancing the extensive and intensive materiality of the vector space so that it is appropriate for the AI that uses it to store its manifold. Scaling addresses the issue of dimensionality, as bigger computers with more RAM can hold more dimensions, and these dimensions can be held in a relatively low-precision number format. But sometimes the problem is not insufficient room (i.e. dimensionality) but insufficient grain, and no amount of room compensates for a grain that cannot hold the distinctions between things.
Theorists of the digital, from Kittler onwards, have argued that the properties of the digital determine what computation can do. It is tempting, therefore, to apply this framework to the vector medium, to treat the digital format, the precision of the floating-point number or the coarseness of the grid, as the fundamental issue. But for AI, I think this gets the relationship backwards.
The AI revolution can be understood as a revolution in dimensionality. Every recent major advance in language model performance has come from more parameters, more layers, larger embedding spaces, more dimensions in which the manifold can unfold. Counterintuitively, digital precision has not increased alongside these gains; it has been deliberately reduced to make them possible. bfloat16 exists so that the same hardware memory can hold twice as many parameters as float32 would allow. AI designs trade precision for dimensionality because dimensionality is what the scaling laws reward and what produces the capability gains that justify the capital expenditure. The digital substrate matters, of course, but it matters less than the dimensionality it enables.[5] Indeed, it is hard to argue that the digital medium is determinative when the digital itself is being physically redesigned to the needs of the vector space.
This is what makes the precision of the digital numbers interesting. Fine discrimination at contested boundaries is not being neglected by accident or oversight. It is being structurally jettisoned because the capabilities that sell the AI, that attract further investment and that dominate evaluation, are capabilities of scale, of coverage, of competence, not capabilities of nuance, of boundary, of judgement.[6] The machines are designed at a fundamental level to play fast and loose with the truth.
No Resolution
One way of thinking about this is to make an analogy with celluloid film. Silver halide crystals in a photographic material set a material limit on what an image can capture. Faster, cheaper film uses larger crystals, which means coarser grain, and which then means less fine detail in the photographs. The grain is a material condition of possibility in the media, not something the photographer can work around. It is the literal material substrate of the image. Everything the photograph can show is captured in the grain, and what the grain cannot capture and store is lost. I have argued elsewhere that the digital itself has a grain, a material texture that shapes what can be represented within it (Berry 2014).
Vector space has a grain too, set by its number format, and it functions in a similar way. For vector space, the resolution is the number of distinct positions available across each dimension; we can think of this as the capacity of the grid. This affects the boundaries between items that are stored in vector space. Political concepts, ethical ideas, contested classifications: these cluster at boundaries where the space must make discriminations between things that are semantically close but conceptually distinct. For example, the difference between "critique" and "complaint," or "solidarity" and "compliance," or "resistance" and "obstruction." These pairs are conceptually distant, as they name different relationships to power, different modes of agency, or even different political commitments. But they may be stored in the manifold quite close together, where their statistical contexts converge. If the resolution is too low, the boundary between them dissolves into the grain of the vector space. The primary drivers of this collapse are the distribution of the training data and the loss function that rewards prediction over discrimination. The distinction is lost not because it was not learned but because the geometry of the vector space is too coarse to hold it.
These dimensions are generally stored in a number format called bfloat16, the most common format used in AI training, which gives roughly two to three decimal digits of precision. From there, the vector space (and therefore the manifold) can be quantised down to a lower-precision format, for example the FP8 formats (e.g. E4M3, E5M2) that are now standard on NVIDIA's H100 training chips. This lowers the precision further, giving only a handful of significant digits. Each reduction in precision coarsens the grid, reducing the number of distinct positions the space can hold, but it seems that the manifold, due to its dimensionality, can cope with this quantisation for inference. Current AIs therefore usually privilege the first, the dimensions, and are less concerned with the second, the precision of the digital number format. The result is often a space that is topologically adequate but numerically rather coarse, a space that can tell you roughly where things are but sometimes not precisely what distinguishes them from their neighbours.[7] The geometric distinction between global and local structure maps directly onto this distinction. There is a balancing act between the number of dimensions of the vector space, and therefore its capacity for the manifold to hold information, and the precision in which the manifold is represented, and therefore its discrimination between closely related things. This is the opposite of what Kittler would have expected, as in contemporary AI it is the dimensionality that matters, rather than the number format.
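The effect of coarsening the grid on nearby representations can be sketched as follows. This is a deliberately crude simulation of my own (uniform rounding standing in for FP8 or INT4 quantisation, with made-up magnitudes): two vectors representing "nearby" concepts are rounded onto grids of increasing step, and the fraction of coordinates on which they become identical is reported.

```python
import random

def quantise(v, step):
    """Round each coordinate to a uniform grid with the given step,
    a crude stand-in for a low-precision number format."""
    return [round(x / step) * step for x in v]

def collision_fraction(step, dim=768, noise=0.002, seed=1):
    """Fraction of coordinates on which two nearby vectors become
    indistinguishable after rounding to the grid."""
    rng = random.Random(seed)
    a = [rng.gauss(0, 1) for _ in range(dim)]
    b = [x + rng.gauss(0, noise) for x in a]  # a "nearby" concept
    qa, qb = quantise(a, step), quantise(b, step)
    return sum(x == y for x, y in zip(qa, qb)) / dim

# Coarser grids collapse more of the distinction between a and b:
for step in (0.001, 0.01, 0.1):
    print(step, round(collision_fraction(step), 2))
```

As the grid step grows past the size of the difference between the two vectors, the distinction dissolves into the grain, which is the collapse described above.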
Noise as Constitutive
The strange outcome of these design decisions is that the imprecision of vector space is partially constitutive of the success of these systems. Low precision functions as a regulariser, a form of noise that prevents the model from overfitting to the training data. The stochastic nature of gradient descent, combined with low-precision arithmetic, prevents the manifold from memorising every detail of the corpus; instead, the model, as it were, generalises. It learns statistical regularities rather than specific instances, and it is the noise introduced by the lack of precision that enables this. Contrary to Kittler's (1999) claim that the medium determines our situation, here the medium indetermines the situation, and this is where the power of these new systems lies.
This makes sense if you consider digital precision as a storage medium: perfect digital storage would not be learning, it would be memorisation. A space of infinite resolution would allow the model to encode every training example as a distinct point, perfectly separated from every other, with no incentive to find the common structure that makes generalisation possible. The noise introduced by low precision forces the model to compress, to find shared patterns, to smooth over differences that the precision cannot capture. This is why the imprecision is indeterminate: the manifold is constituted by its lack of digital precision, not in spite of it.
However, what is good for learning may be destructive for precise discrimination. The noise that enables generalisation is the same noise that degrades the boundaries between contested concepts (I wonder what Gallie would say). The same imprecision that allows the manifold to be useful, to generalise, to function as a practical tool, makes it weaker at maintaining fine differences, for example ethical and political distinctions. This is a structural tension at the heart of the vector space, between the requirements of generalisation, which demand noise, and the requirements of discrimination, which demand precision.
For example, maintaining the boundary between "solidarity" and "compliance" might come at the cost of the model's capacity to generalise. The model needs noise to learn that solidarity and compliance appear in overlapping contexts, workplaces, institutions, collective action, because that overlap is part of what the concepts mean. But it needs precision to maintain the distinction between them, because the discrimination is what makes them politically different concepts rather than synonyms. These two requirements pull in opposite directions, and the tension between them is not an engineering error to be resolved by better hardware. It is, perhaps, a limit on what the geometry can do for ethical and political thought. Under present training paradigms and economic conditions, the vector space regime optimises for generalisation at the systematic expense of discrimination, and the demand it sacrifices is the one that matters most for judgement.
When the grain is too coarse to maintain the boundary between two contested concepts, the model does not hover at some neutral midpoint between them. It gravitates toward their statistical average in the training data, the centre of mass of the contexts in which both terms appear. And that centre of mass is not politically neutral. It is weighted toward the dominant usage, the meaning that appears most frequently in the corpus. For "solidarity" and "compliance," the statistical average will probably be closer to compliance, because compliance is what institutions produce documents about and what management literature fills the internet with. We could say that the blurring of concepts in the manifold has a direction. We might also note that the same corporations curate the training corpora, design the loss functions, and define the evaluation benchmarks, which means that every stage of the pipeline, from data selection through training to deployment, is shaped by the same economic incentives that favour institutional legibility over contested meaning. The direction in which the vectors blur concepts points toward the interpretation that is most productive for those who control the infrastructure.
Lower precision in the vector space functions as a centripetal force, dragging contested meanings away from their boundaries and toward the dense centres of the manifold. These are the most populated regions of the manifold and can be understood to function as a form of ideology, as they encode dominant usage as common sense.[8] What an analysis of the material grain of vector space reveals is that geometric ideology operates at two levels, not one. At the level of training, the manifold's dense regions pull meaning toward a statistical average. At the level of the material grain of the vector space, imprecision ensures that any concept which strays too close to a boundary cannot maintain its position and falls back toward the nearest "high-density attractor". We might call this vector conformism. This conformism is increased when model outputs are fed back as training data for subsequent models, for each generation then inherits a manifold in which the contested boundary has already been smoothed, producing a recursive erasure of the distinctions the manifold cannot hold.[9]
The Economics of Precision
The precision of vector space is, therefore, in many ways an economic decision. When researchers quantise large language models from float16 to INT4 for deployment, benchmark accuracy drops by modest amounts on standard tasks. However, there is suggestive evidence that what degrades first is performance on tasks requiring fine-grained discrimination. Performance on fine-grained classification and boundary-dependent reasoning appears to be more vulnerable to precision reduction than performance on broad competence tasks. If the argument developed in this article is correct, this is what we should expect: the distinctions that require nuance would be the first casualties of quantisation.
Indeed, bfloat16 was designed by Google Brain not because 7 bits of mantissa is the mathematically optimal precision for representing meaning but because it fits the engineering constraints of tensor processing units. Lower precision means a smaller memory footprint, higher throughput and lower energy consumption, and the savings can be spent on more dimensions. These are the pressures that dominate at scale, and they shape the political-economic decisions about AI systems. Scaling dimensions is expensive in hardware terms but not in precision terms, which is exactly why the economic logic of AI has pushed toward more dimensions at lower precision rather than fewer dimensions at higher precision. You get more separation of concepts for roughly the same precision cost per coordinate.
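The memory arithmetic behind this trade-off is straightforward. A back-of-envelope sketch (the 7-billion-parameter figure is hypothetical, chosen only for illustration):

```python
# Back-of-envelope memory footprint of model weights at different
# precisions. The 7B parameter count is hypothetical, for illustration.
params = 7_000_000_000

bytes_per = {"float32": 4, "bfloat16": 2, "fp8": 1, "int4": 0.5}
for fmt, nbytes in bytes_per.items():
    print(f"{fmt:>8}: {params * nbytes / 1e9:.1f} GB")
# Halving precision halves the footprint, so the same memory budget
# holds twice as many parameters (or dimensions) at lower precision.
```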
Indeed, the field does not appear to be moving toward higher precision but toward lower. FP8 formats are increasingly used on H100 hardware. INT4 quantisation schemes are now common for inference. The precision of the manifold is getting lower, not higher, driven by the economic logic of compute efficiency. The content of AI is forced into the economy of the integer grid.[10]
Whether the field will continue in this direction, with economic pressure toward lower precision across the board, remains to be seen. But the underlying intuition, that the manifold geometry will increasingly demand higher dimensions, is, I think, correct.[11]
It is interesting to compare this with the history of the digital, in which the trajectory of general-purpose computation pointed in the opposite direction. From 8-bit to 16-bit to 32-bit to 64-bit, the trend was toward increasing precision because the tasks demanded it: financial calculations, physics simulations, scientific modelling, all involving cumulative numerical operations where small errors compound over time. Neural network training reverses this assumption and renders our ideas about the digital anachronistic. We might say the history of the digital was a story of increasing precision, but the history of AI is a story of increasing imprecision.
I have argued that what is owned in vector capitalism is not content but the geometric principles by which content is organised (Berry 2026c). The precision of that geometry is itself a proprietary decision. The choice of bfloat16 over float32, of FP8 over bfloat16, is a choice about how precise the grid of meaning will be, and it is made by the same corporations that own the manifold. The grain of the space is owned. The resolution of meaning is an economic variable determined by the parties who profit from the abstraction, not by the mathematics that describe it.
The Grain of the Geometry
What is vector space? It is the materially grained substrate on which AIs operate. Its grain is set by capital, its resolution is an economic variable, and its imprecision is constitutive of the abstraction it performs. Dimensionality buys size, the capacity to separate concepts that are distinct and useful in inference. Precision, in contrast, buys local structure, the capacity to discriminate between concepts at a fine level of detail. Current AIs produce vector spaces that are dimensionally large but numerically imprecise. Together these properties produce a vector space that is not neutral but ideological, one that systematically favours the dominant over the marginal, the frequent over the contested, common sense over difference.
The political consequence is that the boundaries where the manifold fails are the boundaries that matter most to societies, ethically, politically, and in terms of contested meaning. The imprecision of the space is not visible to the user, and it is seldom discussed in the technical literature as a political problem. It is treated as an engineering trade-off, a neutral parameter to be optimised for efficiency. But it is not neutral. The grain of the geometry determines what distinctions the space can maintain, and the distinctions it cannot maintain are disproportionately the ones that matter.
The manifold is shaped by training. The materiality from which it is made, the dimensions that structure the vector space, is where the geometry begins. These properties are not mathematically given but economically structured. The grid is the real abstraction of the vector space, a social decision, disguised as an engineering specification, about how much discrimination is thought to be worth in an AI system for society. It is the digital equivalent of what Sohn-Rethel called the exchange abstraction, as it ignores the qualitative specificity of the concept to ensure the quantitative efficiency of the system (see Berry 2026b). Semblance over correctness. That is what the tech industry has decided thought costs.[12]
Notes
[1] A mathematical vector space is an abstract structure governed by rules about how vectors can be added and scaled (see Halmos 1958). The embedding spaces used in language models are a specific kind of vector space, one equipped with a way of measuring angles and distances between any two points. The mathematical definition assumes the space is built from real numbers, which are continuous and infinitely precise.
[2] In 'Brain Numbers' (Berry 2026a) I traced the cascade from float32 (used in research) to bfloat16 (training) to INT4 (deployment) as a progressive impoverishment of the numerical substrate through which the geometry of meaning is enacted. The central claim is that at each stage of compression the system barely registers the loss, because the loss function measures prediction of the next token, not precision of thought. The manifold's geometry is robust to numerical coarsening in ways that scientific computing is not, which is precisely what makes the imprecision invisible and therefore politically significant.
[3] Impett and Offert (2026) develop a parallel account of vector space, approaching the question from media theory rather than critical theory. Their treatment of what they call "vector media" shares the premise that the geometric properties of embedding spaces are not merely technical but constitutive of cultural meaning.
[4] The gap between intrinsic and ambient dimensionality is not wasted space in any simple sense, it provides the representational redundancy that enables robustness and allows the model to represent multiple distinct structures simultaneously, but it does mean that the space is mostly empty and that the geometric structure of the manifold is concentrated on a thin submanifold within the ambient space.
[5] The idea that the digital is the foundational issue, that vector space is ultimately implemented in binary arithmetic and that the properties of floating-point representation therefore determine everything that follows, gets this relationship backwards. Floating-point precision is a constraint on what each dimension can express, but it is dimensionality that determines what the space as a whole can represent. The dramatic improvements in language model capability from GPT-2 (768 dimensions) to GPT-3 (12,288 dimensions) to GPT-4 came from scaling dimensionality, not from increasing digital precision. Interestingly, the assumption that the digital is what matters is itself a form of reductionism, one that collapses the emergent properties of high-dimensional geometry into the properties of its lowest-level implementation. As Kittler (1999) argued, the digital is itself merely a disciplining of analogue voltages, a threshold imposed on continuous electrical signals. If one follows this reductionist logic consistently, the digital dissolves into the analogue, and the question of what "really" determines the space regresses below the level at which it can be meaningfully asked. I argue that the interesting properties of vector space, separation, topology, manifold structure, operate at the level of dimensionality and geometric organisation, not at the level of the individual digital representation.
[6] If the digital were determinative, the properties of existing hardware would shape the geometry. What has actually happened seems to be the reverse. The demands of high-dimensional vector processing have caused the hardware to be redesigned. Google's Tensor Processing Units are the clearest example of chips designed from the ground up not for general-purpose digital computation but specifically for the matrix multiplications that high-dimensional vector spaces require. The TPU's architecture is shaped by the geometric operations needed for the manifold. Similarly, NVIDIA's tensor cores represent the same logic applied to GPUs, dedicated silicon carved out for matrix-multiply-accumulate operations at the scale that vector processing requires. These are cases of the vector space and manifold reshaping the digital substrate to better serve the geometry, not the digital determining the geometry. The hardware seems to follow the geometry, not the other way around.
[7] Mathematics distinguishes between two kinds of spatial information. Topology tells you which things are connected and which are separate, the overall shape. Metric structure, in contrast, tells you how far apart things are and what the local neighbourhood looks like. The argument I am developing is that current vector space regimes are well provisioned for the first (high dimensionality gives you the overall shape) but less interested in the second (low precision coarsens the local distances). We could say the manifold knows its shape but not its detail.
[8] The claim that density in the manifold functions as ideology is one I intend to develop at length elsewhere. The core intuition is that the manifold's dense regions are not neutral summaries of the data but geometrically privileged positions that "attract" nearby representations, producing a kind of gravitational structure in which the most frequent usage becomes the default meaning.
[9] Vector conformism becomes recursive when model outputs are used as training data for subsequent models, the so-called synthetic data loop. Each generation inherits a manifold in which the contested boundary has already been smoothed.
[10] The structure seems to mirror what Lukács, in his critique of Kant, identified as the antinomy of bourgeois thought. This is that the formal conditions of the system are protected whilst the content of experience is discarded. Kant privileges the transcendental apparatus, the categories, the conditions of possibility for knowledge, at the cost of declaring the thing-in-itself unknowable. The formal machinery is exact, but the world it apprehends is thereby impoverished. Something similar happens in the encoding into the manifold. The antinomy is not philosophical but economic, and it is inscribed in the manifold.
[11] The idea of assigning different precision to different parts of the geometry is an interesting area of active research in AI.
[12] Mechanistic interpretability, which decomposes trained models into identifiable features and circuits, might appear to offer an empirical way of testing these claims. But it operates on the manifold, not on the vector space itself, and its interpretive categories may themselves be artefacts of the grain it cannot see. If the grain collapses contested meanings toward dense attractors, mechanistic interpretability may rediscover those attractors as clean, interpretable features, confirming the vector conformism rather than detecting it. I intend to return to this in a future piece, 'What Is Theory Space?'
Bibliography
Berry, D. M. (2014) Critical Theory and the Digital. Bloomsbury.
Berry, D. M. (2026a) 'Brain Numbers', Stunlaw. Available at: https://stunlaw.blogspot.com/2026/03/brain-numbers.html
Berry, D. M. (2026b) 'Real Abstraction Without Exchange', Stunlaw. Available at: https://stunlaw.blogspot.com/2026/03/real-abstraction-without-exchange.html
Berry, D. M. (2026c) 'The Vector Medium', Stunlaw. Available at: https://stunlaw.blogspot.com/2026/02/the-vector-medium.html
Berry, D. M. (forthcoming) 'What Is Theory Space?', Stunlaw.
Halmos, P. R. (1958) Finite-Dimensional Vector Spaces. Van Nostrand.
Impett, L. and Offert, F. (2026) Vector Media. University of Minnesota Press.
Kittler, F. (1999) Gramophone, Film, Typewriter. Stanford University Press.
Lukács, G. (1971) History and Class Consciousness: Studies in Marxist Dialectics. Merlin Press.
McLuhan, M. (1962) The Gutenberg Galaxy: The Making of Typographic Man. University of Toronto Press.
Sohn-Rethel, A. (1978) Intellectual and Manual Labour: A Critique of Epistemology. Macmillan.



