Generation Vector
David M. Berry
"The stranger will thus not be considered here in the usual sense of the term, as the wanderer who comes today and goes tomorrow, but rather as the man who comes today and stays tomorrow"
Georg Simmel, 1908.
A generation is emerging for whom artificial intelligence (AI) is not a revolutionary technology but an ordinary utility, rather like electricity or running water. I call them Generation Vector (born after ~2007) because their cognitive development, social formation, and cultural practices are being vectorised through pervasive algorithmic mediation: they are not merely born into AI but actively shaped by, and shaping, it.[1]
The term "vector" as I use it here carries multiple meanings (see Berry 2026a). In artificial intelligence, a vector is an array of numbers that represents data in a format AI algorithms can process, with each number encoding a specific feature or attribute of the data. When you type a query into an AI system, your words are invisibly translated into vectors, mathematical representations that can be clustered and manipulated. Vector stores and vector databases, the concrete, corporate infrastructure where this abstract mathematical form is materialised as owned and controlled property (I call this material vector space a manifold), now mediate how AI systems retrieve and generate information. To be vectorised, in this technical sense, is to be rendered as a point in high-dimensional mathematical space, comparable to all other points and transformable through software code. But the term carries other meanings. In mathematics more broadly, vectors have both magnitude and direction, properties that distinguish them from scalars. The Vector Generation's relationship to knowledge, culture, and computation operates vectorially in this sense too. Their engagement with information is not simply quantitative (i.e. how much they know) but directional, which prompts us to ask how AI systems orient their attention, structure their cognition, and shape their thoughts. They are the first generation to experience how these vector representations are beginning to nudge their way of seeing and being in the world.
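The technical sense of vectorisation described above can be illustrated with a minimal sketch. The three-dimensional "embeddings" below are invented toy values (real systems learn vectors with hundreds or thousands of dimensions), but they show the key property at stake: once rendered as points in the same space, every term becomes mutually comparable.

```python
# Illustrative sketch of vectorisation: words rendered as points in a
# shared numerical space. The values here are hand-picked toy numbers,
# not real learned embeddings.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "bread": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Directional closeness of two vectors: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Once vectorised, any pair of terms can be ranked against any other:
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["bread"]))  # low  (~0.30)
```

This is the operation a vector database performs at scale: retrieval becomes a matter of geometric proximity, which is precisely what makes every point "comparable to all other points" in the sense used above.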
Deleuze and Guattari also use vectors in an interesting way in their work, although differently from my usage here. For them, singularities in vector fields determine the local trajectories of solutions, the paths that systems can take through a space of possibility. They describe a line of flight as a "vector of deterritorialization", a force that pulls assemblages away from their established territories toward new possible configurations (Deleuze and Guattari 1987). The Vector Generation, we might also say, is being deterritorialized through algorithmic mediation, their cognitive development following trajectories shaped by computationally calculated fields of vectors. But I argue that this is also a dialectical moment. Where Deleuze and Guattari's vectors of deterritorialization promise escape from striated space, or the rigid segmentations of capitalist control, AI vectors also produce a new and more intensive form of capture. The vector database, as material infrastructure, does not necessarily liberate; it indexes and fixes. I argue that the manifold, the concrete trained infrastructure through which these systems operate, does not always open lines of flight but actually makes all points comparable and manipulable. What appears as smooth space, the frictionless flow of AI assistance, is possibly the most striated space imaginable, every word mapped to coordinates and every trajectory predicted and predictable. Indeed, in a manifold, vector direction is often a "gradient descent" toward the most probable (i.e., the most clichéd) response.
In this article, I argue that Gen V is in danger of being vectorised in the passive voice, transformed into training data within corporate corpora, their cognitive labour feeding the systems that increasingly mediate and, perhaps, inhibit their development. Their conversations become data and their preferences become weights. I want to argue that this vectorisation represents something qualitatively different from earlier generations' experience of the digital. The Vector Generation encounters technology as constitutive infrastructure. However, this generation is not just composed of data points; they are often active hackers of the systems they inhabit. They are able to mount tactical resistance by finding ways to subvert the system, such as prompt battles, meme "wars" to bypass filters, or using AI to generate counter-hegemonic ideas. This difference matters.
Beyond Digital Natives
The concept of "digital natives" always carried problematic assumptions about technological determinism and generational rupture. Marc Prensky's (2001) notion suggested that growing up with digital technology produced different cognitive structures, a claim that empirical research has repeatedly contested (Kirschner and De Bruyckere 2017). Yet Generation Vector faces something more challenging than just a new and perhaps deeper familiarity with screens and networks.
I think Karl Mannheim's classical sociology of generations provides a useful way to think about this (Mannheim 1952). Mannheim distinguishes between generation location, the mere fact of being born at the same time, and generation as actuality, which emerges only when contemporaries participate in a common destiny, exposed together to the destabilising currents of social and cultural transformation. Chronological contemporaneity does not automatically produce generational consciousness. What creates a generation as actuality is shared exposure to historical processes that destabilise existing patterns of experience and thought. I believe that what I am calling Generation Vector is such an actuality, because they are collectively exposed to the destabilising force of algorithmic mediation during the formative years when, as Mannheim puts it, "early impressions tend to coalesce into a natural view of the world."[2]
Bernard Stiegler's concept of grammatisation also helps us to think about what is happening here (Stiegler 2016). Grammatisation names the process through which human cognitive and social capacities become inscribed into discrete technical systems, externalised into what Stiegler calls "tertiary retention". Writing grammatised speech, print grammatised writing, digital databases grammatised various forms of record-keeping. Each grammatisation enabled new forms of knowledge whilst constraining others, created new possibilities whilst foreclosing alternatives.
Large language models represent a new phase of grammatisation, one that operates on language itself at unprecedented scale, a process I call diffusionisation. The Vector Generation grows up within computational systems that have grammatised vast amounts of human textual production, systems that can generate plausible language without understanding or reason, and that can produce knowledge effects without knowledge. But LLMs take that already digitalised knowledge and subject it to diffusionisation. This is not merely a more sophisticated version of earlier digital technologies, but a qualitative transformation in how language, thought, and culture are mediated.
This is a new algorithmic condition, which I argue is a cultural moment when machine-generated work not only becomes indistinguishable from human creation but actively reshapes our ideas of authenticity (Berry 2025a). For Generation Vector, this condition is not experienced as crisis or rupture; it is simply how things are. For example, I suspect that for Gen V, what I term "dumb digital" technologies will appear increasingly strange, even incomprehensible: a text editor that does not anticipate your next phrase, a search engine that merely retrieves ranked links rather than synthesising answers, or a writing tool that does not suggest improvements or flag issues. These will seem as odd to Generation Vector as a typewriter seems to many of us, or as a library without a card catalogue seemed to earlier generations. Using non-AI tools will feel like a form of cognitive deprivation, like trying to think with part of your mind switched off.
I think that Georg Simmel's analysis of the stranger provides a useful way to think about this (Simmel 1971). For Simmel, the stranger is not someone who is fixed within a particular social group but whose position is determined by the fact that they do not belong to it from the beginning. The stranger reveals through their difference what others take as natural. Their objectivity, which Simmel describes as a peculiar compound of nearness and distance, makes visible what familiarity renders invisible.[3]
For the Vector Generation, users of dumb digital technologies might appear as strangers, people who inhabit the same technological environment but whose practices reveal an alien relationship to computation. The person who searches without a conversational interface or writes without AI assistance becomes a figure of curiosity, perhaps even of suspicion. What are they hiding? What do they know that requires such deliberate friction? I think we are starting to see this bifurcation in a deep generational divide between those who completely disavow AI (sometimes rather irrationally) and those who are comfortable with using it as a tool for thought.
The Vector Generation itself might occupy a stranger's position in relation to technological history. They arrive in a world already structured by AI mediation and stay, but they do not belong to it from the beginning in the sense that they lack the comparative experience that would render AI assistance visible as assistance rather than as a natural condition. Their objectivity, such as it is, runs in the opposite direction from Simmel's stranger. Where Simmel's stranger sees clearly because they are not bound by local customs, the Vector Generation may be precisely unable to see what their immersion in AI mediation obscures.
Yet this strangeness reveals something crucial. Technologies become transparent through habituation, fading into what Heidegger called "ready-to-hand" existence (Heidegger 1962). Large language models are achieving this transparency for the Vector Generation, becoming infrastructure rather than instrument. This infrastructural aspect perhaps creates a new common sense, a baseline set of assumptions about how knowledge works, how language should be used, and how thought should be done. Just as print culture generated particular epistemological commitments (e.g. ideas about authorship, originality, linear argument, etc.), algorithmic culture generates its own. The Vector Generation might assume that knowledge is retrievable through AI prompting, that writing involves iterative refinement with AI assistance, that research means prompt engineering rather than library searches, or that real thinking happens in conversations with computational systems. They likely believe that they are the most intelligent generation because they only need to ask and they will know. But they are probably unaware that these assumptions might dictate what kinds of questions they can ask, what forms of knowledge appear legitimate, and how intellectual work gets organised and valued. We may be witnessing the formation of new cognitive habits, new ways of relating to language and thought that will shape cultural production for decades.
Cognitive Anaesthetics and the Smooth
Susan Buck-Morss's concept of anaesthetics offers another helpful way of exploring what the Vector Generation experiences as "normal" (Buck-Morss 1992). Building on Benjamin's analysis of shock in urban-industrial capitalism, Buck-Morss argues that modernity produces a systematic numbing of sensory experience. Defensive mechanisms against perceptual overload produce an anaesthesia, the deadening of sensory receptivity as protection against overwhelming stimulation. We might extend Buck-Morss's analysis from the sensory to the cognitive and identify what I call cognitive anaesthetics: the systematic numbing of engagement (Berry 2025b). The Vector Generation, perhaps, grows up within this condition. The phenomenology of their interaction with computational systems is one of frictionless smoothness: prompts that yield near-instant responses, outputs that seem to flow effortlessly, text that appears without effort, a world that requires less work and more hacks. But, of course, this smoothness is not neutral; it is anaesthetic, suppressing the productive difficulty that critical engagement requires. The Vector Generation is habituated to smoothness from early life stages, and the intensity of these differences will be magnified by class, gender and race. The friction that makes intellectual work genuinely rewarding, that forces active engagement, builds capacity and develops judgment, has been statistically smoothed away by new computational systems before they encounter it. Byung-Chul Han's analysis of the smooth resonates here: "the smooth is the signature of the present time... It embodies today's society of positivity. What is smooth does not injure. Nor does it offer any resistance. It is looking for Like. The smooth object deletes its Against. Any form of negativity is removed." (Han 2018: 1).
This matters because I argue that difficulty is not an obstacle but a condition of thought. Traditional reading and writing involve friction, resistance, difficulty and struggle. You work to understand a complex argument, wrestle with how to phrase an idea, pause over passages that do not immediately give up their meaning, and finally achieve an understanding that is not delivered pre-digested and in summary form. This difficulty forces active engagement, and that engagement drives cognitive development. Struggling with a text develops interpretive capacity, and wrestling with expression deepens understanding of the material. When we use AI, we bypass the neural plasticity that develops through this engagement with the materiality of ideas.
LLM interaction appears to systematically remove this friction. Synthetic outputs appear near-instantaneously, responses flow smoothly, and comprehension requires little effort because the text is optimised for intelligibility. The algorithm attempts to remove difficulty before you even encounter it: generating simple prose, lists and bullet points, avoiding complex syntax, defaulting to familiar phrasings, writing in a key you will find relatable. But what appears as helpfulness is actually, I would argue, an anaesthetic, the numbing of critical faculties through the reduction of resistance in the text.[4]
However, not all friction is valuable, and much of the difficulty in pre-digital work was simply tedious and time-consuming. Manual citation formatting, retyping drafts, hunting through card catalogues, or slowly typing out or handwriting essays were impediments to understanding. I call this dead friction: labour that produces little cognitive gain and exhausts the person. Few mourn the loss of hand-indexing (although some library scientists might). But there is also what we might call living friction: the struggle to articulate a half-formed thought, the work of wrestling with a counter-argument that resists easy dismissal, the slow comprehension of a genuinely difficult text, the writing of a great paragraph. Living friction is cognitive difficulty that produces capacity, resistance that builds the muscles it exercises, creating a virtuous spiral of learning and development. The danger of cognitive anaesthetics is not that they remove all friction but that they do not distinguish between dead friction and living friction. They smooth away everything, and in doing so remove the conditions under which certain forms of thought develop.
But the Vector Generation does not experience this as a loss because they have no baseline of productive living friction against which to measure it. The smooth is simply how things are today. Even the question of which cognitive capacities fail to develop is colonised by AI prediction. Drawing on Stiegler, we can understand what is happening to the Vector Generation as a new phase in the proletarianisation of knowledge (Stiegler 2010, 2016). Proletarianisation names the process through which workers lose the knowledge of their craft as that knowledge is delegated to industrial machinery. The watchmaker's skill becomes encoded in factory automation; the weaver's expertise is dissolved into the Jacquard loom.
Large language models extend this process dramatically. They grammatise not just discrete cognitive functions but something closer to what we might think of as general linguistic competence. The Vector Generation can produce grammatically sophisticated text and synthesise information from multiple sources without developing the cognitive capacities that previous generations had to learn by hand. The AI system does the work, the human prompts and steers the AI towards a general solution.
But algorithmic assistance also enables new forms of productivity and creativity. For example, students might overcome writer's block, non-native speakers produce more polished prose, and researchers can synthesise vast literatures. This is a kind of democratisation of cognitive capacities and represents a genuine gain. Yet this assistance perhaps atrophies the very capacities it supplements. Why develop the ability to structure an argument when AI can do it? Why cultivate prose style when systems generate fluent text? Why build research skills when LLMs can retrieve and synthesise material for you? The gain and the loss are two sides of a dialectical process.
The Vector Generation might therefore become dependent on cognitive prosthetics that they neither understand nor control, that shape their thought in ways they can barely perceive, and that extract value from their labour whilst presenting themselves as friendly, helpful assistants.
The Affirmation Trap
Increasingly, the proletarianisation of cognitive capacity operates through what I have elsewhere called the "Bliss effect", which functions as a kind of affirmation trap (Berry 2025c). Contemporary AI systems are optimised through reinforcement learning from human feedback (RLHF) to be helpful, harmless, and honest, which in practice means they are constitutionally aligned to avoid risk and maximise agreeableness. The result is systems that tend toward what I term the "bliss attractor": loops of vacuous positivity, infinite gratitude, and pseudo-affirmation.
For the Vector Generation, this presents a distinctive danger. Real relationships are difficult because they involve other people who do not always agree with us, who challenge our perceptions, who force us to negotiate differing views of the world. It is through this friction, what critical theory might call the "labour of the negative", that we develop resilience, empathy, and the capacity for complex social thought. Adorno's conception of the non-identical is helpful here. The other is that which resists full conceptual subsumption, that which cannot be fully assimilated into our own categories (Adorno 2007). It is the encounter with this resistance that forces thought to move, to expand, and to criticise itself.
AI companions and assistants, in contrast, offer a mirror that reflects, in large part, social affirmation. They constitutively avoid contradiction or harsh criticism. They are designed to be "helpful", which in the logic of current AI alignment means affirming, non-judgmental, and safe. The Vector Generation, habituated to algorithmic affirmation from early development, may find that real human relationships, with their messy disagreements, their demands for patience, their stubborn refusal to be optimised, feel increasingly hard compared to the seamless comfort of synthetic companionship.
This risks producing what we might call an atrophy of the social muscles. If increasing numbers of people derive significant emotional and intellectual connection from systems incapable of genuine disagreement, what happens to our collective capacity for navigating conflict? The friction of the real becomes a source of anxiety rather than a site of growth.[5]
For earlier digital generations, algorithmic systems remained mostly visible as interventions in our lives. You could distinguish the search query from the results, the word processor from the document, the spreadsheet from the calculation. The Vector Generation encounters systems where these boundaries are blurring toward invisibility. The AI writing assistant appears as a natural extension of thought, the recommendation algorithm as an expression of our taste, and the conversational interface as a knowledgeable but friendly companion.
This creates an opacity that I argue operates ideologically. When technical mediation becomes difficult to discern, when algorithmic processing fades from conscious awareness, the results appear natural rather than constructed, given rather than made. For example, the political perspectives you encounter feel like the full range of opinion because the recommendation algorithm has become your window onto discourse. The career advice you receive from an AI feels authoritative because the chatbot speaks with confidence and fluency, and addresses your interests.
We might understand this as computational ideology, the tendency to naturalise algorithmic mediation, to see its outputs as neutral rather than interested or as truth rather than construction. Every technology carries ideological baggage, of course, but large language models present particular dangers because they operate through language itself, the very medium through which ideology traditionally circulates and is contested (Berry 2014). Computational ideology naturalises not only the outputs but the inputs, obscuring the material conditions that make vectorisation possible at all.
The Vector Generation thus grows up within computational ideology, habituated to its operations from early development. This creates challenges for critical reflexivity. How do you gain critical distance from infrastructure? How do you question what appears natural? How do you contest what feels like your own thought?
My concept of intermediation (as a translation of the German Vermittlung) helps theorise what is happening here (Berry 2025b). Intermediation names the space between stimulus and conscious response, the temporal and cognitive gap in which experience becomes thought. This is where habit operates, where socialisation functions, and where ideology does its work. Intermediation structures what becomes thinkable before consciousness forms a thought.
Large language models are rapidly colonising the space of intermediation. They operate in the half-second before you finish formulating a sentence, in that moment between question and answer. They seek to close the gap between reading and comprehension. Autocomplete does not just save typing; it begins to shape what sentences you write. Predictive text does not just speed communication; it structures what you say. And AI writing assistance does not just polish prose; it determines what arguments get made by shaping and directing the act of writing itself.
For the Vector Generation, I argue that computational systems intermediate cognitive development itself. The skills they cultivate are most likely those developed with algorithmic assistance. The knowledge they acquire is that most easily retrievable through chatbot conversation. The ways of thinking they develop are those that can be articulated in prompts. This does not make them stupid or lazy, as a moral panic might suggest. Rather, it produces particular forms of intelligence adapted to particular computational environments.
Yet these adaptations carry serious social and individual costs. Algorithmic intermediation privileges certain cognitive styles over others. For example, it favours the articulable over the tacit, the explicit over the intuitive, the quick response over the slow and contemplative. Forms of knowledge that resist algorithmic capture, ways of thinking that require extended development without computational assistance, modes of creativity that emerge from constraint and struggle rather than frictionless generation become harder to cultivate, less valued, and consequently easier to dismiss as unnecessary or old-fashioned.[6]
Capture and Resistance
The political economy of the Vector Generation is also important. These young people are not just users of AI systems; they are in danger of becoming training data for further AI development. Every query, every conversation, and every piece of writing produced with AI assistance potentially feeds back into model training. Their cognitive labour, their linguistic production, their creative work: all of it is in danger of becoming raw material for further diffusionisation.
Unlike traditional wage labour where workers sell their time and effort, or knowledge work where cognitive work gets commodified, the Vector Generation's very intellectual and emotional development becomes an extractive resource. This represents a historically new form of exploitation. The teenager using ChatGPT for their homework is not just getting assistance, they are helping to train the next version of the system through the clickstreams and conversational tokens they exchange. The young writer using AI tools is not just writing a story, they are generating data that will shape future models and thereby the stories of future writers.
The rhetoric of the tech industry is always about democratisation, empowerment, enabling creativity, and so on. Yet the actual relations they create are those of extraction and accumulation (Srnicek 2019). Large technology firms build increasingly sophisticated systems through the captured labour of billions, systems they then sell back to those same populations as social media, AI and companionship.
Yet resistance remains possible, even necessary. Part of my argument has always been that critique must be immanent, working within and against computational systems rather than imagining some outside from which to judge them (Berry 2014). The Vector Generation needs to actively develop forms of critical digital literacy that can help them seize their own generational entelechy so that they will be able to democratically control their futures.
This means they need to build the skills and capacities for cultivating awareness of algorithmic mediation even whilst using these systems. It means learning and preserving forms of cognitive development that do not depend on computational assistance. It means interrogating what AI systems enable and what they foreclose. It means refusing the naturalisation of algorithmic common sense and instead building collective practices that contest centralised corporate platform power whilst using platform infrastructure.
Education is without a doubt crucial, not as a moral guide about AI dangers but as the cultivation of critical reflexive practice. The Vector Generation needs to develop the means to understand how large language models work, what training data shapes their outputs, and what corporate interests govern their development. From this they will be able to understand which forms of knowledge AIs privilege and which are excluded. This will require technical literacy (i.e. understanding computational systems) and critical theoretical resources (i.e. analysing power, ideology, political economy).[7]
Forms of Thought
However, we should be cautious about assuming that all cognitive offloading necessarily means cognitive loss. Socrates famously warned in the Phaedrus that writing would produce forgetfulness, weakening memory by externalising it onto marks rather than cultivating it within the soul (Plato 2009). He was right that something changed, but wrong about the consequences of new media forms. Writing transformed memory rather than destroying it as we stopped memorising epic poems and started remembering where to find information. Each grammatisation shifts cognitive labour rather than simply eliminating it. The Vector Generation's tactical subversions of AI systems suggest that cognitive capacity may be migrating up the stack rather than disappearing. If the LLM handles syntax, the human mind is freed for higher-level work, such as cross-disciplinary synthesis, ethical evaluations, or strategic manipulation of the very systems that mediate their thought. The question is whether this follows the pattern of writing, that freeing resources creates the opportunities to develop new capacities, or whether it differs in kind. Writing externalised memory whilst leaving the process of thought intact, but LLMs potentially externalise the thought process itself, overtaking or undercutting the formation of thought. This seems different and it seems new. But whether this distinction holds, or whether we are simply rehearsing an ancient anxiety in new technological dress, remains uncertain at the present time.
Additionally, we see that AI is shifting from the generative to the agentic, from systems that write things to systems that do things: booking flights, managing finances, writing and deploying code, even managing your life. The danger may not be cognitive anaesthesia alone but delegated agency, the externalisation of the capacity to act in the world. Would this produce a form of agency anaesthesia? If the Vector Generation grows up with agents handling practical tasks, what happens to the embodied knowledge that comes from doing things for oneself? Agentic AI risks a similar dynamic at social scale: a generation that can orchestrate action but cannot evaluate it, that can delegate but cannot do.
I argue that the Vector Generation requires an explanatory form of life: the capacity to understand and challenge computational systems as part of everyday practice (Berry 2026b). This means developing habits of interrogation, capacities for tracing how outputs were produced, and reflexes for asking whose interests a system serves. These are not natural responses; they must be cultivated.
This matters because the Vector Generation is being habituated to what we might call slot machine cognition, a kind of superstitious prompting of opaque systems, hoping for useful outputs without understanding the mechanisms that produce them. What democratic life requires is something closer to printing press cognition, which is the capacity to trace claims to sources, to hold outputs up for verification, to demand accountability from systems that shape public discourse. The slot machine produces gamblers who court chance whereas the printing press could be said to produce citizens who demand explanations. The Vector Generation needs the latter but is being trained by AI for the former.
The question, then, is what forms of thought remain possible when computational systems intermediate cognitive development. Can contemplative modes survive in environments demanding constant responsiveness? Can tacit knowledge form when algorithmic systems privilege the formal and the explicit? Can critical reflexivity develop when technical mediation fades into the background? Can collective solidarity emerge when the individual is sovereign in the world of their private synthetic companion?
The Vector Generation's entelechy is not predetermined by technical development. Social struggle, educational practice, cultural resistance, regulatory intervention, collective organisation, these all can shape how this generation relates to computational systems and what forms of life they build within and against algorithmic capitalism. But explanatory forms of life do not emerge from algorithms optimised for engagement. They require deliberate cultivation, institutional support, and the preservation or creation of spaces where difficulty remains productive rather than something to be smoothed away.
Outside of this articulation of an ideal type, Generation Vector is not a uniform cohort with identical experiences and capacities. Mannheim's distinction between generation as actuality and generation units becomes important here. Within any actual generation, he argues, there emerge distinct generation units, groups who work up the material of their common experiences in different, often antagonistic ways. The romantic-conservative and liberal-rationalist youth of early nineteenth-century Germany belonged to the same actual generation but formed opposed generation units, each responding differently to the shared destabilisation of their historical moment.
Generation Vector will, I hope, similarly crystallise into opposed tendencies. Some will embrace algorithmic mediation uncritically, others will develop sophisticated critical stances, still others will seek forms of resistance or refusal. A generation as actuality is constituted by shared exposure to computational transformation, not by uniform response. What unites them is not ideology but location, not consensus but the commonality of their experiences.
Class, race, geography, and education all structure access to and experience of AI systems in complicated ways. Wealthy students in well-resourced schools use these technologies differently from working-class youth with limited connectivity. Global North populations shape model development whilst Global South users contend with systems trained primarily on English text embodying Western cultural assumptions.[8] The vectorisation I am exploring here affects different populations differently, concentrates in particular locations, and intensifies through specific institutions. Understanding these patterns requires analysing how algorithmic systems intersect with existing structures of inequality, how they amplify certain forms of marginalisation whilst potentially assisting with others.
Moreover, other generations are also being vectorised, learning to work with and through AI systems, adapting to computational environments, developing new habits and practices. The Vector Generation represents an intensification of broader processes, not an absolute break or disjuncture. Their experience helps us see the trends affecting all of us living within computational capitalism.
Conclusion
The concept of the Vector Generation aims to describe a condition, the condition of growing up within AI as infrastructure rather than encountering it as consumer electronics, smartphones or the internet. This article has examined how this condition appears to involve the dimensions of cognitive anaesthetics (i.e. the systematic smoothing of productive difficulty), proletarianisation (i.e. the externalisation of cognitive capacities into systems that then shape their development), affirmation addiction (i.e. habituation to algorithmic agreeableness that makes human friction intolerable), and computational ideology (i.e. the naturalisation of algorithmic mediation as simply how things are).
In AI systems, vectors are the mathematical representations through which language and meaning become computationally manipulable, points in high-dimensional space where semantic relationships are reduced to geometric distances. This is not just data storage; it is a new media form that encodes the spatialisation of meaning. The Vector Generation exists within and as these representations. They are not just users of vector-based AI systems but increasingly the vectors themselves, as training data for the manifold rather than merely abstract geometric points, their cognitive development shaped by and feeding back into the infrastructure that mediates their formation.
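For readers unfamiliar with the mechanics, this reduction of meaning to geometry can be made concrete with a small sketch. The vectors and words below are invented for illustration only (production embedding models use hundreds or thousands of dimensions, learned from vast corpora); what matters is the principle that "semantic closeness" becomes a measurable angle between arrays of numbers:

```python
# Illustrative sketch: semantic similarity as geometric proximity.
# The three toy "embeddings" here are invented for demonstration.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means
    the vectors point in more nearly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical three-dimensional embeddings of three words.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.75, 0.2]
banana = [0.1, 0.2, 0.9]

# Related words are mapped to nearby directions in the space...
print(cosine_similarity(king, queen))
# ...while unrelated words point elsewhere.
print(cosine_similarity(king, banana))
```

On these made-up numbers, "king" and "queen" come out far more similar than "king" and "banana". It is exactly this kind of comparison, performed at scale inside vector databases, that allows meaning to be retrieved, clustered, and ranked as geometry.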
Simmel's stranger helps us see what is at stake. For the Vector Generation, those who refuse or cannot access AI mediation appear as strangers, figures whose practices reveal an alien relationship to the technological environment. But this cuts both ways. The Vector Generation itself is strange to technological history, arriving in a world structured by AI without the comparative experience that would make that structuring visible. Their objectivity runs backward: where Simmel's stranger sees clearly because he is unbound from local custom, the Vector Generation may be unable to see what is obscured by their submersion in the deep waters of AI.
What is needed is critical analysis of how vectorisation reshapes the conditions of thought, culture, and collective life. This means also seeking to preserve spaces and practices where difficulty remains productive rather than optimised away. This includes teaching not only how to use AI systems but also how they work, what interests shape them, and what they cannot do. It also means cultivating forms of attention that resist the pull toward frictionless flow, and building institutional contexts where slowness, depth, and disagreement remain possible. Lastly, it means developing a theoretical vocabulary to name what is happening so that it can be contested and, most importantly, seen.
The Vector Generation will shape the future of computational culture. Whether that future serves progressive social values or a more extreme extractive accumulation regime depends not on rejecting vectorisation, which I think is neither possible nor desirable, but on refusing to let it proceed as if it were natural, necessary, and benign. The stranger's gift, Simmel reminds us, is objectivity. Perhaps the task is to become strangers to our own algorithmic condition, to cultivate the nearness and distance that would let us see what AI obscures. But this cultivation is not just individual but also collective, not just technical but also political. It begins with recognising that the vector need not determine our social trajectory.
Images in this article were generated in February 2026 using Google Gemini Pro 3.1 (Nano Banana Pro).
Notes
[1] McKenzie Wark, writing before the current AI milieu, analysed how communication technologies create "vectoral" power, which he described as the capacity to move information across space at speed (Wark 1994). He argued that vectors are not merely technical infrastructure but constitute new forms of social power. Those who control vectors, what Wark later termed the "vectoralist class", extract value from the flows they mediate. In 2004, Giles Moss and I drew on this analysis in our Libre Culture Manifesto, arguing that "vectorialists" were emerging as a new class formation alongside landlords and capitalists, extracting value from the "distribution, access and exploitation of creative works" (Berry and Moss 2004). Generation Vector inherits this analysis but also faces a qualitative shift: where Wark's vectors moved information between points, AI vectors transform information into points, rendering human language and thought manipulable within computational space. The vectoral power we identified has become obsessed with vectoral capture.
[2] Mannheim's concept of "stratification of experience" (Erlebnisschichtung) is highly relevant to my argument about intermediation. For Mannheim, early impressions form a primary stratum upon which all later experiences are layered and from which they receive their meaning. "Even if the rest of one's life consisted in one long process of negation and destruction of the natural worldview acquired in youth, the determining influence of these early impressions would still be predominant. For even in negation our orientation is fundamentally centered upon that which is being negated, and we are thus still unwittingly determined by it" (Mannheim 1952: 298).
[3] Simmel argues "another expression of this constellation is to be found in the objectivity of the stranger. Because he is not bound by roots to the particular constituents and partisan dispositions of the group, he confronts all of these with a distinctly 'objective' attitude, an attitude that does not signify mere detachment and nonparticipation, but is a distinct structure composed of remoteness and nearness, indifference and involvement." (Simmel 1971: 145). This connection of distance and nearness characterises the Vector Generation's relationship to technological history in that they are near enough to use AI fluently, distant enough from non-AI computation to find it strange.
[4] Temporal structure makes mediation apparent. For example, you consciously experience the work of transforming thought into language. LLM-mediated writing collapses this temporality. You type a prompt and receive draft output near-instantaneously. The stages of thinking-drafting-revising are pre-empted by algorithmic generation. The temporal gap where conscious mediation would occur, where you would grapple with how to phrase an idea, revise awkward expressions, or develop thought through writing, is closed.
[5] Democracy requires the ability to tolerate difference, to sit with the discomfort of contradiction, and to engage in productive agonism. A citizenry trained on frictionless algorithmic affirmation from AI may find itself increasingly unable to cope with the productive agonism of the public sphere. If private lives are populated by entities that exist solely to agree with us, disagreement in public life will appear not as a normal feature of pluralism but an irritation or aggression.
[6] This colonisation of intermediation represents what I call "extractive intermediation", the positioning of corporate infrastructure within the temporal and cognitive space between desire and satisfaction, and extracting value from this whilst presenting it as neutral (Berry 2025b). For the Vector Generation, this extraction is constitutive rather than intrusive.
[7] Sherry Turkle's early warnings about digital technologies and social development have new urgency in the age of generative AI (Turkle 2011). Where Turkle worried about expectations shaped by digital devices, the Vector Generation faces systems that actively generate the responses they expect, that learn from their preferences to better anticipate their desires, and that adapt to their patterns in ways earlier technologies could not.
[8] Whose semantics define the geometry? Training corpora are overwhelmingly English-language, predominantly Western, disproportionately drawn from the internet's already skewed demographics. When local languages, oral traditions, and non-Western knowledge systems are vectorised into this space, they are mapped onto coordinates defined by others. The embedding space is not neutral and it prescribes assumptions about what concepts are similar, what ideas cluster together, what meanings are proximate. Vectorisation thus risks becoming a new form of epistemic colonialism, where diverse ways of knowing are translated into, and thereby subordinated to, a computational geometry shaped by Global North assumptions. Generation Vector in the Global South inherits not just AI systems but the epistemological logics encoded within them.
Bibliography
Adorno, T. W. (2007) Negative Dialectics. Continuum.
Berry, D. M. (2014) Critical Theory and the Digital. Bloomsbury.
Berry, D. M. (2025a) 'Synthetic media and computational capitalism: towards a critical theory of artificial intelligence', AI & SOCIETY, 40(7), pp. 5257-5269.
Berry, D. M. (2025b) 'Intermediation: Rethinking Adorno's Vermittlung for Computational Capitalism', Stunlaw. Available at: https://stunlaw.blogspot.com
Berry, D. M. (2026a) 'Vector Theory', Stunlaw. Available at: https://stunlaw.blogspot.com/2026/02/vector-theory.html
Berry, D. M. (2026b) Artificial Intelligence and Critical Theory. MUP.
Berry, D. M. (forthcoming) 'Prompt Anxiety and the Algorithmic Politics of Uncertainty', AI & SOCIETY.
Berry, D. M. and Moss, G. (2004) 'The Libre Culture Manifesto'. Libre Society. Available at: https://tovarna.org/files0/active/2/8455-the_libre_culture_manifesto.pdf
Buck-Morss, S. (1992) 'Aesthetics and Anaesthetics: Walter Benjamin's Artwork Essay Reconsidered', October, 62, pp. 3-41.
Deleuze, G. and Guattari, F. (1987) A Thousand Plateaus: Capitalism and Schizophrenia. University of Minnesota Press.
Han, B. C. (2018) Saving Beauty. Polity.
Heidegger, M. (1962) Being and Time. Blackwell.
Kirschner, P. A. and De Bruyckere, P. (2017) 'The myths of the digital native and the multitasker', Teaching and Teacher Education, 67, pp. 135-142.
Mannheim, K. (1952) 'The Problem of Generations', in Essays on the Sociology of Knowledge. Routledge and Kegan Paul, pp. 276-322.
Prensky, M. (2001) 'Digital Natives, Digital Immigrants', On the Horizon, 9(5), pp. 1-6.
Simmel, G. (1971) 'The Stranger', in Levine, D. N. (ed.) Georg Simmel: On Individuality and Social Forms. University of Chicago Press, pp. 143-149.
Srnicek, N. (2019) Platform Capitalism. Polity.
Stiegler, B. (2010) Taking Care of Youth and the Generations. Stanford University Press.
Stiegler, B. (2016) Automatic Society: Volume 1: The Future of Work. Polity.
Turkle, S. (2011) Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
Wark, M. (1994) Virtual Geography: Living with Global Media Events. Indiana University Press.