The Computational Adjacent

David M. Berry


Stuart Kauffman's concept of the adjacent possible, developed to understand biological evolution, offers a productive starting point for theorising an idea I call the computational adjacent. Kauffman's notion describes how biological systems explore available possibilities through evolutionary processes, how organisms move into spaces of potential that exist just beyond their current state. His work on complexity and emergence has influenced thinking across multiple fields, from economics to technology studies, and the adjacent possible has been particularly generative for understanding innovation and creativity. The computational adjacent points to something different: it theorises how algorithmic systems generate and constrain potential futures. I am interested here not in applying his biological framework directly but in using it as a point of departure for thinking about computation's relationship to adjacent possibility. The computational adjacent names something specific, how technical architectures and their social embeddings actively shape what futures can emerge. This distinction matters. Biological evolution works through natural selection acting on random variation. Computational systems are built through human design choices shaped by economic and social forces.

The computational adjacent operates through a dialectic between technical affordances and social relations of production. New computational possibilities emerge from the ongoing development of programming languages, frameworks and platforms. These possibilities are not neutral or natural. They are actively shaped by corporate interests, economic imperatives and social power relations. Unlike biological possibility which expands through evolutionary exploration, the computational adjacent is actively constructed through technical development whilst constrained by economic forces. Bernard Stiegler's notion of grammatisation helps here, the way human cognitive and social capacities become inscribed into discrete technical systems (Stiegler 2016). What was once fluid becomes formalised, what was implicit becomes explicit.

The computational adjacent is not simply what computation could do. It is the horizon of what becomes thinkable, actionable, and liveable under conditions of pervasive algorithmic mediation.

The computational adjacent operates through what can be understood as algorithmic potentiality, the way computational systems create certain affordances whilst foreclosing others. This is not a neutral technical matter. Through the architectures of software and infrastructure, certain possibilities are systematically excluded or made impossible. The question of affordance has a complex history in technology studies, running from Gibson's ecological psychology through Norman's design theory to contemporary platform studies. I use the term here to emphasise how technical systems do not merely enable or constrain but actively shape what kinds of actions, thoughts and relations become possible. Affordance in this sense is not a property of objects but a relation between technical systems and social practices.

To understand how this functions requires examining the material specificity of digital technologies and their embeddedness within contemporary capitalism. Adjacent possibilities do not emerge naturally. They are actively produced through the development of programming languages, frameworks, platforms and infrastructures that enable particular forms of software creation whilst constraining others. Object-oriented programming, for instance, allows developers to treat code as modular components that can be assembled in prescribed ways. Application programming interfaces abstract implementation details to facilitate particular kinds of software interaction. These computational abstractions generate an expanding but constrained space of possibilities as developers build on existing code libraries, frameworks and platforms.
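The logic of abstraction described above can be made concrete with a minimal sketch. The class and method names below are purely illustrative, not any real platform's API: the point is simply that an interface exposes a small set of operations (here, `put` and `get`) against which developers compose software, whilst everything behind the interface, and every operation it does not name, is foreclosed to them.

```python
from abc import ABC, abstractmethod

class StorageService(ABC):
    """An interface abstraction: callers see only these operations."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(StorageService):
    """One concrete implementation. Its internals (a dictionary here; a
    distributed data centre in practice) are hidden behind the interface."""

    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def get(self, key: str) -> bytes:
        return self._store[key]

def archive(service: StorageService, documents: dict[str, bytes]) -> None:
    """Developers compose against the interface, not the implementation:
    the abstraction enables modular assembly whilst foreclosing any say
    over how or where the data is actually held."""
    for name, content in documents.items():
        service.put(name, content)

store = InMemoryStorage()
archive(store, {"essay.txt": b"the computational adjacent"})
print(store.get("essay.txt"))
```

The sketch shows both sides of the argument at once: the interface is what makes modular assembly possible, and it is simultaneously what places the implementation beyond the developer's reach.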

This process is far from neutral. Large technology corporations control key computational infrastructure that determines what kinds of possibilities can emerge. This represents a historically novel form of power, the capacity to shape the technical conditions under which social life increasingly unfolds. This concentration emerged rapidly. Before the mid-2000s, computational infrastructure remained relatively distributed. The shift to cloud computing, beginning with Amazon Web Services (AWS) in 2006, initiated a dramatic centralisation that accelerated through the 2010s. What we now call the computational adjacent is historically specific to this post-2006 conjuncture of platform capitalism.

The computational adjacent operates through what can be understood as infrasomatic systems, large-scale computational infrastructures that structure and enable particular forms of technical development whilst foreclosing others. Infrasomatic systems represent a fusion of technical infrastructure with social organisation, creating computational stacks that layer different forms of technical abstraction. These systems combine hardware, software, protocols and platforms into unified infrastructural arrangements that shape what kinds of computational possibilities can emerge. Infrastructure here is not background but constitutive.

Cloud computing platforms demonstrate how infrasomatic systems operate. Amazon Web Services provides the technical foundation for particular kinds of software services and applications through its distributed computing infrastructure. Originally developed as Amazon's internal computational capacity, AWS has become a dominant platform controlling vast swathes of internet infrastructure. This infrastructural power allows Amazon to shape the technical possibilities available to developers and organisations who rely on its services. Google Cloud Platform and Microsoft Azure create similar patterns of concentrated infrastructural control.

These infrasomatic systems concentrate control over computational infrastructure in corporate hands through both technical and economic means. Technical control operates through proprietary APIs, service agreements and platform governance that constrain how infrastructure can be used. Economic control stems from the massive capital requirements for building and maintaining global computing infrastructure, creating high barriers to entry that reinforce corporate monopolies. Indeed, the complex political economy of cloud computing deserves more attention than it typically receives in digital studies or media theory. The sheer scale of capital required to build and operate global data centre infrastructure means that meaningful alternatives to corporate cloud platforms face nearly insurmountable barriers. This is not a market failure to be corrected but a structural feature of how computational infrastructure has developed under capitalism.

The infrastructural power of these systems extends beyond mere technical provision. As Srnicek (2017) argues, platforms have become a new business model that enables ongoing value extraction through control of digital infrastructure. This creates what can be understood as platform capital, where corporations leverage their infrastructural position to capture and monetise the digital activities that depend on their systems. AWS generates billions in revenue not just from providing computing resources but from becoming essential infrastructure for contemporary digital capitalism.

Infrasomatic systems thus represent a key site where technical possibilities are actively shaped by corporate interests and economic imperatives. Their infrastructural logics determine what kinds of software development and deployment are possible, that is, the computational adjacent, whilst systematically foreclosing alternatives through both technical constraints and economic relations. The term builds on my earlier concept of infrasomatisation (Berry 2019), which describes how human cognitive and bodily capacities become incorporated into computational systems, combined with Susan Leigh Star's (1999) analysis of infrastructure as relational and embedded.

Understanding how the computational adjacent functions requires paying attention to multiple registers simultaneously. At a technical level, we must examine how programming languages, frameworks and platforms enable and constrain particular forms of software development. We must also analyse how these technical architectures are shaped by economic forces and social relations. This means continuing to develop a critical theory of the digital that can account for both technical specificity and social embeddedness (Berry 2014).

The computational adjacent also manifests in algorithmic imaginaries, the ways we conceive of and project possible futures for computational systems. These imaginaries are not unlimited but constrained by existing technical architectures and social relations. Machine learning systems enable certain kinds of natural language processing whilst foreclosing others based on training data and computational architecture. The possibilities that emerge from large language models, for instance, are shaped by the corpora on which they were trained, the architectural choices that structure their operations, and the corporate imperatives that govern their deployment. The current fascination with large language models risks obscuring the longer history of computational constraint on human possibility. ChatGPT and its likely successors represent one moment in this history, not its culmination. The computational adjacent names a more general condition, the way algorithmic systems have come to structure what kinds of futures can be imagined and enacted.

This points to the marriage of technical systems with capitalist imperatives of profit and control. The computational adjacent emerges from ongoing software development, but these possibilities are channelled in directions that serve corporate interests and enable value extraction increasingly mediated through proprietary computation. This raises key questions about human agency and autonomy under conditions of increasing algorithmic mediation. As computational systems become more sophisticated and ubiquitous, they structure the horizon of possibilities available to human actors. This creates the danger of a computational ideology, the tendency to see computation as an independent force rather than human-created infrastructure shaped by particular interests and relations.

To resist computational ideology requires developing what I term critical digital theory, the capacity to understand and critique computational systems in their technical specificity and social embeddedness. Critical digital theory builds on and extends earlier traditions of media critique and technology studies whilst attending to the particular characteristics of computational systems. It requires both technical literacy and critical theory.

What would such critique actually do? It would trace how specific technical architectures emerged from particular conjunctures of economic interest and design choice. It would identify points of contestation where alternatives remain possible. It would connect local technical decisions to broader patterns of accumulation and control. And it would support collective action to reshape the computational adjacent, whether through policy intervention, alternative infrastructure development, or refusal.

The concept of the computational adjacent points to the importance of what can be termed algorithmic critique, the capacity to understand and contest how computational systems shape social possibility. This means developing new forms of critical practice that can engage with both technical specificity and social relations. Such analysis must operate at multiple levels simultaneously, examining specific technical developments whilst understanding broader patterns of computational capital. It requires theoretical and conceptual developments that can account for how computational possibilities emerge from and are constrained by contemporary capitalism.

The task ahead lies in developing theory adequate to the collective governance of computational systems and to critiquing how they shape social possibility. This means creating new forms of algorithmic critique that can contest corporate control over digital infrastructure whilst enabling alternative forms of technical development and social organisation. The computational adjacent helps us understand this crucial political project for the algorithmic age. What possibilities computation opens and forecloses is not a technical question to be left to engineers. It is a political question about what kinds of futures we can collectively create.

Bibliography

Berry, D. M. (2014) Critical Theory and the Digital. Bloomsbury.

Berry, D. M. (2019) ‘Against infrasomatization: Towards a critical theory of algorithms’, in D. Bigo, E. Isin, and E. Ruppert (eds) Data Politics: Worlds, Subjects, Rights. Routledge.

Kauffman, S. A. (2019) A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Srnicek, N. (2017) Platform Capitalism. Polity.

Star, S. L. (1999) ‘The Ethnography of Infrastructure’, American Behavioral Scientist, 43(3), pp. 377–391. Available at: https://doi.org/10.1177/00027649921955326.

Stiegler, B. (2016) Automatic Society: The Future of Work. Polity.
