What is Compute?

Compute has emerged as a crucial technical concept for understanding contemporary digital infrastructures and their political economy (Berry and Fagerjord 2017: 16). The notion represents an abstraction of computational capacity that can be dynamically allocated, measured, and priced across distributed systems. Unlike traditional notions of processing power or computational resources, compute denotes “an abstract unit of computation which tends to be priced at a particular level by cloud server companies so one can purchase a certain capacity of computation” (Berry 2023a). The concept of compute emerges from what we might call the infrastructural turn in digital technologies, particularly the move towards cloud computing and distributed systems. As I have previously argued, these systems create what I call infrasomatisations which rely on “a complex fusion of endosomatic capacities and exosomatic technics” (Berry 2023a) that make possible new forms of algorithmic governance. Compute, in this context, represents the commodification of computer processing capacity itself.

Technically, compute is measured through various metrics that attempt to quantify processing capacity. The major cloud providers each implement their own standards for measurement. Amazon Web Services utilises Elastic Compute Units (ECUs), whilst Microsoft Azure deploys Azure Compute Units (ACUs), and Google Cloud Platform operates with Google Compute Engine Units (GCEUs). Virtual CPUs (vCPUs) are also commonly used across providers. Each of these represents an attempt to standardise computational capacity across different hardware configurations and architectures. For example, one AWS ECU provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor (Amazon Web Services 2023). This standardisation enables fungibility in computation by treating processing power as a commodity that can be traded and allocated dynamically – in effect creating a market in computational power (see Amazon Web Services 2023, Google Cloud Platform 2023, Kubernetes 2023 and Microsoft Azure 2023).
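To illustrate how such a standardisation works in practice, the following sketch normalises a machine's capacity against an ECU-style baseline. The baseline figure is taken from the 1.0-1.2 GHz reference processor mentioned above, but the midpoint value, the linear scaling, and the example machine are all illustrative assumptions rather than any provider's published method.

```python
# Hypothetical sketch of compute unit normalisation. The 1.0-1.2 GHz
# 2007 Opteron/Xeon ECU reference is from the text; the midpoint and
# linear clock-times-cores scaling are illustrative assumptions only.
BASELINE_GHZ = 1.1  # midpoint of the 1.0-1.2 GHz ECU reference processor

def to_baseline_units(clock_ghz: float, cores: int) -> float:
    """Express a machine's capacity as multiples of the ECU-style baseline."""
    return (clock_ghz * cores) / BASELINE_GHZ

# A hypothetical 4-core 2.2 GHz virtual machine:
print(round(to_baseline_units(2.2, 4), 2))  # 8.0
```

What matters politically is not the particular conversion factor but that such conversions exist at all: they are what make heterogeneous hardware fungible and therefore tradable.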

The concept of compute is particularly important because it enables an edge-to-core-to-cloud pipeline for efficiently processing data, moving data to algorithms situated where processing power is most readily available (Berry 2023a). This creates a new computational geography where “edge devices, such as smartphones, feed data into core (on-premises large computing data centres) for algorithmic processing, or to cloud (off-premises shared data servers) to run AI models or complex operations” (Barroso et al 2013; Berry 2023a).

The quantum of compute varies depending on the provider and use case, but generally represents the minimum unit of computational capacity that can be allocated. This measurement combines processing power in terms of CPU cycles, memory allocation requirements, network bandwidth consumption, storage capacity needs, and time duration. These elements combine to create a notion of compute capability, which can be priced and traded as a commodity. The importance of this abstraction cannot be overstated, as it enables what I have described as rent-seeking behaviour where infrastructure providers can “extract rent or tolls to pass data and calculations around a system” (Berry 2023a).

Power consumption represents a crucial dimension of compute that is often overlooked in technical discussions. Data centres, which provide the physical infrastructure for compute operations, consume vast amounts of electricity. A single data centre can use the equivalent electricity of a small city. The energy requirements vary dramatically based on computational load, cooling requirements, and environmental conditions. Machine learning operations, particularly training large models, are especially energy-intensive. For instance, training a single large language model can produce carbon emissions equivalent to the lifetime emissions of five average American cars (Strubell et al. 2019).

These power demands are intimately connected to networking infrastructure. The internet itself represents a vast compute infrastructure, with data centres, submarine cables, and terrestrial networks all requiring significant compute capacity to manage data flows. Internet Exchange Points (IXPs) serve as crucial nodes where different networks interconnect, each requiring substantial compute resources to handle routing and switching operations. The relationship between compute and networks is particularly visible in Content Delivery Networks (CDNs), which distribute compute capacity geographically to optimise data delivery and processing.

Contemporary examples of compute in practice extend across multiple domains. Cloud computing services from major providers like AWS, Azure, and Google Cloud represent the most visible manifestation, but compute also underpins edge computing devices that process data closer to its source. Data centres constantly allocate processing capacity across different clients and workloads, while machine learning systems require intensive compute resources for training and inference. Real-time stream processing, increasingly crucial for social media and financial systems, demands consistent and reliable compute allocation.

The term compute is particularly useful because it captures the way in which computational capacity has been transformed into a commodity that can be bought, sold, and traded. This commodification creates a form of computational capital, that is, the ability to control and allocate processing power across distributed systems. As I have argued, this tends towards “monopoly or oligopolistic behaviour” creating the conditions for “monopoly rent on the infrastructure” (Berry 2023a, 2023b).

The calculation of compute can be examined through both its technical measurement and its transformation into economic value (see Greenberg et al 2009). For example, a formula for a Compute Unit (CU), which represents a standardised measure of computational capacity, might be expressed in the following manner,

CU = (CPU * t) + (RAM * t) + (I/O * t) + (Network * t)

Where:

CPU is the processing capacity (in cores/cycles)

RAM is the memory allocation (in GB)

I/O is the storage operations (in IOPS)

Network is the bandwidth (in Gb/s)

t is the time duration (usually measured in hours)
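A minimal sketch of this formula, assuming the units given above (cores, GB, IOPS, Gb/s and hours), might look as follows:

```python
# Compute Unit (CU) formula as given above: each resource quantity is
# weighted by the time duration t and the results are summed.
def compute_units(cpu_cores: float, ram_gb: float, io_iops: float,
                  network_gbps: float, t_hours: float) -> float:
    return ((cpu_cores * t_hours) + (ram_gb * t_hours)
            + (io_iops * t_hours) + (network_gbps * t_hours))
```

Note that the formula simply adds heterogeneous quantities (cores, GB, IOPS, Gb/s) into a single number, which is precisely what makes the abstraction significant: the commensuration of incommensurable resources into one tradable unit is a pricing decision, not a physical fact.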

This technical measurement is then transformed into economic value through a Compute Price Function (CPF), such as the following,

CPF = CU * (Base Rate + Peak Multiplier + Location Factor) * Market Demand Coefficient

This economic abstraction creates a form of computational rent through the ability to extract value from the control and allocation of compute resources. The political economy of compute emerges in the way these calculations are deployed within computational capitalism. For example, a machine learning training operation might require the following hardware to create its model: 8 CPU cores, 64GB RAM, 500 IOPS, 10 Gb/s network, running for 24 hours. The Compute Unit calculation would therefore use the following numbers,

Compute Units = (8 * 24) + (64 * 24) + (500 * 24) + (10 * 24) = 13,968 compute units

The Compute Price Function might then be:

CPF = 13,968 * ($0.05 + 1.5 + 1.2) * 1.3 = $49,935.60
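The worked example can be reproduced as a short calculation, using the figures given above:

```python
# Reproducing the worked example above; all figures are from the text.
cu = (8 * 24) + (64 * 24) + (500 * 24) + (10 * 24)
base_rate, peak_multiplier, location_factor = 0.05, 1.5, 1.2
market_demand_coefficient = 1.3
cpf = cu * (base_rate + peak_multiplier + location_factor) * market_demand_coefficient
print(cu)             # 13968
print(round(cpf, 2))  # 49935.6
```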

This economic transformation translates a technical process into a fungible financial asset that can generate income. The abstraction of compute into standardised units results in a kind of computational financialisation through the creation of markets for processing power. The control over compute resources can be thought of as a form of power, as providers can adjust pricing by (1) adjusting the fundamental cost of compute, (2) charging premium rates during high-demand periods, (3) varying prices based on data centre location, and (4) responding to market demand and conditions through elastic pricing.
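The effect of these pricing levers can be illustrated with a hypothetical comparison of an off-peak and a peak price for the same workload (all rates here are assumptions for illustration, not real provider tariffs):

```python
# Hypothetical illustration of pricing levers (2) and (4): the same
# 13,968 CU workload priced off-peak versus at peak demand. The rates
# are illustrative assumptions, not real provider tariffs.
def price(cu, base_rate, peak_multiplier, location_factor, demand):
    return cu * (base_rate + peak_multiplier + location_factor) * demand

off_peak = price(13968, 0.05, 1.0, 1.2, 1.0)  # levers (2) and (4) at rest
peak = price(13968, 0.05, 1.5, 1.2, 1.3)      # premium period, high demand
print(round(peak / off_peak, 2))  # roughly 1.59x mark-up for identical work
```

The same computation, in other words, can command very different prices depending on when and where it runs, which is exactly the elasticity that makes compute a site of rent extraction.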

This system of calculation and pricing reflects a political economy built on the control and allocation of computational resources resulting in a new form of computational capital. As I have argued, these systems tend towards monopoly or oligopolistic behaviour (Berry 2023a), as the infrastructure requirements for providing compute at scale create significant barriers to entry.

The implications are profound. As compute becomes increasingly central to economic and social life, control over these resources represents a new form of capital. The ability to calculate, price, and allocate compute becomes a form of power that shapes possibilities for thought and action in contemporary society. This is particularly visible in the edge-to-core-to-cloud pipeline (Berry 2023a), where compute resources are distributed across different geographical and technical locations. The environmental implications of this political economy are also significant (Vonderau 2019). If we include power consumption in the calculation, it might take the following form,

Energy Cost = Compute Units * (kWh/CU) * ($/kWh)

This helps reveal the material basis of compute capitalism in energy consumption and environmental impact. A single large-scale compute operation might therefore easily incur energy costs of the following order,

Energy Cost = 13,968 * 0.5 kWh/CU * $0.12/kWh = $838.08
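This energy calculation can be reproduced as follows, using the assumed rates from the example (0.5 kWh per compute unit and $0.12 per kWh):

```python
# Reproducing the energy cost calculation above.
cu = 13968
kwh_per_cu = 0.5      # assumed energy intensity per compute unit
price_per_kwh = 0.12  # assumed electricity price in $/kWh
energy_cost = cu * kwh_per_cu * price_per_kwh
print(round(energy_cost, 2))  # 838.08
```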

This calculation reveals the way in which computational processes are fundamentally constrained by energy requirements and environmental impacts. The political economy of compute thus extends beyond mere technical calculation to encompass questions of sustainability, resource allocation, and environmental justice.

Understanding these calculations and their implications is crucial for developing what I have called a critical theory of the digital that can address both the technical specificity of computational systems and their broader social and political implications. Only by paying attention to how compute is calculated, commodified, and controlled can we hope to develop alternative models that might support more democratic and sustainable forms of computational infrastructure. For example, we might want to restructure the compute industry on a nationalised basis, with the supply of compute regulated and controlled by the government to enable democratic access to a precious and increasingly important resource. Another possibility would be to create a compute company, which we might call GB Compute, which creates a pricing foundation for compute supply in competition with private operators, but which prevents oligopolistic pricing or people being priced out of their ability to access and use cloud compute processing. Indeed, this might be useful for sharing compute capacity equitably across the public sector, for example, across the NHS, the welfare system, schools, universities, and the police.

Understanding compute is therefore crucial for understanding both the technical specificity of computational systems and their broader social and political implications. The concept helps us appreciate how computational capacity has been abstracted and commodified, creating new forms of digital capital and control. 


Blogpost by David M. Berry

** Headline image generated using DALL-E in November 2024. The prompt used was: "Draw a representation of - Compute has emerged as a crucial technical concept for understanding contemporary digital infrastructures and their political economy"

Bibliography

Amazon Web Services (2023) Amazon EC2 Instance Types. https://aws.amazon.com/ec2/instance-types/

Barroso, L. A., Clidaras, J. and Hölzle, U. (2013) The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Morgan & Claypool.

Berry, D. M. (2023a) The Explainability Turn, Digital Humanities Quarterly, 17(2).

Berry, D. M. (2023b) The Limits of Computation: Joseph Weizenbaum and the ELIZA Chatbot, Weizenbaum Journal of the Digital Society, 3(3), pp. 1-24.

Berry, D. M. and Fagerjord, A. (2017) Digital Humanities: Knowledge and Critique in a Digital Age, Polity Press.

Google Cloud Platform (2023) Machine Types. https://cloud.google.com/compute/docs/machine-types 

Greenberg, A., Hamilton, J., Maltz, D. A. and Patel, P. (2009) The Cost of a Cloud: Research Problems in Data Center Networks, ACM SIGCOMM Computer Communication Review, 39(1), pp. 68-73. https://dl.acm.org/doi/10.1145/1496091.1496103

Kubernetes (2023) Resource Units in Kubernetes, Kubernetes Documentation. Available at: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

Microsoft Azure (2023) Virtual Machine Series. https://azure.microsoft.com/en-us/pricing/details/virtual-machines/series/ 

Strubell, E., Ganesh, A. and McCallum, A. (2019) ‘Energy and Policy Considerations for Deep Learning in NLP’. arXiv. https://doi.org/10.48550/arXiv.1906.02243

Vonderau, A. (2019) Scaling the Cloud: Making State and Infrastructure in Sweden, Ethnos. https://www.tandfonline.com/doi/abs/10.1080/00141844.2018.1471513

