On Compute


Today, the condition of possibility for the milieu of contemporary life is compute. That is, compute as the abstract unit of computation, both as dunamis (potentiality) and energeia (actuality), and thus as the condition of possibility for the question of the in-itself and the for-itself. Compute as a concept exists in two senses: as the potential contained in a computational system or infrastructure, and as the actuation of that potential in actual work. Whilst always already a theoretical limit, compute is also the material that may be brought to bear on a particular computational problem – and many problems are now indeed computational problems. The theoretical question posed by compute is thus directly relevant to the study of software, algorithms and code, and therefore to the contemporary condition in computal society, because it represents the moment of potential in the transformation of inert materials into working systems. It is, literally, the computational unit of "energy" that is supplied to power the algorithms of the world's systems. Compute, then, is a notion of abstract computation, but it is also the condition of possibility for, and the potential actuation of, that reserve of computational power in a particular task.

Compute becomes a key noetic means of thinking through the distribution of computation in the technological imaginary of computal society. Yet this noetic function operates under conditions of systematic obscurity. We are asked to think through compute whilst the material conditions of its production and distribution remain deliberately opaque, whilst the infrastructure that enables computational thought hides itself from computational thought. This creates a reflexive trap whereby the tools we use to analyse computation are themselves computational, dependent on the very compute resources whose political economy we seek to interrogate. 

In a highly distributed computational environment, such as we live in today, compute is itself distributed around society: carried in pockets, accessible through networks and wireless connections, and pooled in huge computational clouds. Compute, then, is not only abstract but lived and enacted in everyday life; it is part of the texture of life, not just as a layer upon life but as a structural possibility for, and mediation of, such living. But crucially, compute is also an invisible factor in society, partly due to the obfuscation of the technical conditions of its production, and partly due to the necessity of an interface, a surface, through which to interact with it. Compute as a milieu is therefore never seen as such, even as it surrounds us, constantly interacting with and framing our experiences. Indeed, Stiegler (2009) writes that,
Studying the senses, Aristotle underlines in effect that one does not see that, in the case of touching, it is the body that forms the milieu, whereas, for example, in the case of sight, the milieu is what he calls the diaphane. And he specifies that this milieu, because it is that which is most close, is that which is structurally forgotten, just as water is for a fish. The milieu is forgotten, because it effaces itself before that to which it gives place. There is always already a milieu, but this fact escapes us in the same way that "aquatic animals," as Aristotle says, "do not notice that one wet body touches another wet body" (423ab): water is what the fish always sees; it is what it never sees. Or, as Plato too says in the Timaeus, if the world was made of gold, gold would be the sole being that would never be seen – it would not be a being, but the inapparent being of that being, appearing only in the occurrence of being, by default (Stiegler 2009: 13-14).
This structural forgetting operates with particular intensity under computational conditions. The fish cannot see the water, but at least water has consistent physical properties. Compute, by contrast, shifts and reconfigures itself constantly through software updates, algorithmic adjustments, and infrastructure changes that remain opaque to users. We live within computational systems that mediate increasingly large domains of social life whilst their operations retreat further from perception or understanding. The milieu is not simply forgotten but actively obscured through layers of abstraction that separate interface from infrastructure, user experience from underlying computation. This presents a distinctive challenge for critical analysis because the object of critique continuously transforms itself in ways that escape direct observation.

In this sense, compute is the structural condition of possibility that makes the milieu possible by giving it place, inasmuch as it creates those frameworks within which technicity takes place. The question of compute, both as a theoretical concept and as a technical definition, is therefore crucial for thinking through the challenge of computation more broadly. But in a rapidly moving world of growing computational power, comparative analysis of computational change is difficult without a metric by which to compare different moments historically. This is made harder still by the fact that compute is not simply the speed and bandwidth of a processor as such, but includes a number of related technical considerations such as the speed of the underlying motherboard, the memory (RAM), the graphics processor(s), the storage system and so forth.
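To see why, consider what a single composite metric would have to do. The sketch below is purely illustrative – the subsystem scores, the reference machine and the method of combination are invented – but it follows the broad approach of benchmark suites such as SPEC, which fold per-component ratios against a reference machine into a geometric mean. Even this tidy arithmetic shows how much is decided before any "measurement" takes place: which subsystems count, what the reference is, and how divergent components are compressed into one number.

    from math import prod

    # Hypothetical subsystem scores (higher is better) for a reference
    # machine and a machine under test; all figures are invented.
    reference = {"cpu": 100.0, "memory": 100.0, "gpu": 100.0, "storage": 100.0}
    candidate = {"cpu": 240.0, "memory": 150.0, "gpu": 410.0, "storage": 95.0}

    def composite_compute_index(scores, baseline):
        """Geometric mean of per-subsystem ratios against a reference machine."""
        ratios = [scores[k] / baseline[k] for k in baseline]
        return prod(ratios) ** (1 / len(ratios))

    print(round(composite_compute_index(candidate, reference), 2))
    # 1.94 – the slow storage subsystem drags down a machine whose GPU is four
    # times faster, illustrating that "compute" has no single uncontested measure.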

Yet this distributed computational environment hides asymmetries in access to and control over compute resources. Whilst consumer devices carry modest computational capacity, the bulk of compute power concentrates in vast data centres controlled by a handful of corporations and state actors. Amazon, Google, and Microsoft together control the majority of cloud computing infrastructure, effectively monopolising the computational substrate upon which contemporary digital life depends. This concentration of compute represents the material foundation for extracting value from the billions of smaller computational devices that connect to these centralised systems. The smartphone in your pocket possesses significant processing power, but that power remains subordinated to the logic of platforms and clouds that orchestrate computational work across distributed networks. Compute becomes simultaneously ubiquitous and monopolised, present everywhere yet controlled by few.

This monopolisation carries material consequences beyond market concentration. Data centres consume vast quantities of electrical power, with estimates from 2012 suggesting that global computing infrastructure accounts for between roughly 1.1 and 1.5 per cent of worldwide electricity consumption. Google's data centres alone reportedly drew around 260 million watts in 2010, enough power to supply a city of some 200,000 inhabitants. The abstraction of compute into hourly rental units or virtual machine instances obscures these energy flows, presenting computational capacity as an immaterial resource available on demand. Yet every algorithmic operation, every instance spun up in a cloud environment, requires burning fossil fuels or diverting energy from other uses. Compute appears as pure potentiality, limitless and ethereal, whilst depending on vast material infrastructures of generation, transmission, and cooling – in no way is it a metaphysical operation. The environmental externalities of computation are excluded from the pricing of compute as a commodity, creating a subsidy whereby cloud providers profit whilst users remain insulated from the ecological consequences of their computational consumption.
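A rough calculation using only the figures cited above makes the scale concrete (the per-inhabitant comparison is a heuristic for scale, not a claim about any particular city's actual consumption):

    # Back-of-envelope arithmetic on the 2010 figure cited above.
    power_watts = 260e6       # ~260 million watts (260 MW)
    inhabitants = 200_000     # the city-sized comparison used in the text
    hours_per_year = 8_760

    watts_per_person = power_watts / inhabitants                   # 1,300 W each
    terawatt_hours_per_year = power_watts * hours_per_year / 1e12  # ~2.3 TWh

    print(watts_per_person, round(terawatt_hours_per_year, 2))
    # 1300.0 W per inhabitant; roughly 2.28 TWh a year if drawn continuously.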

Compute, then, is a relative concept and needs to be thought about in relation to previous iterations, and this is where benchmarking has become an important part of the assessment of compute – for example SPECint, a benchmark specification for a processor's integer processing power maintained by the Standard Performance Evaluation Corporation (SPEC 2014). Another, GeekBench (2013), scores compute against a baseline score of 2500, which is the score of an Intel Core i5-2520M @ 2.50 GHz. In contrast, SYSmark 2007 attempts to bring "real world" applications into the measurement by including a number of ideal systems that run canned processing tasks (SYSmark 2007). As can be seen, comparing compute means navigating a spectrum of benchmarks, each testing a different working definition of processing capacity. It is also unsurprising that, as a result, many manufacturers create custom modes within their hardware to "game" these benchmarks, which further obfuscates these definitions and comparators. For example,
Samsung created a white list for Exynos 5-based Galaxy S4 phones which allow some of the most popular benchmarking apps to shift into a high-performance mode not available to most applications. These apps run the GPU at 532MHz, while other apps cannot exceed 480MHz. This cheat was confirmed by AnandTech, who is the most respected name in both PC and mobile benchmarking. Samsung claims “the maximum GPU frequency is lowered to 480MHz for certain gaming apps that may cause an overload, when they are used for a prolonged period of time in full-screen mode,” but it doesn’t make sense that S Browser, Gallery, Camera and the Video Player apps can all run with the GPU wide open, but that all games are forced to run at a much lower speed (Schwartz 2013).
This gaming of benchmarks reveals a broader pattern whereby computational systems resist legibility precisely at the moments when comparison or verification becomes most important. The benchmark exists to create commensurability, to establish standards by which different systems might be measured against each other. When manufacturers undermine these standards through custom modes and whitelisted applications, they erode the very possibility of informed comparison. This is not simply corporate deception but symptomatic of how compute as a concept resists stabilisation. Every attempt to pin down computational capacity through measurement encounters the reality that software can reconfigure hardware performance on the fly, that virtualisation can slice physical processors into multiple virtual machines, and that workloads vary so dramatically that no single benchmark captures actual performance. The metrics fracture and multiply, each purporting to measure compute whilst actually measuring something more specific and situated. We confront here a key challenge for critical analysis of computational systems.
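A small sketch makes the mechanics of the cheat concrete. The whitelist, clock speeds and scores below are invented for illustration, mimicking the pattern reported by Schwartz (2013) rather than reproducing any vendor's actual firmware logic; the point is simply that the number meant to guarantee commensurability – a score expressed against a fixed baseline such as GeekBench's 2500 – is produced under conditions the firmware itself quietly varies.

    # Illustrative only: invented figures mimicking the benchmark-whitelisting
    # pattern described above, not any vendor's real firmware behaviour.
    BASELINE_SCORE = 2500                   # GeekBench-style reference score
    BENCHMARK_WHITELIST = {"BenchmarkApp"}  # hypothetical privileged apps

    def gpu_clock_mhz(app_name: str) -> int:
        """Return the GPU clock the (fictional) firmware grants a given app."""
        return 532 if app_name in BENCHMARK_WHITELIST else 480

    for app in ("BenchmarkApp", "SomeGame"):
        clock = gpu_clock_mhz(app)
        raw_score = 3000 * clock / 532      # pretend scores scale with clock speed
        print(app, clock, round(raw_score / BASELINE_SCORE, 2))
    # BenchmarkApp: 532 MHz, 1.2x baseline; SomeGame: 480 MHz, ~1.08x baseline.
    # The benchmark measures a mode of the device that ordinary software never sees.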

On a material register, the unit of compute can be thought of as roughly the maximum potential processing capacity of a processing chip running for a notional hour. In today's softwarized landscape, of course, processing power has itself become a service, and hence is more often framed in terms of virtual machines (VMs) rather than actual physical machines – a number of compute instances can be realised on a single physical processor using sophisticated software to manage the illusion. Amazon itself defines compute through an abstraction of actual processing as follows,
Transitioning to a utility computing model fundamentally changes how developers have been trained to think about CPU resources. Instead of purchasing or leasing a particular processor to use for several months or years, you are renting capacity by the hour. Because Amazon EC2 is built on commodity hardware, over time there may be several different types of physical hardware underlying EC2 instances. Our goal is to provide a consistent amount of CPU capacity no matter what the actual underlying hardware (Amazon 2013).
Indeed, Amazon tends to discuss compute in relation to its own unit, the EC2 Compute Unit (ECU), to enable this discretisation.[1] Google also uses an abstract quantity and measures "minute-level increments" of computational time (Google 2013). The key point is that an instance provides a predictable amount of dedicated compute capacity and is, as such, a temporal measure of computational power, albeit one defined rather loosely in the technical documentation.
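The arithmetic of this discretisation is deliberately simple, which is precisely the point: compute is sold as interchangeable units of an abstract capacity unfolding in time. The sketch below uses hypothetical instance ratings and a hypothetical price, not Amazon's actual catalogue, to show how quickly the underlying hardware disappears from the calculation.

    # Hypothetical discretisation of compute into ECU-hours; the ratings and
    # the price are illustrative, not Amazon's actual figures.
    instances = [
        {"name": "small",  "ecu": 1, "hours": 720},   # one month, always on
        {"name": "xlarge", "ecu": 8, "hours": 48},    # a two-day batch job
    ]
    price_per_ecu_hour = 0.05    # hypothetical rate in dollars

    total_ecu_hours = sum(i["ecu"] * i["hours"] for i in instances)
    print(total_ecu_hours, total_ecu_hours * price_per_ecu_hour)
    # 1104 ECU-hours costing $55.20 – the physical processors never appear in the sum.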

Consider the Bitcoin mining boom that emerged after 2009. Graphics processing units originally designed for rendering video games became sought-after computational resources, their parallel processing architecture suited to solving the cryptographic puzzles that generate cryptocurrency. By 2013, GPU shortages had followed as mining operations commandeered compute capacity, pricing out gamers and researchers. Here we see compute's fungibility made visceral. The same hardware that renders game environments or processes scientific simulations could be redirected, entirely through software, to participate in financial speculation. Yet this fungibility operated asymmetrically. Individual miners with desktop machines found themselves outcompeted by industrial operations, where cheap electricity and purpose-built facilities concentrated computational power. The distribution of compute determined who could participate in supposedly decentralised cryptocurrency networks. Compute capacity became both the medium and the barrier to entry, simultaneously appearing to democratise access whilst reconcentrating power in the hands of those controlling the largest pools of processing capability.
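The "cryptographic puzzle" at stake here is conceptually simple: repeatedly hash a candidate block header with a changing nonce until the double SHA-256 digest falls below a network-set target. The sketch below is a toy version of that search, with a simplified header and a deliberately easy target so that it terminates quickly; real mining differs in header format, difficulty and sheer scale, which is exactly why it commandeers GPUs and electricity at industrial volume.

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        """Bitcoin-style double SHA-256 of some data."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def toy_mine(header: bytes, difficulty_bits: int = 16) -> int:
        """Find a nonce whose double-SHA-256 digest falls below the target.

        A toy proof-of-work search: simplified header and deliberately low
        difficulty, so the loop finishes in moments rather than consuming
        a data centre's worth of electricity.
        """
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = double_sha256(header + nonce.to_bytes(8, "little"))
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    print(toy_mine(b"toy block header"))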

Compute measured temporally, through hourly rental rates or minute-level increments, marks a transformation in how computational labour gets organised and valued. When Amazon sells compute by the hour, it commodifies processing time in ways that parallel the commodification of human labour time under industrial capitalism. Yet this parallel conceals a difference. Human labour time, however exploited, retains biological limits and generative unpredictability. Computational labour time can be precisely divided, instantly reallocated, and perfectly replicated across identical virtual machines. The EC2 Compute Unit abstracts away not just the specific hardware but the very materiality of computation, presenting processing capacity as infinitely fungible units of temporal work. This amounts to a proletarianisation of computation itself, whereby computational systems become deskilled through standardisation and abstraction, reduced to interchangeable units measured in time. The implications extend beyond technical infrastructure to questions about how computational mediation reshapes human cognitive labour when thought itself increasingly depends on rented processing time in distant data centres.
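How that division of time works in practice can be sketched in a few lines: a continuous run of seconds is metered and then rounded up into discrete billable increments – hourly in the Amazon model discussed above, minute-level in Google's. The rounding rule below is a hypothetical simplification, not either company's actual billing logic, but it shows how the granularity of the unit, rather than the work performed, determines what is paid for.

    import math

    def billable_units(runtime_seconds: float, increment_seconds: int) -> int:
        """Round a continuous runtime up to whole billable increments.

        A hypothetical simplification of hourly versus minute-level metering,
        not any provider's actual billing rules.
        """
        return math.ceil(runtime_seconds / increment_seconds)

    runtime = 61 * 60 + 5                    # a job running 61 minutes, 5 seconds
    print(billable_units(runtime, 3600))     # hourly metering  -> 2 hours billed
    print(billable_units(runtime, 60))       # minute metering  -> 62 minutes billed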

The question of compute is then a question of the origin of computation more generally, but also of how the infrastructure of computation can be understood both qualitatively and quantitatively. Indeed, it is clear that the quantitative changes that greater compute capacity introduces make possible the qualitative experience of computation that we increasingly take for granted in our use of a heavily software-textured world.

To talk about software, processes, algorithms and code without a corresponding understanding of compute capacity is to mistake effects for causes, to analyse the visible whilst ignoring the infrastructural substrate that makes such visibility possible. Compute represents the material foundation and theoretical limit of computational culture, the condition of possibility that enables algorithmic mediation to operate at scales and speeds that transform qualitative experience. Yet because compute functions as milieu rather than object, as that which is structurally forgotten even whilst surrounding us, critical analysis faces the challenge of making visible what computational capitalism requires to remain obscure. The concentration of compute in corporate clouds, the obfuscation of processing through virtualisation and benchmarking games, and the commodification of computational time into rental units all point towards the need for critical frameworks capable of addressing both the technical specificity and the political economy of computational infrastructure. Without such frameworks, we risk remaining like Aristotle's fish, unable to notice that wet bodies touch wet bodies, unable to see the computational water within which we swim.


Notes

[1] Amazon used to define the ECU directly, stating: "We use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation" (Berninger 2010). They appear to have stopped using this description in their documentation (see Amazon 2013). 

Bibliography

Amazon (2013) Amazon EC2 FAQs, accessed 05/01/2014, http://aws.amazon.com/ec2/faqs/#What_is_an_EC2_Compute_Unit_and_why_did_you_introduce_it

Berninger, D. (2010) What the heck is an ECU?, accessed 05/01/2014, http://cloudpricecalculator.com/blog/hello-world/

GeekBench (2013) GeekBench Processor Benchmarks, accessed 05/01/2014, http://browser.primatelabs.com/processor-benchmarks

Google (2013) Compute Engine — Google Cloud Platform, accessed 05/01/2014, https://cloud.google.com/products/compute-engine/

Schwartz, R. (2013) The Dirty Little Secret About Mobile Benchmarks, accessed 05/01/2014, http://mostly-tech.com/tag/geekbench/

SPEC (2014) The Standard Performance Evaluation Corporation (SPEC), accessed 05/01/2014, http://www.spec.org

Stiegler, B. (2009) Acting Out, Stanford University Press.

SYSmark (2007) SYSmark 2007 Preview, accessed 05/01/2014, http://bapco.com/products/sysmark-2007#details-product-info
