Reflections on Method for Critical Code Studies

Critical code studies offers an important intervention for understanding algorithms by studying source code through close reading techniques (Marino 2006, 2020). Conceptualising code as a cultural and technical text, this article develops an analytical framework which addresses three different levels for undertaking critical code studies. I call this a constellational analysis. This framework builds upon recent work in software studies while suggesting new methodological tools for analysing how code functions within a particular social context.

The emergence of critical code studies should be understood within broader developments in digital humanities and software studies (see Berry 2012). Early approaches focused primarily on technical analysis or cultural interpretation, lacking systematic methods for connecting these different dimensions. The field matured through several key phases: an initial focus on source code as text (Marino 2006), analysis of software as cultural form (Fuller 2003; Chun 2011), source code as mechanism (Berry 2011) and investigation of algorithmic governance (Galloway 2004; Berry 2014). My later work drew attention to how code mediates social relations, something I called infrasomatization, the creation of computational infrastructures that structure thought and action (Berry 2019).

Constellational Analysis: A Three-Level Framework

I argue that a three-level framework for critical code studies provides a systematic approach to understanding how computation shapes contemporary society. Drawing on Habermas's (1972) theory of cognitive interests, it develops analytical methods appropriate to the algorithmic condition. At the technical-instrumental level, computer science offers tools for examining how code implements forms of technical control through its computational mechanisms and structures. The practical-communicative level employs hermeneutics to investigate how code operates as discourse, shaping social understanding through its languages, documentation and practices. The emancipatory level applies critical theory to reveal how code embeds and reproduces power relations while also containing possibilities for deeper sociological research. These three cognitive interests operate dialectically rather than hierarchically – technical implementation shapes but does not determine social meaning, while critical analysis reveals how technical choices relate to broader structures of power and control. 

Through this framework, we can analyse how code participates in the construction of social reality while maintaining focus on its potential for supporting human freedom and democratic values. By connecting technical analysis to communicative understanding and emancipatory critique, the framework enables systematic investigation of how computation shapes contemporary life.

  • Technical: examining technical control using formal logic and programming techniques
  • Communicative: examining communication and understanding using hermeneutics
  • Emancipatory: examining potential for emancipation using ideology critique

To understand how these cognitive interests operate in practice, we can examine specific examples of how code implements technical control, shapes social meaning, and relates to broader power structures. The framework enables systematic analysis across these dimensions, beginning with close examination of computational mechanisms. This is intended only as an introductory text, setting out the kind of three-fold analysis that a constellational analysis would bring to critical code studies.

At the technical-instrumental level, careful analysis of code reveals how algorithmic systems materially structure social relations through their implementation choices and architectural decisions.

Technical-instrumental analysis 

For example, by looking at a simplified attention mechanism of the kind used in GPT-style models (GPT-3's actual source code is not public), we can explore the specific computational mechanisms and algorithmic processes:

import tensorflow as tf

def attention(query, key, value):
    # Dot-product similarity between queries and keys
    score = tf.matmul(query, key, transpose_b=True)
    # Normalise scores into attention weights that sum to 1
    weights = tf.nn.softmax(score, axis=-1)
    # Weighted combination of the value vectors
    return tf.matmul(weights, value)

This code reveals how machine learning systems implement pattern recognition through matrix multiplication and softmax normalization. These technical choices embed specific assumptions about language and meaning. The attention mechanism privileges certain types of patterns while excluding others, shaping how the system processes and generates text. This technical structure can therefore be seen in how source code materially constrains possible meanings and interactions.
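To make these operations concrete, the same mechanism can be sketched in plain Python (a minimal illustration of dot-product attention written for this post, not code from GPT-3 itself):

```python
import math

def softmax(xs):
    # Exponentiate and normalise so the weights sum to 1
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, key, value):
    # One query vector attending over a list of key/value vectors
    scores = [sum(q * k for q, k in zip(query, k_vec)) for k_vec in key]
    weights = softmax(scores)
    # Weighted sum of the value vectors: high-scoring keys dominate the output
    return [sum(w * v_vec[i] for w, v_vec in zip(weights, value))
            for i in range(len(value[0]))]

# A query aligned with the first key draws most of its output
# from the first value vector
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Running this shows the "privileging" at work: the output is dominated by the value whose key most resembles the query, while other patterns are attenuated rather than excluded outright.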

While these technical implementations reveal how computational systems structure pattern recognition and meaning generation, we must also examine how code functions as discourse shaping social interactions and relationships. Moving from the mathematical operations of machine learning to social media algorithms demonstrates how technical choices encode specific assumptions about human communication and sociality. The historical development of a particular algorithm provides a good example of how code both reflects and constructs social understanding through its changing structure and form and the mechanisms it uses.

Practical-communicative analysis 

If we look at a simplified example of a streaming media feed, such as Facebook's News Feed algorithm, we are able to investigate code as discourse through hermeneutic analysis, that is, through close reading of the code (and exploratory simulation):

# 2009 source code implementation
def rank_stories(stories):
    return sorted(stories, key=lambda x: x.time)

# 2018 source code implementation
def rank_stories(stories, user):
    for story in stories:
        story.score = ((story.likes * 0.5 +
                        story.comments * 2.0 +
                        story.shares * 1.5) *
                       time_decay(story.age) *
                       user.affinity(story.author))
    return sorted(stories, key=lambda x: x.score, reverse=True)
 
This historical comparison between 2009 and 2018 shows how Facebook's conception of "meaningful" social interaction develops from chronological ordering of the newsfeed to complex weighting of engagement metrics. The code embeds assumptions about human relationships and attention that shape billions of social interactions daily on the platform whilst also documenting changing corporate priorities from user growth to income.

One of the advantages of critical code studies is that we are able to actually run this code to demonstrate how it might function and supplement our hermeneutic reading. 
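A runnable reconstruction of this exercise might look as follows (the Story class, the time_decay function and the sample values are my own stand-ins, since the fragment above leaves them unspecified; here larger time values mean more recent posts):

```python
from dataclasses import dataclass

@dataclass
class Story:
    id: int
    time: int    # timestamp (larger = more recent)
    likes: int
    comments: int
    shares: int
    age: float   # hours since posting

def time_decay(age):
    # Hypothetical decay curve: newer stories weigh more
    return 1.0 / (1.0 + age)

def rank_2009(stories):
    # Pure reverse-chronological ordering
    return sorted(stories, key=lambda s: s.time, reverse=True)

def rank_2018(stories, affinity):
    # Engagement-weighted scoring, decayed by age and scaled by affinity
    for s in stories:
        s.score = ((s.likes * 0.5 + s.comments * 2.0 + s.shares * 1.5)
                   * time_decay(s.age) * affinity[s.id])
    return sorted(stories, key=lambda s: s.score, reverse=True)

# A brand-new post with no engagement versus an older, highly engaged post
stories = [Story(1, 300, 0, 0, 0, 1), Story(2, 100, 30, 40, 50, 10)]
affinity = {1: 0.5, 2: 0.8}
newest_first = rank_2009(list(stories))
by_score = rank_2018(list(stories), affinity)
```

Even this toy simulation makes the discursive shift visible: the 2009 logic surfaces the newest post regardless of engagement, while the 2018 logic demotes it to the bottom in favour of the post that generates interaction.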

Sample Data

By assuming the following attributes for stories:

  • time (timestamp)
  • likes, comments, shares (engagement metrics)
  • age (time since posting)

And for the user:

  • affinity (relationship strength with the story's author).

Results

Story Rankings for the 2018 Source Code

Rank  Story ID  Likes  Comments  Shares  Age  Affinity  Score
1     3         30     40        50      10   0.80      5.625
1     5         25     35        45      12   0.75      5.625
2     4         10     20        30      5    0.70      2.800
3     7         15     25        20      8    0.65      2.656
4     8         20     15        25      15   0.60      2.156
5     2         5      10        5       2    0.90      1.350
6     1         10     15        10      18   0.50      1.310
7     9         5      5         5       3    0.60      1.200
8     6         0      0         0       4    0.40      0.000
8     10        0      0         0       20   0.30      0.000

Analysis

The comparison highlights key differences between the 2009 and 2018 ranking implementations:

  1. 2009 Implementation: Stories are ranked solely by chronological order (smallest time value ranked highest). This approach prioritises the most recent stories, making it a simple time-ordered list, but it does not take into account any other factors, such as user engagement or relevance.
  2. 2018 Implementation: Scores consider multiple factors, including likes, comments, shares, user affinity and time decay. For example, Story 3 and Story 5 tie for the top position with a score of 5.625 due to their high engagement metrics (likes, comments, shares) combined with moderate affinity and time decay. In contrast, Story 6 and Story 10 both score 0.000, as they have no engagement metrics at all.

This demonstrates how the 2018 implementation encodes a different way of ordering the priority of the stories, significantly altering the ranking compared to the simple, linear approach of the 2009 code. It could be claimed that this change in prioritisation reflects changing corporate objectives, such as boosting engagement and time spent on the platform, elements which can be compared against corporate income and profit following the code changes. In this way the critical code studies reading offers new questions for further research.

These embedded programmed decisions about how to use social interaction to shape a user experience (prescription of norms into source code) reveal how dominant platforms encode specific ideological orientations towards attention, engagement and profit. However, alternative source code analysis can help demonstrate that different technical implementations can support more democratic and user-controlled social relations (e.g. by prescribing a different set of norms and values into their source code). By examining platforms that explicitly resist centralised algorithmic control, we can identify how code might implement more emancipatory social norms.

Emancipatory critique

For example, by looking at alternative social platforms like Mastodon and how they implement different social models, we can begin to uncover the particular normative and value decisions embedded in the source code through ideology analysis:

# Visibility is chosen by the author, not inferred by a ranking algorithm
def visibility_policy
  return :public if public?
  return :unlisted if unlisted?
  return :private if private?
  return :direct if direct?
end

# The federated timeline is assembled without engagement-based weighting
def federated_timeline
  public_statuses.merge(followed_tags)
                 .without_reblogs
                 .local_only
end

Here, the code prioritises user control and federation over centralised algorithmic manipulation. However, it also reveals tensions between privacy, scalability and network effects that shape resistance to platform capitalism.
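The logic of user-set visibility and an unranked federated timeline can also be sketched in Python (a toy model written for this post, not Mastodon's actual Ruby source):

```python
def visibility_policy(status):
    # The author, not a ranking algorithm, decides who can see a post
    for level in ('public', 'unlisted', 'private', 'direct'):
        if status.get(level):
            return level
    return 'public'  # default when no flag is set

def federated_timeline(statuses):
    # Chronological and unranked: no engagement weighting at all
    visible = [s for s in statuses if visibility_policy(s) == 'public']
    return sorted(visible, key=lambda s: s['time'], reverse=True)

posts = [
    {'id': 1, 'public': True, 'time': 10},
    {'id': 2, 'direct': True, 'time': 20},  # excluded: a direct message
    {'id': 3, 'public': True, 'time': 30},
]
timeline = federated_timeline(posts)
```

The contrast with the News Feed example is the point: ordering here is a property the user can reason about (time plus a visibility flag), not an opaque score computed on the platform's behalf.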

Dialectical Relationships

These three levels operate through mutual determination rather than through hierarchical relationships. Technical source code constraints shape but do not overdetermine the meaning-making possibilities of using the interface, while social practices can influence technical implementations in interesting ways (e.g. the use of the # on Twitter was a vernacular practice, and was later absorbed into the source code as a technical function). For example, machine learning systems can be examined to show this dialectic:

def train_model(data, hyperparameters):
    model = initialize_network(hyperparameters)
    for epoch in range(hyperparameters.epochs):
        loss = model.fit(data.train)
        metrics = evaluate(model, data.test)
        # A social concern ("bias") rendered as a technical threshold
        if metrics.bias > threshold:
            adjust_weights(model)
    return model

In this technical implementation of "bias detection" and "correction" we can see how it might reveal how social concerns (e.g. the concern about biased data and outputs) reshape technical systems, while technical limitations constrain possible corrections (e.g. "bias" is transformed into a technical concept, that is automated into the system).
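As a toy illustration of how such a concern becomes operationalised, consider the following sketch, in which both the bias metric and the "correction" are hypothetical, chosen only to make the feedback loop visible:

```python
def bias_metric(weights, group_a, group_b):
    # A crude disparity measure: difference in mean predicted score
    score = lambda xs: sum(w * x for w, x in zip(weights, xs))
    mean = lambda group: sum(score(xs) for xs in group) / len(group)
    return abs(mean(group_a) - mean(group_b))

def train_with_bias_check(weights, group_a, group_b, threshold=0.1, steps=100):
    for _ in range(steps):
        disparity = bias_metric(weights, group_a, group_b)
        if disparity <= threshold:
            break
        # "Correction": shrink the weights until the disparity falls
        # below the threshold -- bias is reduced to a number to minimise
        weights = [w * 0.9 for w in weights]
    return weights

group_a = [[1.0, 0.0], [0.8, 0.2]]
group_b = [[0.0, 1.0], [0.2, 0.8]]
weights = train_with_bias_check([2.0, 0.5], group_a, group_b)
```

What the sketch shows is precisely the translation the paragraph above describes: a contested social category becomes a scalar compared against a threshold, and "fairness" becomes whatever the correction routine can achieve within the loop.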

Methodological Challenges

Machine learning systems also present new challenges for code interpretation in critical code studies. Their behaviour emerges from training data and learned parameters rather than merely from the explicit programming that structures the network. For example, in this simplified implementation:

from functools import reduce

class NeuralNetwork:
    def forward(self, x):
        # Pass the input through each layer in turn
        return reduce(
            lambda acc, layer: layer(acc), self.layers, x)

    def explain(self, input):
        # Behaviour lives in learned parameters, so explanation must be
        # reconstructed from activations rather than read off the code
        activations = self.forward(input)
        return interpret_importance(activations)
        
This source code highlights how explainability becomes crucial for critical analysis of modern systems, particularly artificial intelligence and machine learning systems (Berry 2023). Traditional code reading practices need to expand to interpret these emergent behaviours that are embedded within the latent spaces of the neural network.
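One concrete version of what interpret_importance might do is occlusion analysis: zero out each input in turn and measure how much the output changes. This is a generic explainability technique, sketched here in plain Python rather than taken from any particular system:

```python
from functools import reduce

class TinyNetwork:
    def __init__(self, layers):
        self.layers = layers  # each layer is a function over a list of floats

    def forward(self, x):
        return reduce(lambda acc, layer: layer(acc), self.layers, x)

    def explain(self, x):
        # Occlusion-based importance: how much does the output move
        # when each input feature is zeroed out?
        baseline = self.forward(x)
        importances = []
        for i in range(len(x)):
            occluded = list(x)
            occluded[i] = 0.0
            importances.append(abs(baseline - self.forward(occluded)))
        return importances

net = TinyNetwork([
    lambda x: [3.0 * x[0], 0.1 * x[1]],  # first feature weighted heavily
    lambda x: x[0] + x[1],               # sum to a single score
])
scores = net.explain([1.0, 1.0])
```

Notice that the explanation is itself computed by the system being explained, which is exactly why critical code studies cannot treat such outputs as neutral descriptions.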

Political Economic Analysis

The framework also enables analysis of how code relates to modes of accumulation for political economic approaches to source code. For example, contemporary platforms implement sophisticated value extraction:

class UserBehaviour {
    trackAction(action) {
        this.store.push({
            user: this.id,
            action: action,
            context: this.getCurrentContext(),
            timestamp: Date.now()
        });
        this.updateProfile();
        this.triggerRecommendations();
    }
}
This source code shows how surveillance capitalism operates through continuous behaviour tracking and profile updating. We can also see how alternative coding practices, such as the Signal-style key derivation sketched below, resist this logic:
// `secret` is assumed to be a CryptoKey imported via crypto.subtle.importKey
async function deriveKeys(secret) {
    // A fresh random salt for each derivation
    const salt = crypto.getRandomValues(new Uint8Array(32));
    return await crypto.subtle.deriveKey(
        {name: 'PBKDF2', salt: salt,
         iterations: 100000, hash: 'SHA-256'},
        secret,
        {name: 'AES-GCM', length: 256},
        true,
        ['encrypt', 'decrypt']
    );
}
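Python's standard library offers the same key-derivation pattern through hashlib.pbkdf2_hmac; the following sketch is generic rather than Signal's implementation:

```python
import hashlib
import os

def derive_key(secret, salt=None):
    # A fresh random salt ensures identical secrets yield different keys
    if salt is None:
        salt = os.urandom(32)
    # Many iterations of HMAC-SHA-256 deliberately slow brute-force attacks
    key = hashlib.pbkdf2_hmac('sha256', secret, salt, 100_000, dklen=32)
    return key, salt

key, salt = derive_key(b'correct horse battery staple')
# The same secret and salt always reproduce the same 32-byte key
same_key, _ = derive_key(b'correct horse battery staple', salt)
```

Where the tracking class above accumulates ever more data about the user, this pattern does the opposite: it is designed so that even the platform cannot reconstruct the protected material without the secret.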

Conclusions

The three-level framework for critical code studies helps to develop a systematic approach to analysing source code – an approach I call constellational analysis. By connecting technical implementation, social meaning, and emancipatory critique, it provides a multilevel approach for the urgent work of understanding the specificities of computer code in critical code studies.

However, some methodological challenges remain. The distributed and increasingly opaque nature of algorithmic systems necessitates new interpretative techniques. Analysing proprietary and machine learning based platforms requires an expansion of traditional code reading practices to work to understand emergent behaviours produced by these systems. Additionally, microservice architectures and cloud computing infrastructures introduce new scales of complexity that make individual readings or analytical approaches more difficult.

Most crucially, critical code studies needs to develop modes of analysis adequate to the political economy structuring computational capitalism. For example, techniques for studying how code practically implements regimes of extraction, surveillance and control would be very useful (See Zuboff 2019). So too would be methods for identifying, interpreting and amplifying alternative practices that actively resist these logics. From the federated and encrypted architectures of platforms like Mastodon and Signal, to the activist mobilisation of tech worker movements, critical code studies can play a vital role in recognising and supporting emancipatory possibilities immanent within the computational.

Realising this potential requires expanding beyond purely technical modes of analysis and critical code studies has been exemplary in developing these new approaches. By examining the formal properties of code and connecting it to the social meanings and political economic relations they embed and co-produce, critical code studies can operate as an interdisciplinary approach for reimagining the role of computation in contemporary life (Marino 2020). This implies a critical reflexive approach, attentive to how the concepts, methods and tools of critical code analysis are themselves shaped by their socio-historical conditions of emergence and operation. 

One suggestion for helping to develop new methods within critical code studies could be a form of co-critique, using LLMs to assist in the human reading and comprehension of complex code bases that might be made up of hundreds, thousands and potentially, in the future, millions of lines of source code. At present there are problems with this approach: (1) LLMs have a tendency to hallucinate (or confabulate) answers, which enables the system essentially to "make things up", often in a very "creative" way,[1] a problem in research, and so the output has to be carefully checked; (2) they are limited in their context window and token generation; and (3) they tend to be overly positive in the responses they offer – not very useful for developing critical arguments about source code.

However, there is a lot of potential here for new methods, and the idea is to develop the use of these machine learning systems as "copilots" – helping you to do the job you are trying to do, a kind of human augmentation technology (see, for example, Lawdroid Copilot). In software engineering, copilot technology is maturing very quickly and can help write up to 30% of a programmer's code – the most well-known of these is probably GitHub Copilot (from Microsoft), but ChatGPT is also rapidly developing this coding assistant technology. We can expect to see similar leaps in the technology, and therefore in its potential as a method that enables new forms of copilot critique to be undertaken in critical code studies (I hope to write a more detailed blogpost about this shortly).

Ultimately, the framework offered here is only a provisional map, an attempt to outline the current terrain and point to possible orientations for further exploration and experimentation. The real test lies in its practical application and iterative refinement through concrete interpretative work on specific code bases (see Berry and Marino 2024). Through careful attention to both technical specificity and social significance, I argue that we can better understand how code works and develop critiques of its material specificity. The constellational analysis framework aims to be a contribution to this critique and transformation.

Blogpost by David M. Berry

** Headline image generated using DALL-E in November 2024. The prompt used was: "Critical code studies is an approach to reading computer code. Draw an image in green on black background to represent this looking like computer code. Please make the image look more scientific." Due to the probabilistic way in which these images are generated, future images generated using this prompt are unlikely to be the same as this version.

Notes

[1] Hallucination (or confabulation) has been used creatively in literature to allow the writer to co-creatively write with the AI. For example, in Pharmako-AI, K. Allado-McDowell (founder of Google's Artists and Machine Intelligence program) initiates an experimental conversation with the AI language model GPT-3.

Bibliography 

Berry, D.M. (2011) The philosophy of software: Code and mediation in the digital age. Basingstoke: Palgrave Macmillan.

Berry, D.M. (ed.) (2012) Understanding digital humanities. Basingstoke: Palgrave Macmillan.

Berry, D.M. (2014) Critical Theory and the Digital. Bloomsbury.

Berry, D.M. (2019) ‘Against infrasomatization: Towards a critical theory of algorithms’, in Data Politics. Routledge. https://www.taylorfrancis.com/reader/read-online/5e1e0ce1-5b49-445e-b23e-91e80a6c340a/chapter/pdf?context=ubx

Berry, D.M. (2023) ‘The Explainability Turn’, Digital Humanities Quarterly, 17(2). Available at: http://www.digitalhumanities.org/dhq/vol/17/2/000685/000685.html.

Berry, D.M. and Marino, M.C. (2024) ‘Reading ELIZA: Critical Code Studies in Action’, Electronic Book Review. Available at: https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/ 

Chun, W.H.K. (2011) Programmed Visions: Software and Memory. Cambridge, MA: MIT Press.

Fuller, M. (2003) Behind the Blip: Essays on the Culture of Software. Autonomedia.

Galloway, A.R. (2004) Protocol: how control exists after decentralization. Cambridge, MA: MIT Press (Leonardo).

Habermas, J. (1972) Knowledge and Human Interests. Boston: Beacon Press.

Marino, M.C. (2006) ‘Critical Code Studies’, Electronic Book Review. Available at: https://electronicbookreview.com/essay/critical-code-studies/ (Accessed: 21 February 2024).

Marino, M.C. (2020) Critical code studies. Cambridge, Massachusetts: The MIT Press (Software studies).

Zuboff, S. (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power. First edition. New York, N.Y: PublicAffairs.
