Exhaustive Media


This is the edited text of a talk given by David M. Berry on the Depletion Design panel at Transmediale 2013.



Today there is constant talk about the "fact" that we (or our societies and economies) are exhausted, depleted or in a state of decay (see Wiedemann and Zehle 2012). This notion of decline is a common theme in Western society, for example Spengler's The Decline of the West, and is articulated variously as a decline in morals, lack of respect for law, waning economic or military power, relative education levels, threats from ecological crisis, and so on. Talk of decline can spur societies into paroxysms of panic, self-criticism and calls for urgent political intervention. That is not to say that claims to relative decline are never real as such; indeed, relative measures inevitably shift with generational ageing and change, particularly in relation to other nations. However, the specific decline discourse is interesting for what it reveals about the concerns and interests of each population, and a particular generation within it, whether it be concerns with health, wealth, or intellectual ability, and so forth.

Karl Mannheim
The issue of a tendency inherent in a temporal social location – that is, certain definite modes of behaviour, feeling and thought arising from constantly repeated experience in a common location in the historical dimension of the social process – is what Karl Mannheim called the generation entelechy (Mannheim 1952). This is the process of living in a changed world: a re-evaluation of the inventory, a forgetting of that which is no longer useful, and a coveting of that which has yet to be won. In other words, it is the particular stratification of experience in relation to the historical context of a specific generation, in both what we might call the inner and the outer dimensions of experience. This social process also naturally causes friction between different generation entelechies, such as that between an older and a younger generation; there may also be moments of conflict within a generation entelechy, or what Mannheim called generation units, although there is not space here to develop this question in relation to the digital.

The relative conditions of possibility, particularly in relation to what we might call the technical milieu of a generation entelechy, contribute towards slower or faster cultural, social, and economic change. The quicker the pace of social, technical, economic and cultural change, the greater the likelihood that a particular generation location group will react to the changed situation by producing its own entelechy. Thus, individual and group experiences act as crystallising agents in this process, which plays out in notions of "being young", "freshness", "cool", or being "with it" in some sense, and which serves to position generation entelechies in relation to each other both historically and culturally.

Mannheim's theorisation of generational formation provides crucial insights for understanding how social and cultural processes are transmitted and transformed across time. His concept of generation entelechy outlines several fundamental features that structure this process of cultural transmission and change.

The first key aspect Mannheim identifies is the continuous emergence of new participants in cultural processes. This constant influx of new actors into social and cultural life ensures dynamism and the potential for transformation. Simultaneously, former participants gradually disappear from these processes, creating space for new interpretations and approaches.

Significantly, Mannheim argues that members of any given generation can only participate in a temporally limited section of the historical process. This temporal boundedness means that each generation experiences and interprets social reality from a specific historical location, shaping their consciousness and modes of understanding.

This temporal limitation creates the necessity for continuous transmission of accumulated cultural heritage. Without active processes of cultural transmission, each generation would need to begin anew, losing the benefit of accumulated knowledge and experience. However, this transmission is never purely mechanical reproduction, but always involves processes of interpretation and transformation.
The transition between generations, Mannheim emphasises, operates as a continuous process rather than through sharp breaks or clear demarcations. This continuity means that generational change involves complex processes of overlap and interaction, with different generational units coexisting and influencing each other. These insights have particular relevance for understanding contemporary digital culture and its modes of transmission. Under conditions of computational capitalism, the processes of generational formation and cultural transmission are increasingly mediated through digital technologies and platforms. This raises important questions about how generational consciousness forms within highly mediated computational environments.

Furthermore, the accelerated pace of technological change means that generational experiences may fragment more rapidly than in previous periods. The continuous emergence of new digital technologies and platforms creates multiple layers of generational experience even within traditionally defined cohorts. This suggests the need to refine and extend Mannheim's framework to account for these new conditions of generational formation. The question of cultural transmission becomes particularly complex when considering how digital literacy and computational understanding pass between generations. The rapid obsolescence of particular platforms and technologies means that accumulated knowledge may become outdated more quickly, while core competencies and critical approaches remain crucial across technological changes.

In relation to this general talk of depletion in Europe and the US, one of the recent decline-issues has been, particularly in the US and UK context, the worry about the decline in the computational ability of younger generations – more specifically, the lack of digital literacy (or what I call elsewhere iteracy) of the new generations. In this specific discourse, the worry is articulated that a new generation is emerging that is not adequately prepared for what appears to be a deeply computational economic and cultural environment. This is usually, although not always, linked to a literal exhaustion of the new generation, the implication being a generation that is unprepared, apathetic, illiterate and/or disconnected. Often these claims are located within what Mannheim calls the "Intelligentsia". He writes, "in every society there are social groups whose special task it is to provide an interpretation of the world for that society. We call these the 'Intelligentsia'" (Mannheim 1967: 9). It is no surprise, then, that in the instance of digital literacy we see the same strata across society commenting on and debating the relative merits of computational competences, abilities and literacies at a number of different levels, but particularly in relation to the education of new generations through discussions of school, college and university digital literacies.

Some of these claims are necessarily the result of a form of generational transference of the older generation's own worries concerning its inadequacies: in this case usually either (1) an inability to use the correct type of computational devices/systems; (2) a concern that the young are not using computers in the manner that they themselves were taught, for example with a physical keyboard and mouse; or (3) a dismissal of the new forms of digitality as trivial, wasteful of time, and hence socially or economically unproductive – social media being a classic example. These discussions bring out a number of themes and levels of analysis: often, but not only, the question of the moral failings of the new generation, but also technical abilities, economic possibilities (such as vocationalism), and the ways of thinking appropriate to a perceived new environment or economic and technical ecology. This is similar to Foucault's question of a generational ethos, as it were, and whether it might be helpful if we,
envisage modernity rather as an attitude than as a period of history. And by 'attitude,' I mean a mode of relating to contemporary reality; a voluntary choice made by certain people; in the end, a way of thinking and feeling; a way, too, of acting and behaving that at one and the same time marks a relation of belonging and presents itself as a task. A bit, no doubt, like what the Greeks called an ethos (Foucault 1984: 39). 
Immanuel Kant
The condition of exhaustion among new generations requires careful analysis as a symptom of contemporary computational capitalism. Rather than viewing this exhaustion as mere apathy or lack of vitality, it must be understood as a structural outcome of specific political economic arrangements. This exhaustion operates through the literal draining of data, attention, and cognitive energy into technical systems. The computational infrastructure of contemporary society creates multiple drainage points through which human capacities are captured and redirected. These operate most intensively through intimate computational devices such as smartphones, tablets and laptops. Such technologies do not simply mediate experience but actively extract value through continuous processes of data collection, attention capture and behavioural modification.

Digital media increasingly function as exhaustive media in a dual sense. They exhaust their users through constant demands for engagement and interaction while simultaneously exhausting the possibilities for alternative forms of social and cultural life. This exhaustion manifests in the perpetual tiredness reported by young people, but this tiredness cannot be reduced to individual psychological states. The drainage points created by computational systems operate through specific technical affordances and interfaces. Social media platforms, for instance, create infinite scroll mechanisms that capture attention while generating valuable behavioural data. Mobile applications interrupt cognitive processes through notifications and alerts, fragmenting concentration while harvesting interaction data. Search engines shape patterns of knowledge acquisition while building detailed profiles of user interests and activities.

Understanding this exhaustion requires analysis of how computational systems extract value from human activity in increasingly sophisticated ways. The technical systems that facilitate this extraction are designed to operate continuously, creating a permanent state of drainage that affects both individual and collective capacities. This suggests the need to examine how specific technical arrangements contribute to generalised conditions of exhaustion. Critical theory must, therefore, interrogate these processes of exhaustion at multiple levels. This includes analysis of the technical systems that enable data extraction, the political economic structures that drive their deployment, and the social and psychological effects they produce. Particular attention must be paid to how computational systems create new forms of value extraction that operate through the cultivation and management of exhaustion.
The concept of exhaustive media thus provides a crucial analytical tool for understanding contemporary conditions. It enables examination of how computational systems simultaneously exhaust their users while extracting maximum value from their activities. This points toward the need for new theoretical frameworks capable of grasping these dynamics of exhaustion and extraction under computational capitalism.

The question of enlightenment under contemporary computational conditions requires fundamental rethinking of how public reason operates within software ecologies. If the enlightenment project, following Kant, centres on the universal and free public use of reason, then contemporary technical systems present novel challenges to this aspiration. The infrastructure of contemporary computational society includes pervasive systems of tracking, monitoring and data extraction. Web bugs, beacons, cookies and tracking scripts create comprehensive surveillance systems that monitor and shape public discourse. Cloud computing and real time data streams establish new conditions for the circulation and formation of knowledge. These technical systems do not simply mediate public reason but actively constitute its conditions of possibility.

Contemporary software ecologies raise crucial questions about the possibilities for genuine public reason. When all discourse passes through proprietary platforms and generates valuable behavioural data, the notion of truly free public deliberation becomes problematic. The technical systems that enable contemporary communication simultaneously constrain and shape its form and content. This suggests the need to reconceptualise enlightenment ideals for computational conditions. Rather than simply applying Kantian principles to digital media, critical theory must examine how computational systems transform the very possibility of public reason. This requires attention to how specific technical arrangements enable or foreclose particular forms of rational discourse and democratic deliberation.

The universal aspiration of enlightenment thought confronts new challenges under computational capitalism. When access to knowledge and platforms for public discourse are increasingly controlled by corporate entities, universality becomes entangled with questions of political economy. Critical theory must therefore examine how economic arrangements shape the possibilities for public reason.
Freedom of public reason in the digital age requires engaging with questions of technical infrastructure and design. This suggests the need for new forms of critical practice that can work within and against contemporary computational systems. Such practice must remain attentive to how technical arrangements shape possibilities for public discourse while seeking ways to preserve spaces for genuine rational deliberation. The enlightenment project must therefore be reformulated in relation to contemporary technical conditions. As Foucault described, for Kant,
when one is reasoning only in order to use one's reason, when one is reasoning as a reasonable being (and not as a cog in a machine), when one is reasoning as a member of reasonable humanity, then the use of reason must be free and public. Enlightenment is thus not merely the process by which individuals would see their own personal freedom of thought guaranteed. There is Enlightenment when the universal, the free, and the public uses of reason are superimposed on one another (Foucault 1984: 36-37).
Thus for Kant, to reach our political maturity as human beings we should "dare to know", or sapere aude, that is, "to have courage to use your own reasoning" (Kant 1991: 54). The challenge, then, is for us to rise to Foucault's call to think in terms of the 'historical ontology of ourselves', which enables us to further test contemporary reality to find "change points", and to ask what the implications might be for an investigation of the events by which we constitute ourselves as subjects. Indeed, Foucault further argues,
Michel Foucault
I do not know whether we will ever reach mature adulthood. Many things in our experience convince us that the historical event of the Enlightenment did not make us mature adults, and we have not reached that stage yet. However, it seems to me that a meaning can be attributed to that critical interrogation on the present and on ourselves which Kant formulated by reflecting on the Enlightenment. It seems to me that Kant's reflection is even a way of philosophizing that has not been without its importance or effectiveness during the last two centuries. The critical ontology of ourselves has to be considered not, certainly, as a theory, a doctrine, nor even as a permanent body of knowledge that is accumulating; it has to be conceived as an attitude, an ethos, a philosophical life in which the critique of what we are is at one and the same time the historical analysis of the limits that are imposed on us and an experiment with the possibility of going beyond them (Foucault 1984: 49).
One way forward might be to begin to map the exhaustion of a new generation entelechy in terms of an emerging political economy built upon the ability to exhaust us of our thoughts, movements, health, life, practices, and so on. This is usefully captured in the technical term of art "data exhaust", which all users of computational systems create. We might therefore think in terms of the computational imaginaries that are crystallised within particular generation entelechies, and of how we might gain a critical purchase on them. In other words, the generation entelechy is connected to a particular computational Weltanschauung, or worldview – what I call computationality elsewhere (Berry 2011).

Understanding generational change under computational conditions requires moving beyond narrow conceptions of technical competence. Rather than focusing solely on programming skills or digital literacy as requirements for economic participation, critical theory must examine the broader historical and philosophical contexts of digital making practices. The formation of critical and reflexive capacities among new generations demands attention to how technical practices emerge from and contribute to specific social and historical conditions. Programming and software development cannot be reduced to mere technical skills, but must be understood as forms of cultural practice embedded within broader social relations.

This shows the value of examining concrete cases of programming projects and development communities. Such analysis reveals how particular forms of digital making emerge from specific historical conditions and social arrangements. It enables understanding of how technical practices relate to broader cultural and economic frameworks while pointing toward possible alternative trajectories.
Critical software studies provides important methodological tools for this analysis. By examining the social, cultural and political economic dimensions of software development, it enables understanding of how technical practices shape and are shaped by their contexts. This approach reveals programming as a culturally embedded practice rather than purely technical activity. The theoretical framework developed in my earlier work (Berry 2011) emphasises the need to situate digital making within wider historical and philosophical contexts. This enables examination of how particular programming practices emerge from specific social conditions while potentially pointing toward alternative possibilities for technical development.

Understanding these dynamics requires careful attention to both technical specificities and broader social relations. Critical theory must examine how programming practices simultaneously embody particular technical logics while expressing and transforming social relations. This suggests the need for theoretical frameworks capable of moving between technical and social analysis. The actualisation of critical and reflexive capacities among new generations thus requires engaging with both technical practices and their broader contexts.

This is a critical means of contributing to the important project of making the largely invisible digital infrastructures visible and available to critique. Of course, understanding digital technology is a "hard" problem for the humanities, liberal arts and social sciences due to its extremely complex forms, which contain agentic functions and normative (but often hidden) values. Indeed, we might contemplate the curious problem that as the digital increasingly structures the contemporary world, it also withdraws, becomes backgrounded (Berry 2011). This enables us to explore how knowledge is transformed when mediated through code and software, and to apply critical approaches to big data, visualisation, digital methods, digital humanities, and so forth. But crucially it also allows us to see this in relation to the crystallisation of new entelechies around digital technologies.

Thinking about knowledge in this way enables us to explore generational epistemological changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks, what Mitcham calls a 'new ecology of artifice' (Mitcham 1998: 43). Indeed, the proliferation of contrivances that are computationally based is truly breathtaking, and each year we are given statistics that demonstrate how profound the new computational world is. For example, in 2012, 427 million Europeans (or 65 percent) used the internet, and more than 90 percent of European internet users read news online (Wauters 2012). These computational devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects, and usage have to be subject to the kind of critical reasoning that both Kant and Foucault called for.

This is nonetheless made much more difficult both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful, and less power-hungry, and by the increase in complexity, power, range, and intelligence of the software that powers them. Of course, we should also be attentive to the over-sharing, or excessive and often invisible collection, of data within these device ecologies, outside of the control of users to 'redact themselves', as represented by the recent revelation that the "Path" and "Hipster" apps were automatically harvesting user address book data on mobile phones (BBC 2012).

We might consider these transformations in light of what Eric Schmidt, ex-CEO of Google, called "augmented humanity". He described this in terms of a number of developments in the capabilities of contemporary computational systems, such that at Google, "we know roughly who you are, roughly what you care about, roughly who your friends are...Google also knows, to within a foot, roughly where you are... I actually think most people don't want Google to answer their questions... They want Google to tell them what they should be doing next" (Eaton 2011). Translated, this means that Google believes it knows better than the user what it is that they should be doing, and in many cases even thinking. Thus, the computational device the user holds contains the means to source the expertise to prescribe action in particular contexts, what we might call "context consumerism". That is, the user is able to purchase their cognitive/memory/expertise capabilities as required, on-the-fly. Thus humanity becomes what we might call, following the naming conventions of programming languages such as C++, a new augmented or extended humanity++. Indeed, there are now a number of examples of these developments in relation to, for example, Google Glass, contextual UX, locative technologies, etc.

Bernard Stiegler
We might consider the entire technical and media industries in light of what Stiegler (2010) has called the "Programming Industries", which are involved in creating institutionalised "context". This is data collected from the tacit knowledge of users and their "data exhaust", and delegated to computer code/software. These algorithms then create "applied knowledge" and are capable of making "judgments" in specific use cases. Indeed, today people rarely use raw data – they consume it in processed form, using computers to aggregate or simplify the results. This means that increasingly the "interface" to computation is "visualised" through computational/information aesthetics techniques and visualisation, a software veil that hides the "making" of the digital computations involved. Indeed, today we see this increasingly being combined with real-time contextual sensors, history and so forth into "cards" and other push-notification systems that create forms of just-in-time memory/cognitive processes.
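To make the shape of this pipeline a little more concrete, the following is a minimal, hypothetical sketch (in Python) of how raw "data exhaust" might be aggregated and simplified into a processed "judgment" that an interface then surfaces to the user. The names and data (ExhaustEvent, summarise) are invented for illustration only and do not describe any actual vendor's system.

# A minimal, illustrative sketch, not any real pipeline: raw "data exhaust"
# events are aggregated and simplified; the user only ever sees the
# processed, ranked form, never the raw log.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ExhaustEvent:
    user: str
    kind: str       # e.g. "search", "click", "location_ping"
    payload: str    # e.g. a query term or a place name

def summarise(events):
    """Reduce raw event logs to a simplified 'judgment' about interests."""
    interests = Counter(e.payload for e in events if e.kind == "search")
    return [term for term, _ in interests.most_common(3)]

log = [
    ExhaustEvent("u1", "search", "coffee"),
    ExhaustEvent("u1", "search", "coffee"),
    ExhaustEvent("u1", "search", "trains"),
    ExhaustEvent("u1", "location_ping", "station"),
]
print(summarise(log))   # ['coffee', 'trains']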

These are new forms of invisible interface, ubiquitous computing and enchanted objects which use context to present the user with predictive media and information in real-time. The aim, we might say, is to replace forethought by reconfiguring or replacing human "secondary memory" and thinking with computation. That is, the crucial half-second of pre-conscious decision-forming processes whereby we literally "make up our own minds" is today subject to the unregulated and aggressive targeting of the programming industries. This temporally located area of the processes of mind we might call the "enlightenment moment", as it is the fraction of a second that creates the condition of possibility for independent thought and reflexivity itself. Far from being science fiction, this is now the site of new technologies in the process of being constructed, current examples including Google Now, Apple Siri, MindMeld, Tempo, etc. Not to mention the aggressive targeting of this area by advertising companies, and, more worryingly, of new generation entelechies who are still developing their critical or reflexive skills, such as children and young people. This, of course, raises important questions about whether these targeted computational systems and contextual processes should be regulated in law in relation to the young. These are not new issues in relation to the regulation of the minds of children, but the aggressiveness of computational devices and the targeting of this forethought by the programming industries raises the stakes further; indeed, as Stiegler quotes,
after decades of struggle in civil society, governments have been forced to regulate air pollution, food and water,... few governments have shown themselves capable of regulating marketing practices targetting children. This situation has left industry free to decide what children watch on television, what products they are offered in order to distract them, what strategies can be used to manipulate their wishes, desires, and values (Brodeur, quoted in Stiegler 2010: 88)
For example, in the UK, with the turn to a competitive model of higher education, each university literally begins to compete for an "audience" of students to take its courses, for which the students now pay a considerable sum of money to be both educated and entertained. We could say that the universities become, in effect, another branch of the cultural industry. This represents a dangerous moment for the creation of critical attention, the possibility of reflexivity and enlightenment, in as much as increasingly students receive from the lecturer but do not need to participate; they await their educational portions, which are easy to swallow, and happily regurgitate them in their assessments. The students are taught that they are the consumers of a product (rather than themselves the product of education, as reflexive citizens in majority), and that this service industry, the university, is there to please them, to satisfy their requirements. How could this be otherwise when they are expected to fill in survey after survey of market-research questions to determine how "satisfied", "happy" and "content" they are with their consumption? And what remains, finally, is the delivery of the best possible product: the first-class degree, the A marks, the final certificate covered in gilt which will deliver them the best-paying job possible. The university itself becomes a device, an interface between consumer and producer, and it too becomes highly technologised as it seeks to present a surface commodity layer to its consuming students. It is in this context that MOOCs (Massive Open Online Courses) should be understood and critiqued, as they represent only the public face of changes taking place inside universities at all levels.

The new imaginaries of highly invasive cognitive technologies are already being conceptualised as the "Age of Context" within the programming industries. Indeed, under this notion all material objects are carriers of signal to be collected and collated into computational systems; even the discarded, the trash, contains RFID chips that can provide data for contextual systems. But more importantly, the phones we carry, the mobile computers and the tablets, now built with a number of computational sensors, such as GPS, compasses, gyroscopes, microphones, cameras, wifi and radio transmitters, enable a form of contextual awareness to be technically generated through massive real-time flows of data. For example, during the US Presidential election on 6 November 2012, Twitter recorded 31 million election-related Tweets from users of its streaming service, peaking at 327,452 Tweets per minute (TPM) (Twitter 2012), all of which can be fed to the user. In a real-time stream ecology, such as Twitter, the notion of the human is already contested and challenged by a form of "hyper attention", in contrast to the 'deep attention' of previous ages. Indeed, the user is constantly bombarded with data. This is increasingly understood as a lack within human capabilities which needs to be remedied using yet more technology – real-time streams need visualisation, cognitive assistants, push notifications, dashboard interfaces, and so forth.

Google Now and the Notification "Cards"
This much heralded "Age of Context" is being built upon the conditions of possibility made feasible by distributed computing, cloud services, smart devices, sensors, and new programming practices around mobile technologies. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies that provide data from real-time streams and APIs (Application Programming Interfaces) to enable a new kind of intelligence within these technical devices. A good example of this is given by Google’s new "Google Now" product, which attempts to think "ahead" of the user by providing algorithmic prediction based on past user behavior, preferences, Google search result history, smart device sensors, geolocation, and so forth. As Google explains,
Google Now gets you just the right information at just the right time. It tells you today’s weather before you start your day, how much traffic to expect before you leave for work, when the next train will arrive as you’re standing on the platform, or your favorite team's score while they’re playing. And the best part? All of this happens automatically. Cards appear throughout the day at the moment you need them (Google 2012).
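To illustrate the kind of anticipatory logic being described here – and only as a hedged, hypothetical sketch in Python, not a description of Google's actual implementation – a "card" might be produced by a simple rule that combines contextual signals such as the time of day, coarse location and calendar data. All of the names (Context, commute_card) and the card texts are invented for illustration.

# A hypothetical sketch of anticipatory "cards": context signals are
# combined to decide whether to push information before the user asks.
from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class Context:
    clock: time
    location: str               # coarse label, e.g. "home", "station"
    next_event: Optional[str]   # e.g. "meeting at 09:30"

def commute_card(ctx: Context) -> Optional[str]:
    """Return a card to push, or None if nothing is 'anticipated'."""
    if ctx.location == "home" and time(7, 0) <= ctx.clock <= time(9, 0):
        return f"Leave now to make your {ctx.next_event or 'commute'}: about 25 minutes by road"
    if ctx.location == "station":
        return "Next train arrives in 4 minutes"
    return None

print(commute_card(Context(time(8, 15), "home", "meeting at 09:30")))

Even in this toy form, the design choice is visible: the system decides for the user what is relevant "at just the right time", which is precisely the intervention into forethought discussed above.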
These new technologies form a constellation that creates new products and services, new tastes and desires, and the ability to make an intervention into forethought – to produce the imaginary that Google names "Augmented Humanity" (Eaton 2011). In some senses this follows from the idea that after ‘human consciousness has been put under the microscope, [it has been] exposed mercilessly for the poor thing it is: a transitory and fleeting phenomenon’ (Donald, quoted in Thrift 2006: 284). The idea of augmented humanity and contextual computing are intended to remedy this ‘problem’ in human cognitive ability. Here the technologists are aware that they need to tread carefully as Eric Schmidt, Google’s ex-CEO, revealed "Google policy is to get right up to the creepy line and not cross it" (Schmidt, quoted in Richmond 2010). The "creepy line" is the point at which the public and politicians think a line has been crossed into surveillance, control, and manipulation, by capitalist corporations – of course, internally Google’s experimentation with these technologies is potentially much more radical and invasive. These new technologies need not be as dangerous as they might seem at first glance, and there is no doubt that the contextual computing paradigm can be extremely useful for users in their busy lives – acting more like a personal assistant than a secret policeman. Indeed, Shel Israel argues that this new ‘Age of Context’ is an exciting new augmented world made possible by the confluence of a number of competing technologies. He writes that contextual computing is built on,
[1] social media, [2] really smart mobile devices, [3] sensors, [4] Big Data and [5] mapping. [Such that] the confluence of these five forces creates a perfect storm whose sum is far greater than any one of the parts (Israel 2012).
These technologies are built on complex, intertwined webs of software that tie together these new meta-systems, which abstract from (are built upon):
  • the social layer, such as Twitter and Facebook,
  • the ambient data collection layer, using the sensors in mobile devices,
  • the web layer, the existing and future web content and technologies,
  • the notification layer, enabling reconciliation and unification of real-time streams,
  • the app layer, which is predominantly made up of single-function apps.
These various layers are then loosely coupled so that they interoperate in unexpected but "delightful" ways, as experienced with conversational interfaces such as Apple Siri, which combine an element of "understanding" with contextual information about their environment. Critically engaging with this age of context is difficult due to the distributed software, material objects, "enchanted" objects, black-boxed systems and computational "things" that make it up. Neither the threads that hold these systems together nor their new calculative dashboards (e.g. notification interfaces) are well understood as a totality. Indeed, we can already discern new forms of power that are tentatively visible in this new context layer, enabling new political economic actors and a new form of intensive exploitation, such as that demonstrated by the intensification of the pre-cognitive moment discussed earlier.
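A schematic sketch of this loose coupling, under the assumption that each layer exposes a minimal interface and that the notification layer reconciles their outputs into a single stream, might look as follows in Python. The layer names follow the list above; the classes themselves are illustrative assumptions, not any real framework.

# A schematic sketch of loosely coupled "layers": each layer emits items,
# and the notification layer unifies them into one real-time stream.
from typing import List, Protocol

class Layer(Protocol):
    def emit(self) -> List[str]: ...

class SocialLayer:
    def emit(self) -> List[str]:
        return ["@friend mentioned you"]

class SensorLayer:
    def emit(self) -> List[str]:
        return ["location: near the station"]

class NotificationLayer:
    """Reconciles and unifies the streams produced by the other layers."""
    def __init__(self, layers: List[Layer]):
        self.layers = layers

    def unified_stream(self) -> List[str]:
        return [item for layer in self.layers for item in layer.emit()]

hub = NotificationLayer([SocialLayer(), SensorLayer()])
print(hub.unified_stream())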

Iconic New Aesthetic Image from Google Earth
I have argued previously that moments like the "new aesthetic" and glitches (Berry 2011, 2012a, 2012b, 2012c), and, with others, that exceptions and contextual failure, are useful for beginning to map these new systems (Berry et al. 2012; Berry et al. 2013). The black box of these exhaustive systems is spun around us in wireless radio networks and RFID webs – perhaps doubly invisible. We need to critique moments in exhaustive media that are connected to particular forms of what we might call "exhaustive" governmentality, self-monitoring and life-hacking practices – aesthetic, political, social, economic, and so on – but also the ways in which they shape generational entelechies. For example, this could be through the creation of an affective relation with real-time streaming ecologies and a messianic mode of media. Indeed, we might say that anticipatory computing creates a new anticipatory subject, which elsewhere I have called a riparian citizen (Berry 2011: 144).

Indeed, it seems to me that mapping how computation contributes to new generational entelechies, and functions to limit their ability to critically reflect on their own historical dimension of the social process, is a crucial problem – for example, where hegemonic rhetorics of the digital ("new aesthetic", "pixels", "sound waves" and so forth) are widely used to convince and seldom challenged. Indeed, this contributes to a wider discussion of how medial changes create epistemic changes. For me, this project remains linked to a critical ontology of ourselves as ethos, a critical philosophical life and the historical analysis of imposed limits, in order to reach towards experiments with going beyond current conditions and limits (Foucault 1984). Indeed, the possibility of a "digital" enlightenment ethos needs to be translated into a coherent "labor of diverse inquiries", one of which is the urgent focus on the challenge to thinking represented by the intensification of the programming industries' hold on the "enlightenment moment" of our forethought. This requires methodological approaches, which could certainly draw on the archaeological and genealogical analysis of practices suggested by Foucault (1984), but also on the technological and strategic practices associated with shaping both policies and the concrete technologies themselves – perhaps, if not necessarily "Evil Media" (Fuller and Goffey 2012), then certainly critical software and political praxis. Last, and not least, is the theoretical moment required in developing the conceptual and critical means of defining unique forms of relations to things, to others and to ourselves (Foucault 1984) that are not limited by the frame of computationality.




Bibliography

BBC (2012) iPhone Apps Path and Hipster Offer Address-book Apology, BBC, 9 February 2012, http://www.bbc.co.uk/news/technology-16962129

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M. (2012a) Abduction Aesthetic: Computationality and the New Aesthetic, Stunlaw, accessed 18/04/2012, http://stunlaw.blogspot.co.uk/2012/04/abduction-aesthetic-computationality.html

Berry, D. M. (2012b) Computationality and the New Aesthetic, Imperica, accessed 18/04/2012. http://www.imperica.com/viewsreviews/david-m-berry-computationality-and-the-new-aesthetic

Berry, D. M. (2012c) Understanding Digital Humanities, London: Palgrave.

Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M., Muller, N., O'Reilly, R., and Vicente, J. L. (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press.

Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.

Eaton, K. (2011) The Future According to Schmidt: "Augmented Humanity," Integrated into Google, Fast Company, 25 January 2011, http://www.fastcompany.com/1720703/future-according-schmidt-augmented-humanity-integrated-google

Foucault, M. (1984) What is Enlightenment?, in Rabinow P. (ed.) The Foucault Reader, New York, Pantheon Books, pp. 32-50.

Fuller, M. and Goffey, A. (2012) Evil Media, MIT Press.

Google (2012) Google Now, Google, 2012, http://www.google.com/landing/now/

Israel, S. (2012) Age of Context: Really Smart Mobile Devices, Forbes, 5 September 2012, http://www.forbes.com/sites/shelisrael/2012/09/05/age-of-context-really-smart-mobile-devices/

Kant, I (1991) An Answer to the Question: What is Enlightenment?, in Kant: Political Writings, Cambridge: Cambridge University Press.

Mannheim, K. (1952) The Problem of Generations,  in Kecskemeti, P. (ed.) Karl Mannheim: Essays, London: Routledge, pp 276-322, accessed 15/02/2013, http://www.history.ucsb.edu/faculty/marcuse/classes/201/articles/27MannheimGenerations.pdf

Mannheim, K. (1967) Ideology and Utopia, London: Harvest.

Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', Teorema, Vol. XVII/3 (Autumn, 1998).

Richmond, S. (2010) Eric Schmidt: Google Gets Close to 'the Creepy Line', The Telegraph, 5 October 2010, http://blogs.telegraph.co.uk/technology/shanerichmond/100005766/eric-schmidt-getting-close-to-the-creepy-line/

Stiegler, B. (2010) For a New Critique of Political Economy, Cambridge: Polity Press.


Stiegler, B. (2010) Taking Care of Youth and the Generations, Cambridge: Polity Press.


Thrift, N. (2006) Re-inventing Invention: New Tendencies in Capitalist Commodification, Economy and Society, 35.2 (May, 2006): 284.

Wauters, R. (2012) 427 Million Europeans are Now Online, 37% Uses More than One Device: IAB, The Next Web, 31 May 2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/

Wiedemann, C. and Zehle, S. (2012) Depletion Design: A Glossary of Network Ecologies, Amsterdam: Institute for Network Cultures.
