23 January 2011

Real-Time Streams and the @Cloud

Heraclitus says, doesn't he, that all things move on and nothing stands still, and comparing things to the stream of a river he said that you cannot step twice into the same stream (Plato, in Cratylus 402A).

The Internet, and the services it offers, have traditionally been a rather static affair. However, there is evidence that we are beginning to see a shift in the way in which we use the web, and also in how the web uses us. This is known as the growth of the so-called ‘real-time web’, and it represents the introduction of software systems that operate in real-time on multiple sources of data fed through millions of data streams into computers, mobiles, and technical devices more generally.[1] Utilising Web 2.0 technologies, and with the mobility of new devices and their locative functionality, these devices can provide useful data to the user on the move. Additionally, they are not mere ‘consumers’ of the data provided; they also generate data themselves, about their location, their status and their usage. Further, they provide data on data (such as clickstreams), sending this back to servers on private data stream channels to be aggregated and analysed. As the web begins to fill with devices and services that have the facility to feed back information and exchange data in real-time, we see the experience of the web begin to change, that is,
1. The web is transitioning from mere interactivity to a more dynamic, real-time web where read-write functions are heading towards balanced synchronicity. The real-time web... is the next logical step in the Internet’s evolution.
2. The complete disaggregation of the web in parallel with the slow decline of the destination web.
3. More and more people are publishing more and more “social objects” and sharing them online. That data deluge is creating a new kind of search opportunity (Malik 2009).

The way we have traditionally thought about the Internet has been in terms of pages, but we are about to see this changing to the concept of ‘streams’ (see Berry 2011). In essence, the change represents a move from a notion of information retrieval, where a user would attend to a particular machine to extract data as and when it was required, to an ecology of data streams that forms an intensive information environment. This notion of living within streams of data is predicated on the use of technical devices that allow us to manage and rely on the streaming feeds. Thus,
Once again, the Internet is shifting before our eyes. Information is increasingly being distributed and presented in real-time streams instead of dedicated Web pages. The shift is palpable, even if it is only in its early stages... The stream is winding its way throughout the Web and organizing it by nowness (Schonfeld 2009).
Importantly, the real-time stream is not just an empirical object; it also serves as a technological imaginary, and as such points to the direction of travel for new computational devices and experiences (indeed, it encourages the consumption of devices and media). In the world of the real-time stream, it is argued, the user will be constantly bombarded with data from a thousand (million) different places, all in real-time, and without the complementary technology to manage and comprehend the data she would drown in information overload (see Datasift for an example of a real-time social media filtering engine). But importantly, the user will also increasingly desire the real-time stream: to be in it, to follow it, and to participate in it; and where the user wishes to opt out, technical devices are being developed to manage this too. For example:


To avoid the speed of a multiply authored follow stream, especially where they might number in the hundreds or thousands of people you follow, instead you might choose to watch the @mention stream instead. This only shows Tweets that directly mention your username, substantially cutting down the amount of information moving past and relying on the social graph, i.e. other people in your network of friends, to filter the data for you. That is, the @mention stream becomes a collectively authored stream of information presented for you to read (Berry 2011).  
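As a rough sketch of this kind of filtering (the tweet structure and the stream itself are invented for illustration, and this is not Twitter's actual API or implementation), reducing a follow stream to an @mention stream might look something like this:

```python
# Minimal sketch: filtering a follow stream down to an @mention stream.
# The tweet dictionaries and the stream source are hypothetical illustrations.

def mention_stream(tweets, username):
    """Yield only those tweets that directly mention the given username."""
    needle = "@" + username.lower()
    for tweet in tweets:
        if needle in tweet["text"].lower():
            yield tweet

# Example: a tiny, invented follow stream reduced to an @mention stream.
follow_stream = [
    {"user": "alice", "text": "Reading about the real-time web"},
    {"user": "bob", "text": "@berry have you seen this piece on streams?"},
    {"user": "carol", "text": "Morning all"},
]

for tweet in mention_stream(follow_stream, "berry"):
    print(tweet["user"], ":", tweet["text"])
```

The point of the sketch is simply that the filtering criterion is social rather than algorithmic: the stream is narrowed by who chooses to address you, not by a ranking function.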


Gillmor (2011) calls this the @mention Cloud, and I think that the idea of a space or 'Cloud' which is a holding location for real-time streams is really interesting. Clouds, as in cloud-computing, are normally understood as location-independent data-centres that are controlled and owned by data-warehousing companies and which provide data, software and even processing power to client computer systems.[2] But clouds can also refer to statistical clusters, where elements are grouped around an anchor, in this case a particular username, or @mention. With his notion of the @mention cloud, Gillmor gestures towards an important part of the problem with following and understanding real-time streams, and that is the relevance and quality of the information they contain. And they do hold important information; it is just sometimes difficult to find, extract and order it (for example, see the curation of real-time data streams in crises or brand management with SwiftRiver). Indeed, one of the problems is that they transcend organisational boundaries and move quickly between different topics and knowledges.

The @mention stream, found on services like Twitter, allows your social graph (that is, the group of people you follow) to act as a kind of social filter, only drawing your attention to the things that they think are important, often called the interest-graph. To attempt to follow the raw data stream from Twitter (which they call the firehose) would be impossible as the dataflow is just too fast; indeed, according to ComScore, there were over 25 billion tweets in 2010 alone (Jeavons 2011). Interestingly, there are now so-called data resellers, such as Gnip, that offer subsets of the firehose, called the halfhose (50% of the data stream), the decahose (10%) and the spritzer (1-2%). Information management therefore becomes an increasingly important concern in order to keep some form of relationship with the flow of data, one that doesn't halt the flow but rather allows the user or organisation to step into and out of a number of different streams in an intuitive and useful way. This is because the web becomes,
A stream. A real time, flowing, dynamic stream of information — that we as users and participants can dip in and out of and whether we participate in them or simply observe we are [...] a part of this flow. Stowe Boyd talks about this as the web as flow: “the first glimmers of a web that isn’t about pages and browsers” (Borthwick 2009).
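Returning to the sampled subsets of the firehose mentioned above, the basic mechanism can be illustrated with a short sketch. Everything here is an assumption for illustration (the item format, the sampling rates mapped to the halfhose, decahose and spritzer); it is not Gnip's or Twitter's actual delivery infrastructure:

```python
# Minimal sketch: producing sampled subsets of a full "firehose" stream
# by random sampling, in the spirit of the halfhose (50%), decahose (10%)
# and spritzer (1-2%) described above.
import random

def sampled_stream(firehose, rate, seed=None):
    """Yield roughly `rate` (between 0 and 1) of the items in the firehose."""
    rng = random.Random(seed)
    for item in firehose:
        if rng.random() < rate:
            yield item

# An invented firehose of 100,000 items, reduced to a "decahose".
firehose = ({"id": i, "text": f"tweet {i}"} for i in range(100_000))
decahose = sampled_stream(firehose, rate=0.10, seed=42)

print(sum(1 for _ in decahose))  # roughly 10,000 items
```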

Of course, real-time streams and clouds could also enable the emergence of what is being called "cloud jacking" and "cloud hijacking" (Cohen 2009), and we might even envision dark-streams and dark-clouds; indeed, we could think of Wikileaks as a dark-cloud itself. We could imagine that these dark-clouds absorb data, rather like a black hole absorbs light, and that we are unable to perform search or discovery within them, that is, they remain opaque to us. Within certain industries this kind of dark-cloud system could be useful for anonymising streams, or for creating aggregations or search results without revealing the dark algorithms that drive them (Google's PageRank could be thought of as a dark algorithm). Unsurprisingly, in the finance sector a similar concept has emerged, the so-called dark pool.[3]

However, the key question remains: how might we transform an @mention stream from its diachronic state, as a fast-moving stream, into a frozen place of immanence, that is, a synchronic state? This can be understood as the ability to use cloud-computing to freeze statistical @mention clouds, which I want to call the @Cloud.[4] The reason is that, as real-time streams currently stand, they are increasingly difficult to manipulate, refer to, or even connect and compare. The @Cloud would therefore need to implement the function that Kittler argues is intrinsic to all digital media, that is, Time Axis Manipulation,[5]

[which] shift[s] the chronological order of time to the parallel order of space – and spaces are things that can principally be restructured – [thus] written media become elementary forms that not only allow temporal order to be stored but also to be manipulated and reversed (Krämer 2006). 

I also want to suggest that the @Cloud would preferably combine the features of computational search (exemplified by Google) and the social graph (exemplified by Facebook or Twitter). The key is to be able to translate multiple fast-moving streams of information, that is, a time-based medium, into a space-based medium. Providing the interface for temporality through storage: this is the essence of the @Cloud. But the @Cloud is not merely a storage cloud itself, as it allows multiple stream-like access points back into the information that it has collected, that you have forwarded to it, or that friends in your social graph have suggested (we could call these @streams). The @Cloud would, therefore, allow the replaying of the streams, the rewinding or fast-forwarding of the data, and even the move to a different dimension to view the information from above, below, or even comparatively against other data (anyone who has read Flatland will understand what I am suggesting here).
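A minimal sketch may help to make the time axis manipulation concrete: a diachronic stream is poured into a synchronic, spatially addressable store, which can then be replayed forwards, backwards, or from an arbitrary point. The @Cloud class below is my own illustrative assumption, not an existing system or API:

```python
# Minimal sketch of "time axis manipulation": freezing a diachronic stream
# into a synchronic store that can be replayed, rewound or read in reverse.
from bisect import bisect_left, bisect_right

class AtCloud:
    def __init__(self):
        self._items = []  # (timestamp, payload) pairs kept in time order

    def absorb(self, timestamp, payload):
        """Pour a stream item into the cloud."""
        self._items.append((timestamp, payload))
        self._items.sort(key=lambda pair: pair[0])

    def replay(self, start=None, end=None, reverse=False):
        """Re-stream the stored items between two points on the time axis."""
        times = [t for t, _ in self._items]
        lo = 0 if start is None else bisect_left(times, start)
        hi = len(times) if end is None else bisect_right(times, end)
        window = self._items[lo:hi]
        return reversed(window) if reverse else iter(window)

cloud = AtCloud()
for t, text in [(1, "@berry morning"), (5, "new post up"), (9, "@berry thoughts?")]:
    cloud.absorb(t, text)

for t, text in cloud.replay(start=4, reverse=True):   # rewind a slice of the stream
    print(t, text)
```

The design point is simply that once the chronological order has been shifted into the parallel order of space (a list in memory), the temporal order can be stored, manipulated and reversed, exactly as Krämer describes below.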


We can think of the @Cloud as a sink, into which we can pour various kinds of information, both diachronic (i.e. moving data streams that continue to flow into it) and synchronic (e.g. email, books, PDFs, photos, websites, URLs, etc.).[6] But it is more than just a cloud-based storage service or data-locker.[7] The @Cloud can then act as a meta-interface with multiple dimensions into a rapidly changing datascape, including real-time streaming of itself (see Rao 2009, Gillmor 2011). This is, of course, not just RSS, which is information syndication, as it brings to bear the advantages of the social graph and even what we might call the thing-graph (i.e. the collection of devices, and things, that you have connected together through the @Cloud itself). Thus, one could watch one's own @streams from @clouds, including media-streams, photo-streams, @mention streams, and @reading streams. Each stream could potentially be connected to the others, and relations, ideas and concepts from each stream could interact and provoke combinations, questions and narratives that might not be apparent in isolation.
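One way such connections might arise is by simply merging the separate @streams into a single time-ordered stream, so that items from different sources sit next to one another. The sketch below assumes invented media-, photo- and @mention streams, each already ordered by timestamp:

```python
# Minimal sketch: combining several @streams into one time-ordered stream.
# The stream contents are invented for illustration.
import heapq

media_stream   = [(2, "media", "clip: lecture excerpt"), (8, "media", "podcast episode")]
photo_stream   = [(3, "photo", "IMG_0042"), (6, "photo", "IMG_0043")]
mention_stream = [(1, "@mention", "@berry nice post"), (7, "@mention", "@berry see this")]

# heapq.merge assumes each input is already sorted by its first element.
combined = heapq.merge(media_stream, photo_stream, mention_stream)

for timestamp, source, item in combined:
    print(timestamp, source, item)
```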


Indeed, thinking of the @Cloud as an interface might be the best way of understanding it: a highly visual experience for viewing complex time-based media in a number of computationally and social-media-assisted ways. Treating all information in the @Cloud as a potential stream (frozen/dried streams), rather than as a collection of discrete objects, which can then be re-streamed using a number of different search or tag criteria, would also open up new narrative modes of interpretation (certainly Qwiki demonstrates one way of reconceptualising search as a streamed media experience, and Apple iPhoto 9 with its 'Faces' and 'Places' functions shows another). We could also imagine viewing one's @Cloud through filters such as heat-maps, wordle-type visualisations, location, people, places or even through versioning systems which highlight change within data streams.[8] Importantly, we could also share portions of our @clouds, creating new @tropospheres that others could explore.[9]


Notes

[1] Programming these new real-time services will pose particular problems, as they will require computer code to remediate static web services to distributed computational devices. They also require the kind of distributed computing power that is able to respond, process and communicate through networks.


[2] The Ecologist argues that "[c]alling this vague collection of ‘other’ computers a ‘cloud’ evokes a vaporous world of weightless websites, but that would be misleading. In truth, The Cloud consists of dataprocessing warehouses the size of football fields, strung together by fat cables and inside which air-conditioning fans cool rows of computing servers 24 hours a day. Far from being weightless, the expanding digital cloud is really an enormous necklace of steel, silicon and concrete." (Ecologist 2008)


[3] Whilst within data circles there has been a move to the language of streams and clouds, within the finance sector there has been a corresponding rise in the use of the language of so-called dark pools: these are "trading venues that match buyers and sellers anonymously. [By] concealing their identity, as well as the number of shares bought or sold, dark pools help institutional investors avoid price movements as the wider market reacts to their trades." (Economist 2009)


[4] We might think of the @cloud as a platform for streaming services, completely customisable to user requirements in terms of search criteria and relevance.


[5] The "means of time axis manipulation are only possible when the things that occupy a place in time and space are not only seen as singular events but as reproducible data. Such production sites of data are ‘discourse networks’. Discourse networks are media in the broader sense: they form networks of technological and institutional elements." (Krämer 2006) 

[6] The idea of collating email into an @cloud that can then be streamed back out, perhaps in a short format, translates the static nature of email into a dynamic streaming format. I can imagine that an @stream for email would be extremely useful. 

[7] Streaming media from an @cloud into custom @streams, such as photo-streams, may be part of the investment Apple is making in huge data centres.


[8] Services that help to filter the real-time streams include peer-scoring services such as Klout and PeerIndex, which calculate your 'authority' in relation to other users of real-time services. Datasift, for example, allows you to combine this reputational data with geo-location, 'sentiment' and many other filters to perform search and discovery on the Twitter real-time data stream. This could be used for crisis-tracking, brand tracking/management, or other forms of rapid data discovery. Datasift even has rules such as 'no swearing', which enable the automatic bowdlerisation of Twitter, or the matching of patterns of text such as ISBN codes.
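A sketch of how such combined filters might work (this is not Datasift's actual query language; the tweet fields, thresholds and word list are invented for illustration):

```python
# Minimal sketch: combining reputational, sentiment, "no swearing" and
# text-pattern filters over a stream of tweets. All fields are hypothetical.
import re

ISBN_PATTERN = re.compile(r"\b(?:97[89][- ]?)?\d{1,5}[- ]?\d+[- ]?\d+[- ]?[\dX]\b")
SWEAR_WORDS = {"darn", "heck"}  # placeholder word list

def passes_filters(tweet, min_authority=40, min_sentiment=0.0):
    text = tweet["text"].lower()
    return (
        tweet.get("authority", 0) >= min_authority          # peer-scoring threshold
        and tweet.get("sentiment", 0.0) >= min_sentiment    # sentiment threshold
        and not any(word in text.split() for word in SWEAR_WORDS)  # "no swearing" rule
        and ISBN_PATTERN.search(tweet["text"]) is not None  # ISBN-like pattern match
    )

tweet = {"text": "Just read 978-0-230-24418-4, highly recommended",
         "authority": 63, "sentiment": 0.8}
print(passes_filters(tweet))  # True
```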


[9] Salman Rushdie in Haroun and the Sea Of Stories has a wonderful passage that describes something similar to the @cloud, that is a living stream of narratives and temporalities: "Haroun looked into the water and saw that it was made up of a thousand thousand thousand and one different currents, each one a different color, weaving in and out of one another like a liquid tapestry of breathtaking complexity; and [the Water Genie] explained that these were the Streams of Story, that each colored strand represented and contained a single tale. Different parts of the Ocean contained different sorts of stories, and as all the stories that had ever been told and many that were still in the process of being invented could be found here, the Ocean of the Streams of Story was in fact the biggest library in the universe. And because the stories were held here in fluid form, they retained the ability to change, to become new versions of themselves, to join up with other stories and so become yet other stories; so that unlike a library of books, the Ocean of the Streams of Story was much more than a storeroom of yarns. It was not dead but alive." (Rushdie, quoted in Rumsey 2009).


14 January 2011

Digital Humanities: First, Second and Third Wave



Few dispute that digital technology is fundamentally changing the way in which we engage in the research process. Indeed, it is becoming more and more evident that research is increasingly being mediated through digital technology. Many argue that this mediation is slowly beginning to change what it means to undertake research, affecting both the epistemologies and ontologies that underlie a research programme (sometimes conceptualised as 'close' versus 'distant' reading, see Moretti 2000).1 Of course, this development varies across disciplines and research agendas, with some more reliant on digital technology than others, but it is rare to find an academic today who has no recourse to digital technology as part of their research activity. Library catalogues are now probably the minimal way in which an academic can access books and research articles without the use of a computer, but with card indexes dying a slow and certain death (Baker 1996, 2001) there remain fewer and fewer means for the non-digital scholar to undertake research in the modern university (see JAH 2008). Not to mention the ubiquity of email, Google searches and bibliographic databases, which become increasingly crucial as more of the world's libraries are scanned and placed online. These, of course, also produce their own specific problems, such as huge quantities of articles, texts and data suddenly available at the researcher's fingertips:


It is now quite clear that historians will have to grapple with abundance, not scarcity. Several million books have been digitized by Google and the Open Content Alliance in the last two years, with millions more on the way shortly; the Library of Congress has scanned and made available online millions of images and documents from its collection; ProQuest has digitized millions of pages of newspapers, and nearly every day we are confronted with a new digital historical resource of almost unimaginable size (JAH 2008).


Whilst some decry the loss of the skills and techniques of older research traditions, which relied heavily on close reading, others have warmly embraced what has come to be called the digital humanities, which has been strongly associated with the use of computational methods to assist the humanities scholar (Schreibman et al 2008; Schnapp and Presner 2009; Presner 2010; Hayles 2011).

The digital humanities themselves have had a rather interesting history. Starting out as ‘computing in the humanities’, or ‘humanities computing’, the field in its early days was very often seen as a technical support role to the work of the ‘real’ humanities scholars who would drive the projects. This was the application of the computer to the disciplines of the humanities, what has been described as treating the ‘machine’s efficiency as a servant’ rather than ‘its participant enabling of criticism’ (McCarty 2009). As Hayles explains, the change to the term ‘“Digital Humanities” was meant to signal that the field had emerged from the low-prestige status of a support service into a genuinely intellectual endeavour with its own professional practices, rigorous standards, and exciting theoretical explorations’ (Hayles 2011). Ironically, as the projects became bigger, more complex, and developed computational techniques as an intrinsic part of the research process, technically proficient researchers increasingly saw the computational as part and parcel of what it is to do research in the humanities itself. That is, computational technology has become the very condition of possibility required in order to think about many of the questions raised in the humanities today. For example, Schnapp and Presner (2009), in the Digital Humanities Manifesto 2.0, explained that,


The first wave of digital humanities work was quantitative, mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays. The second wave is qualitative, interpretive, experiential, emotive, generative in character. It harnesses digital toolkits in the service of the Humanities’ core methodological strengths: attention to complexity, medium specificity, historical context, analytical depth, critique and interpretation (Schnapp and Presner 2009, original emphasis).

Presner (2010) further argues that,


the first wave of Digital Humanities scholarship in the late 1990s and early 2000s tended to focus on large-scale digitization projects and the establishment of technological infrastructure, [while] the current second wave of Digital Humanities—what can be called “Digital Humanities 2.0”—is deeply generative, creating the environments and tools for producing, curating, and interacting with knowledge that is “born digital” and lives in various digital contexts. While the first wave of Digital Humanities concentrated, perhaps somewhat narrowly, on text analysis (such as classification systems, mark-up, text encoding, and scholarly editing) within established disciplines, Digital Humanities 2.0 introduces entirely new disciplinary paradigms, convergent fields, hybrid methodologies, and even new publication models that are often not derived from or limited to print culture (Presner 2010: 6).

The question of quite how the digital humanities undertake their research, and whether the notions of first- and second-wave digital humanities capture the current state of different working practices and methods in the field, remains contested. Nonetheless, these can be useful analytical concepts for thinking through the changes in the digital humanities. We might, however, observe the following: first-wave digital humanities involved the building of infrastructure for studying humanities texts through digital repositories, text markup and the like, whereas second-wave digital humanities expands the notional limits of the archive to include born-digital works, bringing the humanities' own methodological toolkits to bear on materials such as electronic literature (e-lit), interactive fiction (IF), web-based artefacts, and so forth.

Indeed, I think that we need to further explore both first- and second-wave digital humanities, but also start to map out a tentative path for a third wave of digital humanities, one focused on the underlying computationality of the forms held within a computational medium (I call this the computational turn in the Arts and Humanities, see Berry 2011).2 That is, looking at the digital component of the digital humanities in light of its medium specificity, as a way of thinking about how medial changes produce epistemic ones. This approach draws on recent work in the digital humanities but also on the specifics of general computability made available by particular platforms (Fuller, M. 2008; Manovich 2008; Montfort and Bogost 2009; Berry 2011). Therefore, I tentatively raise the idea that neither first- nor second-wave digital humanities really problematised what Lakatos (1980) would have called the ‘hard-core’ of the humanities, the unspoken assumptions and ontological foundations that support the ‘normal’ print-based research that humanities scholars undertake on an everyday basis (although see Presner 2010, who includes some discussion of this in his definition of digital humanities 2.0). The use of digital technologies can also problematise where disciplinary boundaries have been drawn in the past, especially considering the tendency of the digital to dissolve traditional institutional structures.3 Indeed, we could say that third-wave digital humanities points to the way in which digital technology highlights the anomalies generated in a humanities research project, which leads to a questioning of the assumptions implicit in such research, e.g. close reading, canon formation, periodization, liberal humanism, etc. We are, as Presner (2010: 10) argues, ‘at the beginning of a shift in standards governing permissible problems, concepts, and explanations, and also in the midst of a transformation of the institutional and conceptual conditions of possibility for the generation, transmission, accessibility, and preservation of knowledge.’

As I argue elsewhere,

What I would like to suggest is that instead we are beginning to see the cultural importance of the digital as the unifying idea of the university. Initially [changes in technology] has tended to be associated with notions such as information literacy and digital literacy... [but] we should be thinking about what reading and writing actually should mean in a computational age. This is to argue for critical understanding of the literature of the digital, and... [the] shared digital culture through a form of digital Bildung. Here I am not calling for a return to the humanities of the past...‘for some humans’, but rather to a liberal arts that is ‘for all humans’ (see Fuller 2010). [T]his is to call for the development of a digital intellect as opposed to a digital intelligence... [Here] as Hofstadter (1963) argues, Intellect... is the critical, creative, and contemplative side of mind. Whereas intelligence seeks to grasp, manipulate, re-order, adjust, intellect examines, ponders, wonders, theorizes, criticizes, imagines. Intelligence will seize the immediate meaning in a situation and evaluate it. Intellect evaluates evaluations, and looks for the meanings of situations as a whole... Intellect [is] a unique manifestation of human dignity (Berry 2011: 20).4

Thus, there is an undeniable cultural dimension to computation and the medial affordances of software. This connection points to the importance of engaging with and understanding computer code; indeed, computer code can serve as an index of culture more generally (imagine the digital humanities mapping different programming languages to the cultural possibilities and practices that they afford, e.g. HTML to cyberculture, AJAX to social media, etc.), not to mention mapping 'editing' software to new forms of film narrative, music, and art more generally, or cultural criticism via the digital humanities. As Liu (2011) argues:


In the digital humanities, cultural criticism–in both its interpretive and advocacy modes–has been noticeably absent by comparison with the mainstream humanities or, even more strikingly, with “new media studies” (populated as the latter is by net critics, tactical media critics, hacktivists, and so on). We digital humanists develop tools, data, metadata, and archives critically; and we have also developed critical positions on the nature of such resources (e.g., disputing whether computational methods are best used for truth-finding or, as Lisa Samuels and Jerome McGann put it, “deformation”). But rarely do we extend the issues involved into the register of society, economics, politics, or culture (Liu 2011).


This means that we could further ask the question: what is culture, politics and the economy after it has been ‘softwarized’? (Manovich 2008: 41). That is not to say that humanities scholars, digital or otherwise, must be able to code or 'build' (cf. Ramsay 2011). Rather, understanding the digital is in some sense also connected to an understanding of code, through the study of the medial changes that it affords, that is, a hermeneutics of code (see Clinamen 2011, Sample 2011) or critical approaches to software itself (Manovich 2008, Berry 2011).5 One example, facilitated by software and code, is the emergence of the real-time stream of data, as opposed to the static knowledge objects the humanities have traditionally been focussed upon, e.g. books and papers (see Flanders 2009). These include geolocation, real-time databases, Twitter, social media, SMS novels, and countless other processual and rapidly changing digital forms (including, of course, the Internet itself, which is becoming increasingly stream-like).

These streams are real-time, and it is this aspect that is important, because they deliver liveness, or ‘nowness’, to their users and contributors. Many technologists argue that we are currently undergoing a transition from a ‘slow web to a fast-moving stream... And as this happens we are shifting our attention from the past to the present, and our “now” is getting shorter’. Today, we live and work among a multitude of data streams of varying lengths, modulations, qualities, quantities and granularities. The new streams constitute a new kind of public, one that is ephemeral and constantly changing, but which modulates and reports a kind of reflexive aggregate of what we might think of as a stream-based publicness – which we might therefore call riparian-publicity (Berry 2011: 144).6

New methods and approaches, such as data visualisation, will be needed to track and understand these new streaming knowledge forms, both in terms of pattern and narrative. Of course, there are many existing humanities approaches that could also provide real value when applied to these digital forms (both screenic and non-screenic).7 I also think that this could be a resourceful way of understanding cultural production more generally: for example, digital typesetting transformed the print newspaper industry, and eBook and eInk technologies are likely to do so again (the iPad and Kindle are ultimately devices for accessing real-time streaming culture). Not to mention how digital streams are infusing society, economics and politics. Therefore, I think that we should be taking the computational turn seriously as a key research question for the humanities (and the social sciences), and it is one that becomes increasingly difficult to avoid.
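To give a very small sense of what tracking pattern in a stream might involve, the sketch below counts how often a term appears in successive time buckets of a stream and renders a crude text visualisation; the stream items and the term are invented for illustration, and real projects would of course use richer data and proper visualisation tools:

```python
# Minimal sketch: tracking the frequency of a term across time "buckets"
# of a stream and printing a crude bar chart of the resulting pattern.
from collections import Counter

stream = [
    (0, "announcing a new digital humanities centre"),
    (1, "the digital turn in the humanities"),
    (1, "conference live-tweeting begins"),
    (2, "digital methods workshop underway"),
    (2, "more on digital archives"),
    (3, "closing remarks"),
]

def term_frequency_over_time(stream, term, bucket_size=1):
    counts = Counter()
    for timestamp, text in stream:
        if term in text.lower():
            counts[timestamp // bucket_size] += 1
    return counts

counts = term_frequency_over_time(stream, "digital")
for bucket in sorted(counts):
    print(f"t={bucket}: " + "#" * counts[bucket])   # e.g. t=2: ##
```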




Notes

1 As Moretti (2007) points out, the traditional humanities focuses on a "minimal fraction of the literary field...[a] canon of two hundred novels, for instance, sounds very large for nineteenth-century Britain (and is much larger than the current one), but is still less than one per cent of the novels that were actually published: twenty thousand, thirty, more, no one really knows—and close reading won’t help here, a novel a day every day of the year would take a century or so... And it's not even a matter of time, but of method: a field this large cannot be understood by stitching together separate bits of knowledge about individual cases, because it isn't a sum of individual cases: it's a collective system, that should be grasped as such, as a whole" (Moretti 2007: 3-4).


2 What isn't captured by the notion of 'waves' is the complementary simultaneity of the approaches. Layers might be a better term; indeed, layers would indicate that their interaction and inter-relations are crucial to understanding the digital humanities.


3 For example as Liu (2003) argues, "[o]ne of the main tasks of those establishing programs in humanities technology, I suggest, is to use IT to refund and reorganize humanities work with the ultimate goal not of instituting, as it were, Humanities, Inc., but of giving the humanities the freedom and resources to imagine humanities scholarship anew in relation both to academic and business molds. The relation between narrow research communities and broad student audiences, for example, need not be the same as that between business producers and consumers. But unless the existing organizational paradigms for humanities work are supplemented by new models (e.g., laboratory- or studio-like environments in which faculty mix with graduate and undergraduate students in production work, or new research units intermixing faculty from the humanities, arts, sciences, engineering, and social sciences), it will become increasingly difficult to embed the particular knowledge of the humanities within the general economy of knowledge work." (Liu 2003: 8) 


4 If software and code become the condition of possibility for unifying the multiple knowledges now produced in the university, then the ability to think oneself, taught by rote learning of methods, calculation, equations, readings, canons, processes, etc, might become less important. Although there might be less need for an individual ability to perform these mental feats or, perhaps, even recall the entire canon ourselves due to its size and scope, using technical devices, in conjunction with collaborative methods of working and studying, would enable a cognitively supported method instead. The internalisation of particular practices that have been instilled for hundreds of years in children and students would need to be rethought, and in doing so the commonality of thinking qua thinking produced by this pedagogy would also change. It would be a radical decentring in some ways, as the Humboldtian subject filled with culture and a certain notion of rationality, would no longer exist, rather, the computational subject would know where to recall culture as and when it was needed in conjunction with computationally available others, a just-in-time cultural subject, perhaps, to feed into a certain form of connected computationally supported thinking through and visualised presentation. Rather than a method of thinking with eyes and hand, we would have a method of thinking with eyes and screen (Berry 2011).  



5 Currently, the digital humanities and software studies or critical code studies tend to be rather separate, but there is, of course, the potential for an exchange of ideas and concepts in terms of their respective theoretical and empirical approaches.


6 A good example of riparian publicity is the use of @mention streams on Twitter. To avoid the speed of a multiply authored follow stream, especially where those you follow might number in the hundreds or thousands, you might choose to watch the @mention stream instead. This only shows Tweets that directly mention your username, substantially cutting down the amount of information moving past and relying on the social graph, i.e. other people in your network of friends, to filter the data for you. That is, the @mention stream becomes a collectively authored stream of information presented for you to read.


7 See Montfort (2004) where he argues, "When scholars consider electronic literature, the screen is often portrayed as an essential aspect of all creative and communicative computing — a fixture, perhaps even a basis, for new media. The screen is relatively new on the scene, however. Early interaction with computers happened largely on paper: on paper tape, on punchcards, and on print terminals and teletypewriters, with their scroll-like supplies of continuous paper for printing output and input both... By looking back to early new media and examining the role of paper... we can correct the 'screen essentialist' assumption about computing and understand better the materiality of the computer text. While our understanding of 'materiality' may not be limited to the physical substance on which the text appears, that substance is certainly part of a work's material nature, so it makes sense to comment on that substance."  (Montfort 2004, emphasis added).





Bibliography


Baker, N. (1996) The Size of Thoughts: Essays and Other Lumber, New York: Random House.

Baker, N. (2001) Double Fold: Libraries and the Assault on Paper, New York: Random House.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Clinamen (2011) The Procedural Rhetorics of the Obama Campaign, retrieved 15/1/2011 from http://clinamen.jamesjbrownjr.net/2011/01/15/the-procedural-rhetorics-of-the-obama-campaign/

Flanders, J. (2009) The Productive Unease of 21st-century Digital Scholarship, Digital Humanities Quarterly, Summer 2009, Volume 3 Number 3, retrieved 10/10/2010 from http://digitalhumanities.org/dhq/vol/3/3/000055/000055.html

Fuller, M. (2008) Software Studies \ A Lexicon, London: MIT Press.

Fuller, S. (2010) Humanity: The Always Already – or Never to be – Object of the Social Sciences?, in Bouwel, J. W. (ed.) The Social Sciences and Democracy, London: Palgrave.

Hayles, N. K. (2011) How We Think: Transforming Power and Digital Technologies, in Berry, D. M. (ed.) Understanding the Digital Humanities, London: Palgrave.

JAH (2008) Interchange: The Promise of Digital History, The Journal of American History, retrieved 12/12/2010 from http://www.journalofamericanhistory.org/issues/952/interchange/index.html

Lakatos, I. (1980) Methodology of Scientific Research Programmes, Cambridge: Cambridge University Press.

Liu, A. (2003) The Humanities: A Technical Profession, retrieved 15/12/2010 from http://www.english.ucsb.edu/faculty/ayliu/research/talks/2003mla/liu_talk.pdf

Liu, A. (2011) Where is Cultural Criticism in the Digital Humanities, retrieved 15/1/2011 from http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities/

Manovich, L. (2008) Software Takes Command, retrieved 1/12/2010 from http://lab.softwarestudies.com/2008/11/softbook.html

McCarty, W. (2009) Attending from and to the machine, retrieved 18/09/2010 from http://staff.cch.kcl.ac.uk/~wmccarty/essays/McCarty,%20Inaugural.pdf

Montfort, N. (2004) Continuous Paper: The Early Materiality and Workings of Electronic Literature, retrieved 16/1/2011 from http://nickm.com/writing/essays/continuous_paper_mla.html

Montfort, N. and Bogost, I. (2009) Racing the Beam: The Atari Video Computer System, London: MIT Press.

Moretti, F. (2000) Conjectures on World Literature, retrieved 20/10/2010 from http://www.newleftreview.org/A2094

Moretti, F. (2007) Graphs, Maps, Trees: Abstract Models for a Literary History, London, Verso.

Ramsay, S. (2011) On Building, retrieved 15/1/2011 from http://lenz.unl.edu/wordpress/?p=340

Sample, M. (2011) Criminal Code: The Procedural Logic of Crime in Videogames, retrieved 15/1/2011 from http://www.samplereality.com/2011/01/14/criminal-code-the-procedural-logic-of-crime-in-videogames/

Schnapp, J. and Presner, T. (2009) Digital Humanities Manifesto 2.0, retrieved 14/10/2010 from http://www.humanitiesblast.com/manifesto/Manifesto_V2.pdf

Schreibman, S., Siemens, R., and Unsworth, J. (2008) A Companion to Digital Humanities, London: Wiley-Blackwell.
