Troubling the Accident: Notes on Compression Hacking – Carleigh Morgan

Takeshi Murata, Monster Movie, 2005, single-channel video, color, sound; 04:19 minutes, Smithsonian American Art Museum, © 2005, Takeshi Murata, Museum purchase through the Luisita L. and Franz H. Denghausen Endowment, 2013.71

The technique of compression hacking is commonly credited with making its first appearance in Takeshi Murata’s 2005 short film, ‘Monster Movie’. Other artistic examples soon followed: Sven König’s ‘Download Finished!’ and David O’Reilly’s ‘Compression Reels’ were joined by the net 2.0 dirty aesthetics of the cyberpunk art collective Paper Rad. Bubbling up from the underground networks, bizarro outposts, and niche digital art groups of the noughties internet, compression hacking steadily found a larger audience. A few years later, it was incorporated into commercial music videos for both Kanye West and the electro-pop group Chairlift. Directed by Ray Tintori, Chairlift’s music video for ‘Evident Utensil’ relied on compression hacking to create a dominant aesthetic of compression artefacts marked by a phantasmagorical array of mutating, warped colours, establishing an affect of digital psychedelia that upended the formal conventions of cinematic space and temporal order. Kanye West’s ‘Welcome to Heartbreak’ accomplished a more fastidiously controlled, choreographed style of compression hacking, combining chromakey and green-screen techniques to unsettle the grammars of traditional spectatorship. Starring West and featured singer Kid Cudi, the video depicted the two ‘melting’ into each other through a series of pixel bleeds, alternating recognisable fragments of each rapper’s face with sequences of digital artefacting that ruptured the representational image and converted it into a chaotic stream of coloured pixels.

In this short essay, I want to think about how compression hacking operates at the technological and aesthetic levels and to consider what this technique exposes about the computational logics that underlie the production of digital images. By understanding how compression artefacts work, we can explore how an emergent teleology of error and discourses of ‘machinic intentionality’ frame this phenomenon in terms of a dialectic of bad accident and good aesthetic. Furthermore, by troubling the category of accident, compression hacking can throw into relief how conversations in machine learning frequently blur the line between technical malfunction and human error—often to displace blame onto technical objects.

Defining some terms

The law of information processing holds that the ‘fewer states one needs to process a message, the faster and more efficient the system is’.[1] The logic of data optimisation is, therefore, native to digital media. As Lev Manovich notes, it can be traced back through various practices in science including ‘nineteenth-century physics, biology, linguistics, statistics, economics and psychology, fields that have all attempted to represent the world or some aspect of it in the simplest possible terms, whether in elements, atoms, parts of the mind, or the Weber-Fechner law of just noticeable difference’.[2]

Data compression follows this law by simplifying how data is stored. The purpose of data compression is typically to optimise storage space or increase data transmission rates. By comparing the visual information of a moving image from frame to frame, compression algorithms exploit its temporal redundancy, recording only the measurable changes in the image data. As a result, only areas of a moving image which describe differential motion or changing luminance values are captured by the compression algorithms. According to this principle, images with fewer substantial changes from frame to frame are easier to encode.
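The frame-to-frame principle described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not any real codec: the hypothetical functions `encode_delta` and `decode_delta` stand in for the inter-frame prediction that formats like MPEG perform far more elaborately.

```python
import numpy as np

def encode_delta(prev, curr):
    """Record only the pixels that changed between consecutive frames."""
    changed = prev != curr
    coords = np.argwhere(changed)   # positions of the changed pixels
    values = curr[changed]          # their new values
    return coords, values

def decode_delta(prev, coords, values):
    """Rebuild the current frame by replaying the recorded changes
    against the previous frame."""
    frame = prev.copy()
    frame[tuple(coords.T)] = values
    return frame
```

If only two pixels change between frames, only two values need to be stored; a static shot compresses to almost nothing, which is why images with few substantial changes are easier to encode.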

Compression hacking uses compression algorithms to make art—it is like a technologically literate paint-by-numbers. Hacking implies an element of human intervention, the labour of the (perhaps crudely) digitally handmade. By exploiting the encoding and decoding processes of compression and decompression, contemporary artists like Rosa Menkman and Takeshi Murata produce compression artefacts which foreground the eerie materiality of digital objects.

Compression artefacts are a product of lossy compression, whereby some of the information about an image is lost—or, more accurately, discarded. Lossy compression can occur for a variety of reasons, but it is not inherently bad—in instances of low bandwidth or limited storage space, lossy compression is desirable. When a compressed file is run through a decoder, the image it produces appears as a distortion of the original. The lossy image is frequently thought of as a downgraded copy, one that prioritises easy storage and retrieval over visual fidelity or clarity. By extension, compression artefacts are often understood as evidence of technical degradation. Aberrations in the image. A glitch.
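A minimal sketch of what ‘discarding’ means here, using simple quantisation (real codecs such as JPEG quantise frequency coefficients rather than raw samples, but the principle is the same): many distinct input values collapse onto a single stored value, so the original can never be exactly recovered—by design, not by malfunction.

```python
def quantise(samples, step=32):
    """Snap each 8-bit sample to the nearest multiple of `step`.
    Fine detail is deliberately discarded to save space."""
    return [min(255, round(v / step) * step) for v in samples]

original = [12, 13, 14, 200, 201, 203]
stored = quantise(original)   # [0, 0, 0, 192, 192, 192]
```

Six distinct samples become two stored values; decompression can only ever return the coarser version, which a viewer perceives as distortion of the original.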

Genealogies of glitch

Compression hacking can be situated in a genealogy of glitch art, but it also fits into a longer historical practice of theorising the accident-in-the-machine. The spectre of error—alien ‘glitches’ in a system—haunts this long era of the technological, invading everything from the industrial advances in steam locomotion to computer science to drone warfare. Cultural theorists like Janne Vanhanen, Benjamin Schultz-Figueroa, and Steve Goodman cast glitches as a Barthesian ‘punctum,’ albeit one updated for art produced under digital conditions. For them, glitches are scars ‘on the pristine surface of science fiction capital’s vision of technological progress’.[3] Like Wintermute in William Gibson’s Neuromancer, glitches are conceptualised as ghostly forces, malfunctions in the normal operating capacity of systems which seemingly emerge ‘out of nothing and from nowhere,’[4] giving viewers ‘a fleeting glimpse of an alien intelligence at work’.[5] This theory of glitch traces its roots to anxieties that attended the industrial and technological shifts demarcating the late Victorian from the early modern period, populated by stories of Frankensteinian technologies and ‘ghosts in the machine’ that depicted the ‘threat to the human subject posed by an autonomous, uncontrollable technology’.[6]

Before ‘glitches’ came to be known as such, the ubiquity of the unnamed accident was a frequent source of terror for people of the industrial age, who struggled to come to grips with the provenance and cause of the technological accident. Many industrial technologies did not have monitoring systems, failsafe options, or emergency stops. As such, industrial machines were constantly threatening to malfunction—the factory explosion was not only an ambient threat, but one of the few ways that workers were given a glimpse of the internal logic of the machine. By violently exploding, industrial machines dramatically exposed their interlocking mechanisms—we might say that the machinic accident demonstrated a machinic logic. The accident was a perverse autopsy, dissecting the machine for further study—we need only be reminded of exploded-view diagrams today to consider how the accident testifies not only to the structure and teleology of a machine, but also to how ‘every technology carries its own negativity, which is invented at the same time as technical progress.’[7]

These mishaps in machinic function also influenced Freud, whose logic of the uncanny hearkened back to a stage of historical development that was unsettled by the animation and activation of machines and automata, transformations that ‘repressed’ animistic thinking and gave birth to a technological unconscious.[8] The concern over the diffuse power captured by machine cognition—or machine feeling—persists today. As visual cultural theorist Carolyn L. Kane writes: ‘computers and algorithmic systems are progressively given authority over human action and experience…yet we have a dwindling capacity to recognize [sic] this’.[9] Viewed from afar, she hypothesises that ‘the entire history of modern art could be construed as a glitch and compression of Enlightenment epistemology’.[10] In this paradigm, any divergence from the ‘clarity and precision of classic optics or Renaissance-based perspectival representation’[11] functions as a type of glitch, upending the register of representation and its conjoined twin, rationality.

Within this framework, it is not surprising to see compression hacking theorised as a practice which brings to the surface of the image the operational failures of digital systems. But compression hacking is more than a mere recuperation of failure as an aesthetic. To call compression artefacts an aesthetic of accident is both to deny the artistic labour which produces compression-hacked images and to impose a moral calculus on the computational logics of compression. Scholars like Casey Boyle advocate for a responsible art theoretical approach to glitch that embraces it as a generative practice, not just a performance of technical failure, because glitches can ‘render apparent that which is transparent by design’.[12] And rather than reading compression hacking as a positivistic valorisation of pure technical failure, art theorists like Greg Hainge argue that compression hacking and its broader genre of glitch foreground how technology always relies on the successful ‘integration of failure into its systems’.[13]

Rethinking the accident

I contend that compression hacking can be used to problematise ‘accident’ as a category by drawing attention to the ways that intentionality and blame are ascribed to technical systems. Consider the pixel as a type of compression artefact. By changing the compression algorithm, an artist also changes the image that is produced. Pixellation unsettles our ability as spectators to perceive the visual data as a representationally recognisable object. The behaviour of pixels popcorning in the image from one moment to the next creates the impression of a digital schizophrenia, where pixels seem to scatter, break through, and penetrate the digital materiality of the screen. The pixels show a moment-to-moment configuration that is not in line with regimes of representation or optical rationality. But there is no reason to suspect that the pixels are somehow glitched.

When pixels jump around onscreen from one frame to the next, they rupture the seamless visual transition that ordinarily takes place in the formal operations of logical, cinematic movement. But they are supposed to do that. A modified compression algorithm will direct the pixels to move in ways that undermine the representational coherence of the image ‘accidentally’, as an effect of the compression algorithm doing exactly what it was written to do.

Pixellation is merely one kind of compression artefact that relies on the enduring functionality of the compression algorithm in order to appear onscreen. We can read beyond pixellation, however, to consider how compression artefacts problematise the technical accident. As with other compression artefacts, seeing pixellation as dysfunction is not a perception, but a judgement. That is, to always read the pixel as a symptom of technical accident is to assign intentionality to the compression algorithm, which remains indifferent to the kinds of image it produces so long as the algorithm itself functions accordingly. Pixellation in this case is not an accident in the sense of mistake, but accident in the sense of contingent. In other words, the pixels can only be oriented to their new positions if the modified compression algorithm works. In creating compression artefacts through lossy compression, the algorithm itself must remain functional. What at first glance appears to be a materialisation of failure turns out to be a simulation of it, one dependent on the successful operations of the compression algorithm for its effects.
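The argument can be made concrete with a toy datamosh, in the spirit of the technique behind ‘Monster Movie’ (the code is illustrative and assumes nothing about any artist’s actual tooling): a frame-to-frame delta recorded from one clip is decoded against a frame from an unrelated clip. Every operation completes successfully; the ‘failure’ exists only in the spectator’s judgement of the resulting image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive frames of "clip A": the encoder stores only what changed.
a_prev = rng.integers(0, 256, (8, 8), dtype=np.uint8)
a_curr = a_prev.copy()
a_curr[2:5, 2:5] = 255                       # a bright patch appears
changed = a_prev != a_curr
coords, values = np.argwhere(changed), a_curr[changed]

# An unrelated frame of "clip B" is substituted as the reference frame.
b_frame = rng.integers(0, 256, (8, 8), dtype=np.uint8)

# The decoder applies clip A's delta to clip B's frame without complaint:
moshed = b_frame.copy()
moshed[tuple(coords.T)] = values
# No step has errored; only the reference frame was 'wrong'.
```

The decoder never raises an exception and never deviates from its specification: the hybrid image is the contingent product of a fully functional algorithm given a mismatched reference, which is the sense of ‘accident’ at stake here.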

Towards a conclusion

Digital media exists at the crossroads between optical and algorithmic epistemologies, and compression hacking exploits this inseparability. It is certainly possible to view compression hacking, and glitch art more generally, as a reaction against the impenetrable, black box logics of technology. By screening what looks like digital dysfunction, compression hacking alerts us to the omnipresent regimes of rationality that condition our experiences of seeing. These practices also permit us to glimpse the algorithmic processes that structure digital images and call attention to the way that the accident can be used to (incorrectly) index blame to technological systems. It is telling that compression hacking is becoming popular now, when there is a marked ‘mathematical intensification and transformation of perception in the age of algorithmic optimization [sic]’.[14]

We could situate compression hacking in what David M Berry calls the New Aesthetic (NA), a form of ‘“breakdown” art linked to the conspicuousness of digital technologies’.[15] I like to think of breakdown not in the sense of dysfunction, but in the sense of ‘take apart’. Berry writes:

          ‘We might conclude that the NA is the cultural eruption of the grammatization [sic] of software logics into everyday life. The NA can be seen as surfacing computational patterns, and in doing so articulates and represents the unseen and little-understood logic of computation, which lies under, over and in the interstices between the modular elements of an increasingly computational society.’ [16]

This site is where compression hacking operates: at the seam between breakdown and break down. Compression hacking exploits the overlap between image and data to show that images of technological failure might depend on precisely the opposite of failure for their materialisation. It reminds us that theories of technology are suspect when they conflate accident with aberration, and it cautions us not to confuse human intervention with computational error. Ultimately, compression artefacts demonstrate that what looks like a malfunction may, in fact, be a sign of a smoothly functioning machine.


[1] Kane, Carolyn L. Chromatic Algorithms: Synthetic Color, Computer Art, and Aesthetics after Code. The University of Chicago Press, 2014, p. 220.

[2] Ibid.

[3] Parikka, Jussi, and Tony D. Sampson. The Spam Book: on Viruses, Porn, and Other Anomalies from the Dark Side of Digital Culture. Hampton Press, 2011, p.133.

[4] Vanhanen, Janne. “Virtual Sound: Examining Glitch and Production.” Contemporary Music Review, vol. 22, no. 4, 2003, pp. 45–52, doi:10.1080/0749446032000156946, p. 46.

[5] Ibid.

[6] Rutsky, R. L. High Techne: Art and Technology from the Machine Aesthetic to the Posthuman. University of Minnesota Press, 1999, p. 125.

[7] Virilio, Paul, et al. Politics of the Very Worst: Paul Virilio: An Interview. Semiotext(e), 1999, p. 89.

[8] Rutsky, p. 133.

[9] Kane, p. 219.

[10] Ibid.

[11] Ibid.

[12] Boyle, Casey. “The Rhetorical Question Concerning Glitch.” Computers and Composition, vol. 35, 2015, pp. 12–29, doi:10.1016/j.compcom.2015.01.003, p. 12.

[13] Hainge, Greg. “Of Glitch and Men: The Place of the Human in the Successful Integration of Failure and Noise in the Digital Realm.” Communication Theory, vol. 17, no. 1, 2007, pp. 26–42, doi:10.1111/j.1468-2885.2007.00286.x, p. 27.

[14] Kane, p. 216.

[15] Berry, David M. Critical Theory and the Digital. Bloomsbury, 2015, p. 56.

[16] Ibid., p. 57.



  1. A lucid and interesting piece. Thank you, Carleigh. I see that your paper and several others zoom in onto glitch as beyond mere errors and evidence something inherent in digital technologies. Specifically, I appreciate your bringing in the notion that negativity/failure is integrated into any technologies along with technical progress. If I could pick your brain a little, what are some sociopolitical and philosophical implications that would arise from a more nuanced, accurate understanding of glitch?


    1. Thank you very much for your perceptive question, Carmen. There are a range of philosophical implications that arise from thinking about the glitch as a) something native to technologies b) something that does not need to be eradicated for the technology to function as needed and c) something that may not be a glitch at all, but the deliberate artistic manipulation of the logic of a computer program in order to produce a visual artefact that resembles “glitchiness” as a stylistic form. In all three of the incidents of glitch listed above, the connection between machine failure and the materialisation of a glitch is interrupted. As scholars working in computational new materialism and sound studies have shown, there is always an amount of noise contained within a signal—usually it exists below the levels of our human perceptual ability and is small enough not to disturb the technological system–and in some cases this noise is fundamental to enabling, rather than hindering, the production of a signal. In this case, the noise contained within a signal is not apparent until it surpasses a certain kind of threshold—we might imagine the simple act of turning up the volume on a speaker sound system in order to make ourselves aware of the gentle crackle and thrum that ordinarily lies just below our perceptual awareness. If glitch can be dislocated from failure, then we cannot always inductively reason that its presence is an indication of a machine malfunction. As a corollary to this axiom, the act of machinic failure may not always produce a glitch. And finally, the materialisation of a glitch may, in fact, turn out to be an artistic intervention that borrows from the formalist qualities of the computer error to simulate a (superficial) aesthetics of failure that is not causally connected to the failure of the computer program which produced it.

      Taking these principles and reversing them works too—sometimes, the so-called “error” that is materialised by a computer system is not technological in origin, but meta-systemic/structural. The error that is materialised within the constraints of a computer system is evidence of one whose origin lies outside it. The error that arises may not point backwards to a fault within the system but is a crystallisation of things that get compressed into it, embedded into the architecture of the technical system or built into its design objectives.

      Ultimately, these cases should encourage us to think about where culpability for malfunction in a system lies, and to problematise the diagnostic tools we rely on when indexing and assigning responsibility for technological errors. I think these discussions would bring some important ethical considerations to bear in the field of machine learning particularly, where often human labour is abstracted away from the machine learning and where “technical errors” are often conjured in discourse to obscure deeper problems with the matter of human-machine labour. When machine learning yields embarrassing outputs, like racial profiling in police stop-and-search, biased recruitment and hiring practices, and racially motivated surveillance technology, we must think critically about how error is indexed—as often, conversations around blame and responsibility divert the focus from complex structural issues to simple coding bugs…and this myopic thinking about technology as a closed system insulated from conversations around gender, class, race and so on inscribes injustice into machine learning and its adjacent fields by absenting these structural causes from the equation. If glitch is only seen by these disciplines as an error in programming or with data, then conversations about race, gender, class, dis/ability are always “out there”, beyond the confines of a closed system.

      For example: Microsoft’s TayBot, roundly criticised as a 24-hour PR disaster for becoming a racist, Nazi-sympathising Twitter bot overnight, was not a technological failure, but Microsoft’s response suggests otherwise. This Twitter bot functioned according to its design, learning from interactions on Twitter and imitating the grammar and content of its interlocutors. It was a sensational success in this regard. The error in TayBot’s case cannot be indexed to the Twitter bot itself–the error, instead, lay somewhere outside the system. TayBot highlighted Microsoft’s severe lack of foresight, but it also indexed another problem: violent, racist, abusive speech on a social media platform. Here is Microsoft’s response: “We have taken Tay offline and are making adjustments.” In other words, they will apply abusive language filters so that Tay does not train itself on racist or abusive language. You can see how Microsoft has diagnosed the problem in this case–the problem is not abusive language itself and the way that Twitter promotes its circulation, but the lack of filters in Tay to separate the “bad” data from the “good”. They’ve manufactured an error within TayBot’s design to propose that language filters are a viable solution. Remember, in this case there was not an error within TayBot’s programming at all–yet the language used by Microsoft locates an insufficiency, a flaw, within TayBot’s protocols in order to recuperate the experiment by making minor adjustments, like the add-on language filters. This is the kind of language of blame, error, and moral recuperation that problematising “glitch as error” should make us more attentive to and critical of in machine learning and related fields.
The language around glitch seems to pinpoint the error to the machine, and so by showing how glitch is not always a reliable indicator of failure or its location, we can think more carefully about how to index culpability for error when dealing with machine learning, neural networks, and even AI–and to use this knowledge to challenge the notion that machine learning, individual accountability, and social justice are completely disconnected.


  2. Carleigh, thank you for yet another thoughtful and elaborate contribution. Glitch: multifaceted implications! I find it inspiring to approach glitch as, in your words, meta-systemic/structural; and thus crucial to (re-)framing accountability, in light of social justice and a fundamental ambivalence in current discourses on machine learning. An ambivalence that treats technical issues as easy scapegoats on the one hand (as you illustrated through the example of Taybot); and on the other, a pursuit of progress and efficiency often at the expense of tackling key issues on sociopolitical and ethical dimensions. A potentially comforting sign is that cogent, critical literature is emerging (e.g. Cathy O’Neil’s Weapons of Math Destruction; Max Tegmark’s Life 3.0; and various books on robot ethics). I am glad that your insights on glitch guide us into a dialog on important motifs and phenomena of the historical present.

    I imagine the discussion will gain even more heft, here or at the workshops. Some other papers also discuss various forms of rupture as revealing and renewing (sensory) experiences, as means to problematize the status quo in technological and cultural terms. Curious to see how the discussion unfolds. 🙂


  3. dear carleig, thx for sharing!

    here few thoughts:

    – wondering if you’d agree over thinking about compression hacking as “making visible the invisible” as in paul klee – with the twist that the invisible here is not anymore this ghostly presence you refer to at the beginning of your paper but rather the invisible hidden materiality of algorithmic images.

    – furthermore, i wonder if your framework can be useful to look at deep dream images as a sample of the creative generativity of glitches – deep dream images intended as a type of image where the weber-fechner law of just noticeable difference goes a bit banana and turns into the formation of new images. within your framework this would be not treated as apophenia anymore (another conceptual instantiation of the negative / mistake-oriented framework you’re trying to deconstruct?) but rather as a generative emergent form of the glitch itself intended as positive creative force.

    – your framework also seems to challenge the Plato relation between original and copy – a framework which obviously is already kind of gone when it comes to digital images but yet i wonder if your approach to glitch possibly reframes it in terms of thinking about glitches not as distortion from a given original but rather as original themselves. Truly original in the sense that glitches as accident cannot really be reproduced, and if they can they certainly cannot be reproduced as easily as the original image from where they’re coming from after the compression process.

    looking forward to talk more!


    1. thanks for your very insightful comment, Emanuele. Having looked at your academic bio, it seems our research focusses are aligned and we think in similar ways about technologies of vision and changing cinematic conventions as a result of developments like virtual cameras, performance capture technology, and digital production.

      To your first point–I had not thought of Klee in relation to this project but I arrived at exactly the same conclusion and appreciate your reminding me of his work. In a longer version of this piece I do detail how glitch functions to describe the material architectures of the algorithmic image, as it intercedes between the appearance of a mistake (the optical illusion of glitch being perceived as an error, rather than a stylistic form) and the aetiology of accident (in other words, an algorithmic glitch may not always give rise to a visible manifestation of error; as a result we cannot use optics as a reliable measure of machinic error).

      As for your second point, the interesting thing about the Weber-Fechner law as it is applied in computer science is that it always refers to a human POV: even the name “Google Deep Dream” speaks to this anthropocentric perspective. In my opinion, the images produced by the sophisticated Deep Dream neural networks are not “machine dreams” as the engineers would suggest with their romantical naming, but it is precisely this non-representational fluidity of the images that is evocative of dreams. These images disturb (or play with) the law of JND and are what the public might expect machine dreams to look like, but this assumption is built on other assumptions, like that machines are designed to ordinarily produce images that are representationally sensible. We hold neural networks to an expectation: that their outcomes will be available to human-oriented sense-making and will be representationally legible. But this assumption is rarely acknowledged and even more rarely questioned, and I think that an engagement with the regimes of vision at work and the responses to Deep Dream would be a good way to tease out the underlying politics of sight between humans and machines.

      the last thing is something i’ve been thinking about for ages–from the Plato distinction you outline to the Lacanian Real vs Virtual; to Baudrillard’s simulation and simulacra; to the “crisis” in media studies that mourns the loss of the index with the advent of digital technologies. Interestingly, however, the high stakes we associate between the original and the copy are very Western ways of understanding the ontological and aesthetic dimensions of media–I wonder how Benjamin’s work of art might be received in China, for example, where the legal and cultural rules around copying and copyright are very different. See here:


  4. Carleigh, thanks for this, very perceptive and inspiring! Your piece made me think of Dorota Maslowska, a prominent Polish writer of the younger generation, who – having published several very successful novels – decided to produce her own music videos under the pseudonym Mister D. I mention her because your critical account of the glitch-accident offers itself to uncover the political potential of Maslowska’s aesthetics (which caused an outrage in Poland back in 2014). If you have a look at this video, for example, titled – you see a prominent Polish gay figure, Michal Pirog, appearing as a glitch, an accidental rendering that obstructs vision, seemingly disturbing the flow of an overtly homophobic video by ‘Mister D.’ This provides a very evocative example of what you call ‘glitchiness’ as style – an artistic intervention that borrows from the conception of ‘glitch as error’ to critique a system that ‘optimizes’ vision.

