When it comes to automated sensing, the algorithmic processing of information via machine learning can be regarded as one of the indicators of a paradigmatic epistemological shift that has made its transverse way throughout the 20th and 21st centuries: instead of essentially stable and self-contained meaningful entities, relations become the only decisive criteria for rationality, with the consequence that relational formations determine the materialization of sense rather than being conditioned by a teleological meaning or a transcendent subject.
Within the realms of unsupervised machine learning this shift becomes most evident, as the aim is to implement a method for automated reasoning in which the ability to draw conclusions—which in this case refers to identifying patterns in digitalized video or audio material that has been transmitted into distinct, numeric structures—is achieved by abstracting patterns through collating information within a variability of content- and context-specific data, without any coded semantic category that could serve as a basis for verification. Here, what appears to be knowledgeable is not given, and the function of reasoning is not defined as a reproduction of symbolic information that has already been predetermined to be meaningful. To phrase it differently, significance is defined by a speculative process of abstracting similarities out of relational differences that can be found in random data. This marks a transition in which the design of machine learning applications is not concerned with what to learn, but with “learning how to learn” (Parisi Reprogramming Decisionism 4). Luciana Parisi states that “[c]ybernetic instrumentality replaces truth as knowledge with the means of knowing, and announces a metaphysical dimension of machine knowledge originating from within its automated functions of learning and prediction.” (ibid.)
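The kind of label-free pattern abstraction described above can be illustrated, very schematically, with a standard unsupervised method such as k-means clustering: groupings emerge solely from relational differences between data points, without any pre-coded semantic category. The data and parameters below are invented for illustration.

```python
# A minimal sketch of unsupervised pattern abstraction: k-means clustering
# groups numeric data points by relational similarity alone, with no
# semantic label ever entering the computation.

def kmeans_1d(points, k, iters=20):
    # Initialize centroids with the first k points.
    centroids = points[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (relational difference).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (pattern abstraction).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Invented, unlabeled data: two groupings emerge without being named.
data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
centroids, clusters = kmeans_1d(data, k=2)
print(sorted(round(c, 2) for c in centroids))  # two abstracted "patterns"
```

The algorithm never learns *what* the two groups are; it only collates similarities, which is precisely the speculative, non-semantic abstraction at issue here.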
At the core of this epistemology, which can be described as technoecological, lies the premise of connectivity. The enabled connections between different, not merely human corporealities through time and space are not only an effect of the implementation of an algorithmic infrastructure into everyday life that practices the informational capturing of relations. There is also an understanding of worlding as a process of material-semiotic (cf. Haraway 11) differentiation that has been informed by philosophical, socio- and techno-scientific thinking. It assumes connectivity as a primordial condition for becoming as such, by emphasizing that all entities—living and non-living, natural and artificial—come into the world through a process of reciprocal prehension (cf. Whitehead 57ff.; cf. Parisi Technoecologies of Sensation 182) on different scales. In consequence, their borders and surfaces, their substantial qualities, are regarded not as stable, eternal, or essential, but rather become questionable and negotiable. This comprehension of connectivity—which even analytically cannot be separated from differentiation—as the main principle of un/becoming culminated, among other things, in the cybernetic vision of governing as an action that affects the conditions of occurring materializations and thus interferes in the very process of individuation. From this point of view, the implementation of algorithmic infrastructures can be regarded as a powerful tool for intervention as well as a further scale that partakes in the way entities come to matter. It also appears that computational prehending media technologies (cf. Parisi and Hörl 38) reinforce the disclosure of the relational nature of reconfiguring reality on the one hand, while on the other they are decisive for introducing an understanding of the nature of becoming as a deeply technological process (cf. Hörl 1-21).
In the case of unsupervised machine learning, the principle of connectivity/differentiation can be found in the schematic design of artificial neural networks: here, the task of identifying and classifying relations within digitalized audio or visual artefacts, in order to gain a hypothetical abstraction of the connecting elements between random data, is divided among several million simultaneously performing and interconnected algorithmic cyberneurons. To test how successfully such an algorithmic network performs, developers monitor the numeric activity of the cyberneurons while feeding new, previously unprocessed data into the program. One result of this observation caught most of the attention: the selective activity of several cyberneurons matches semantic concepts of what the data is supposed to show—for example, there are cyberneurons that detect solely cat faces, others human faces or human silhouettes (cf. Le). The researchers of Google’s DeepMind termed those “easy to interpret neurons” (Morcos and Barrett), as it seems clear to which semantic concept they respond. But they are also concerned about the role of the other, so-called “confusing” (ibid.) cyberneurons that so far make up the majority of an algorithmic network. While their activity and inactivity can be monitored, these “confusing” cyberneurons are not easy to interpret; their responses to the datasets seem to remain random—the algorithmically conglomerated patterns of information make no semantic sense to the human mind. Have those cyberneurons failed their task, or do they have a “hidden” semantic or syntactic (functional) sense that asks for further investigation? From which side can a lack of comprehension be addressed? The researchers engaged with the phenomenon by evaluating the performance of the network while deleting different cyberneurons (cf. Morcos et al.).
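The distinction between “easy to interpret” and “confusing” cyberneurons can be sketched, schematically, as a class-selectivity score computed from a unit’s mean activation per class—a simplified version of the kind of index used in the cited DeepMind work. All unit names and activation values below are invented for illustration.

```python
# A hedged sketch: quantify how "selective" a unit is by comparing its mean
# activation across two classes. 1.0 = responds to one class only
# ("easy to interpret"); 0.0 = indifferent to class ("confusing").

def selectivity(mean_a, mean_b):
    # Normalized difference of the two class-conditional mean activations.
    hi, lo = max(mean_a, mean_b), min(mean_a, mean_b)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

# Hypothetical mean activations of three units on "cat" vs. "non-cat" data.
units = {"unit_0": (0.9, 0.1),   # fires almost only for cats: selective
         "unit_1": (0.5, 0.5),   # fires equally for both: "confusing"
         "unit_2": (0.3, 0.6)}   # mildly anti-correlated with cats

for name, (cat_act, other_act) in units.items():
    print(name, round(selectivity(cat_act, other_act), 2))
```

Nothing in this score says what a low-selectivity unit is *doing*—which is exactly the interpretive gap the researchers describe.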
The experiment brought forward two conclusions: first, the “confusing” or seemingly indecisive cyberneurons are no less important than the “selective” ones, whose responses are regarded as a correct semantic classification of the images. Second, networks that are able to perform a semantically correct identification of previously unknown data are “more resilient” to the deletion of cyberneurons than networks that fulfill the task only in response to already classified material (cf. Morcos and Barrett). Thus, the empirical examination implies that a network’s capacity for establishing generalizing patterns as a means of semantic abstraction is not exclusively dependent on the seemingly high degree of selectivity found in isolated algorithmic nodes. It even “suggests that highly class selective units may actually be harmful to network performance” (Morcos et al. 8). And importantly, the study shows that every computational process for structuring data in meaningful ways is accompanied by a proliferation of new data randomization.
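The deletion test itself can be sketched in miniature: zero out one unit at a time in a toy “network” and compare accuracy before and after. The network, weights, and data below are invented; a network whose units are redundant, as here, loses nothing when any single unit is ablated, which is the “resilience” the study associates with generalization.

```python
# A minimal sketch of an ablation experiment: delete (zero out) individual
# units of a toy linear classifier and measure the change in accuracy.

def predict(features, weights, ablated=frozenset()):
    # Sum the unit contributions, skipping ablated units; classify by sign.
    score = sum(w * f for i, (w, f) in enumerate(zip(weights, features))
                if i not in ablated)
    return 1 if score > 0 else 0

def accuracy(data, weights, ablated=frozenset()):
    return sum(predict(f, weights, ablated) == y for f, y in data) / len(data)

# Three "units": two redundant ones (0, 1) and a weak, noisy one (2).
weights = [1.0, 1.0, 0.2]
data = [([1.0, 0.9, 0.1], 1), ([-1.0, -0.8, 0.3], 0),
        ([0.8, 1.1, -0.5], 1), ([-0.9, -1.0, 0.4], 0)]

base = accuracy(data, weights)
for unit in range(3):
    drop = base - accuracy(data, weights, ablated=frozenset({unit}))
    print(f"ablate unit {unit}: accuracy drop {drop:.2f}")
```

In the real study the same logic is applied to deep networks with thousands of units; the sketch only shows the shape of the procedure, not its scale.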
Even though the study has not (yet) determined the “hidden sense” of the “confusing” cyberneurons or developed a theory that explains their use, it neither frames them as errors in the sense of false executions, nor does it suggest developing methods to optimize their selectivity in order to prevent effects of confusion or meaninglessness. Rather, it makes an argument for the acknowledgment of being connected and making connections as a profound principle of acting intelligibly. The act of connecting seems to be decisive on many levels: it is crucial for the design of both experimental arrangements—teaching algorithmic structures to learn, and learning how unsupervised machinic learning operates—and it is decisive for the scope of algorithmic processing, as the numeric response is regarded as a way of connecting with the data that then again becomes the ground for considering what and how the network has learned. Hence, regardless of the degree of selectivity or accuracy with respect to semantic classification, being connected and making connections here coincides with the generative quality of being meaningful and making sense. The input does not predetermine the output in a specific teleological way; rather, new speculative information can arise out of the process of connecting what appears to be contingent. Although the task of classification installs a representative logic into the evaluation of the algorithmic performance, it is not axiomatically inscribed into the design of the cybernetic network, which orientates itself towards a process of collating through selecting only, and whose agency does not correspond to a knowledge in terms of precoded semantic categories.
Following Luciana Parisi, “the imbedding of the social practices of learning into the conceptual infrastructure of the networks” can be regarded as a crucial aspect of the computational partaking in an epistemology that transited from “thinking in terms of learning factual data towards a thinking as a practice of knowing how to generate hypotheses, whose indeterminacy in regard to its results expanded the possibility to extend the search for meaningful information” (Parisi Das Lernen lernen oder die algorithmische Entdeckung von Information 103). What is at stake is the reconfiguration of the “culture of sense” (Hörl 4), which in the words of Erich Hörl can be coined a technoecological conception of sense, where “signs are no longer seen primarily as representative but as operative entities” (Hörl 19). Promoted and conditioned by technological hyperconnectivity, meaning and mattering coincide in an onto-epistemological performability. In other words, sense emerges out of a technoecological capacity to differentiate and to be differentiated—significance is conceived as a non-cognitive (but abstractive) sensing that is immanent in the generative process of becoming.
From this point of view, it seems obvious to modulate the notion of connectivity towards Karen Barad’s terminology of the entanglement of matter and meaning—a concept that is highly technoecological. For Barad, distinct entities are primordially conditioned by an “ontic and semantic indeterminacy,” so that they cannot be taken for granted but are temporal materializations within a phenomenon. She argues against the assumption that knowledge is produced by an interaction of essentially separable unities and suggests instead that distinctions are the result of “intra-actions” within material agency (cf. Barad 132-185). From this entangled perspective, what has earlier been described as hyperconnectivity—the notion of being connected and making connections as an integral part of making sense within unsupervised machinic processing—appears to be the cause and not the result of the conceptual and material infrastructure of the algorithmic networks. Moreover, by considering this machinic arrangement that is enabled to learn as an apparatus, the process of differentiation brought forward by abstracting informational patterns through collating random data can be understood as the execution of “agential cuts”: “[A]gential cuts are at once ontic and semantic. It is only through specific agential intra-actions that the boundaries and properties of ‘components’ of phenomena become determinate and that particular articulations become meaningful. In the absence of specific agential intra-actions, these ontic-semantic boundaries are indeterminate. In short, the apparatus specifies an agential cut that enacts a resolution (within the phenomenon) of the semantic, as well as ontic, indeterminacy.” (ibid. 148) So, agential cuts are the generative effect of intra-action processes that transform an onto-epistemological indeterminacy into determinate separability. They are the very action of an intelligible sensing that unfolds an irreducible power within technoecological conditions.
Indeterminacy is here equated with an immediacy that refuses any direct access, so that any form of determination is understood as a process of mediation that constitutes itself through the in- and exclusion of possible onto-semantic materializations, whereby exclusions are the constitutive matter of indeterminacy’s potential (cf. ibid. 179). Though it seems that from this point of view disentanglement is an absolute impossibility, transferred into the realm of the unthinkable, this also draws attention to the fact that detachment inherently partakes in the machinic modulation of sensibility. It might even be argued that this inherent detachment or dividing within the process of concretization causes environments of unfeeling when reconfigured boundaries propel a sense of alienation towards reality. In the case of machine learning, this not only draws attention to the circumstance that learning how to learn is accompanied by an unlearning of meaning and knowledge in a factual sense. It also creates an accountability for the cybernetically and capitalistically informed desires of exploiting the relational by instrumentalizing a capturing of indeterminacy through the algorithmic automatization of sensing processes, which then enables new ways of governing reality. Relations here are not only conceived as an onto-semantically indeterminate condition but become objectified through the determining process of differentiation. From this perspective, computation can no longer be regarded as a mode of representing the world, but as an immanent mediating and agential intervention into un/becoming. Last but not least, the relation between the naturalization of a technoecological culture of sense and the enhancement of an algorithmic automatization of sensing urgently needs to be problematized.
 The notion of technoecology derives from Erich Hörl and will be further addressed later in the text.
 Translated from the German publication by I.R.
 Though Barad doesn’t refer to this term specifically and neither Hörl nor Parisi make any reference to Barad.
Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, 2007.
Haraway, Donna. Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience. Routledge, 1997.
Hörl, Erich. “Introduction to General Ecology: The Ecologization of Thinking”. General Ecology: The New Ecological Paradigm. Bloomsbury Academic, 2017, pp. 1-65.
Le, Quoc V., et al. Building High-Level Features Using Large Scale Unsupervised Learning. 12 Jul. 2012, arxiv.org/pdf/1112.6209.pdf.
Morcos, Ari S., et al. On the Importance of Single Directions for Generalization. 22 May 2018, arxiv.org/pdf/1803.06959.pdf.
Morcos, Ari, and David Barrett. “Understanding Deep Learning through Neuron Deletion”. 21 May 2018, deepmind.com/blog/understanding-deep-learning-through-neuron-deletion/.
Parisi, Luciana. “Das Lernen lernen oder die algorithmische Entdeckung von Information”. Machine Learning: Medien, Infrastrukturen und Technologien der Künstlichen Intelligenz. Transcript, 2018, pp. 93-113.
—. “Technoecologies of Sensation”. Deleuze, Guattari & Ecology. Palgrave Macmillan, 2009, pp. 182-199.
—. “Reprogramming Decisionism”. e-flux journal #85, Oct. 2017, e-flux.com/journal/85/155472/reprogramming-decisionism/.
—, and Erich Hörl. “Was heißt Medienästhetik? Ein Gespräch über die algorithmische Ästhetik, automatisches Denken und die postkybernetische Logik der Komputation”. Zeitschrift für Medienwissenschaft: Medienästhetik. Diaphanes, I/2013, pp. 35-51.
Whitehead, Alfred North. Prozess und Realität: Entwurf einer Kosmologie. Suhrkamp, 1987.