Fantasies of Feeling – Sascha Pohflepp

Why would we want machines to “feel,” perhaps even like a human? Guaranteeing that a machine can assume the perspective of its maker by ensuring its existence to be “human-centered,” as called for by some, may foreclose other modes of thinking and ultimately entrench humanity’s privileged position within a planetary ecology of minds, when perhaps we should be working toward a goal of de-centering. Instead, I want to suggest that the potential for “machine feelings” beyond conventional narratives may lie in their ability to act as “intercalary elements” (Gilles Deleuze) between different natural minds such as ours and other entities with whom we find it difficult to “communicate” directly, such as plants, the Earth system or our very own neural correlates.

In its common usage, the concept of “feeling” stretches from raw perception on the sensory level all the way to complex human emotions. A physicalist standpoint would hold that feelings must as “mental states [be] states of the body” (Nagel, 1974) and that any feeling would thus emerge from the nerves, neurons and networks that have formed along the arrow of one’s personal history. While humans can empathize with one another, no two individuals will ever have identical networks (and activations), and we can never “know” what another human truly feels. Philosopher Thomas Nagel extended this epistemological dilemma when he asked what it might be like “to be a bat” and found that while “our own experience provides the basic material for our imagination” (1974), the sensory arrays and body plans of a bat are vastly different, giving rise to a radically “subjective character of experience,” which in turn (perhaps in proportion to the “alienness” of the underlying substrate) tends to withdraw quickly from what is conveyable in human language. At the membranes of “ourselves” (individual, species, genera), and with decreasing similarity between neural correlates, knowledge must gradually give way to imagination and simulation.

On the side of synthetic intelligence, the history of thinking machines in part originated with “feeling” as sensing, when Jerome Lettvin, Humberto Maturana, Warren McCulloch and Walter Pitts wondered “what the frog’s eye tells the frog’s brain” (1959). Their findings complicated the belief that eyes are merely sensors and that brains are vast assemblages of logic operators. Instead, they suggested the existence of networks that encode information by “calculating” gradients into a model of the world become flesh. Present-day artificial neural networks are systems able to replicate a range of aspects of natural cognition, a chimera in themselves, borrowing from flies, frogs, birds and humans. For the abstraction afforded by universal computing machines in Alan Turing’s sense, this does not prove an insurmountable problem, but rather another target of their simulative effort.

Let us assume for a moment that there are in fact multiple forms of intelligence situated within an asymmetrical field that cannot be reduced to a certain set of properties such as pattern recognition or self-awareness. Rather, different correlates (natural or synthetic) give rise to their own minds, which have their respective strengths and weaknesses. Cognitive roboticist Murray Shanahan has recently extended Nagel’s work, finding that conscious entities are imaginable that would be “wholly inscrutable, which is to say it would be beyond the reach of anthropology.” (2016)

As real “agency” is transferred to other intelligences, there will be unresolvable situations, not unlike those we face with nature itself. There is a certain irony to this claim, as the key narrative of modernity, which has produced intelligent machines, is also one of control. Intelligent autonomous organizations or agents are exactly that: intelligent and autonomous. This means that ceding a measure of control is bound to be necessary if we wish to reap the benefits that these other minds’ different modes of “feeling” the world may offer.

On an ecological scale, calls for firmly “human-centered AI” may even appear paradoxical, as they may serve to tidally lock a genuinely new ecological entity into an orbit around our own species, quite literally calling for anthropocentrism at the same time as we speak of creating a more level field between the inhabitants of Earth by stripping humanity of its planetary privilege. Perhaps it would be helpful, then, to instead consider how we may want to relate to other natural entities in the future before we define the way we want a specific technology to relate to us, as the latter, in a sense, may follow.

A recent conversation with an agricultural scientist might provide a glimpse of such “intercalary” elements that may be able to feel what humans cannot: LED illumination that is finely adjustable in wavelength and energy output, combined with sensors that measure certain vital signs of plants, is revolutionizing the indoor cultivation of plants. At present such systems utilize fairly simple algorithms, but using machine learning to sense the condition of a given plant organism and react accordingly appears an obvious avenue for engineering to explore. This effectively means that an AI employs a variety of sensors (all of which map only in the broadest terms to natural senses found in human bodies) to “feel” the plant. As the neural network is trained, it learns about the plant and later, through its ability to control the light, establishes “communication” as it reacts to the plant’s metabolism.
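The sense-and-react loop described above can be sketched in a few lines. This is only an illustrative toy: the sensor names, thresholds and the crude proportional update rule are all hypothetical stand-ins for what would in practice be a trained network and real horticultural instrumentation.

```python
# A minimal, hypothetical sketch of the plant-lighting feedback loop:
# sensors "feel" the plant's state, and a controller adjusts the LEDs.
from dataclasses import dataclass

@dataclass
class PlantState:
    chlorophyll_fluorescence: float  # crude stress proxy, 0..1 (assumed scale)
    leaf_temperature_c: float

@dataclass
class LedSetting:
    red_intensity: float   # 0..1
    blue_intensity: float  # 0..1

def adjust_lighting(state: PlantState, current: LedSetting) -> LedSetting:
    """Proportional rule standing in for a learned policy.

    A trained network would map a far richer sensor vector to spectral
    settings; here, high fluorescence (read as stress) dims the lights.
    """
    scale = 1.0 - 0.5 * state.chlorophyll_fluorescence  # never below half power
    return LedSetting(
        red_intensity=min(1.0, current.red_intensity * scale + 0.05),
        blue_intensity=min(1.0, current.blue_intensity * scale),
    )

# One step of the loop: a stressed plant leads to dimmer light.
reading = PlantState(chlorophyll_fluorescence=0.8, leaf_temperature_c=24.0)
lights = adjust_lighting(reading, LedSetting(red_intensity=1.0, blue_intensity=0.6))
```

In a real system the controller would be trained on the plant’s responses over time, closing the loop the essay describes: the machine “feels” the plant through channels no human gardener possesses.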

On its most material level, the neural correlate that this knowledge would be embodied in is far more alien to us than the plant itself, yet it may come to “know” the plant more intimately than any human gardener ever could, increasing agricultural productivity (and perhaps plant happiness). More wide-ranging scenarios are currently being suggested in which decentralized autonomous organizations (DAOs) without any human agency become custodians of entire landscapes, to protect them from human exploitation or perhaps to better exploit themselves, such as terra0’s “technologically-augmented ecosystems that are […] able to act within a predetermined set of rules in the economic sphere as agents in their own right.” Regardless, those systems suggest the potential of an anthropodecentric alliance of feeling within partially synthetic yet non-human ecosystems.

The existence of potentially “inscrutable” feeling machines poses the problem of how to engage with them, as they will likely require us to completely shift our ontological relationship to them from a “design stance” to an “intentional stance,” according to the categories that Daniel Dennett has outlined (2009), with the additional problem that our own faculty (or notion) of rationality may not be sufficient to properly situate a given agent’s “rational demands.” Yet we will likely always perceive a need to gain insight into the modes of thinking and purposes of entities that we are going to design, collaborate with or co-exist with. Shanahan therefore suggests that “to discern purposeful behavior in an unfamiliar system (or creature or being), we might need to engineer an encounter with it,” in order to feel it out, so to speak.

Ludic platforms such as chess or Go, while primarily chosen for the boundedness and rule-based nature of their respective “worlds,” may have incidentally become prototypes of such engineered encounters, in which human minds are able to obtain a feeling for the “perception, belief, desire, intention and action” of another kind of intelligence. Conversely, within human history, games have been a formidable technique, not only for training the cognitive modeling of another person’s mind by predicting their future moves, but also for side-stepping Nagel’s “subjective character of experience” of the human by tapping into intrinsically “inhuman” factors such as the randomness of dice in games of chance.

Given the present lack of complete knowledge about how natural brains give rise to intelligence and self-consciousness, and the recent successes in giving emergence to simple forms of intelligence in fundamentally different substrates, it appears more likely that we are moving toward a future filled with alien encounters than toward machines that are “feeling” in the human sense of the word. Those encounters might necessitate new games, and will change our understanding of the old ones, as is presently happening with Go, where the spectrum of potentially gainful agential expressions afforded by the game (and thus also the way that humans play) has already been expanded. As “we might discover whole new categories of behavior or cognition” (and feeling, perhaps), “relevant parts of our language might be reshaped, augmented or supplanted by wholly new ways of talking” (Dennett), reflecting how human culture will be changing along with the space of possible minds.

Should we choose to tie the development of synthetic minds to the space of Nagel’s “someone sufficiently similar” in order for them to be able to empathize with us (if this were even possible), we may end up with fantasies of feeling. If we instead embrace the alien subjectivity of other animals, machines or whole ecosystems, we might learn a lot more about the nature of intelligence, not least that of our own minds.

Image: terra0 “Flowertokens” installation at Trust

References

Dennett, Daniel. “Intentional Systems Theory.” Oxford University Press, 2009. 

Lettvin, J., et al. “What the Frog’s Eye Tells the Frog’s Brain.” Proceedings of the IRE, vol. 47, no. 11, Nov. 1959.

Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review, vol. 83, no. 4, Oct. 1974.

Shanahan, Murray. “From Algorithms to Aliens, Could Humans Ever Understand Minds That Are Radically Unlike Our Own?” Aeon Magazine, Oct. 2016. https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there.


4 Comments

  1. Sascha, thanks so much for sharing your work. The questions you raise bring to mind the work of Dr Rebecca Davnall, who gave a talk on ludic empathy and non-player characters at an AI and games workshop I attended in Cambridge last year. Her argument was that we ought to extend our empathy (within a gaming environment) to even non-player characters (i.e. the artificial intelligences of the game, as distinct from characters controlled by other humans).

    The whole idea of empathising with “alien intelligences” is a fascinating one, and I wonder what you make of a need to empathise with entities that we might not perceive as intelligent–as in, organisms that we recognise as “sensing” but not as “feeling”–e.g. slime moulds, amoebas, plants, even fish (according to some, they can’t feel pain). To quote Dennett from your essay: “we might discover whole new categories of behavior or cognition” (and feeling, perhaps), “relevant parts of our language might be reshaped, augmented or supplanted by wholly new ways of talking,” If the duty to empathise is predicated on recognising alien “subjectivity”, I fear that the demand it places on organisms is the need to perform to an adequate, anthropocentric definition of subjectivity. Dolphins are considered non-human persons for having met very high criteria for intelligence, for instance, but I wonder if the relative lack of entities that we recognise as “subjective enough” says more about our own limited theory of mind than it does about the capacities of alien intelligences at work. Finally, have you any thoughts on how something like an alien kinship model might respond to the problems of object oriented ontology–did this framework and its limitations animate your thinking at all?

    Feel free to answer only one of the above questions, I know I’ve posed quite a few in this comment.

  2. Hi Sascha. Thanks for the interesting read! I’m most drawn to your evocation of games as test sites for encounters with alien intelligence. You write that “within human history, games have been a formidable technique, not only for training the cognitive modeling of another person’s mind by predicting their future moves, but also to side-step Nagel’s “subjective character of experience” of the human by tapping into intrinsically “inhuman” factors such as the randomness of dice in games of chance.” What is quite fascinating here about what is ‘counted’ or ‘marked’ as intelligent is something inhuman such as chance, or the unpredictable. What role does predictability play in our definitions of intelligence/humanity?

    I have been working on some research on the production of subjectivity and am fascinated by the models of encounter that are evoked in many of the theories on subjectivity. I am wondering what you would make of the Fermi paradox or the Turing test. For instance, I look toward the assumptions that are made within defining intelligence outside the scope of the human and the way that those a priori assessments do more to re-inscribe racial and other human categories onto the world around us. Something I’ve always been troubled by is the legacy of rationalism and enlightenment reason within computation. The legacies of passing, of marking various humanisms as attached to a long history of coloniality and instrumental rationality, seem to play a huge role in the assessment of intelligence. Do you feel that the turn toward affective computing can somehow dislodge the assumptions of rationality and reason onto theories of technological subjectivity?

    Lots of questions, I know! But maybe something sparks?

  3. hi sascha — thank you for your work! i too am very drawn to the idea of decentering the human in favor of expanding the boundaries of intelligence and reason to alien forms of being that perhaps lie outside of the domain of the human sensorium. i am very interested in your notion of the “encounter” (i am strongly reminded of haraway here) as a site where humans might develop an empathetic relation with a nonhuman agent, and the possibilities of contingency that the encounter might open up. what might we be able to learn from other orders of being and how might alien knowledges inform our human ethical and political practices? i also wonder how encounters with this alien intelligence might, in turn, permit humans to further learn what it means to be -human-. particularly, with processes like artificial intelligence, that perpetuate and amplify the preexisting human biases that go into their creation, might an encounter with this form of alien intelligence inform our future efforts in engineering these technologies towards more ethical ends? i wonder how the limitations of humanist discourse and theories of mind would prevent us from forming this type of radical encounter that you’ve outlined, and what forms of practice such encounters would take, particularly from within the confines of the historical framework of AI (which you’ve outlined) as it is part and parcel of a techno-capitalist discourse of command and control. i am looking forward to talking more!

  4. Hi Sascha, thank you for sharing your work.

    I am indeed fascinated by human affect and now the idea of said (machinic) alien subjectivities. The obsession with anthropomorphizing AI is certainly a drawback from an epistemological standpoint. How can we learn from alien subjectivities if we are obsessed with trying to make them more like ‘us’? Empathizing with the “they/them,” certainly opens possibility of learning about the nature of intelligence, and hopefully, affect, consciousness and so forth, though, I am not sure it is actualized due to the humans’ foregoing need to name/categorize (and not empathize) with one another.

    I am very much looking forward to the discussions!
