Mechanical Affective Abilities – Maike Klein

Emotion theories and definitions

There are thousands of different definitions of “emotion” and other affective phenomena, each coming with its own theoretical framework. Many of these theories have their origins in philosophy or psychology, and they emphasize different aspects of emotions. Take, for instance, the distinction between non-cognitivist and cognitivist emotion theories: when William James writes about “emotion”, he means the occurring bodily changes and their felt experience. For him, emotions without a bodily component are “cold” mental states (James 1884). In contrast, Martha Nussbaum claims that emotions are cognitive “judgments of value” and that any accompanying bodily changes are merely their byproduct, physically imitating the cognitive processes (Nussbaum 2004). These theories exemplify opposing traditions that are both, in one way or another, exclusive: Nussbaum’s theory is far from an embodied perspective and by definition excludes animals and children up to a certain age. James’ theory (or the James-Lange theory that was developed later) would be difficult to apply to systems that do not have a body of flesh and blood. There is no agreed definition of “emotion” or of affective phenomena – the debate is still ongoing.

For several years now, computer scientists and roboticists in disciplines like Affective Computing and Social Robotics have been interested in affective phenomena – first and foremost “emotion” – and have started to technically implement (mostly psychological) emotion theories. With no clear definition, however, it is difficult to choose which theoretical framework to adopt and how to translate the (more or less) wordy theories into numbers.

Although many different theories of emotion and affectivity exist, most emotion models depend on a few theories, each of which is limited in describing affective phenomena:

– Basic Emotion Theory (Ekman 1999):

Following Darwin, Ekman distinguishes six basic emotions – anger, happiness, sadness, disgust, fear, surprise – that have been identified across cultures through six distinct facial expressions. The basic emotions are the most widely used theoretical foundation for emotion recognition technology and for immediately visible, external emotional features such as displayed emotions.

– Appraisal Theory (Scherer 1999):

Appraisal Theory goes back to Magda Arnold (1960), who first used the notion of “appraisal”: the subjective evaluation of a situation, an object, or an event that elicits a distinct emotion (Scherer 1999, p. 637). The theory has been further developed, notably by Richard Lazarus and Klaus Scherer, and is often translated into emotion models that regulate the emotional behavior of artificial agents (Broekens 2008).

– Three-Factor Theory of Emotion (sometimes better known as “Pleasure-Arousal-Dominance” or “PAD”; Russell and Mehrabian 1977):

For Russell and Mehrabian, emotions can be captured and described in terms of pleasure, arousal, and dominance. Depending on the numerical value of each of the three dimensions, a different emotion results. The theory has been further developed in various ways since 1977, with Russell’s Core Affect Theory as its most popular contemporary spin-off. Today, PAD is still popular for equipping robots and virtual characters with emotions. For this, the PAD values are mapped as vectors in a three-dimensional space; an emotion program can thus be coded and added to the robot’s other programs in the operating system. Depending on external or internal stimuli – for instance, incoming data from an ultrasonic distance sensor – the robot’s internal emotion changes and produces, depending on its physical features, “emotional” behavior as an outcome (Rincon Ardila et al. 2019).
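To make this “translation into numbers” concrete, here is a minimal sketch of what such a PAD program might look like in Python. All emotion prototypes, threshold values, and the sensor update rule are illustrative assumptions for the sketch; this is not the controller from Rincon Ardila et al. (2019).

```python
import math

# Illustrative PAD coordinates for a few emotion labels; the values
# are assumptions for this sketch, not empirically derived.
EMOTION_PROTOTYPES = {
    "fear":    (-0.6,  0.6, -0.6),  # unpleasant, aroused, submissive
    "anger":   (-0.5,  0.6,  0.3),  # unpleasant, aroused, dominant
    "joy":     ( 0.7,  0.5,  0.4),
    "relaxed": ( 0.6, -0.4,  0.2),
}

class PADState:
    """Internal emotion state as a point in pleasure-arousal-dominance space."""

    def __init__(self):
        self.p, self.a, self.d = 0.0, 0.0, 0.0  # start at a neutral state

    def on_distance_reading(self, distance_cm: float) -> None:
        """Hypothetical appraisal-style rule: something approaching too
        closely lowers pleasure and dominance and raises arousal."""
        if distance_cm < 30.0:
            self.p = max(-1.0, self.p - 0.3)
            self.a = min(1.0, self.a + 0.4)
            self.d = max(-1.0, self.d - 0.3)

    def current_emotion(self) -> str:
        """Label the current state with the nearest emotion prototype."""
        here = (self.p, self.a, self.d)
        return min(EMOTION_PROTOTYPES,
                   key=lambda name: math.dist(here, EMOTION_PROTOTYPES[name]))

state = PADState()
state.on_distance_reading(12.0)   # a person steps very close
print(state.current_emotion())    # -> "fear": could trigger an avoidance movement
```

The design point is that “emotion” here is nothing but a position in the three-dimensional PAD space, updated by appraisal-like rules on sensor input and read out as the nearest labeled prototype.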

In robotic emotion research, the external / social / expressive aspects of emotion (see the Basic Emotion Theory example) and its internal / individual / regulative aspects (see PAD or Appraisal Theory) have been distinguished. External emotional features would be a mere simulation of emotions, whereas an internal emotion-generating mechanism (an emotion model) would lead to genuine emotions. Accordingly, these two distinct ways of creating affective abilities have also been rated as true / real vs. false / fake emotions (Damiano et al. 2015). On this view, it seems preferable to have regulative emotion mechanisms rather than merely visible emotional expressions – maybe because this would provide a real-world correlate to our imagination?

“Genuine” emotions in artificial systems

First of all, dividing internal and external mechanical emotional abilities into true / real vs. false / fake seems misleading. How could we tell which human emotion is real or fake, once we go beyond evolutionary or basic emotions that are necessary for survival? What if we look at social emotions? In these cases, we could possibly measure whether the person smiling at us shows a Duchenne smile – or possibly we just cannot detect anything in the emotions of others and have to trust what the person reports about her emotion. Emotions can surely be “artificial” in humans, too (Stephan 2003), in the sense of true / real vs. false / fake.

But what is a “genuine” emotion in the case of artificial systems? It seems to me that if we want to understand the work roboticists and computer scientists are doing, and if we aim to collaborate in reflecting on and developing mechanical affective abilities, we should hold the emotion definitions, theories, and models of other disciplines in high regard. Collaborations between psychologists and computer scientists, for instance, have been difficult because to psychologists the emotion theories the computer scientists used seemed too old-fashioned (Broekens 2010). We should keep in mind that if the aim is to model affective abilities in artificial systems, the possibilities of translating wordy theories into a model – and notably into numbers – are limited. If, however, we accept that there can be adequate emotion definitions that do not fully hold for a human being (just as emotion definitions made for humans may not hold for other kinds of systems, or even for children, as we have seen in the brief distinction between cognitivist and non-cognitivist emotion theories), then we can claim that a “genuine” emotion comes out of an artificial system if an emotion theory is translated and modelled into that system and if there is an outcome that results from the emotional program. With this view, we would at the same time avoid speciesism.

As already indicated, mechanical emotions may be very different from the emotions of other systems – but not only from those. As there are many different artificial systems and many different emotion theories used to enable their emotions, there are many different behaviors and mechanisms that can be understood as “emotions”. For instance, Domenico Parisi and Giancarlo Petrosino described very basic simulated virtual entities as “having emotions” because they had an “emotion circuit” supporting their motivational mechanism to better adapt to their environment. They were thus able to ascribe a functional role to emotions that was different from the role of motivation (Parisi and Petrosino 2010). At GV Lab at Tokyo University of Agriculture and Technology, there is an open-source robot arm situated in an environment with different sensors (e.g. distance and temperature / humidity) and a PAD emotion program. The robot can execute different behaviors, depending on which sensor is activated (together with another one) and in what way. If, for instance, a human being comes too close, the emotion program is activated and the robot performs a movement that represents an emotion. The movement representing “fear”, for instance, reminded me of a snake poised to attack.
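A rough illustration of that kind of sensor-driven expressive behavior is sketched below. For brevity, it collapses the intermediate PAD state into a direct sensor-to-movement rule, and every sensor name, threshold, and movement label is a hypothetical stand-in rather than the GV Lab code.

```python
# Hypothetical mapping from co-activated sensors to expressive movements,
# in the spirit of the GV Lab arm; all names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    distance_cm: float     # ultrasonic distance to the nearest person/object
    temperature_c: float
    humidity_pct: float

def select_movement(r: SensorReadings) -> str:
    person_close = r.distance_cm < 25.0
    uncomfortable = r.temperature_c > 35.0 or r.humidity_pct > 80.0
    if person_close and uncomfortable:
        # two stimuli at once: the strongest reaction, the snake-like
        # "fear" posture described above
        return "coil_and_rear_up"
    if person_close:
        return "retreat_slowly"
    if uncomfortable:
        return "agitated_sway"
    return "relaxed_idle"

print(select_movement(SensorReadings(12.0, 37.0, 85.0)))  # -> "coil_and_rear_up"
```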

Damiano et al. provide a third way of understanding emotions in artificial systems as “genuine” emotions: they suggest a way to make the external / internal distinction obsolete while not excluding mechanical systems of a certain complexity from the possibility of having emotions. According to Damiano et al., interacting agents do not simply exchange “information about their supposedly pre-defined and individual emotional states, [they rather] mutually define—co-determine—their emotions during their ongoing interactions. […] [This view] requires us to abandon the traditional philosophical understanding of emotions as events that are individually, internally, and thus covertly generated, and that then we can expressively communicate to others—i.e., the very conception of emotions which legitimates robotics to distinguish between the internal and the external aspects of emotions and empathy” (Damiano et al. 2015, p. 8). They call this approach a “relational conception of emotions”.

It’s all about imagination

I have claimed that if we aim to avoid speciesism and wish to facilitate understanding and cooperation between roboticists, computer scientists, psychologists, and the humanities, we should accept that further definitions and theories from the technical world can be added to the many emotion theories we already debate in philosophy, psychology, and other disciplines.

No matter which theory is used to enable affective abilities in artificial systems, in many cases human beings will interact with these systems. One of the main goals of equipping artificial systems with affective abilities is to facilitate human-machine interaction. This can be very useful in industrial settings where a worker is obliged to work with a robot that is tedious or unintuitive to operate; in such cases, working conditions can be improved. Robots with emotional abilities are also used in therapeutic settings with autistic children. There is evidence that autistic children prefer to interact emotionally with robots and that this can help facilitate their interaction with other humans, too (Cabibihan et al. 2013). Another setting for human-robot interaction is the capitalist context. Especially in Japan, but sometimes also in Europe, the robot “Pepper” can be spotted in sales or customer service environments, for instance in shopping malls, airports, and karaoke bars. Although Pepper has been advertised in the media as “the world’s first emotional robot” (Singh 2015), it is not very convincing. Most of the time, passers-by do not pay attention to it. It does not seem that Pepper has any emotionality that humans like to react to – maybe it would have been more interesting if it had raised its voice or simply gone somewhere else to escape being ignored. There is obviously an affective difference between artificial systems and humans in their spontaneity and emotional range, one that may still clearly separate mechanical affective abilities from those of living beings.

The crucial point here is imagination. If a human being has the tendency to anthropomorphize, she will likely compare the resulting emotional reactions to human emotional reactions. Furthermore, depending on, e.g., her personality or her information and / or education about artificial systems, a person may have a completely different understanding of affectivity in general and of what artificial systems are capable of. Apart from the scientific discourse, the main sources of information about this topic are, besides one’s own affectivity, media and science fiction stories, which sometimes tend to converge (Herbrechter 2009). Thus, the most urgent question is how to reconcile unrealistic ideas of mechanical affective abilities with what is actually happening in science, in order finally to break with the mysterious nature of artificial systems – systems that are mostly mysterious because of human imagination (Sharkey and Sharkey 2007).

Bibliography

Broekens, Joost, et al. “Formal models of appraisal: Theory, specification, and computational model.” Cognitive Systems Research, vol. 9, no. 3, 2008, pp. 173-197.

Broekens, Joost. “Modeling the Experience of Emotion.” International Journal of Synthetic Emotions, vol. 1, no. 1, 2010, pp. 1–17. DOI: 10.4018/jse.2010101601.

Cabibihan, John-John, et al. “Why Robots? A Survey on the Roles and Benefits of Social Robots in the Therapy of Children with Autism.” International Journal of Social Robotics, vol. 5, no. 4, 2013, pp. 593-618. DOI: 10.1007/s12369-013-0202-2.

Damiano, Luisa, et al. “Towards Human-Robot Affective Co-evolution Overcoming Oppositions in Constructing Emotions and Empathy.” International Journal of Social Robotics, vol. 7, no. 1, 2015, pp. 7-18. DOI: 10.1007/s12369-014-0258-7.

Ekman, Paul. “Basic Emotions.” Handbook of cognition and emotion, edited by Dalgleish, Tim, and Power, Michael J., Wiley, 1999, pp. 45–60.

Herbrechter, Stefan. Posthumanismus. Eine kritische Einführung. WBG, 2009.

James, William. “What is an Emotion?” Mind, vol. 9, no. 34, 1884, pp. 188–205.

Nussbaum, Martha C. “Emotions as judgments of value and importance.” Thinking about feeling. Contemporary philosophers on emotions, edited by Solomon, Robert C., Oxford University Press, 2004, pp. 183–199.

Parisi, Domenico, and Giancarlo Petrosino. “Robots that have emotions.” Adaptive Behavior, vol. 18, no. 6, 2010, pp. 453–469.

Rincon Ardila, Liz, et al. “Adaptive Fuzzy and Predictive Controllers for expressive robot arm movement during human and environment interaction.” International Journal of Mechanical Engineering and Robotics Research, vol. 8, no. 3, forthcoming 2019.

Russell, James A., and Albert Mehrabian. “Evidence for a three-factor theory of emotions.” Journal of Research in Personality, vol. 11, no. 3, 1977, pp. 273–294. DOI: 10.1016/0092-6566(77)90037-X.

Scherer, Klaus R. “Appraisal Theories.” Handbook of cognition and emotion, edited by Dalgleish, Tim, and Power, Michael J., Wiley, 1999, pp. 637–664.

Sharkey, Noel, and Amanda Sharkey. “Artificial intelligence and natural magic.” Artificial Intelligence Review, vol. 25, no. 1-2, 2007, pp. 9–19. DOI: 10.1007/s10462-007-9048-z.

Singh, Angad. “’Emotional’ robot sells out in a minute.” CNN Business, 23 Jun. 2015, https://edition.cnn.com/2015/06/22/tech/pepper-robot-sold-out/index.html.

Stephan, Achim. “Zur Natur künstlicher Gefühle.” Natur und Theorie der Emotion, edited by Stephan, Achim, and Walter, Henrik, mentis, 2003, pp. 309–324.

4 Comments

  1. Thinking of your final section, have you considered the responses to either the Gemma Chan robot (https://www.radiotimes.com/news/2016-10-08/scientists-have-built-a-real-life-gemma-chan-robot-for-a-new-channel-4-ai-documentary/) or the Philip K Dick head (http://www.slate.com/articles/arts/books/2012/06/philip_k_dick_robot_an_android_head_of_the_science_fiction_author_is_lost_forever_.html) as another approach between what is possible with science (at that moment) and an emotional response?

    I’m intrigued by the industrial and educational robots you mention, having seen some at the Science Museum’s Robot exhibition. Is there anything we can learn from their actions, and in particular the contexts of their actions, about how they can be read or critiqued? Does it perhaps speak to our conceptions of imagination and emotion?

  2. Maike, I agree with your contention that “The crucial point here is imagination. If a human being has the tendency to anthropomorphize, she will likely compare the resulting emotional reactions to human emotional reactions.” But it also strikes me that there is a lot of work in animal studies and certain strands of posthumanism which seeks to erode the species distinction by constructing models of empathy and kinship that are not dependent on the human-centered recognition that other organisms have a depth-model of feeling or a kind of second order self-awareness. Have you given any thought to how work done in the past decade (e.g. Rosi Braidotti, Donna Haraway) might be taken up by computer scientists seeking to understand non-human models of emotional response? Fear is, after all, not a sensation that is registered or performed in the same way across the species divide, yet we seem to expect that our machinic systems will imitate fear using the gestures and body language that we are familiar with in ourselves.

  3. Maike, thank you for your interesting text and the great overview on the practical implementation of the research on emotion in robotics!
    Did I understand you correctly that you are pleading for more interdisciplinary collaboration, especially between computer science and the humanities, where you would go for more theoretical variety in thinking about and modelling emotion? And that especially the humanities are called to action here, by learning how to communicate their theoretical discussions in more technical discourses and by looking for ways to intervene in other disciplinary cultures?
    If that is so, what courses of action do you have in mind?

    One more thing: You conclude your paper by pointing at the need to reconcile “unrealistic ideas of mechanical affective abilities” with “what is actually happening in science”, in order “finally to break with the mysterious nature of artificial systems” that are “mostly mysterious because of human imagination”.
    But is the problem with the mysterious nature of AI really one of imagination? Isn’t the problem here rather one of epistemic injustice (who has technological know-how?) and of the systematic obscuring of what one can know about how these systems work and what exactly they do (by the simple fact that even their designers do not always know what their systems do, but also as a design strategy that seeks to keep you out → check out Steve Krug’s usability classic Don’t Make Me Think: A Common Sense Approach to Web Usability (2000))?
    Looking forward to discussing with you!

  4. Hi Maike,

    Thank you for your post!

    I appreciate your problematization of the different models of emotions being implemented in robotics and computer science, and your point that acknowledging this may not only lead towards avoiding speciesism but also dismiss a universal approach to shaping and evaluating robotics. If I understand you correctly, your argument is to acknowledge mechanical forms of emotion and not to evaluate the expressed emotionality in terms of one’s own emotional experience – meaning not to expect all technological tools to “behave” the way we expect them to. From your point of view, would that in a certain way “disqualify” functionality or communication optimization as criteria for evaluating robots and other technology? Should there instead be other factors for valuation? Would you argue for an autonomy of machinic or mechanic operators? If so, in what terms?

    You mention an example where robot scientists did not implement a theory of emotions into their programs but developed a new notion of emotions from their experiments. It would be great to hear more about that. If we understand emotions, among other things, as socially determined behaviors, in what way do the new mechanical/machinic forms of emotions change this sociality? Do they foster certain emotional behaviors and discard others? Are there even new ways of feeling that we can anticipate?

    And I would also be interested to know whether you distinguish between mechanical and machinic operabilities.
