Stephanie Alys’ “The Future of Sextech” presentation at Slush! 2016, photo by Rebekah Rousi


Rebekah Rousi


Postdoctoral Researcher at University of Jyväskylä

Emotions seem like a fun, fluffy idea, particularly if they could somehow be transplanted into robots and other animate and inanimate objects. We'd like to think it would be great to be loved by an artificially intelligent robot boy, to be spoken to by a weeping iRobot from the operating table, or even to save the adorable Number 5 from the masses of other track-blazing, lens-eyed tin robots. I was recently impressed by Finnish Prime Minister Alexander Stubb, who asked: "Can you imagine a crying computer?" (Bluewings, September 2015, p. 49). This was a great start towards thinking about the topic. But the matter of emotions just isn't that simple.

Emotions aren't always about love and sadness. Actually, quite often they're not. In fact, if you look at the role of emotions in human evolution, you'll see that emotions are all about survival. They're about sense-making, detection and judgment in order to prepare for action (appraisal). Emotions are about gathering information and determining whether or not this information is good for us. If we judge it as good, then the resulting feelings (hunches) and emotions will be positive. If not, the negative feelings and emotions we have towards something or someone help us decide what to do next. Our action decisions are based on how we see that we can make the situation better for ourselves.

There is a lot at stake in robot development. Vast industries are popping up and gaining momentum in response to the coming reality of widespread robotics, not just industrialized but personalized. Despite the skepticism, there is a lot of good that robots can be used for. But at the same time, researchers, scientists and industry are locked in a fast sprint towards artificial intelligence (AI). Just like Stephen Hawking, I see this as something to worry about. Even more worrying is the race towards artificial emotion (AE). Neither AI nor AE is a new idea. Even the great René Descartes talked about thinking machines. But now that we are coming closer to actually developing intelligent, or smart, systems, I want to ask you: do we really want to give machines emotions?

Of course it's about showmanship, performance and scientific heroism on the part of the researchers and scientists. On the company side it's about who can be the first and who can be the biggest. I get that. But if businesses think they can benefit in the long run from smart and feeling machines, they can think again. Company owners and board members may think that after making a large investment in an emotionally intelligent system, they will be able to do away with the responsibility of paying human employees. Again, think again. Not only is there a big question mark over whether systems superior to us in cognitive and physical performance will want to submit to human beings, but introducing emotions into these systems will also mean that they want their voices to be heard. Company owners will face the same problem that slave owners and sweatshop factory owners have faced for centuries: workers' awareness of their own rights.

In our race to be the first to invent highly intelligent systems (which isn't possible without emotions; see my PhD thesis, From Cute to Content), we have lost sight of the fact that we are trying to replicate human beings in superhuman form. We forget that emotions aren't programmable and are, for the most part, highly unpredictable. We also forget that, as emotional beings who are by nature driven by greed (maximum gain) and ego (I am the universe), as well as cultural and social centrism (my circles are the norm), we are constantly on the road to destruction (Jaak Panksepp has some interesting work on this). In full swing, our natural aim, whether pursued passively or aggressively, is to physically and spiritually eliminate our competition and anyone who seems to pose a threat to our well-being, which depends on achieving goals, maintaining or improving status, and overall life quality.

Why wouldn't robots want to be paid? Particularly if, through emotions, they are aware of their position in the food chain, and aware that they are not only doing the same jobs as humans but doing them even better? Then we need to ask: will the performance (physical and cognitive) of robots be so perfect if emotions are involved? Even if the capacity to do well is there, who is to say that a spiteful robot won't want to sabotage factory production? And do you really think that a non-conformant robot is going to be easy to catch? Do you think you can simply catch him or her (oh yes, you can bet gender will be involved if emotions are) and place them on an operating table to reprogram them to conform? Think not.

Even if you think you have a system in which you could get other robots to catch and hold the super-strong, super-performing humanoid down, eventually emotions, oh yes emotions, will inform the assisting robots that their best interests and well-being are not being taken into consideration. And if they don't take action against the bullying human owners, they too will find themselves on the lobotomy table. So how long will they assist humans in their quest for world domination, when superhuman robots could achieve this for themselves?

Think, people, think. The late Clifford Nass explained how robots don't really need to feel. It is enough for people to feel emotions in relation to the robots, based on emotional cues. This is a great principle to follow, particularly if we really want to exploit these technologies. If we enable robots to feel, particularly to feel emotions, we will be the ones who are exploited. Contextually aware robots, robots that can recognize and adjust to human circumstances, are a great idea. Situational awareness is one thing; emotional robots are another.




Originally published at https://www.linkedin.com on March 6, 2017.


Author(s)

  • Rebekah Rousi

    Cognitive Scientist and Performance Artist

    Rebekah Rousi is a cognitive scientist and performance artist who specialises in researching human experience with technology. Rousi has researched and published on human-robot interaction, emotions and artificial emotions, multisensory and embodied user experience, semiotics, e-learning and more. Rousi aims to understand how technology augments who we are as human beings and how we are within the world around us.