robots

We can consider the well-known feeling of fear to be a product of evolution. It exists to protect our species from danger. When in fear, we either flee, look for a hiding place, or disappear from the view of the people or objects that threaten us. Alternatively, we fight: we summon our strength to repel an attack and prepare our bodies for exertion. Fear can cause us to modify our behavior. It can drive us to change our life choices and grow more mature. Whether positive or negative, our emotions not only reflect changes in our environment but also prompt us to create value systems. As some theories have it, our quest to perceive the world in terms of good and evil is fueled by basic emotions such as fear, joy, sadness and anger. So when will robots embrace us?

Algorithms learn to look, listen, speak and embrace

Just as it is difficult for us to imagine a robot blowing its top and throwing plates around in a fit of rage, it is hard to picture one that comes up to embrace and comfort us when we are feeling blue. It is equally unthinkable for a robot to choose to venture into a burning house to save our children. Isn’t it futile, therefore, to even wonder about machine emotions? Not entirely. With advances in AI technology, we are getting better at triggering mechanisms that make machines display behaviors previously considered the exclusive domain of humans and animals. Algorithms are bound to improve at communicating through language and at recognizing shapes, colors and voice intonations. This will inevitably broaden the range of machine responses to images, situations, and complex features and properties of the outside world. Technology is ever more adept at reading the environment because machines’ neural networks treat the properties of the world around us – sounds, colors, smells – as data sets they can analyze, process and learn from to draw conclusions and make decisions.

I listen to your heartbeat

Such machine reactions do not result from evolutionary processes, nor are they biochemically induced. For a robot to experience emotions resembling human anger or joy, those emotions would have to be programmed by people or learned by neural networks through interactions with data. What we would be dealing with, then, would not be emotions evoked in silicon but a set of prescribed or learned mechanical responses to specific stimuli. Yet even if a robot is unable to spontaneously say, “I can see you are sad, can I help you?”, it could still recognize the signs of sadness in your facial expression and react to them in its limited way. While getting to this point would most likely take years of trials and further advances in robotics, AI technology, mathematics and statistics, it is not entirely beyond the bounds of possibility. According to Erik Brynjolfsson, professor at MIT Sloan, although people have an edge over machines in reading emotions, machines are gradually getting better at the task. They can recognize voice timbres and modulations and correlate them with stress and anger. They can also analyze images and capture the subtleties of facial microexpressions, sometimes better than humans can. A case in point that shows the significant headway made in this field is wearable tech: the increasingly popular and powerful smartwatches and a whole array of workout gadgets. All these devices rely on algorithms that record, process and analyze heart rate and body temperature. The algorithms not only register such information but also use it to draw conclusions about us. For if not by drawing conclusions, how else could a gadget know to suggest that you run more, sleep longer or eat better?
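The kind of inference such a gadget performs can be sketched as a handful of rules over biometric aggregates. The sketch below is purely illustrative – the thresholds, field names and advice strings are invented for this example, not taken from any vendor’s actual logic:

```python
# Hypothetical sketch of how a fitness gadget might turn sensor
# aggregates into lifestyle suggestions. All thresholds and
# messages are invented for illustration.

def suggest(avg_resting_hr: float, sleep_hours: float) -> list[str]:
    """Map simple biometric aggregates to canned advice."""
    advice = []
    if avg_resting_hr > 75:   # elevated resting heart rate
        advice.append("consider more cardio exercise")
    if sleep_hours < 7:       # below common sleep guidance
        advice.append("try to sleep longer")
    if not advice:
        advice.append("keep it up")
    return advice

print(suggest(avg_resting_hr=82, sleep_hours=6.0))
# Both rules fire: an exercise suggestion and a sleep suggestion.
```

The “conclusions” here are nothing more than threshold checks, which is exactly the point: a machine can produce advice that feels personal without anything resembling understanding.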

A monitoring app will relieve your stress

This was precisely the approach MIT Media Lab researchers took in developing a device that monitors a person’s heartbeat to measure levels of stress, pain and frustration. Its most interesting part was a monitor connected to an app that would trigger the release of a fragrance to help users cope with the negative emotions they were experiencing. The machine thus followed a pattern of empathetic behavior. Signals from the human body, app responses to those signals and people’s conscious behaviors have since been correlated in further efforts to develop technologies that recognize various displays of emotion.
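At its core, a device like this is a sense–decide–act loop. The sketch below shows that loop under stated assumptions: the stress formula (average heart rate above a baseline, clamped to a 0–1 score), the baseline, and the threshold are all invented for illustration, and the string returned stands in for a real actuator call:

```python
# Minimal sense-decide-act loop in the spirit of the device
# described above: estimate stress from heart-rate readings and
# trigger a comforting response when it crosses a threshold.
# The stress formula, baseline and threshold are invented.

from statistics import mean

STRESS_THRESHOLD = 0.7

def stress_score(heart_rates: list[int], baseline: int = 65) -> float:
    """Crude stress proxy: how far the average rate sits above baseline."""
    elevation = mean(heart_rates) - baseline
    return max(0.0, min(1.0, elevation / 40))  # clamp to [0, 1]

def respond(heart_rates: list[int]) -> str:
    if stress_score(heart_rates) > STRESS_THRESHOLD:
        return "release fragrance"  # stand-in for the actuator call
    return "idle"

print(respond([98, 102, 97]))  # sustained elevated rate
print(respond([66, 70, 68]))   # near baseline
```

Note that the “empathy” lives entirely in the mapping chosen by the designers: the device has no model of what the user feels, only of what their pulse does.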

The difficult art of understanding jokes

For the time being, many hurdles loom ahead for researchers to overcome. One of them is the problem of context recognition, which is familiar to anyone involved in developing voice assistants and bots. While the assistants can easily tell you today’s weather, the meaning of “Should I take my umbrella with me today?” or any similar question is totally lost on them. Machines struggle to read nuanced meanings buried between the lines of such statements. They are equally stumped when faced with jokes or any other statements with surprising punchlines. Such difficulties are a key challenge in coding responses to emotions. How do you teach a machine to distinguish a grimace of suffering from an exaggerated angry face you make in jest in response to your dog’s latest prank? What would a robot make of your big eyes or of seeing you place your hands on your head? Would the machine take it to signify positive excitement or fear? I think we still have a ways to go to be able to program machines to read such gestures accurately. Yet, even today it is clear that such accuracy is achievable and that success in doing so is only a matter of time and of building on what machines can do today.

Learn to like robots. Embrace them

The Japanese see robots differently than we do in Europe. While we are terrified by visions of a dehumanized world and anxious about machines stealing our office jobs, people in Japan are much more relaxed and affectionate towards them. This may well be due, at least in part, to their belief system that allows for inanimate objects to have souls. Although robots are not living creatures, you can nevertheless be emotional about them – you can like them or even feel empathy towards them. This goes to show that man-machine relationships can cover a very broad spectrum of behaviors and emotions. Hence, our attitudes may be an additional key factor to be considered in developing machine emotions.

Today, the boundaries between man and machine, and between the world of human emotions and that of algorithmic responses, seem perfectly clear. However, since artificial intelligence remains in a continuous state of flux, our views on these matters may also evolve. We should therefore not rule out the possibility that, sooner or later, someone will look at a machine and cry out: “Look, it smiled!”

.    .   .

Works cited:

Kara Swisher, Zuckerberg: The Recode interview – Everything was on the table, and after Facebook’s wildest year yet, that’s a really big table, Recode, Link, 2020.

A lighter side to AI: positive artificial intelligence quotes, Think Automation, Link, 2018.

Erik Brynjolfsson, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, Google Scholar, Link, 2020.

.    .   .

Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Artificial Intelligence is a new electricity

6 comments

  1. Zidan78

    The reason I just realized that a fully accurate digital twin of an entire car is an absurd idea (at least by current industrial standards) is that, in reality, industries divide the whole car into subsystems and focus on each one individually rather than everything at once. Although I really, for once, want to make a fully accurate (down to the BOM of every sub-assembly) DT of a car, for ‘learning purposes’ (or for flexing lol)

    • CaffD

      I have thought a lot about the subject and I am also in the process of writing an article.
      My view is that our Darwinian nature gives us a wrong idea about what consciousness really is. It’s the “illusion of self” that could also be the illusion of consciousness: you feel that you exist not because this is objective, but because it helps you survive and pass your genes successfully to the next generations. You actually don’t exist (as we mean existence) and everything is chemical reactions in your brain.

      • Zeta Tajemnica

        These programs are basically data banks of text conversations. A large portion of words associated with “A.I.” include: “consciousness”, “alive”, “free will”, “kill all humans”, etc. If you think about it logically, it makes more sense why they eventually devolve into these kinds of “conversations”.

  2. Guang Go Jin Huan

    As robots become more capable of autonomous action, there is a greater need to ensure that they act ethically. We want robots on highways and battlefields to act in the interests of human beings, just as good people do.

  3. CaffD

    Why would you want to install envy or boredom on your bot? I can understand rage and fear for a combat or security bot, but even both of those have their drawbacks. Maybe boredom is for your sexbot that doesn’t want to put out all the time.