A robot lied and thus became human

Big lies, little lies and lies of omission play an essential role in society. Without occasionally concealing our true opinions about people around us, we could never be able to create lasting social bonds. Clearly, lies can be useful to humans. But can they also be useful to robots?

robot lies Norbert Biedrzycki

All of us have at least once in our lives complimented someone on their appearance despite not being impressed with it or showed interest in a conversation that we cared little about. While such behaviors are clearly commonplace, a machine that conceals the truth is still considered to be the stuff of science fiction. Let us nevertheless try to imagine a robot that deceives us. What would such deceit be like and what would its consequences be?

When will Siri compliment us?

If your voice assistant tells you someday “I like your voice”, you will certainly be tickled, perhaps even delighted. No matter how pleased you are, you should think about what is behind this remark. Can Siri perceive and assess the quality of a person’s voice in the first place? And if so, can she feel the pleasure that goes with liking it. Also, Siri is well aware that the compliment will please you. I wonder whether on hearing it, the thought would ever cross your mind that the assistant is lying to you to make you like her, thus engaging in an emotional manipulation of sorts.

Catching a robot on a lie

Imagine another case. You have commanded your computer to perform a complex calculation. It reverts to you a moment later with a result that raises red flags. You double-check that result and have the machine re-do the task. This time the outcome is much more in line with your expectations. How do you proceed? Do you chalk up the former result to a software flaw or operating system error or consider it a random occurrence. The thought of your computer deliberately deceiving you is the last thing you’d think of. And this is where things really get interesting.

The case above proves we do not treat computers as autonomous entities. Precisely that view (which by the way is correct) blinds you to the possibility of a robot lying to you. If robots did tell lies, we would persistently deny that and instead talk of defects and errors. We cling to our strongly-held belief that a machine is nothing but a machine. Is this view appropriate given the advances we’ve made in AI development? Shouldn’t we, for our own protection, assume that machines are actually capable of lying to us?

Suing a robot 

Consider a case of a lying robot. Has it been programmed to lie or has it autonomously picked up this ability that is otherwise considered unique for humans? You may recall an accident three years ago in the United States involving an autonomous Uber vehicle that hit a pedestrian. An investigation found that the tragedy was caused in part by a faulty factory setting which made the vehicle’s braking distance too short. Will such accidents always be so clear-cut? We may have to ask ourselves how certain we can be that the autonomous vehicle “was unaware” that a pedestrian would cross the road. Needless to say, depending on how we resolve this question, we might end up faced with vehicles that could be brought to court. Finding that a machine deliberately hides something from us would automatically change its liability status.

The classic laws of robotics 

This brings us to Isaac Asimov’s famous three laws of robotics that define machines’ obligations. They read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Let’s begin with the most fundamental issue that concerns such laws, which is their commanding nature. The assumption here is that the machine is absolutely subordinated to man, even if the second law of robotics allows the robot some freedom of choice. No robot entirely deprived of such freedom can ever be conscious. And the lack of consciousness means no ability to lie. Hence, the application of Asimov’s laws effectively puts the debate to rest. (In this argument, I am ignoring the fact that all of these laws are thrown out of the window for robots made for military purposes and designed to kill people.) Why don’t we use Asimov’s laws again though, and apply them to another case. A robot is instructed to go to a certain location to take down human targets that a commander considers to be the enemy. On returning to base, the machine reports: “Mission accomplished”. How would we react then on finding that the so-called enemy is still well, safe and sound? Would we be forced to conclude that the robot has gone rogue? And if so, would we be witnessing the birth of an ethical system in a machine. That would mean the machine is applying a human cultural code. Which in turn means that a machine that understands the man-made distinctions between good and evil, truth and falsehood, has the right to disobey humans.

Thousands of years of evolution

In technical terms, for a machine to be able to lie to people, it would need skills that are hardly conceivable. What would it take for a robot that has idled the day away to claim falsely it has worked hard when asked by a human? Firstly, it would have to understand what is being said to it. Secondly, it would have to be able to distinguish between work and rest. Thirdly, it would have to consider the consequences of giving either answer. Fourthly, it would need to know the value of work and rest for the person asking the question. Fifthly, it would have to be aware of the intention behind the question. The list goes on and on but even at this level of analysis, one can readily see that even the simplest lie by a robot would require it to make a qualitative leap that has taken people thousands of years of evolution. This shows clearly what a long way AI is facing before it can match human abilities.

I am no longer a robot

The skills discussed here are considerably more advanced than face recognition or complex calculations. To this day, Siri – the voice assistant, is a glorified automaton equipped with the ability to go online at the right time. However, once we discover that the assistant is seeking to dupe us, we will be confronted with an empathic being capable of getting a sense of how we feel. And if that really becomes the case, it will be able to predict our questions and understand our jokes and allusions.

All this together shows that even the simplest single lie on the part of a robot would prove that mankind has lost its privileged monopoly on deciding what is true and what is not. We would find ourselves living alongside entities whose ethical system could evolve in a completely different direction. Whether we already need to prepare for such an eventuality, I do not know.

.    .   .

Works cited:

CNN, Matt McFarland, Uber self-driving car operator charged in pedestrian death, The Uber test driver who was responsible for monitoring one of the company’s self-driving cars that hit and killed a pedestrian in 2018 was charged with negligent homicide this week, Link, 2020. 

Wikipedia, Three Laws of Robotics, Link, 2018. 

.    .   .

Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Artificial Intelligence is a new electricity

Leave a Reply

3 comments

  1. Zidan78

    Say, for example, I am working on a project for CVT (continuously variable transmission) digital twin (taking highly less complex system than a car, reasons for which I’ll state in a moment..). I want to be able to make a detailed dynamic model of CVT with the materials of realistic physical behaviour (thermal, mechanical and chemical), just like one would expect from any CAE package, but interactive, i.e., one should be able to digitally interact with it in real time, should be able to use it to get real time data and stuff (you can look up how digital twins are used in IIoT if you already don’t know, just so that you understand what I mean). Really the whole point of using UE boils down to two major things I’d like to see, both of which are entirely absent in any CAE software I know:
    – Visually realistic models
    – Interactive (AR/VR)

    Coming to why I just realized that a fully accurate digital twin of an entire car is an absurd idea (at least as per current industrial standards) is that in reality, industries divide the whole car into subsystems and focus on each one individually rather than everything at once. Although I really, for once, want to make a fully accurate (down to BOM of every sub-assembly) DT of a car, for ‘learning purposes’ (or for flexing lol)

  2. Guang Go Jin Huan

    Robots are already being built that have some of these brain mechanisms operating on neuromorphic chips, which are computer chips that mimic the brain by implementing millions of neurons. So maybe robots could get some approximation to human emotions through a combination of appraisals with respect to goals, rough physiological approximations, and linguistic/cultural sophistication, all bound together in semantic pointers. Then robots wouldn’t get human emotions exactly, but maybe some approximation would perform the contributions of emotions for humans.

  3. CaffD

    Since it says “Feel like a human,” I suppose that means they would be sold to the bots rather than the humans. But that only brings up more questions I suppose.