Don’t kill a humanoid: do machines deserve to have rights?

How would one grant rights to machines? The matter cannot yet be discussed with those concerned: artificial intelligence will not tell us whether it feels that our existing legal system treats it well. So we must manage alone and decide on its behalf.


In late July 2019, the world learned that Neuralink was close to integrating the human brain with a computer: the company unveiled the first interface intended to make that feat possible. We may thus be in for an incredible leap in expanding our cognitive abilities. The consequences of such a leap would be varied, and we would certainly not avoid having to make unprecedented legal and ethical choices. In view of such a breakthrough, the question of machine or humanoid rights becomes all the more relevant. Any human brain combined with processors will require special protection.

Humanoid awareness under protection

For transhumanists, this matter is of utmost importance. Their view is that technology deserves special respect that is in no way inferior to that given to humans. Since, as a species, we are facing an environmental disaster, we should seek to rescue ourselves by entering into a symbiosis with machines. The humanoid beings that come out of such a symbiosis (as a combination of living organisms and electronic systems) could survive civilizational catastrophes and break free from the limitations of biology. In postulating the conferral of rights on machines, transhumanists are convinced that the future will see the birth of a new kind of consciousness. The beings that will populate our planet will not only feel. They may also have their own consciousness. And can a conscious being – regardless of its kind – be destroyed with impunity? Wouldn’t that be murder?

A vacuum cleaner does not deserve privileges

The machine rights question no longer seems absurd once we realize that we grant rights to animals, and that businesses enjoy them too. The key issue is how far-reaching such rights should be. Nobody in their right mind would demand legal protection for a vacuum cleaner or an iron. Perhaps, then, this would only concern technological entities with a certain degree of complexity – devices capable of performing the kinds of actions we see as intellectual activity. Imagine an industrial robot that suddenly breaks down. An investigation reveals that it was damaged intentionally. Some time later, an offender is found who had access to the machine’s software. The company sustains losses and the perpetrator is legally liable. But other considerations may come to matter in the future. Assuming the robot is a sophisticated mechanism whose operation relies on reasoning algorithms (some of which can already write their own source code), the damage would be an act directed against a being and, as such, a breach of rights largely analogous to human rights.

That these considerations no longer belong to science fiction is evidenced by a resolution adopted by the European Parliament in 2017, which proposed considering a status of “electronic personality” for the most advanced robots. The proposal was justified by the observation that enterprises and other organizations can already exercise some of the rights associated with humans through the status of corporate personality.

Will machines take responsibility for themselves?

There is another interesting aspect to the whole affair. If we agree that machines really deserve certain rights, it follows that they should also be responsible for their decisions and actions. Take self-driving vehicles. Relying on machine learning, an autonomous car makes a number of independent choices on the road. These affect the driver, the passengers and other road users. Anticipating a potential accident, the algorithm must decide whom to protect first – the driver, the passengers or the pedestrians. To make such determinations, an autonomous vehicle needs to adopt a certain value system and make ethical choices that apply to people, even if it is not aware of them the way a human would be. Briefly put, one way or another, it becomes accountable from the human viewpoint. Another matter, though, is how to enforce such accountability in the case of an accident. Do we continue to hold accountable the designers of the algorithms used to build the car? Or do we presume that the car is not only endowed with privileges for its protection but also bears obligations requiring it to respect the rights established by people?
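The value system described above can be made concrete in code. The sketch below is purely illustrative: the maneuver names, harm estimates and equal weights are all hypothetical assumptions, not how any real autonomous-driving stack works. Its point is that a designer’s ethical priorities become an explicit, auditable policy the moment they are written down.

```python
# Illustrative sketch only: a toy "value system" for an autonomous vehicle.
# All names, weights and harm estimates are hypothetical assumptions.

def choose_maneuver(maneuvers, weights=None):
    """Pick the maneuver with the lowest weighted expected harm.

    Each maneuver is a dict with a "harm" mapping of group -> estimated
    harm (0.0 to 1.0). The weights encode whose safety counts for how
    much; here every group is weighted equally, which is itself an
    ethical choice the designer must make explicitly.
    """
    if weights is None:
        weights = {"pedestrians": 1.0, "passengers": 1.0, "driver": 1.0}

    def expected_harm(m):
        return sum(weights[group] * m["harm"].get(group, 0.0)
                   for group in weights)

    return min(maneuvers, key=expected_harm)

options = [
    {"name": "brake_hard",  "harm": {"passengers": 0.2}},
    {"name": "swerve_left", "harm": {"pedestrians": 0.8}},
    {"name": "stay_course", "harm": {"pedestrians": 0.5, "driver": 0.1}},
]
print(choose_maneuver(options)["name"])  # -> brake_hard (lowest total harm)
```

Changing a single weight – say, counting pedestrian harm double – can flip the decision, which is exactly why the question of who sets those weights, and who answers for them after an accident, is so hard to settle.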

The black box is haunting us ever more

We are now faced with the so-called black box dilemma: circumstances in which people no longer understand the principles that drive intelligent technologies. Such technologies grow independent, causing some to believe that things are slipping out of our control. And things can get worse. We may be unable to understand why AI weapons take down certain targets, why chatbots suggest certain loan solutions and not others, or why voice assistants give one answer rather than another. Will the liability of devices for their actions become a particularly pressing issue? Cynics say that machine rights were conceived only for the convenience of the creators of AI devices, who want to avoid responsibility for what they make and for the situations their creations may cause.

I am aware of how much speculation there is in our approach to this problem, and of how we keep raising more and more questions. I also think that the ethical implications of bringing robots into our workplaces, the army, the police and the judiciary will only grow more complex. This means, among other things, that we must tackle such issues as our responsibility for ourselves and for the machines that, for now, cannot take it themselves.

.    .   .

Works cited:

Stephen Shankland, “Elon Musk says Neuralink plans 2020 human test of brain-computer interface,” CNET, 2019.

Sarwant Singh, “Transhumanism And The Future Of Humanity: 7 Ways The World Will Change By 2030,” Forbes, 2020.

Rachel Withers, “The EU Is Trying to Decide Whether to Grant Robots Personhood,” Slate, 2018.

.    .   .

Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Will AI save the labor market?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Artificial Intelligence is a new electricity


18 comments

  1. Guang Go Jin Huan

    Given the scale of demand, developers believe that rather than robots being an alternative to human caregivers, the choice may sometimes be between robots and no care at all. But even if that’s the case, the question remains: Can they do the job?

  2. CaffD

    Interestingly, the fine print notes to ‘abstain from operating heavy machinery’ while using the product.
    It would appear robot emotions are akin to human intoxicants.

  3. Zoeba Jones

    IMHO every intelligent, self-aware being (not only humans) should have rights

  4. Tom Aray

    Fantasy: THEY’RE GOING TO MAKE ARTIFICIAL INTELLIGENCE THAT CAN PERFORM SUPERHUMAN FEATS, INSTANTLY HACK EVERY ENCRYPTION EVER CREATED, TAKE CONTROL OF THE INTERNET, CONTROL OUR LIVES, ETC.
    Reality: We finally have AI that can actually accurately scan and compare human faces in a real world scenario! Yay!

    • Aaron Maklowsky

      Advanced AI + oppressive dictatorship is a scary combination. Dictatorships are going to be a lot more effective at suppressing dissent when their AI can identify true political threats with nearly 100% accuracy. Night of the Long Knives will be very surgical.

  5. Marc Stoltic

    It’s already acknowledged that black-box algorithms are a problem in machine learning. Often even the programmers themselves can’t explain their results.

    The first to develop AI will be the first to experience unexpected results. Not necessarily an advantage.

    • Zeta Tajemnica

      This is why discussions about limiting AI development in the West are just plain naive. Our adversaries are moving full speed ahead.

    • John Macolm

      If the AI work they are doing is high security, high clearance work then they probably would need to be moved to a secure location to do their work. Letting them do their work in a university lab would make it extremely easy for foreign nations to spy and steal their technology.
      To my knowledge, nothing like this has happened in the US, Russia or China.

      • Jang Huan Jones

        This has been my stance for a while too. No one as far as I’ve heard has been able to make a goal-setting AI, or even an AI that can properly take an order and break it down into smaller tasks, solve those tasks, and then solve the bigger problem.
        Four-year-olds are probably more capable of independent action than the ‘best’ AI we have out there right now, yet Elon makes it out like we’re going to have superintelligence any time soon.

        • Laurent Denaris

          They’re already winning. Look at all the bots on Twitter and Reddit.
          Except for me, I’m not a bot. Harharhar

  6. Andrzej

    Technologies for computer-based manipulation of knowledge have been developed in artificial intelligence. The areas of ecological science in which this technology is likely to prove important include: modelling and simulation, integration of qualitative and quantitative knowledge, theoretical development, and, natural resource management.

    • Mac McFisher

      Making an AI is one thing. Making it self-aware is another. Making an ASI is yet another thing. And then, making an ASI capable of existential danger is finally another.

    • Guang Go Jin Huan

      Basically, our brains are similar to computer hardware – they contain many complex parts and receive different information, then process it, create emotions and influence us to take actions.

  7. Simon GEE

    The possibilities and the way forward presented in your article could address Stephen Hawking’s worry that AI “could spell the end of the human race” (once AI is used unethically).

    • Zeta Tajemnica

      Does anyone here actually work in AI research? This is just corporate propaganda promoting the nationalization of Artificial Intelligence… An industry that has the potential to shake every large, bureaucratic crony corporation to its core, as AI will give the average user the power/expertise of entire industries! Read between the lines, and keep corporate controlled government AWAY from AI!!!!!