Don’t kill a humanoid: do machines deserve to have rights?

How would one grant rights to machines? We still cannot consult the party most concerned: artificial intelligence will not tell us whether it feels that people’s existing legal system treats it well. So we have to manage alone and decide on its behalf.


In late July 2019, the world learned that the company Neuralink was close to integrating the human brain with a computer: the first interface intended to enable the feat was unveiled. We may thus be in for an incredible leap in the expansion of our cognitive abilities. The consequences of such a leap would be varied, and we would certainly not avoid having to make unprecedented legal and ethical choices. In view of such a breakthrough, the question of machine, or humanoid, rights becomes all the more relevant. Any human brain combined with processors will require special protection.

Humanoid awareness under protection

For transhumanists, this matter is of utmost importance. Their view is that technology deserves respect in no way inferior to that given to humans. Since, as a species, we are facing an environmental disaster, we should seek to rescue ourselves by entering into a symbiosis with machines. The humanoid beings that would come out of such a symbiosis – combinations of living organisms and electronic systems – could survive civilizational catastrophes and break free from the limitations of biology. In postulating the conferral of rights on machines, transhumanists are convinced that the future will see the birth of a new kind of consciousness. The beings that will then populate our planet will not only feel; they may also have a consciousness of their own. And can a conscious being – regardless of its kind – be destroyed with impunity? Wouldn’t that be murder?

A vacuum cleaner does not deserve privileges

The machine rights question no longer seems absurd once we realize that we grant rights to animals, and that businesses enjoy them too. The key issue is how far-reaching such rights should be. Nobody in their right mind would demand legal protection for a vacuum cleaner or an iron. Perhaps, then, this would only concern technological entities with a certain degree of complexity – devices capable of performing the kinds of actions we regard as intellectual activity. Imagine an industrial robot that suddenly breaks down. An investigation reveals that it was damaged intentionally. Some time later, an offender is found who had access to the machine’s software. The company sustains losses and the perpetrator is legally liable. But other considerations may come to matter in the future. Assuming the robot is a sophisticated mechanism whose operation relies on reasoning algorithms (some of which can already write their own source code), the damage would be an act directed against a being and, as such, a breach of rights largely analogous to human rights.

That these considerations no longer belong to science fiction is evidenced by the resolution adopted by the European Parliament in 2017, which proposed granting the status of “electronic personality” to particularly advanced robots. The proposal was justified by analogy: enterprises and other organizations can exercise some of the rights associated with humans and enjoy the status of legal personality.

Will machines take responsibility for themselves?

There is another interesting aspect to the whole affair. If we agree that machines really deserve certain rights, then they should consequently also be responsible for their decisions and actions. Take self-driving vehicles as an example. Relying on machine learning, an autonomous car makes a number of independent choices on the road, and these affect the driver, the passengers and other road users. Anticipating a potential accident, the algorithm must decide whom to protect first – the driver, the passengers or the pedestrians. To make such determinations, an autonomous vehicle needs to adopt a certain value system and make ethical choices that apply to people, even if it is not aware of them the way a human would be. Briefly put, one way or another, it becomes accountable from the human viewpoint. How to enforce such accountability in the event of an accident is another matter. Do we continue to hold the designers of the algorithms used to build the car accountable? Or do we presume that the car is not only endowed with privileges for its protection but also bound by obligations that require it to respect the rights established by people?
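To make the idea of an encoded value system concrete, here is a minimal, purely illustrative sketch. Every name and number in it is an assumption made for this post – the HARM_WEIGHTS table, the Maneuver type and the choose_maneuver function are hypothetical, not the decision logic of any real vehicle.

```python
from dataclasses import dataclass

# Hypothetical value system: relative weights of serious harm to each
# kind of road user. Invented for illustration only.
HARM_WEIGHTS = {"pedestrian": 1.0, "passenger": 0.8, "driver": 0.8}

@dataclass
class Maneuver:
    name: str
    parties_at_risk: dict[str, float]  # role -> probability of serious harm

def expected_harm(m: Maneuver) -> float:
    """Score a maneuver under the encoded value system."""
    return sum(HARM_WEIGHTS[role] * p for role, p in m.parties_at_risk.items())

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option the value system scores as least harmful."""
    return min(options, key=expected_harm)

options = [
    Maneuver("brake hard", {"driver": 0.2, "passenger": 0.2}),
    Maneuver("swerve left", {"pedestrian": 0.6}),
]
print(choose_maneuver(options).name)  # -> brake hard
```

The numbers are beside the point; what the toy shows is that someone has to write the weights down. Whoever fills in that table is making the ethical choice on the car’s behalf – which is exactly where the question of accountability begins.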

The black box is haunting us ever more

We are now faced with the so-called black box dilemma – circumstances in which people no longer understand the principles that drive artificial intelligence systems. Such technologies grow increasingly independent, causing some to believe that things are slipping out of our control. And it can get worse. We may not be able to understand why AI weapons take down certain targets, why chatbots suggest some loan solutions and not others, or why voice assistants give one answer but not another. Will the liability of devices for their actions then become a particularly pressing issue? Cynics say that machine rights are conceived merely for the convenience of the creators of AI devices, who want to avoid responsibility for what they make and for the situations their creations may cause.
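A toy sketch of what that opacity looks like in practice, using the loan example: the model below is a stand-in (random weights instead of trained ones, invented feature names), but the punchline is faithful – when asked “why?”, all the system can show is its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loan-application features: [income, debt, years_employed]
applicant = np.array([52_000.0, 14_000.0, 3.0])

# Stand-in for learned parameters: in a real system these would come out
# of training, and would be just as uninterpretable as these random ones.
W = rng.normal(size=3)

score = float(W @ applicant)
print("approve" if score > 0 else "decline")
print("why?", W)  # the only 'explanation' available: raw numbers
```

This is the sense in which a chatbot’s loan suggestion or a voice assistant’s answer can be impossible to account for, even by its makers.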

I am aware of the sheer amount of speculation in our approach to this problem, and of how we keep asking more and more questions. I also think that the ethical implications of bringing robots into our workplaces, the army, the police and the judiciary will only grow more complex. This means, among other things, that we need to tackle such issues as our responsibility for ourselves and for the machines that, for now, cannot take it on themselves.

.    .   .

Works cited:

Stephen Shankland, “Elon Musk says Neuralink plans 2020 human test of brain-computer interface,” CNET, 2019, Link.

Sarwant Singh, “Transhumanism And The Future Of Humanity: 7 Ways The World Will Change By 2030,” Forbes, 2020, Link.

Rachel Withers, “The EU Is Trying to Decide Whether to Grant Robots Personhood,” Slate, 2018, Link.

.    .   .

Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Will AI save the labor market?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Artificial Intelligence is a new electricity


4 comments

  1. Andrzej

    Technologies for computer-based manipulation of knowledge have been developed in artificial intelligence. The areas of ecological science in which this technology is likely to prove important include modelling and simulation, integration of qualitative and quantitative knowledge, theoretical development, and natural resource management.

  2. Simon GEE

    Possibilities and the way forward presented in your article could address Stephen Hawking’s worry that “AI could spell the end of the human race” (once AI is used unethically).