Robots awaiting judges

Norbert Biedrzycki

Planet of the Machines: Questions for a New Age


Humans tend to become flustered when confronted with a rapidly changing reality, and let’s not exclude our lawyers and legislators. They’re human, too, and they surely realize that the laws we’ve created are frequently inadequate to the challenges that arise when we invite robots, computers, and algorithms into our daily lives. If this rapidly evolving reality isn’t keeping them up at night, it should.


Perhaps I can disturb their slumber with some questions.


When your boss is a robot

Think about jobs and Artificial Intelligence (AI). Worldwide, about 1.1 billion employees work at “technically automatable activities,” according to a 2017 McKinsey Global Institute report, “A Future That Works.” Those activities account for $15.8 trillion in wages. In China and India alone, an estimated 700 million full-time workers are replaceable.

People respond emotionally to the idea that robots will replace them, as they most certainly will. “I will soon lose my job to a robot that will not demand a raise or claim a pension when retired.” Well, yes. That’s a legitimate fear. And few people believe that enough new jobs will be created to replace those that will be lost.

And this is happening now. In highly automated South Korea, as many as 437 robots are already on the job for every 10,000 workers in the processing industry, each replacing several humans. In Japan, the figure is 323; in Germany, 232. An automotive-industry robot in Germany costs its owner about one-fifth of what hiring a human to do the same job would.

Robots have entered lots of fields one wouldn’t immediately think of: medicine, marketing, media, and even law. About 20% of the wire stories and reports produced by the Associated Press are written by computer applications. Readers never know. Legal professionals, too, have good reason to be anxious about being replaced by robots. The European Court of Human Rights employs an algorithm that sifts through reports, finds specific data sets, and sorts them into patterns, according to The Guardian. This allows it to predict the outcomes of specific cases with 79% accuracy. Tens of thousands of jobs in the UK legal sector will be automated over the next two decades, The Guardian predicts.

Robotization and automation urgently require legislative initiatives, and labor laws will need to be amended. But the questions outnumber the answers. Should labor law cap the share of jobs that machines perform in specific sectors or enterprises? Will employers be allowed to lay people off and replace them with machines without restriction? Will efficiency standards and targets be the same for machines and humans? Will machines be allowed to manage humans? May an employee decline to follow an order given by a machine? Who will be liable for potential damage caused by a machine – the programmer, the department head, or the company owner? Will governments be expected to decide which industries to protect from excessive robotization?

I think that the lawyers who deal with Industry 4.0 – industry characterized by the ongoing integration of people and machines – have their work cut out for them.

Littler Mendelson, one of the world’s largest firms specializing in labor law, has created a separate team in charge of robotics and AI. The firm’s expectation is that legislation in the field will change rapidly, and robots and automation systems will take over a substantial proportion of low-cost labor markets.


When your boss is a robot, who and what are you?


Author! Author?

Copyright-related issues will become more complex as algorithms invade media.

Authors – writers, musicians, journalists – need to brace for the advent of creative machines capable of writing text, music, screenplays, and even generating images and photographs. All this raises questions concerning the status of authorship, making it significantly more complex.

Media around the world recently reported on Facebook’s decision to shut down algorithms that had begun communicating in a language developed without the involvement of human programmers. The existence of self-improving mechanisms like these, which rely on deep learning to let computer programs teach themselves, may have far-reaching legal consequences.

If a bot answering customer questions creates its own content, who will be accountable for its performance? Who gets sued if machine-generated content misleads, damaging someone’s health or harming someone’s business? How should one treat the plagiarism of human works by intelligent devices? Can a robot violate copyright law? What does originality mean when an image can be copied perfectly? Value attaches to authorship. What is the value of art created by an AI? Can a machine claim copyright protection? (Right now, the European Commission is working on a directive designed to resolve the issue of legal personality; i.e., what, legally, is a person? With AI, the answer is not self-evident.)

Today, most international rules restrict copyright protection to the outcomes of an intellectual process made possible by the creative abilities of the human mind. But Google has recently displayed a collection of pictures produced by neural networks. A well-known record label has long been unable to say who holds the copyright to AI-generated music. Is it the designer of the algorithm or network, the owner of the server on which the data is processed, or the musicians whose samples the AI used to create something entirely new?

Would a clause stating that a work has been co-authored by the owner of the computer that has created the piece resolve the problem?


None of this is simple.


“Baby, it can drive my car”

Autonomous machines and, above all, autonomous vehicles on public roads are the most vivid and talked-about example of the intrusion of AI into our lives.

As with labor and copyright law, the legal issues are complex.

The big question: who may be sued and held liable for damage caused by an accident that endangers human lives or results in fatalities? Who is accountable? Will it be the author of the algorithm that runs the self-driving car, the car manufacturer, or the car’s owner? If it is the owner, what kind of insurance policies will protect both owner and victim?

Fleets of self-driving trucks are already waiting to hit the road. Many entrepreneurs are contemplating setting up taxi services that rely on autonomous vehicles. This is coming fast, but the laws governing them are lagging. Imagine a deep-learning algorithm that performs a statistical analysis of traffic at an intersection and decides to make a given section of road passable a few seconds earlier than usual. If that decision results in an accident, who is liable?

Civil engineering is also lagging. The use of autonomous vehicles will require major changes in the management of road traffic, including the organization of traffic lights. Efficient traffic management may need to coordinate engine data with surveillance-camera input and data from sensors placed at intersections, creating an internet of vehicles.

Furthermore, autonomous vehicles are likely to be used more efficiently. For instance, since it will no longer be necessary to park vehicles in city centers, the revenue cities earn from renting parking spaces may dwindle, leaving huge holes in municipal budgets. Cars may simply go away – to the outskirts, idling – and return when their owners summon them.

Traffic automation issues will not just be about cars and trucks. Also affected will be drones, as well as future autonomous ships and computer-controlled aircraft operated from behind desks.


Are we ready for all this change? Are we even preparing?


Robots and politics

As we’ve recently seen, programs and bots can influence public opinion in political contests. They can incite protests, generate false news about rivals, tilt opinion polls, and spread confusion.

In the service of political ends, artificial intelligence can be dangerous. Can citizens expect regulation to mitigate these dangers?

Meanwhile, as the analytical tools used by banks and insurance companies improve, and these institutions collect ever more data about individuals to make their predictions and assessments more accurate, their use will increasingly be subject to scrutiny. Whither an individual’s privacy? Is anything off limits? What data can be used to review the standing of a loan applicant? Something from their social media history? Will algorithms explore and link information on a person’s zip code, skin color, residential address, and political views to assign a risk factor to an insurance policy? And if an algorithm deems a person too great a risk to insure, or sets an outrageously high premium to do so, does a human have any recourse? Will civil and criminal courts be able to rule effectively in cases that concern specific behaviors of algorithms that occur in what is essentially a black box?
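The black-box concern can be made concrete with a toy sketch. Suppose, hypothetically, an insurer’s model was trained on historical claims: even if protected attributes are excluded, a proxy feature such as zip code can carry the same signal, and the applicant never learns why the score is high. Every name, weight, and figure below is invented purely for illustration:

```python
# Toy illustration of an opaque risk scorer. It never sees a protected
# attribute directly, yet a proxy feature (zip code) dominates the score.
# All names, weights, and values here are hypothetical.

# Weights such a model might have learned from historical data.
LEARNED_WEIGHTS = {
    "age": 0.02,
    "prior_claims": 0.35,
    "zip_code_risk": 0.60,  # per-area factor derived from past claims
}

# Hypothetical per-area risk factors, inheriting any bias in the data.
ZIP_RISK = {"10001": 0.2, "60629": 0.9}

def risk_score(applicant: dict) -> float:
    """Linear score; the applicant is never told which term drove it."""
    zip_risk = ZIP_RISK.get(applicant["zip"], 0.5)
    return (LEARNED_WEIGHTS["age"] * applicant["age"]
            + LEARNED_WEIGHTS["prior_claims"] * applicant["prior_claims"]
            + LEARNED_WEIGHTS["zip_code_risk"] * zip_risk)

# Two applicants identical in every respect except zip code:
a = risk_score({"zip": "10001", "age": 40, "prior_claims": 1})
b = risk_score({"zip": "60629", "age": 40, "prior_claims": 1})
# b comes out markedly higher than a, driven entirely by the proxy.
```

If zip code correlates with a protected attribute, this model discriminates in effect while never referencing that attribute in code – exactly the kind of behavior a court would struggle to rule on without opening the box.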


Man vs. Machine

One of the most urgent legislative issues in the coming age of robots and AI will be the liability of manufacturers and the liability of users.

In cases where determining intent is critical, algorithms must be put on the witness stand.

Will AI-empowered machines improve their performance to the point where society sees a machine as a legal entity with liability? And, perhaps, rights?

The laws that currently apply to these issues are swiftly becoming obsolete. Today’s politicians, lawmakers, and lawyers – as well as scientists, engineers, and all concerned citizens – share responsibility for the changes that should be made. If humans are to feel secure and enjoy the use of sophisticated technologies, we must be protected. Citizens must have confidence that humans, and human rights, will always take precedence over intelligences that do not share our common biology.


Whether that happens is up to us.


Related articles:

– Machine, when you will become closer to me?

– A machine will not hug you … but it may listen and offer advice

– Can machines tell right from wrong?

– Only God can count that fast – the world of quantum computing

– Machine Learning. Computers coming of age

– The brain – the device that becomes obsolete

– How machines think



Comments


  1. Simon GEE

    My main criticism of AI is that the AIs were around and humoring the humans for as long as they did, rather than the entire story lasting weeks or maybe a month or two…
    My prediction is that as soon as there’s a self-replicating, self-modifying, human-level AI in existence, the slope becomes vertical. Everyone seems to focus on what a single individual AI can do on its own. Very few people are accounting for that AI making a billion copies of itself on a cloud system and running a billion experiments within a few minutes to see what modifications make it better… rinse, repeat, and evolve over a matter of hours into something transcendent.

    • Don Fisher

      For an AI takeover to be inevitable, it has to be postulated that two intelligent species cannot mutually pursue the goal of coexisting peacefully in an overlapping environment – especially if one is of much more advanced intelligence and much more powerful. So while an AI takeover is a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible.

  2. Simon GEE

    I’d much rather participate in reality and see my family and friends, even if they don’t behave 100% the way I want them to. I wouldn’t want technology to in any way replace real individuals with fake ones (zombies) by tinkering with our ‘souls’ (the pathways of our brain involved in producing consciousness), but I do fully support boosting our mental capabilities to an extent using external mechanisms.

  3. SimonMcD

    A more realistic adoption rate would cut hours worked by lawyers by 2.5 percent annually over five years, the paper said. The research also suggests that basic document review has already been outsourced or automated at large law firms, with only 4 percent of lawyers’ time now spent on that task.

  4. Adam Spikey

    AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since—if we believe that ethics and morality in the law are important—then they necessarily lie, or ought to lie, in the domain of human judgment. In that case, AI may assist or replace humans in lower courts but human judges should retain their place as the final arbiters at the apex of any legal system. For example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally-superior interpretation.

    • John Belido

      Moral superiority of machines over humans? Are you out of your mind?

    • AdaZombie

      We are headed toward a future where the cost and inconvenience of living will likely rise. Right now all modern technology is designed to bring the world to you – phone, radio, television, internet – but if trends continue, robots will soon bring you to the world, everywhere, and at the speed of thought. A mind and a hand where it’s needed while you sit safely at home and run the show.

      • Simon GEE

        If AI were to be integrated, I would hope to find it useful in objective areas where logical solutions are needed, not social situations where I want to woo someone.

        Also, although I can somewhat see your intention, the analogy of self-help books is conflicting. Why? It implies that AI would allow a free-will situation where critical thought gives one the ability to agree or disagree, and not be forcefully overturned one way or another (by means of AI processes).

    • Don Fisher

      The fear of cybernetic revolt is often based on interpretations of humanity’s history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being’s goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity’s evolutionary context) would be hostile – or friendly – unless its creator programs it to be such, and it is not inclined or capable of modifying its programming. But the question remains: if AI systems could interact and evolve (meaning, in this context, self-modification, or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI’s goal of self-preservation could conflict with some goals of humans.

  5. John McLean

    The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with. For example, a Microsoft AI Twitter chatbot named Tay became racist, sexist, and anti-Semitic within 24 hours of interactive learning with its human audience. But while such programs may replicate existing human biases, the distinguishing feature of AI over an algorithm is that it can behave in surprising and unintended ways as it ‘learns.’ Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.

    • johnbuzz3

      You can have the best of all worlds in the future if you want: a synthetic body, a high-density DNA brain housed within it, and a mind constantly synced to the cloud in the event of an unforeseen accident. The future is so bright.

      • Norbert Biedrzycki  

        Automation and AI will lift productivity and economic growth, but millions of people worldwide may need to switch occupations or upgrade their skills – predictable physical work is the category most exposed. The same applies to lawyers.

        • John McLean

          The quantity of employment influenced by AI will vary by industry; through 2019, healthcare, the public sector, and education will see continually growing job demand, while manufacturing will be hit the hardest. Starting in 2020, AI-related job creation will turn positive, reaching two million net new jobs in 2025.

    • Simon GEE

      The most interesting ethical dilemmas specifically concern robotization. The questions are analogous to those asked with regard to autonomous vehicles. Today’s robots are only learning to walk, answer questions, hold a beverage bottle, open a fridge, and run. Some are more natural than others at these tasks. Robots will not only replace us in many jobs. They can really be helpful – for example, in taking care of the elderly, where constant daily assistance is required.

  6. Karel Doomm2

    Interesting read. I highly welcome your articles, Norbert.