Is it possible to regulate AI and do we need such regulation?

My article, Is it possible to regulate AI and do we need such regulation?, was published in Data Driven Investor on 30 March 2020.

There is an ongoing debate about how to regulate artificial intelligence. Lawyers, politicians and business people alike feel that the laws in place are failing to keep up with technological advances: neither the primary nor the secondary legislation currently in force regulates the technology adequately. Is it possible to regulate artificial intelligence efficiently, and do we need such regulation?

Not only are we struggling to grasp the logic behind algorithms, we – the citizens – are also in the dark about the way companies, institutions and services employ modern technology to surveil us in our day-to-day existence. Shouldn’t we be better protected while using computers, drones, applications, cameras and social networks? Shouldn’t someone make sure we don’t end up having algorithms elect our president?

I do not believe that attempts to regulate artificial intelligence are an unnecessary nuisance or a curb on the free distribution of ideas and business freedom. A technology that develops beyond our control and self-improves without a programmer’s intervention becomes powerful indeed. In the face of such technology, the principle of unlimited business freedom becomes somewhat archaic and falls short of resolving many issues.

To regulate or not to regulate 

Needless to say, views on whether to regulate the development of autonomous, smart technologies are deeply divided. Elon Musk regularly raises concerns about the fate of our planet, speaking of the need for robust mechanisms to protect people from technology-induced threats. On the opposite end of the spectrum stands Mark Zuckerberg, who champions a strongly liberal approach (although his views have been shifting lately).

Generally, the predominant approaches in US industry differ widely from those in Europe. The European Group on Ethics in Science and New Technologies of the European Commission has been working towards an international agreement with a view to creating a legal framework for autonomous systems. It is of the opinion that: “… autonomous systems must not impair the freedom of human beings … AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and autonomous systems can bring.”

It should also be noted that China’s views on the matter are diametrically different. China aspires to be on the cutting edge of AI development, which it sees as a vital tool for surveilling the public and controlling social behavior.

Observation, education, dialogue

New technology experts play a key role in our changing world. They can answer the burning questions that members of the public may pose. They can tell us whether we can rest assured that the AI we use is “fair, transparent, and accountable”. The phrase alludes to the title of one of the many seminars (“Fair, Transparent, and Accountable AI”) held by the Partnership on AI, an organization whose mission is to study practices relevant to the presence of AI in human lives and to explain new developments in the field to the general public. It is worth quoting the passage that appears in the description of the event at https://www.partnershiponai.org: “through techniques like identifying underlying patterns and drawing inferences from large amounts of data, AI has the potential to improve decision-making capabilities. AI may facilitate breakthroughs in fields such as safety, health, education, transportation, sustainability, public administration, and basic science. However, there are serious and justifiable concerns—shared both by the public and by specialists in the field—about the harms that AI may produce.”

I will name four fields (selected, of course, from among many others) whose rapid transformation is propelled by artificial intelligence. Some of them may in time require specific regulatory mechanisms for the comfort of this technology’s users.

At odds with the law 

All around the world, the police rely on algorithms. AI helps them process data, including information related to crime, searches for criminals, etc. One of the many advantages of machine learning is its ability to classify objects (including photos) by specific criteria. This is certainly of value to organizations that need to quickly acquire information vital for their investigations. Unfortunately, the algorithms that assess the likelihood of re-offending (which affects the decisions to release inmates on parole) are susceptible to abuse. Therefore, lawyers around the world are establishing organizations (among them The Law Society’s Public Policy Technology and Law Commission) to oversee the use of the technology by the police and courts.
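
To make the classification idea concrete, here is a minimal sketch in Python of a model that triages free-text incident reports into case categories. Every report, label and category below is invented for illustration; real investigative tools (and certainly recidivism-scoring ones) are far more complex, which is exactly why oversight matters.

```python
# A minimal sketch (all data and categories invented) of the classification
# idea: a model that sorts free-text incident reports into case categories
# so investigators can find relevant records faster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: incident descriptions with hand-assigned labels.
reports = [
    "stolen vehicle recovered near warehouse district",
    "online transfer to offshore account flagged by bank",
    "break-in through rear window, electronics missing",
    "phishing email impersonating the tax authority",
]
labels = ["vehicle_theft", "fraud", "burglary", "fraud"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

# Score an incoming report against each category.
new_report = ["wire transfer requested by spoofed supplier email"]
for category, p in zip(model.classes_, model.predict_proba(new_report)[0]):
    print(f"{category}: {p:.2f}")
```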

The universal nightmare of fake news

This topic, which has been heatedly debated of late, raises questions about the credibility of information and the responsibility of social networks to monitor their content. Since 2016, the first year to have been marred by a huge number of fake news scandals, not a week goes by without the issue hitting the headlines. AI is central to this story because, as we know, it has played a huge role in the automatic generation of content (bots). I think that the credibility of information is one of the biggest challenges of our time, which can rightfully be labeled the age of disinformation. It definitely requires global reflection and a concerted international response. Every now and then, initiatives for credible news (such as the Pravda site proposed by Elon Musk) are put forward, but the challenge remains enormous. Since regulating it away would be utopian, I am afraid we may be forced to wrangle with the problem for years to come.
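
To give a rough sense of what monitoring for automated accounts can involve, here is a hedged sketch of a bot-likelihood score built from a few behavioral signals. The signals, weights and thresholds are all hypothetical; production systems learn such weights from labeled data rather than setting them by hand.

```python
# Hypothetical bot-likelihood score: combines a few behavioral signals into
# a 0..1 value. The weights and cut-offs are illustrative only, not taken
# from any real platform.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float         # automation tends to post at inhuman rates
    duplicate_post_ratio: float  # share of posts that are near-identical
    account_age_days: int        # throwaway accounts are often very young

def bot_likelihood(a: Account) -> float:
    score = 0.0
    score += 0.5 * min(a.posts_per_day / 100.0, 1.0)
    score += 0.3 * a.duplicate_post_ratio
    score += 0.2 * (1.0 if a.account_age_days < 30 else 0.0)
    return min(score, 1.0)

suspect = Account(posts_per_day=240, duplicate_post_ratio=0.8, account_age_days=12)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # 0.94 -> suspicious
```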

Who rules the assembly line?

The robotization of industry is among the most emotionally charged aspects of AI. People have a big problem accepting robots as their work buddies (or accepting that robots will put them out of a job). I have written about this on numerous occasions, and so I will refrain here from presenting statistics or arguments for or against robotization. The matter is certainly a major problem and there is no point pretending it is going to go away. On the contrary, social unrest may increase as the trend unfolds. In view of its social impact, I think it is critical to lay down the rules that will govern this field. One possible tool is a tax designed to discourage corporations from relying excessively on robots.
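
As a back-of-the-envelope illustration of how such a tax might be structured (the formula and every figure below are hypothetical, not a policy proposal):

```python
# Toy "robot tax": levy a fraction of the displaced worker's wage per robot,
# so heavy automation carries a cost comparable to payroll taxes. All
# numbers are made up for illustration.
def robot_tax(robots: int, displaced_annual_wage: float, rate: float) -> float:
    return robots * displaced_annual_wage * rate

# Example: 50 robots, each replacing a $40,000/year job, taxed at 20%.
print(f"annual levy: ${robot_tax(50, 40_000, 0.20):,.0f}")  # $400,000
```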

Autonomous vehicles 

Enthusiasts quote numerous studies which find that autonomous vehicles will make roads considerably safer. I share that view. And yet, autonomous vehicles raise a lot of questions. One of the key ones concerns vehicle behavior during an accident. Who should algorithms protect as their first priority: passengers, drivers, or pedestrians? Will a driver who causes an accident in a moment of distraction have a claim in court against the manufacturer of his autonomous vehicle, and will he be able to win his case? How should vehicles be insured? Who should be liable for accidents: the driver or passengers, the vehicle owner, the manufacturer, or the software programmers? Another upcoming conundrum is the future of other autonomous means of transportation, such as airplanes, ships and road vehicles that will move cargo for us (deliver shopping, etc.). Legislation around the world varies in how it requires driverless vehicles to be tested. And the debate on how to boost user safety is ongoing no matter where you look.
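
The question of whom the algorithm should protect can be made concrete with a toy model. The sketch below ranks avoidance maneuvers by expected harm, with a single weight encoding the contested value judgment; every probability, and the policy itself, is hypothetical. Pinning down such choices is precisely what regulation would have to do.

```python
# Toy dilemma model: if a collision is unavoidable, rank maneuvers by
# expected harm. All probabilities are invented; no real vehicle is known
# to use this policy.
from typing import NamedTuple

class Maneuver(NamedTuple):
    name: str
    p_injury_passengers: float
    p_injury_pedestrians: float

def expected_harm(m: Maneuver, pedestrian_weight: float = 1.0) -> float:
    # pedestrian_weight encodes the value judgment: should the car weigh
    # pedestrians above, below, or equal to its own passengers?
    return m.p_injury_passengers + pedestrian_weight * m.p_injury_pedestrians

options = [
    Maneuver("brake hard in lane", 0.30, 0.10),
    Maneuver("swerve toward barrier", 0.60, 0.00),
]
print(min(options, key=expected_harm).name)  # picks "brake hard in lane"
```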

Technology for the people

The progress achieved through the use of smart technologies is unquestionable. However, in addition to such business concerns as cost optimization, efficiency, the bottom line and automation (all of which benefit from AI), I think it is vital to remember some of the less measurable aspects. We should remind ourselves that the good old well-being of individuals, people’s security, and the knowledge and fulfillment derived from interactions with new technologies are of utmost importance. After all, technology is there to make our lives better. Let us keep a close eye on the experts who influence the drafting of laws designed to protect us from the undesired impacts of algorithms.

Related articles:

– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Hello. Are you still a human?

– Artificial intelligence is a new electricity

– How machines think

19 comments

  1. Aaron Maklowsky

    Developing AI is like learning to communicate with and teach an alien race.

    Yes, now you simply teach them to solve a problem or task, but once the AI learns how to do that, we don’t know how it arrived at the solution. And it only gets exponentially more difficult to understand as complexity grows.

    • Guang Go Jin Huan

      As robots become more capable of autonomous actions, there is a greater need to ensure that they act ethically. We want robots on highways and battlefields to act in the interests of human beings, just as good people do.

  2. Tom Aray

    I’ve seriously been reading essays about AI all day and then this just pops up on the front page.

  3. Marc Stoltic

    Surely ASI has happened innumerable times in the Universe, so where is it? Does it wink out of existence at a certain point, a sort of ascension? Accelerated intelligence would only hold true if the AGI->ASI was locked into its original programming. Free to truly reprogram itself, it might just as easily self-terminate or do any number of things we can’t predict. It might even simulate the entire evolution of the Universe just to know where it came from and what it is.

  4. Jang Huan Jones

    NLP is the danger!

    What did Russia just do last election? Bunch of spam accounts.
    They can have a whole country of spambots guiding political discourse online.
    I could be a spambot just trying to confuse you tho…

  5. Krzysztof X

    good read and the tip of the iceberg, unfortunately

  6. Krzysztof X

    I think it all comes down to the commonly observed scenario nowadays: the simpler models that predict an outcome from a thoughtfully selected set of variables are not as effective as the complex models that work on massive amounts of data. The latter are extremely hard to comprehend, but that’s the price we pay for accuracy.

    • Mac McFisher

      My previous employer had a big AI group trying to extract information from invoices. They reached 90+% accuracy, which helped customers a fair bit.
      A neighbouring country standardized its invoice formats, enabling automatic reading of invoices with 100% accuracy without any AI. Standardization is the bigger job killer.

    • John Macolm

      Their intelligence extends only to solving problems related to their purpose. Don’t believe Hollywood movies where robots feel mistreated and rebel against us for vengeance; it’s just impossible. Software is software. Ones and zeroes. Nothing more, nothing less.

  7. John Accural

    Some, yes. But leave this to the tech programmers and designers.

  8. Jack23

    Not at all. Tech giants are always ahead of regulators 🙂

  9. And99rew

    There’s a lot of metadata associated with transactions; they seem to have used machine learning to create a program particularly good at picking up fraudulent ones. Which is really just finding the right criteria for success, turning on the machine, and waiting for it to spit out a working product.

    • Jang Huan Jones

      I am honestly beginning to wonder if Elon Musk and Putin even know what AI is, because they seem to use the term like a buzzword while acting as if all of this magical sci-fi shit is about to happen.

      • Marc Stoltic

        While everything in the US seems to revolve around AI technology for delivering ads, in Russia the focus is very likely on fake-news technology. Moral objections aside, it is quite fascinating. You get a lot of information from public sources and the internet (comments, favorite sites, shopping habits, interests from Facebook, etc.) and build a self-learning system that figures out what subjects like and how they react, and maybe predicts how to influence them. So far this has been done mostly manually, which was expensive. If they can figure out a way to make AI smart enough, they can do much more with fewer people.

        • Tom Aray

          Elon Musk has it all wrong. AI is the future of “humanity,” not meat-bodied humans.

        • Aaron Maklowsky

          Humans can comprehend what AI is capable of. They also program it to do tasks. This is not a boogeyman technology any more than flight or guided weaponry is. The evil computer will not kill all humans, firstly because it’s told not to do that, and secondly because it will never have the capability to act without human intervention.

          • Guang Go Jin Huan

            Emotions are judgments about the relevance of the current situation to a person’s goals. For example, if someone gives you $1 million then you will probably be happy because the money can help you to satisfy your goals of surviving, having fun, and looking after your family. Robots are already capable of doing at least a version of appraisal, for example when a driverless car calculates the best way of getting from its current location to where it is supposed to be. If emotions were just appraisals, then robot emotions would be just around the corner.

  10. AndrewJo

    As soon as possible; otherwise, we are doomed.