How to regulate artificial intelligence?


AI legal regulations Norbert Biedrzycki blog

There is an ongoing debate about how to regulate artificial intelligence. Lawyers, politicians and business people alike feel that the laws in place are failing to keep up with technological advances. The primary and secondary laws that are currently in force do not regulate technology properly. Is it possible to regulate artificial intelligence efficiently and do we need such regulation?

Not only are we struggling to grasp the logic behind algorithms, we – the citizens – are also in the dark about the way companies, institutions and services employ modern technology to surveil us in our day-to-day existence. Shouldn’t we be better protected while using computers, drones, applications, cameras and social networks? Shouldn’t someone make sure we don’t end up having algorithms elect our president?

I do not believe that attempts to regulate artificial intelligence are an unnecessary nuisance or a curb on the free distribution of ideas and business freedom. A technology that develops beyond our control and self-improves without a programmer’s intervention becomes powerful indeed. In the face of such technology, the principle of unlimited business freedom looks somewhat archaic and falls short of resolving many issues.

To regulate or not to regulate 

Needless to say, views on whether to regulate the development of autonomous, smart technologies are deeply divided. Elon Musk regularly raises concerns about the fate of our planet, speaking of the need for robust mechanisms to protect people from technology-induced threats. On the opposite end of the spectrum stands Mark Zuckerberg, who champions a strongly liberal approach (although his views have been shifting lately).

Generally, the predominant approaches in US industry differ widely from those in Europe. The European Group on Ethics in Science and New Technologies of the European Commission has been working towards an international agreement with a view to creating a legal framework for autonomous systems. It is of the opinion that: “… autonomous systems must not impair the freedom of human beings … AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and autonomous systems can bring.”

It should also be noted that China’s views on the matter are diametrically different. China aspires to be on the cutting edge of AI development, which it sees as a vital tool for surveilling the public and controlling social behavior.

Observation, education, dialogue

New technology experts play a key role in our changing world. They can answer the burning questions that members of the public may pose. They can tell us whether we can rest assured that the AI we use is “fair, transparent, and accountable”. This phrase alludes to the name of one of the numerous seminars (entitled “Fair, Transparent, and Accountable AI”) held by the Partnership on AI. The organization’s mission is to study best practices for the presence of AI in human lives and to explain new developments in the field to the general public. It is worth quoting a sentence from the event description on their web page: “through techniques like identifying underlying patterns and drawing inferences from large amounts of data, AI has the potential to improve decision-making capabilities. AI may facilitate breakthroughs in fields such as safety, health, education, transportation, sustainability, public administration, and basic science. However, there are serious and justifiable concerns—shared both by the public and by specialists in the field—about the harms that AI may produce.”


I will name four fields, selected of course from among many others, whose rapid transformation is being propelled by artificial intelligence. Some of them may in time require specific regulatory mechanisms for the comfort of the users of this technology.

At odds with the law 

All around the world, police forces rely on algorithms. AI helps them process data, including information on crimes, searches for criminals and the like. One of the many advantages of machine learning is its ability to classify objects (including photos) by specific criteria. This is certainly of value to organizations that need to quickly acquire information vital to their investigations. Unfortunately, the algorithms that assess the likelihood of re-offending (and thus affect decisions to release inmates on parole) are susceptible to abuse. Therefore, lawyers around the world are establishing bodies (among them The Law Society’s Public Policy Technology and Law Commission) to oversee the use of the technology by the police and courts.

The universal nightmare of fake news

This topic, which has been heatedly debated of late, raises questions about the credibility of information and the responsibility of social networks to monitor their content. Since 2016, the first year to have been marred by a huge number of fake news scandals, not a week goes by without the issue hitting the headlines. AI is central to this story because, as we know, it has played a huge role in the automatic generation of content (bots). I think that the credibility of information is one of the biggest challenges of our time, which can rightfully be labeled the age of disinformation. It definitely requires global reflection and a concerted international response. Every now and then, initiatives for credible news (such as the Pravda site proposed by Elon Musk) are put forward, but the challenge remains enormous. Since regulating it comprehensively would be utopian, I am afraid we may be forced to wrangle with the problem for years to come.

Who rules the assembly line?

The robotization of industry is among the most emotionally charged aspects of AI. People have a hard time accepting robots as their work buddies (or accepting that robots will put them out of a job). I have written about this on numerous occasions, and so I will refrain here from presenting statistics or arguments for or against robotization. The matter is certainly a major problem and there is no point pretending it will go away. On the contrary, social unrest may increase as the trend unfolds. I think that in view of its social impacts, it is critical to lay down the rules that will govern this field. One possible tool is taxation designed to prevent corporations from relying excessively on robots.

Autonomous vehicles 

Enthusiasts quote numerous studies which find that autonomous vehicles will make roads considerably safer. I share that view. And yet, autonomous vehicles raise a lot of questions. One of the key ones concerns vehicle behavior during an accident. Whom should algorithms protect as their first priority: passengers, drivers or pedestrians? Will a driver who causes an accident in a moment of distraction have a claim in court against the manufacturer of his autonomous vehicle, and could such a claim succeed? How should vehicles be insured? Who should be liable for accidents: the driver or passengers, the vehicle owner, the manufacturer, or the software programmers? Another upcoming conundrum is the future of other autonomous means of transportation, such as airplanes, ships and road vehicles that will move cargo for us (deliver shopping, etc.). Legislation around the world varies in how it requires driverless vehicles to be tested. The debate on how to boost user safety is ongoing no matter where you look.


Technology for the people 

The progress achieved through the use of smart technologies is unquestionable. However, in addition to such business concerns as cost optimization, efficiency, the bottom line and automation (all of which benefit from AI), I think it is vital to keep in mind some of the less measurable aspects. We should remind ourselves that the good old well-being of individuals, and the security, knowledge and fulfillment people derive from interactions with new technologies, are of utmost importance. After all, technology is there to make our lives better. Let us keep a close eye on the experts who influence the drafting of laws designed to protect us from the undesired impacts of algorithms.

.    .   .

Works cited:

Business Insider, Prachi Bhardwaj, Mark Zuckerberg responds to Elon Musk’s paranoia about AI: ‘AI is going to… help keep our communities safe.’, Link, 2018. 

European Commission, European Group on Ethics in Science and New Technologies, Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, Link, 2018.

New York Times, Tomas Chamorro-Premuzic, Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras, Link, 2019.

The Washington Post, Peter Holley, Pravda: Elon Musk’s solution for punishing journalists, Link, 2020.

 

Related articles:

– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Hello. Are you still a human?

– Artificial intelligence is a new electricity

– How machines think


23 comments

  1. Aaron Maklowsky

    The vast majority of the best AI scientists and most cutting-edge AI technology still resides in the US and US tech companies. Even the Chinese realize this. Sure, they can copy and run with what’s already out there, but can they push the boundaries? If you don’t foster an open culture of innovation and out-of-the-box thinking, your chances of reaching the general-AI holy grail first are not great. Application /= innovation or breakthroughs.

  2. Marc Stoltic

    Elon Musk is serious about one thing, building Tesla as a brand by being your cool politics and technology uncle.

  3. John Macolm

    And I seem to recall a number of articles about a year ago talking about how private companies were scooping up AI experts left and right from academia. The perspective reported at the time was that this was leaving a significant shortage of AI educators at leading institutions, but I could believe that the bigger story is a hidden project.

  4. Acula

    Expanding the resources available to AI (a trick that worked for regular computers) tends to make AI more stupid rather than more intelligent, as with such capability it tends to memorize instead of generalizing. Hence creating complicated AI systems capable of thought seems to be well ahead of us. The future is just an illusion in physics. It is just one possible way of ordering events. And not a particularly remarkable one – outside of the way our mind operates – in that we can remember the past but cannot remember the future.

    • Mac McFisher

      Well, social media and companies like Cambridge Analytica are making that seem like a reasonably possible outcome. We’re already perfecting manipulation based on info that’s been gathered on individuals and groups for years now.
      I can’t wait to see how an incredibly smart AI would use that.

    • Jang Huan Jones

      This is interesting, though perhaps not for the reasons Musk thinks it is. In particular, it’s reasonable to worry about an international arms/technology race concerning AI while also not worrying about the popular picture of some strong AI takeover.
      For my part, I am extremely doubtful that there will be anything at all like general AI posing a threat to humanity within, say, a 100-year window (and barring some kind of paradigm shift in the most basic materials and structure underlying contemporary computers). But I am also confident that “soft” AI addressing more local problems will be profoundly “disruptive” very soon, in domains up to and including political and military strategy. This article is cool to me because I’m always finding myself trying to tamp down people’s (usually wildly uninformed) speculations about intelligent machines while also agreeing with them that this kind of technology may radically change society within our lifetimes.

    • Guang Go Jin Huan

      That’s the kind of interaction we want for our robots. We want to mimic human-human interaction.

  5. CaffD

    Cynical perspective kicking in: if Facebook says Congress could regulate AI effectively, it means Facebook believes it could get Congress to do what would be to Facebook’s benefit. Accept some regulation of Facebook itself as long as other regulations hit Google much harder.

  6. John Macolm

    Google’s role starts with recognising the need for a principled and regulated approach to applying AI, but it doesn’t end there. We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.

    • And99rew

      ML is a great prevention tool. If you can reduce the number of people coming in, you’re doing loads of good. Definitely the top tier solution.

      But if there are people who need to come in, I understand the want for a tool like this. It’s being used at my workplace currently. It allows facilities to pinpoint areas of high risk, and adjust workflows/traffic flows in the building if there are hot spots. It’s certainly SUPER dystopian and has a lot of inherent risk, but it at least has some utility.

      • Marc Stoltic

        I don’t see any power in it though. We already have real intelligence, nothing about artificial intelligence will make anything more dangerous.

      • Marc Stoltic

        Surely ASI has happened an innumerable number of times in the Universe, so where is it? Does it wink out of existence at a certain point, a sort of ascension? Accelerated intelligence would only hold true if the AGI->ASI was locked into its original programming. Free to truly reprogram itself, it might just as easily self-terminate or do any number of things we can’t predict. It might even simulate the entire evolution of the Universe just to know where it came from and what it is.

        • CaffD

          I have thought a lot about the subject and I am also in the process of writing an article.
          My view is that our Darwinian nature gives us a wrong idea about what consciousness really is. It’s the “illusion of self” that could also be the illusion of consciousness: you feel that you exist not because this is objective, but because it helps you survive and pass your genes successfully to the next generations. You actually don’t exist (as we mean existence) and everything is chemical reactions in your brain.

  7. PiotrPawlow

    The EU and the US are already starting to develop regulatory proposals. International alignment will be critical to making global standards work. To get there, we need agreement on core values. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.

  8. PiotrPawlow

    Nature published our research showing that an AI model can help doctors spot breast cancer in mammograms with greater accuracy; we are using AI to make immediate, hyperlocal forecasts of rainfall more quickly and accurately than existing models as part of a larger set of tools to fight climate change; and Lufthansa Group is working with our cloud division to test the use of AI to help reduce flight delays.

    • Jang Huan Jones

      Even assuming a utopian “robots serve us and we all live happily consuming entertainment and travel” scenario, what about population growth? All this free time, free resources and robo-nannies may encourage people to start having kids again, as why not? You spent your 20s, 30s consuming entertainment (and school was fun too, because why learn work related skills you don’t enjoy), popping out some babies is probably going to seem appealing.
      But as the world only has a fixed size, in this scenario you’d need population growth control… Either obviously through legislation, or subversively through discouragement… Sounds a bit scary to me

      • Marc Stoltic

        It’s possible and apparently quite likely according to Mr Musk. It’s anyone’s guess whether nested ASI is an inevitability. Personally I think we are in a simulation/creation of sorts. Like Mr Musk, I find it extremely unlikely we’re the original-pioneer beings.

      • Aaron Maklowsky

        Technology is everything, Christ how is it not obvious. It’s been true since we started killing people with iron all the way until now, where we kill people with lead.

  9. AndrzejP34

    Your insightful read has revealed some valuable thoughts towards my ongoing development while seeking absolute truth. Thanks for effectively sharing your ideas.

    • And99rew

      You grossly underestimate the ability of management to minimize costs.

      • Mac McFisher

        Humans create AI and it soon becomes mans best friend. Dogs won’t stand for this.