We test our confidence in artificial intelligence and learn a lesson in ethics

Proponents of autonomous vehicles often cite repeatedly replicated research showing that in 90 to 95 percent of accidents it is the driver, not the technology, that is at fault. Interestingly, modern technology, for all the disputes and controversies it raises, makes us reflect on basic values such as good and evil.


My article published in FORBES on September 12th, 2018.

 

The autonomous vehicles debate has shown us just how many challenges lie ahead in our relationship with intelligent technology. As we test autonomous vehicles, we also test our confidence in them and (hopefully) learn lessons in ethics.

Advances in algorithms and robotics commonly evoke extreme, even irrational reactions. Our fascination with technology often goes hand in hand with an almost childlike fear. As the British futurist and science fiction writer Arthur C. Clarke once wrote, “Any sufficiently advanced technology is indistinguishable from magic.”

In an unsettled world, emotional anxiety about the present, let alone the future, is perfectly understandable. This anxiety is not all bad; it forces us to focus on our fundamental beliefs and values: what is good, and what is evil?

 

Safer than people

Autonomous cars are a vivid example of how technological advances polarize debate. Personally, I see self-driving vehicles as an opportunity to improve road safety. I am not in the least surprised by the upbeat predictions that 15 years from now it will feel odd to think back to a world in which cars were still driven by humans.

However, to make sure changes move in the right direction, we need universal standards to govern the new technology. By investing heavily in autonomous vehicles, their manufacturers make a commitment, as it were, to society. Their promise is that widespread acceptance of driverless cars on public roads will prevent tens of thousands of traffic casualties. (In 2017, over 40,000 people died in motor vehicle accidents in the United States.)

The proponents of autonomous vehicles cite studies showing that 90 to 95 percent of crashes involve human error.

 

And yet…

Last March, an autonomous vehicle killed a woman crossing a road in Tempe, Arizona. Neither the car’s safety systems nor the person seated behind the wheel managed to prevent the fatality. This inspired some to call the accident a momentous event: the first human victim of artificial intelligence. Soon, people were speaking out against allowing profit-driven producers of imperfect technologies to use public roads to test their inventions. Skeptics and critics expressed doubt over whether people would ever embrace autonomous vehicles. Early this year, a Reuters poll found that two-thirds of Americans were “wary” of driverless vehicles, and that was before the Tempe accident.

The accident revealed some troubling problems, among them overworked test drivers and a decision to halve the number of test drivers per vehicle. (Initially, two drivers, not one, would always oversee a vehicle during a test drive.) By choosing to increase driver comfort (and reduce frequent braking), testing companies compromised the vehicles’ responsiveness to emerging obstacles. Journalists uncovered internal reports revealing the inability of test vehicles to drive more than a mile without human intervention. Video from the Tempe collision showed the driver busying himself with his smartphone and failing to watch the road.

One could call this human error, but the error runs deeper, and it’s even more human.

 

Link to the full article (in Polish)

 

Related articles:

– The “sharing economy” was envisioned nearly 100 years ago

– Who will gain and who will lose in the digital revolution?

– When will we cease to be biological people

– Artificial intelligence is the new electricity

– Robots awaiting judges

– Only God can count that fast – the world of quantum computing

– Machine Learning. Computers coming of age

 


25 comments

  1. Tom Jonezz

    A self-learning neural network can’t really be reviewed by humans.

  2. Zidan78

    I would like to add a correction. AI has been a thing for a while, but around 2010 an innovation called deep learning made a huge breakthrough. It is based on neural networks and the ability to recognize patterns and “meaning” by itself, without programmers giving it a goal. The innovation is a way of thinking that branched off into a few algorithms that accelerated the process. So yes, big data helps, but deep learning is the game changer.

    • Norbert Biedrzycki  

      In 2019, AI will expand to cover new dimensions such as media, healthcare, retail, manufacturing, communication, research – in fact, almost every area of modern life has the potential to be influenced by AI applications.

    • Tom Jonezz

      Because the notion of the algorithm, as constructed, is to alleviate the burden upon AI of expediting ‘more’ or somehow ‘better’ justice upon the people(s), whatever the concern in question to be judged, the second variable is not an ‘absolute value’, because people are always hurting out of hurt and criminalized by crimes, so the second variable is essentially negative.

  3. John Accural

    I prefer to define it as a set of technologies able to aggregate data and present it in a way that facilitates decision making. Allowing fully autonomous weapons to act on decisions made from weighted statistical data analysis and normalization is akin to playing a “smarter” version of Russian roulette. Even if we ever reach a level where machines are self-aware, they should always be nothing more than tools aiding the decision making of humans – in this case, the people tasked with “pressing the button” to either launch or defer the launch of a weapon. Some of the greatest battles and resistance wars – from the Battle of Thermopylae to the partisan resistance in various countries occupied by the Nazis in WWII – had no “statistical” business being fought, yet people fought them and defeated enemies with far superior resources. Sometimes the best decisions are made on “gut feel”, not computational analysis.

    • Acula

      Change is coming... inevitably. Or rather... change is inevitable. Agree?

  4. Tom299

    I have bad news for you. Robots are, or will be, better at most manual work where the only assets needed are the human body, human senses (eyes, ears), and just a little intelligence to make quite simple decisions. It is a very natural thing to let a task be done by the one who does it (much) better.

    • Norbert Biedrzycki  

      If Europe develops and diffuses AI according to its current assets and digital position, it could add some €2.7 trillion, or 20 percent, to its combined economic output, resulting in 1.4 percent compound annual growth through 2030.

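      A quick back-of-the-envelope check of that figure (a minimal Python sketch; the 2017 baseline year is an assumption, since no starting year is stated in the comment):

        # A 20 percent uplift to combined output by 2030 corresponds to
        # roughly 1.4 percent compound annual growth, matching the quote.
        uplift = 0.20
        years = 2030 - 2017  # assumed baseline year: 2017
        cagr = (1 + uplift) ** (1 / years) - 1
        print(f"{cagr:.2%}")  # prints ~1.41%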

    • TomCat

      I agree somewhat, but the common denominators here are consumer convenience, accessibility, and the concept of disintermediation, all of which were mobilized through, and facilitated by, new technology and, in most cases, new digital platforms.

    • SimonMcD

      Absolutely true, Norbert Biedrzycki. In almost all the cases listed above, and many others, the companies or industries that tried to avoid change, keep the status quo, and cement their market position or monopoly made room for disruption. Even here in Africa. The hospitality and tourism industry is overdue for disruption, but most non-Africans don’t even see the opportunity, and the existing businesses are happy to keep no-longer-sustainable business models in place, to the absolute disadvantage of the destination, its businesses, and its people. It is high time to change how African destinations do business and regain control over their visibility and distribution. It is a no-go to keep giving away 60-80% of the profit to overseas middlemen who contribute nothing to the destination.

      • Norbert Biedrzycki  

        Picture – let me check the missing link. Apologies 🙂

  5. Tom Jonezz

    This is possible because of something known as neuroplasticity: the capacity of the neurons in our brain to make new connections and reconfigure their network in response to new stimuli, information, trauma, or dysfunction.

    Examples include learning new skills; remembering information, people, or events; making complex movements with our bodies without consciously thinking about them; and taking the cacophony of stimuli around us and making sense of it all. It’s how we go through life with part of our vision obstructed by our nose, though we simply don’t notice it.

    • Norbert Biedrzycki  

      Artificial intelligence can be really great for analyzing patterns humans wouldn’t be able to notice or detect and being able to solve problems humans can’t come close to resolving, but AI also has the distinct problem of being able to wipe everyone out mercilessly. Humanity is on the razor edge of self-annihilation because an AI that is determined to kill humans will make the atomic age look like the stone age.

      • Tom Jonezz

        Unfortunately, I doubt that it matters what any of us want. The day is coming, and fast, when the degree of computer automation in what we might call the ground traffic control system will rival or exceed the degree of automation in the air traffic control system. Some will say that the day is coming much too fast and in too many spheres. Computers are already in almost complete control of the stock market. They’re gradually taking over medical diagnosis. Some even want to turn sentencing decisions over to them. Perhaps things are getting out of control.

  6. Adam Spikey

    There is one key test that matters for defining AI: the Turing Test, which is itself somewhat arbitrary because it takes the average human as the reference frame. Taking average human intelligence as a workable reference for AI, we can only discuss whether artificial intelligence (assumed non-biological, thus artificial, using whatever means possible, usually machines, to learn) is sub-Turing (humans win consistently), Turing (indistinguishable), or (potentially) super-Turing (outperforms humans consistently). At least that suggests something with a reference and is verifiable. Terms like AGI or strong/weak AI only add to the semantic confusion already out there, being essentially without a reference frame and thus meaningless. There are many areas where machine learning methods can increasingly be deemed Turing-level AI. Regards.

  7. Karel Doomm2

    I think it is fascinating that we can push the boundaries of what science allows scientists to build, without any regard for what this means for humanity and the real-life consequences. The danger of the recipe at hand is the following: 1) scientists and technologists have been given free rein to define the future of humanity; 2) companies that do not want to be regulated write papers to influence politicians who are unaware they are steering policy. I think if you put some of these scientists, technologists, and politicians in a room with Mohammed, Jesus, Moses, Buddha, and other religious leaders, they might leave the room thinking differently. These companies can stand outside their corporate facilities smiling and showing what great things they are doing for humanity, while we should have a governance committee, backed by international laws, determine whether that is the case.

    • Jacek Krasko

      According to Gartner:
      “China and the US will be neck-and-neck for dominance of the global market by 2025. China will account for 21% of global AI power, ahead of the US on 20%. However, the US wins in terms of AI revenue (22% vs 19%). The third largest market is predicted to be Japan with 7%.”

      As regards the size of the industrial AI market:
      A new report from GSMA Intelligence forecasts that China is poised to lead the global industrial AI market and could account for as much as 4.1bn of the 13.8bn in global AI revenue estimated for 2025.

    • Tom Jonezz

      While the algorithm may not see bias in the way people see bias, the dataset is inherently flawed: all people develop at slightly different rates, and their experiences, even those of equal social force or value, leave varying impressions that may cause differing effects. Any algorithmic mechanism applied will observe and assign a simple value to each circumstance, without the understanding to know which evolutionary processes have or have not been impressed upon it.

  8. TonyHor

    Brainclouds will augment current cloud environments. Why can’t a person’s innovation be tapped into while they sleep? A machine-to-human interface could explore untapped opportunities.

    • TomHarber

      I think some of that data has already been harvested for the algorithms used in building AI. I do believe we should not give away all our privacy and, with it, the face of what makes us human.