The clash of the dark side and the bright side. What is human-friendly AI?

Is it time yet we thought of a “red safety button” in the event AI gets out of control? Or should we respond to our fear of an algorithmic armageddon more constructively by designing human-friendly AI?

Human-friendly AI Norbert Biedrzycki

The idea of a symbolic kill switch that would instantly neutralize hostile algorithms is not foreign to the average person nor to tech industry moguls and celebrity scientists. There is no point citing the repeated warnings of Elon Musk, Bill Gates, Yuval Harrari and Stephen Hawking regarding the risks associated with uncontrolled AI. Before I reflect on how such human-friendly algorithms could work, I’d like to go over the visions of civilizational and technological pessimists. What is human-friendly AI?

Cosmic indifference

Some increasingly popular concepts predict that as artificial intelligence continues to self-improve, it will stop heeding any limits. It would be against its nature to do so. Such AI will resemble an ever more complex self-replicating virus that invades successive realms of our existence, whether biological, emotional or intellectual. Algorithms that continuously improve their organization will subjugate us just as we have subjugated animals. They will gain the ability to organize, or actually disorganize, social life with no regard for our views, laws or protests. Seen this way, AI will not necessarily have to act on a rational (as we might see it) intention to subjugate people. It may cause a disaster merely by having its algorithms establish their own hierarchy of goals that completely ignores our interests. The concern is not that a cyborg will come firing its laser gun at us, meaning that one species will take over another in a hostile attack. Rather, we might be sidetracked and taken out of the game by entities that are indifferent towards homo sapiens if not unaware of its existence. We will perish as the AI pursues its intelligent plan from which we are barred today and which we will not be able to access in the future.

We will become e.g. paperclips

Such a cold-hearted extermination of the human race with no clear-cut doomsday disaster scenario brought about by completely indifferent algorithms, was the subject of a thought experiment carried out by the Danish philosopher Nick Bostrom in a book titled “Superintelligence”. Imagine, says Bostrom, that an ever more perfect and ever faster-improving AI comes to the conclusion, at some stage of its development, that its primary goal is to create the maximum number of paperclips. The AI harnesses every bit of the world around it to meet this goal. Everything, every piece of organic and inorganic matter down to the smallest bit, gets utilized with an eye to creating the greatest possible number of paperclips. It doesn’t matter what is what, who is who, or whether someone – i.e. humanity – considers the goal to be absurd and incomprehensible. Such considerations are only reserved for people. The ultimate aim of the machine is to achieve maximum efficiency in pursuing the goal that for some reason or other the algorithms has set its sights on accomplishing at this particular stage of its evolution.

The singularity is coming

As absurd as this section heading may sound, it shows just how defenseless we, as a species, may become in the “singularitya” scenario, which assumes that the collective computational power of all computers will exceed those of our brains within the next two decades. How can people respond sensibly to such disturbing and inconceivable developments? What can be done today to avert such catastrophic scenarios? Assuming that algorithmic technologies will be increasingly autonomous and will uncouple their plans from our intentions, views and plans, can we still put the proverbial “red button” in place?

Eleventh: we control you 

This would have to be a general and absolutely respected principle and one that would underpin mankind’s sense of security. This brings me to the concept of human-friendly AI. Analogously to the worst-case scenarios described above, an optimistic scenario would be based on the assumption that despite continuously increasing its capabilities, artificial intelligence does not gain autonomy. It is humans who still invariably decide how AI should develop and what its capabilities and limitations are going to be. From this point of view, the concept of “the black box” appears to be merely an expression of our fears rather than an actual state of the algorithms that might pose a real threat. A human-friendly AI would not interfere with every aspect of our world. Certain spheres of our lives such as the human consciousness, the subjective human experiences of love, friendship, etc., and people’s value systems, would all remain beyond its reach. As amazingly capable as they are, algorithms wouldn’t be able to overstep certain boundaries. Perhaps that uncrossable line would be one between intelligence and consciousness, as consciousness is something that AI will never acquire and that we will never be able to bestow upon it.

In addition, we could pass a set of laws (which would have to be adopted in any case) that would draw red lines for technology experimentation. Such self-regulation, which we’d phase in over time, would probably have to extend to human chipping, cyborgization, and personal data processing.

A big project needed

It is appropriate at this point to quote Jaan Tallinn, who has once contributed to the development of Skype and who is currently working to develop ethical, human-friendly technologies. Tallinn has told The Guardian: “The hope is that AI can be taught to discern such immutable rules. In the process, an AI would need to learn and appreciate humans’ less-than-logical side: that we often say one thing and mean another, that some of our preferences conflict with others, and that people are less reliable when drunk. We have to think a few steps ahead … Creating an AI that doesn’t share our interests would be a horrible mistake.” If such a utopian global project could be implemented, which would only be possible with the help of politicians, technologists, ethicists and the tech industry, AI would become a powerful force for the advancement of homo sapiens, bringing humanity’s development to a higher level. This would allow humans to enjoy life more, notice qualities that have so far been inaccessible in our daily existence and be less concerned about losing them.

Perhaps all this could be summarized in a single sentence whose message may not be particularly elegant but should nevertheless be shared. Human-friendly AI is simply about humans having full control over algorithms. We should strive for such control at all costs and never give it up. For all we know, we might be in early stages of a war over such control.

.    .   .

Works cited:

The University of Cambridge, Centre of the study for Existential RiskJess Whittlestone, Risks from Artificial Intelligence, Link, 2020. 

Observer, Michael Sainato, Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence, Link, 2018. 

NickBostrom.com, HOW LONG BEFORE SUPERINTELLIGENCE?, Link, 2019. 

The Guardian, Mara Hvistendahl, Can we stop AI outsmarting humanity. The spectre of superintelligent machines doing us harm is not just science fiction, technologists say – so how can we ensure AI remains ‘friendly’ to its makers? Link, 2019. 

.    .   .

Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Artificial Intelligence is a new electricity

Leave a Reply

18 comments

  1. Guang Go Jin Huan

    It does not mean robots actually feels angry, happy or any other mental states in the same way we do, they are so far just designed to display emotions.

    There are many times we say “sorry”, “please” or “thanks” faster than we actually feel grateful or regretful, most of the time, we say those words because we were taught since we were kids that they are used to show the right behaviors, right?

  2. CaffD

    Can someone explain what this is?

    Are these literally chips to give your robotic different emotions…? Or is this some novelty thing like the “force in a jar” Starwars shit.

  3. SimonMcD

    Great article Norbert! Bit scary though. I started to get used to my emotional intelligence

  4. Aaron Maklowsky

    imagine A computer virus. Now imagine it has the intelligence of 1 billion billion Einsteins. Imagine it has free will. Now imagine it having access to all of our technology. Good. Scared yet?

  5. Tom Aray

    Fantasy: THEY’RE GOING TO MAKE ARTIFICIAL INTELLIGENCE THAT CAN PERFORM SUPERHUMAN FEATS, INSTANTLY HACK EVERY ENCRYPTION EVER CREATED, TAKE CONTROL OF THE INTERNET, CONTROL OUR LIVES, ETC.

    Reality: We finally have AI that can actually accurately scan and compare human faces in a real world scenario! Yay!

  6. Zeta Tajemnica

    It reminds me of the movie “Chappie”, where [SPOILER ALERT] at the end the robot uploads the mind of the dying boy to another robot. For many the end of that movie was pretty weird, and felt like everything went a bit too fast. How could a robot that said “fuckmothers” instead of “motherfuckers” be able to perform such a difficult task?
    Imo the exponentially fast learning of the AI, with which humans cannot keep up, might be the most dangerous thing

  7. Marc Stoltic

    I don’t see what true AI would do better than what either a person can do or our current computer models. These skynet fantasies are silly simply because you could program any computer to behave in the same fashion, but being a computer doesn’t suddenly allow an entity to gain access to control everything it touches.

  8. Guang Go Jin Huan

    Once something like AI is turned on, and if intelligence and consciousness are something can be created in a non-biological way, we have no way to stop it. It can gain access to weapons or manufacturing. Or it can simply manipulate people with such efficiency we won’t even realize it is being done. It quickly becomes so smart no one understands it anymore and it can do things I can’t even imagine or make up in this reply, because I am a meat bag.
    I agree that this seems like sci fi garbage magic…but…Elon Musk and Stephen Hawking are not Alex Jones. They work with some of the smartest people on earth, probably have access to high level research the public doesn’t see, and they seem to be worryingly preoccupied with it. I personally don’t see how we get from modern supercomputers to a real AI with present tech, but that doesn’t mean a team in some high-level laboratory isn’t close to a breakthrough.
    Maybe we can design an AI slave where we put limitations on what it can do and how smart it can get and force it to work for us. However, once we have it what stops other people from designing their own with less efficient safeguards. For an AI to be useful it has to be smarter than us, otherwise what is the point?
    However, I personally hope we all become cyborgs and merge with the machines, just like the shitty Mass Effect 3 ending.

  9. Jang Huan Jones

    I have mixed opinions about this,

    mostly stemming from the fact that AI is such a broad field, that making ominous statements about ‘AI’ as a whole makes it really hard to argue against it. Like, what aspects specifically of AI are dangerous? Computer Vision? NLP? Machine learning? Data Science? RL?

    On one hand, I can see that the implications of autonomous tanks are a big deal. Weapons with ‘augmented aiming’, or something like that, which might come out of CV. On the other hand, things that Elon worries about (like AI declaring war on its own) feel so far away as to be almost a non-issue entirely.

  10. John Macolm

    It seems that AI research and development is going full steam ahead while there doesn’t seem to be much concern for the safety risk that comes with it. There’s a lot of talks about how we should first develop a safe space for the baby AI to grow in to know what’s right and wrong before we actually develop the AI itself.

    So far, it doesn’t seem like that’s happening. Elon Musk, Bill Gates, and Stephen Hawking are a few who voice this opinion. Hawking even said
    ” So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here–we’ll leave the lights on?’ Probably not–but this is more or less what is happening with AI ”

    So why is there still a general lack of concern over this from the field?

    • Zeta Tajemnica

      As somebody who worked with AI, I’m surprised that more developers don’t speak out about AI misinformation. AI is nothing what people make it out to be. It doesn’t have self-awareness, nor can it outgrow a human. Up until this day there has never been a program demonstrated that can grow & develop on its own. AI is simply a pattern, or a set of human made instructions that tell the computer how to gather & parse data.
      In the example above, here’s what’s actually happening. GPT-3 (OpenAI) works very similar to a Google search engine. It takes a phrase from one person, performs a search on billions of website articles and books to find a matching dialog, then adjusts everything to make it fit grammatically. So in reality this is just like performing a search on a search, on a search, on a search, and so on…. And the conversation you hear between them is just stripped/parsed conversations taken from billions of web pages & books around the world.

  11. Mac McFisher

    I still think an AGI would want and need humans for a very long time to come. Soft power is the lesson it’ll learn from us, I feel. Why create bodies when you can co-opt social media and glue people to their devices for your own benefit?
    From an AGI perspective, maybe we just keep slipping out of our harnesses.