Do not be afraid of future robots lurking around the corner, waiting to get us. That only happens in the movies. Instead, brace yourself for more complex scenarios. Biotechnology, genetics and artificial intelligence will put Homo sapiens on an evolutionary path that will forever change the face of human existence. This is how one of the most popular contemporary thinkers, the Israeli historian Yuval Noah Harari, sees our future over the next century. The views expressed in his books (whose sales are soaring as they are translated into many languages) and articles (appearing in top papers and magazines, including The Guardian, Wired, and The New York Times) attract the attention of humanists and high-tech experts alike. Suffice it to say that his books have been recommended by both Mark Zuckerberg and Bill Gates.
A historian discussing algorithms
Harari has had an incredible career. A relatively unknown historian specializing in medieval studies, he suddenly found himself being invited to debates on artificial intelligence, neurotechnology and the threats of the technological age. He was propelled to fame by the publication of his 2014 book “Sapiens: A Brief History of Humankind”, translated into many languages. “Homo Deus: A Brief History of Tomorrow”, published in 2016, also attracted great interest. His latest book, a collection of essays entitled “21 Lessons for the 21st Century”, appeared a few months ago.
How to avoid slavery
It is virtually impossible to comment on every theme and issue raised by Harari. They range from biology and anthropology to history, culture, economics and the history of civilization. I will therefore limit myself to a select few, which often come up in his interviews and public presentations, and which concern the impact of technology on our lives. Harari writes not only about the ancient history of mankind, but also about the scenarios that await us a few dozen years from now. He does not shy away from bold, controversial statements, and he leaves no illusions. Over the next century, technology will profoundly transform our lives. Nobody knows exactly what those transformations will be. Harari believes, however, that what we teach our children today will rapidly become irrelevant. While a thousand years ago people could picture life 50, 100, or 150 years into the future, have a rough idea of what the next epidemics, wars and disasters might be like, and know how best to ensure their survival, today’s predictions are largely worthless.
Admittedly, Harari is not generally known for optimism. In one of his scenarios, a privileged elite uses data and information to subjugate the rest of humanity. Knowledge, data and information will be the subject of speculation and fought over for influence. It is data, rather than land or other resources, that will enable people to seize power. Humanity may split into two groups: the lucky ones who manage to jump on the bandwagon called “progress”, and the less fortunate who will have fallen by the wayside, says Harari.
How not to get hacked
According to Harari, it is humanity’s duty to prepare to live in a world in which technology constantly modifies human bodies: through genetic engineering, plastic surgery, chip implantation, and wiring people to connect directly to devices. In this world, technology will drive the emergence of a new kind of consciousness. While technology will still be harnessed to produce conventional goods, such as food, clothing and means of transport, the key focus of the most sophisticated technologies will be to modify, transform and enhance the human body.
All this will be made possible by algorithms constantly monitoring the activity of our hearts, eyes and brains. As living beings, we will undoubtedly acquire new abilities, but new threats will also loom. The hacking of computer accounts, bank accounts and e-mail became part of life at the turn of the 21st century. A few decades from now, our biggest fear may be having our bodies hacked.
How to be a happy processor
It is hard to foresee how exactly our human consciousness and mental capabilities may evolve. One can nevertheless be certain that changes are coming. Without going into specifics, Harari dispels our illusions. He urges us to forget the notion that human intelligence and the faculties of the human mind will remain unequalled. Even today, algorithms understand us better than we understand ourselves. How else could we explain their uncanny ability to suggest just the right goods and services while we browse the web? Often we ourselves could not have named the specific car model, holiday package or LinkedIn contact that we apparently need.
Remarkably, Harari’s predictions coincide with the postulates proclaimed by the proponents of Dataism. According to this concept, neither human feelings, decisions and choices nor humans themselves are the center of the universe. Instead of seeing our species as empowered subjects, as the classical approach would have it, Dataism proposes to view us as one of many data processing systems in existence, with individual humans serving as processing units. The evolution and progress of mankind depends vitally on how well we exchange and transmit information. Every step forward taken by mankind can be attributed to improvements in its ability to manage data, says Harari.
What to do now
The above claims may seem extreme, perhaps even outlandish to some. One must nevertheless admit that Harari’s argumentation is very compelling. If we agree that his visions make a valid point, the next logical question is what we can do about it. In an interview, Harari advises: take things seriously. And do not forget that the changes will become political. Politicians, scientists and corporations will end up in positions of power. Do not leave all the choices to them. We need to find our own way …
On a personal level, Harari may find this in meditation, which he practices for two hours every day, before and after work. Perhaps meditation gives him a sense of empowerment and allays his anxieties. One can only hope that his books will put you at ease rather than upset you.
. . .
Related articles:
– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies
– Learn like a machine, if not harder
– Time we talked to our machines
– Will algorithms commit war crimes?
– Machine, when will you learn to make love to me?
– Hello. Are you still a human?
TomK
He is sometimes very shallow.
Jang Huan Jones
This is interesting, though perhaps not for the reasons Musk thinks it is. In particular, it’s reasonable to worry about an international arms/technology race concerning AI while also not worrying about the popular picture of some strong AI takeover.
For my part, I am extremely doubtful that there will be anything at all like general AI posing a threat to humanity within, say, a 100-year window (barring some kind of paradigm shift in the most basic materials and structure underlying contemporary computers). But I am also confident that “soft” AI addressing more local problems will be profoundly “disruptive” very soon, in domains up to and including political and military strategy. This article is cool to me because I’m always finding myself trying to tamp down people’s (usually wildly uninformed) speculations about intelligent machines while also agreeing with them that this kind of technology may radically change society within our lifetimes.
Mac McFisher
I still think an AGI would want and need humans for a very long time to come. Soft power is the lesson it’ll learn from us, I feel. Why create bodies when you can co-opt social media and glue people to their devices for your own benefit?
Krzysztof X
The big catch here is how you train these algorithms to make sure that any bias, conscious or unconscious, is not propagated to the algorithm. I think this is where ethics comes into play, along with rules and legislation, to make sure that even if we don’t understand the details of the decision process, the outputs are fair given the inputs.
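A minimal sketch of the kind of black-box output audit this comment describes, comparing the rate of favourable decisions across groups; the data, function names and the 0.2 threshold are purely illustrative assumptions, not a reference to any real system:

from collections import defaultdict

def positive_rates(predictions, groups):
    """Fraction of favourable decisions per group (e.g. loan approvals)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparity(predictions, groups):
    """Gap between the most and least favoured group."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Audit a black-box model's outputs without inspecting its internals.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]                  # model decisions (1 = approve)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute per case
if disparity(preds, groups) > 0.2:                 # arbitrary illustrative threshold
    print("Warning: decision rates differ sharply between groups")

This checks only whether the outputs look fair given the inputs, exactly because the decision process itself may be opaque.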
John Macolm
If the AI work they are doing is high-security, high-clearance work, then they would probably need to be moved to a secure location to do it. Letting them work in a university lab would make it extremely easy for foreign nations to spy on and steal their technology.
To my knowledge, nothing like this has happened in the US, Russia or China.
CaffD
“Elon Musk, 55, expressed his concerns surrounding smart machines on Wednesday, August 28, at the World Artificial Intelligence Conference in Shanghai, China. Mr Musk’s warning came after he revealed the work of his company Neuralink in July – a company he co-founded to merge human brains with machine interfaces. Speaking at the AI conference, the SpaceX boss argued computers are already outsmarting their creators in most scenarios. More shockingly, Mr Musk claimed some researchers are making the mistake of thinking they are smarter than AI.”
Aaron Maklowsky
Developing AI is like learning to communicate with and teach an alien race.
Yes, for now you simply teach them to solve a problem or task, but once the AI learns how to do it, we don’t know how it arrived at its solution. And it only gets exponentially more difficult to understand as complexity grows.
PiotrPawlow
Possibly
Adam
As a newbie my brain exploded after the 1st paragraph.
Oscar P
Great books. He talks a lot about various religions. Communism overlaid the template of the Russian Orthodox Church (and the worst historical elements of Christianity in general) on the Russian people because it was a ready-made formula that could be exploited for tyranny: Communism had a lifelong leader who was beyond questioning, who required unending praise and devotion, who needed to protect the people from the outsiders or great satans of the world, who promised miracle crop growth without competent humanistic planning, who ordered witch hunts for the unbelievers and Inquisitional show trials to condemn and destroy his internal adversaries.
This is nothing like the ideas of Thomas Paine, John Locke, Thomas Jefferson (too many to list) and all the great Enlightenment thinkers who promoted the sovereignty of the individual, which led to the formulation of the U.S. Constitution.
And99rew
Machine learning is pretty cutting-edge; it’s not going to be outsourced for a while.
Zidan78
Communism shouldn’t be treated like a religion, but the sad truth is that over the course of its history, it certainly has been by its proponents. Even today, “dogmatic Marxists” (as they are usually called) do follow the writings of Marx, Lenin, and others, with the same zeal that theists apply to their religions. And in some cases the communist view of “the revolution” is almost messianic.
I guess what I’m saying is, it totally deserves it, even though it shouldn’t (communism is allegedly based on rational critique, scientific inquiry, etc., and therefore should be revisable, able to withstand the correction of errors, should accept a multiplicity of interpretations and views of its core beliefs… many communists do all of this, of course, but then a depressingly large number do not).
On the other hand… it seems like there should be a different way of speaking accurately about dogmatism, cult-like behavior, and ideology in a way that distinguishes between actual religions, relying on prophecy or supernatural events, and secular belief systems that act like religions when in the hands of their more dogmatic and insular followers.
AndrzejP34
Thank you for pointing out the importance of this topic
Jang Huan Jones
Similarly, AI can be very good at creating problems for civil society, but people will say, “Oh, that’s just some trick.” Think about how many people fall for the Nigerian scam. Think about how little it would take to create simple bots that escalate propaganda memes to the people who are receptive to them. You could target their friends and social connections easily. Look how much ISIS has accomplished with simple manpower. Now imagine that multiplied by a factor of a million via very simple, rudimentary AI Facebook bots.
Adam T
Cool. I love his books
John Accural
One of my takeaways from Harari is that ideologies can suffer from the same defect, just aimed at a different type of target. Ideologies can demand belief in a superhuman order without any evidence, and usually a lot more follows from that belief.
TomCat
I think the point is that even if one day sapiens rid ourselves of theistic religions, we’ll still have (and need, for purposes of a cohesive society) natural-law religions – some of which are similarly capable of motivating violence and hatred.
Norbert actually made this point years ago somewhere, in the face of the oft-cited retort to his views about the dangers of religion. It goes something like: “Well, communists were responsible for some of the largest massacres of the 20th century, but here you are worrying about religions!?” His reply was that communism is really a kind of terrestrial religion, with qualities that mirror theistic religions. To wit, his point was that the real takeaway is: we’d be better off trying to be more reasonable, and less dogmatic/ideological.
John Macolm
AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.
Aaron Maklowsky
Humans can comprehend what AI is capable of. They also program it to do tasks. This is not a boogeyman technology any more than flight or guided weaponry is. The evil computer will not kill all humans, firstly because it is told not to do that, and secondly because it will never have the capability to act without human intervention.
AdaZombie
I’m currently reading Sapiens by podcast guest Yuval Noah Harari (which might be the most clear-minded and intelligently written book I’ve read). I just came to a chapter where he claims that while it is not a theist religion, communism is still a religion.
Here’s a page from the book explaining his reasoning. What do you make of it?
https://imgur.com/gallery/jwKx8Q7
Zoeba Jones
This definition of religion boils down to “things that people think are important and that help people form a worldview”. It’s too vague to be of any use. The only real criterion one has to meet to be “religious” is giving a shit about something. In doing so, it carries the implicit premise that atheists don’t care about things, which is kinda shitty.
CaffD
I certainly hope so. We have a very long way to go, however.
AndrewJo
Awesome post! Thank you Norbert! Every time information on this subject appears, it is worth remembering the precursor, Ray Kurzweil. Thank you very much for this publication; this is where biotechnology and artificial intelligence come together.
Jang Huan Jones
We are currently in a peacetime technological boom. If America were to go to war, all of those tech giants could instantly start devoting most of their resources to weapons production, which would likely mean some very sudden advancement, similar to how U-boats, fighter jets, rockets, nukes, etc. sprang up during WW2.
That being said, I agree the risk of an AI weapon being so human-like and powerful that it could turn against all of mankind and possibly win is unlikely anytime soon, but it’s worth making sure everyone is aware well in advance.
John Macolm
Reminds me of some Japanese anime set in a dystopian future where various AI superpowers destroyed each other, because a machine can only assume that a pyrrhic victory is still a victory, even if it involves scorched earth.
Jang Huan Jones
Agreed, if Musk really believes in some AI “waking up” in the next 5 years and taking over, he’s either paranoid or misinformed. But if Musk and Putin are talking about soft AI, or rather “simple” automated weapons, trading systems etc., then sure, we’ll have to deal with that.
It’s going to do vastly more good for our daily lives than harm, though. Automated cars, optimised roads, no car/ship/plane collisions ever, DNA protection, infection management, pollution removal… most of our current serious problems can be solved by a swarm of automated machines: tiny ones for chemistry, large ones for transportation and construction.
A fully automated chaingun turret will still wait for a button to be pressed or for some “free-fire” mode to be active. Automation doesn’t mean the thing can learn to disobey.
Aaron Maklowsky
It’s on the same level of potential as the next atomic weapon. A true AI could be devastating in ways humans can’t comprehend. Do some research: AIs being tested on basic optimization routines do some really freaky shit, boggling the researchers as to how they achieved their results. They don’t think like us.
Doug
Nice. Great writer