The end of the age of humans

Big Brother has always been watching us, except that these days, he does it far more efficiently and thoroughly. And it is going to get worse. Because soon assembly lines in factories will be “manned” by emotionless robots…


Big Brother has always been watching us, except that these days, he does it far more efficiently and thoroughly. After all, he is online, meaning he can access our e-mails, bank statements, phone calls and social media posts. He can also easily structure the information he retrieves to produce reports on how valuable specific individuals are to society. And it is going to get worse. Because soon assembly lines in factories will be “manned” by emotionless robots. The chosen few who get to keep their jobs will be subject to the decisions of taciturn, metal-encased supervisors with very few humanoid features. Their speech synthesizers, cameras and natural language processing will enable them to ask us questions, monitor our behavior and keep track of our efficiency. To what end? To assess whether we are still fit for purpose.

Neither will we find relief in our smart homes. Surveillance will grow ever more permanent and pervasive, extending even to our bedrooms and bathrooms. Wall-embedded sensors will follow our every move. Our morning cough will be noticed and instantly reported to our health insurer. The pharmacist will prepare the relevant medicine ahead of our arrival. Even our beloved self-driving cars will lull us into lowering our guard. They will unwittingly transform us from drivers to passive passengers, left at the mercy of the computer under the hood. And that computer will be busy, constantly processing algorithms. Algorithms of life and death that will determine whether we have the right to live in this neatly arranged society.


I am still myself

Dear reader. I am not being paranoid, nor suffering from a nervous breakdown. My blog account has not been hacked, and these words really come from me, not from the leader of an anarchist movement. I like the Black Mirror series and don’t believe it is a documentary. To my knowledge, my life has hardly changed since yesterday. I still work at a company that employs virtually no robots, though its level of automation keeps rising. I can go for extended periods without social media, although I find it fairly difficult. On Saturdays, I switch off my cell phone, relax in a forest, and often pay with cash in restaurants and stores.

This, however, is the unusual way I have chosen to begin yet another post on artificial intelligence. It concerns the myths about AI found in the media, on the web and in all of our heads, myths that cause fears which, I dare say, are largely groundless.


MYTH 1. We will be watched constantly

It is indeed true: big data will enable us to rapidly access information on any topic. Since fast access to information is the future, only companies capable of retrieving it faster than others will survive on the market. It is possible that computers with specifications matching those of IBM’s Watson will one day populate every office and answer our EVERY question. From our viewpoint, this may not be all that desirable.

Does this mean changes to privacy protection laws? Will the situations in which we can expect to retain our anonymity become considerably fewer? Will we be required to use social media, and prevented from switching off our smartphones? Will we be FORCED to pack our homes with electronics? None of these outcomes is a foregone conclusion. Yes, technology is going to AUTOMATE the majority of social contexts and affect our decisions, but that does not make constant surveillance inevitable.

Obviously, as digital culture moves forward, we become more vulnerable and more exposed to the mechanisms that digitize our lives. That is why we should make some of our behavioral choices in a more informed manner. When cars became commonplace, we had to accept that we would need to be particularly cautious when crossing the road.


Big data, big threats?


MYTH 2. Robots will take our jobs

Some jobs are simply asking to be robotized. Does it make sense for humans to prepare hundreds of thousands of packages in a shipping company’s warehouse? Can an enterprise that saves money by using robots not create other jobs by training qualified personnel to serve customers online? Savings from employing robots and drones may finance the development of quite a few industries, or be redistributed to society to address its needs.

Robots will not be able to build relationships within companies, provide soft incentives to workers, come up with creative concepts or, at least not for a long time, draw constructive conclusions. Neither will they be able to sell creative ideas to management. Out of many industries, only a few will truly be able to benefit from the opportunities presented by robotization.


Modern technologies, old fears: will robots take our jobs?


MYTH 3. Algorithm errors will spark chaos

Granted, there have been cases of computers ascribing specific information or attributes to the wrong people. Some such errors have been racially biased. There have also been people who lost their driving licenses after being mistakenly blamed for causing an accident. One can also show that the deliberate manipulation of information can significantly affect political choices.

But then, does the transmission, structuring and use of information have to be perfect? I don’t know where the idea came from that the digital ecosphere must be free of errors and dangers.

I have already written that algorithms can sometimes be wrong. Their errors may even become more common, and that simply has to be taken into account. One must therefore rely primarily on one’s senses, thorough analysis and common sense.

Or perhaps we should decide definitively that, as humans, we must NEVER allow machines to think or act for us? As human beings, we have an existential duty to remain self-reliant.


According to our computers… you don’t exist


MYTH 4. We will become half-robot, half-human

One of our characteristics as a species is pessimism. Pessimism is useful, perhaps even necessary. Without a doubt, many great books would never have been written and many incredible movies would never have been made without it. There would be no intriguing stories about the inevitable downfall of civilization at the hands of machines. Or, in fact, at the hands of organisms that combine computers with the human brain.

However, all such theories about our minds gaining a new dimension thanks to implants under our skin are just that: theories. Will nanorobots circulating in our bloodstream “digitize” us for good, and will our brains really become permanently linked to the Internet? The matter is neither as simple nor as vivid as the excellent movie Transcendence, whose protagonist is reborn in “digital” form after his death, would suggest. Maybe this will remain in the realm of the theoretical, because we will be unable to fit some pieces of this futuristic technological puzzle into the rest of the picture.

The outstanding futurologist Ray Kurzweil, whom I have mentioned on multiple occasions, likes to describe himself as an optimist. He claims that we are entering the era of post-humanism. This, in a nutshell, carries massive implications for our ontological status and what we will become. As a species, we are ceasing to be human, while artificial intelligence may become one of the many forms of life on Earth.


Ray Kurzweil: The Coming Singularity


While respecting such reflections and scenarios, I also remain cognitively humble. I believe that we are UNABLE to predict the SPECIFIC consequences of computers coming to think faster than we do within a few years.

Besides, when in doubt, I remember that there is still someone like Elon Musk. He is one of the central figures influencing the development of our civilization, and a person who keeps a cool head. Despite the crazy ideas he pursues, he has never lost sight of the threats we may face along the way. He warns us against them while doing his thing, confident that we will use the opportunities to become better beings. As people. And all this thanks to… robots, drones, autonomous vehicles and space travel.



Related articles:

Medicine of the future – computerized health enhancement

Only God can count that fast – the world of quantum computing

Machine Learning. Computers coming of age

Synthetic biology. Matrix, Dolly the Sheep and the bacteria of the future

Blockchain – the Holy Grail of the financial system?

Fall of the hierarchy. Who really rules in your company?

Internet bubble 2.0



Comments


  1. Adam T

    Guns don’t kill people. Programmers kill people.

    • TomK

Great that there are people here with the cognitive chops to have a productive conversation on this topic.

        • Adam Spikey

          Does it come from the article that appeared on Or did McKinsey do its own study?

  2. JohnE3

    It is not enough for experts to understand the role of AI in society. We also have a professional obligation to communicate that understanding to non-experts. The people who will use and buy AI should know what its risks really are. Unfortunately, it’s easier to get famous and sell robots if you go around pretending that your robot really needs to be loved, or otherwise really is human – or super human!

  3. DDonovan

    Our weakness is that we assume technology is neutral, but it was obviously made for wars. The obvious logical conclusion of having super advanced AI is super advanced interstellar fighting. They already blew up their homes, therefore we should use their technology to defend ourselves.

  4. Check Batin

    At least make it a teensy bit more explicit that the choice is for the AI to demonstrate its humanity by saving lives. Get rid of the guns and the ticking time bomb and shit blowing up while the small ragtag band of revolutionaries fights against the AI. You go several days into the future, when almost everyone has been quickly assimilated against their will and all the problems plaguing the world have been fixed. No more war.

    • Adam Spikey


      8 billion USD to 185 billion USD in market cap in just 8 years is indeed a phenomenal success.

      Their content seems to be highly addictive. Many users resort to binge watching.

      They are creating content in many different languages as well.

      Most importantly, they are not hesitating to raise prices to extract the highest value from the market.

      At this moment, everything seems to be going great for them.

    • Check Batin

      On the flip side, I can look over the fact that the AI makes some Borg shock troops that literally do nothing more than move heavy objects around to block the road for a single car and climb a ladder. I can look past that. I can just ignore it and think about, I dunno, bashing the writer’s head in for a few minutes.

      • Adam Spikey

        Even when you consider the exponential development of computing power (and after reading all of Ray Kurzweil’s books), this particular technology seems too far-fetched, certainly to be just 25 years away. But you don’t need to go that far to worry about the collapse of political, economic and social structures over the next 20-30 years. If nanotechnology delivers some of Kurzweil’s other predictions, such as rejuvenating our cells and correcting our genetic flaws, it will make the healthcare gap between rich and poor, even if just temporarily, too wide to manage; and if you believe his vision of an artificial cortex that connects our brains directly to a cloud of unlimited knowledge, then what meritocratic structure will help us organize?
        We already live in a world where technology companies talk about connecting the world and democratizing their technology, but the extremely uneven distribution of the economic benefits they create is stressing the political system. And it’s just the beginning.

  5. Adam Spikey

    AI can’t simulate a real human at any real level of detail. Just as we can’t map the weather in any real level of detail, because of the fact that in the time it takes to measure something in great detail, it will have changed. This is due to the laws of physics. You know, that whole uncertainty principle and all… 🙂

  6. Karel Doomm2

    Right, but we don’t normally get to choose when we die. We obviously don’t want other people making that choice for us, but we are still not in control. One day our time here is up and away we go to whatever comes next. Is that immoral on the universe’s part? Or is that understood to be part of the deal?
    And why would a computer not be able to grasp this? Maybe when its primary functions have been fulfilled and its circuitry is starting to deteriorate, it will observe us coming along to switch it off and think ‘Ah, my lifecycle is complete. Adieu!’. Unless we specifically programme it to fear death (like we have been), why would it?

    • John McLean

      Except the AI will know our organic half will die, but the AI doesn’t want to die. So they will view our organics as obsolete and unnecessary. Tying AI survival to our own will only make them want to break that tie even more in the future. Sorry Mr. Musk, you are still bringing the apocalypse upon us.

    • Mac McFisher

      The head of IBM’s Almaden Research Center Jeffrey Welser, who has spent close to five decades developing artificial intelligence, offered this simple answer: “The human mind cannot crunch numbers very well, but it does other things well, like playing games, strategy, understanding riddles and natural language, and recognizing faces. So we looked at how we could get computers to do that”.

    • TomK

      That is why politicians should talk about fundamentally changing the education system, not about … taxing robots. The problem is not robotization, but that after 16 years of hard work at school, our kids will have to compete with robots, because school teaches them only what a machine can easily do after two weeks of being programmed!

    • John Accural

      Please check this example once again. It only says that we as humans have free will, unlike robots.

    • TomHarber

      I hate that when people think of ethics in artificial intelligence the focus is on sentient robots of the future. I think the more interesting and relevant questions are about how to use artificial intelligence. When is it okay to replace a human with an AI? How much should we allow AI judgement to replace human judgement? Should an AI be able to end a life?

      • Simon GEE

        Good point about the judgement of machines. What about legal regulations?

    • TommyG

      Thanks for sharing, but IMHO it’s not Korea that would pull the trigger here.

  7. JohnE3

    Nice read. There’s a really fundamental difference to the way humans perceive a smile which grows slowly on the face and seems to light it all up, against a quick almost reflexive smile which never touches the eyes. At best the latter looks perfunctory, at worst as forced, even totally fake and insincere. The problem is that it’s hard to train yourself to smile the former way, it almost has to be natural.

    • John McLean

      I was wondering this after reading an article about technological innovations in the future. Does the Bible/Christianity account for this type of question? Do you have to be married to the robot first or..?

      • Norbert Biedrzycki  

        Not now but in the future, who knows?

  8. DDonovan

    Well assuming the AI is intelligent, which it very well may be in the near future, then there is a huge difference! Not people killing people, but machines choosing to kill people. It may seem laughable to the average person but the concept has alarmed many very smart people. If we don’t take the laws of robotics seriously today, who says we ever will? It may be too late by the time we choose to follow Asimov’s laws.

  9. CaffD

    The relativistic nature of morality, the necessary subjectivity of it, does not mean you should allow others to do what they think is right if you think it’s wrong. At least not if you think it so wrong that you believe yourself warranted to infringe upon their free will: to stop them from doing what you believe to be the wrong things they are doing, or to encourage them to do what you believe to be the right things they aren’t doing. Morality is what you make it. It is your own, and nobody else can tell you that you’re wrong. But you can’t make it just anything. Your personal moral parameters must be determined by what is important to you, by what you believe makes an action right or wrong.

    • Don Fisher

      It’s no good for us to just be technologists in a vacuum independently of the social and political consequences, build technologies that we think may or may not be useful, while we throw them over the wall

  10. TomCat

    There is no need to concern ourselves with imparting a moral compass to a machine, as it will only be confused by it. This is why most texts involving strong AI result in the AI destroying man for his own good.

    Maybe this will be a better explanation: We cannot impart knowledge about ethics to an objective thing, when we have no objective understanding of it ourselves.

  11. John McLean

    Well, the designer programs what he wants the robot to think. In this video, I guess the guy’s point is that the robot must be programmed to make decisions, but that the robot’s ability to make decisions has only been programmed for its purpose. So the designer is actually 100% ethically responsible, because the robot will only make a decision based on that purpose. I doubt any computer will ever have the ability to reason the way a human brain does, so if you design a car that is meant to swerve out of the way of children, then that’s all it will think about. It won’t suddenly ponder the reason for its existence or why children are important to the world. It will simply decide what is a human and, hopefully, if programmed well, it will be able to make that decision. So IMO the designer programs the robot’s ethics, so he is responsible.

    • johnbuzz3

      I think that it is actually a good idea to implement concepts into robots that we have no objective understanding of. I believe that putting “human-like” concepts such as trust, emotions and ethics into robots will only help us to understand those notions better.
      To give an example: a popular field in the design of deliberative agents (basically, computational entities that can reason about their actions) is based on the BDI (belief-desire-intention) model. This model comes out of the philosophy of Bratman, but is by no means an established “fact”.

      So I am not saying that it will necessarily be a good thing, but I’m saying that as robots evolve, we need to investigate these possibilities and see if they can get us any further. We cannot say in advance whether it will work or not.

    • TomHarber

      About as silly as forming an internal combustion engine ethics board around the turn of the last century. Nobody was in a position to understand the implications of climate change, or the depth of our dependency on the technology, nor would their astute and well-intentioned navel-gazing have done anything at all to alter the outcome.

      • John Accural

        But climate change is real. Unfortunately