According to many scientists, human consciousness not only cannot be replicated but also evades definition. The weakness of this view lies in the presumption that only one type of consciousness is possible: the kind that resembles ours. And yet it is conceivable that, at successive stages of its development, AI may acquire new, hitherto unknown modes of self-reflection.
The history of research on human consciousness goes back decades, or indeed centuries if purely philosophical explorations are included. The consensus among today's psychologists, cognitive scientists and neurobiologists is that we are still struggling to comprehend the exact nature and origin of consciousness. Still unanswered is the question of whether consciousness is a physical product of the brain or something largely independent of its physical substrate. Whichever view we take, it is evident that our difficulty in defining the concept hinders progress toward creating an artificial equivalent of human consciousness. Since we do not understand the mechanisms behind awareness, we cannot write code that would make a machine realize the consequences of its actions, or make it aware of its own existence and its own separateness.
Machines refusing to help humans defeat a virus
According to one definition, consciousness is the ability to achieve goals by placing oneself in a model of the environment and simulating possible future scenarios of how that model could change. To illustrate this, imagine the following: an AI equipped with a powerful computer is told to discover a cure for a new virus. Its job is to identify the virus and propose a cure on the basis of large volumes of data. The machine appears to fail in its mission: it reports back to the scientists that current knowledge is insufficient to develop an effective vaccine against the virus. Years later, however, on examining the computer's disks, the scientists find computations that would have allowed them to produce a cure. Why did the machine say it could not find a remedy, and why did it choose to conceal its findings? According to one hypothesis, the computer examined all the data available to it, including data on the threat of overpopulation in certain parts of the world, and concluded, to the researchers' surprise, that it was best to leave humanity to its own devices, because the key problem was not the virus itself but the consequences of overpopulation. The machine simulated the scenario of sharing its calculations with humans and chose what its consciousness, inaccessible to people, considered the optimal course of action.
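Read literally, that definition describes what machine-learning researchers call model-based planning. The sketch below is a minimal toy illustration of it, not a real system: every name in it (the world model, the utility function, the two actions) is an invented assumption. The agent "places itself in a model of the environment" and picks the action whose simulated future scores best under its own utility.

```python
# Minimal sketch of planning by simulating futures in a world model.
# Everything here is illustrative: the model, actions and utility are toys.
from typing import Callable, Dict, List

def plan(state: Dict, actions: List[str],
         world_model: Callable[[Dict, str], Dict],
         score: Callable[[Dict], float],
         depth: int = 3) -> str:
    """Return the action whose simulated future scores best."""
    def rollout(s: Dict, d: int) -> float:
        if d == 0:
            return score(s)
        # Simulate every possible continuation and keep the best outcome.
        return max(rollout(world_model(s, a), d - 1) for a in actions)
    return max(actions, key=lambda a: rollout(world_model(state, a), depth - 1))

# Toy usage: a world where the infection shrinks only if the cure is shared.
model = lambda s, a: {"infected": s["infected"] * (0.5 if a == "share_cure" else 1.2)}
utility = lambda s: -s["infected"]
print(plan({"infected": 1000.0}, ["withhold", "share_cure"], model, utility))
# -> 'share_cure' under this utility; the article's point is that a machine's
#    own utility, hidden from us, might rank the options differently.
```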
Programmed, limited beings?
Are computers, smartphones or voice assistants ever likely to become self-reflecting entities capable of foreseeing the outcomes of their own analyses? The skeptics are clear: computers may have the ability to recognize faces, translate between languages, help robots clear hurdles, and recognize voices, patterns and colors. What they will never do, though, is realize they are doing any of these things. They will always merely react, which leaves them dependent on the humans who control the streams of data that the computers (algorithms) are given to process. The world's best-known robot, Sophia, cannot answer questions on its own. It needs to be programmed and can only respond to a limited number of queries; the number of possible combinations of meanings in the statements Sophia produces is just as limited. Smart voice assistants, in turn, may be growing more powerful by the day, but they are still unable to grasp human irony or the more complex contexts of human messages. Thus, all of them are merely devices that slavishly follow their programs. They may surpass us in performing complex data computations, but they are unlikely any time soon to ponder such fundamental questions as "Who am I?" or "Why do I feel bad?" But then, how certain can we really be that this will never happen?
What happens inside a black box?
Two years ago, researchers training chatbots found that at some point in their development, the man-made algorithms began to communicate in a code of their own that was completely incomprehensible to humans. What did they talk about? Was their ability to engage in such communication not a sign of nascent consciousness? As illogical or paradoxical as this may sound, I think we cannot entirely rule out machine self-awareness, because we are unable to understand and clearly interpret many of these machines' actions. In theory, the fact that AI-enabled devices act on human decisions rather than on any will of their own suggests that we can control their behavior. And yet numerous examples undermine that presumption by revealing that humans are increasingly ignorant of what makes AIs tick. In the realm of social media, AI can detect scores of characteristics shared by people and use them to target specific products or content at groups of like individuals. Analysts admit that people would never be able to pick up on some of the similarities between network users that algorithms manage to detect. Much of the time, we have no idea how neural networks trained on huge datasets arrive at their answers, or why they assign some people but not others to specific categories. Clueless about why or how something works, we cannot be certain where the limits of algorithmic comprehension and decision-making really lie.
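The social-media example can be made concrete with a short, purely illustrative sketch: an unsupervised algorithm groups synthetic "users" by raw behavioral features without any human-provided labels, so the resulting segments may track similarities no analyst ever named. The data and segment counts below are invented assumptions.

```python
# Illustrative only: synthetic users described by unnamed behavioral features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
users = rng.normal(size=(300, 20))   # 300 users x 20 anonymous features

# No labels are given; the algorithm invents the grouping on its own.
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(users)
print(segments[:10])  # segment IDs with no human-readable meaning attached
```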
Where is it born?
Of all the related questions, I find the one concerning the very first moment when a consciousness is "born" to be among the most fascinating aspects of the whole artificial consciousness debate. So instead of endlessly speculating on whether such consciousness is possible at all, I would rather ask how it could manifest itself to us. Is there a way we could measure and perceive it? A number of computer experiments have already been conducted to show that machines have in fact achieved a certain level of understanding of their own behavior. Scientists at Meiji University in Japan built two robots: one performed certain operations while the other observed and repeated them. The latter's ability to reproduce the observed behaviors of the former can be regarded as drawing conclusions, assessing a set of circumstances and making decisions. One may be tempted to conclude that the entire sequence of actions carried out by the mimicking robot closely resembles that of a conscious human being.
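In machine-learning terms, this observe-and-repeat setup resembles behavioral cloning. The sketch below is a toy under invented data, not a reconstruction of the Meiji experiment: the "observer" fits a model from observed states to the demonstrator's actions and then reproduces the behavior in new situations.

```python
# Toy behavioral cloning: a demonstrator follows a simple hidden rule,
# and an observer learns to imitate it from (state, action) examples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(200, 4))   # observed demonstrator states
actions = (states[:, 0] > 0).astype(int)     # the rule the demonstrator follows

observer = KNeighborsClassifier(n_neighbors=3).fit(states, actions)
new_state = rng.uniform(-1, 1, size=(1, 4))
print(observer.predict(new_state))  # the observer reproduces the demonstrated rule
```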
Algorithms can’t be awed
Still, a key element is missing: the computer that learned the behaviors of its mate did not do so because it felt doing so would benefit it in some way. It did not replicate the other machine's movements because doing so pleased it. Its behavior never approached the sophistication of a person who stands in a street, looks up at the sky, ponders its beauty, registers the positive emotions that come with that observation, and feels the desire to repeat the experience. All this suggests another postulate: however difficult it is to pin down, consciousness has several levels. One of the most basic allows one to make an observation, indirectly communicate it to the world, and even take further actions that bring one closer to a certain goal. A machine can recognize the color red, classify a group of objects as red, and set out to find further items of this color. But in doing all this, it will never be able to assess the activity as good or associate it with future benefits. There exist algorithms capable of writing stories that humans appreciate for their aesthetic and literary value. It is nevertheless very hard to imagine a machine that would feel better for having read the story, experience a joy similar to ours, share that joy with people, and perhaps even develop new operational abilities inspired by such emotions.
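That basic level can be made literal in a few lines. In this sketch the items and the color threshold are invented for illustration; the program classifies objects as red and keeps looking for more, yet nothing in it values or enjoys the result.

```python
# Classification without valuation: find red things, feel nothing about it.
items = [("apple", (200, 30, 30)), ("sky", (100, 150, 255)), ("rose", (220, 40, 60))]

def is_red(rgb):
    r, g, b = rgb
    return r > 150 and g < 100 and b < 100  # crude heuristic, no aesthetics

red_items = [name for name, rgb in items if is_red(rgb)]
print(red_items)  # ['apple', 'rose']
```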
All in the hands of cyborgs
That technological progress is non-linear and exponential is no longer seriously disputed. If that is the case, there is no reason to rule out further qualitative leaps in AI. Advances in AI are not only about making ever smaller and faster data-processing devices; much more fundamental, barely imaginable changes loom ahead. The so-called singularity will have implications far beyond what we can envisage today. It is still difficult to say with certainty whether we will succeed in our desire to hook the human brain up to a computer. If we ever do, however, we will find ourselves at another level of the debate about artificial consciousness. Without a doubt, linking neurons and processors would constitute a major step toward new forms of existence, and it may be the answer to the question of whether artificial consciousness can be created. A chip implanted in the human brain would improve our analytical and cognitive skills; having one may well feel wonderful and greatly improve our lives. Once such developments come to pass, the line between human consciousness, which we still consider natural, and machine consciousness will become a whole lot more blurred. We will be dealing with self-aware cyborgs that can not only match IBM Watson's computational speed but also take pride in their performance.
. . .
Works cited:
Teaching and Learning Resources, "Cognitivism: What is Cognitivism," Link, 2021.
Michelle Hennessy, "Makers of Sophia the robot plan mass rollout amid pandemic," Reuters, Link, 2021.
HASHIMOTO Kenji, "Humanoid robots can save mankind!!," Meiji.net, Link, 2020.
. . .
Related articles:
– Algorithms born of our prejudices
– How to regulate artificial intelligence?
– Artificial Intelligence is an efficient banker
– Will algorithms commit war crimes?
Piotr91AA
I know, sorry, I can see why it would look like that! I think I just got into a 1am sleep-deprived rant and kept thinking of things I wanted to say… I still agree with the points, but looking back this morning I definitely could have been A LOT more succinct!
Zeta Tajemnica
The problem is that we're assigning gender roles to pieces of code; it could just as easily have been the other way around in the conversation. It would be simpler to make them both genderless so we wouldn't have to "read between the lines" of this awe-inspiring feat of technology. But even if we tried to make them genderless, we humans are stubborn and stupid enough to still find some sort of sexism. I'm not saying it doesn't exist; it's still a big HUMAN problem affecting millions of lives, but often our focus is wasted on the consequences rather than the causes.
Karel Doomm2
They're actually a lot smarter than the AIs in similar videos I've seen in the past. They sound more natural when they talk, and they don't take "sudden turns" (except for that sex part). Generally, they stay on topic. AI bots never used to do that.
Check Batin
That was legit freaky, but it's true: we continue to evolve our technology, and compared to us they're immortal. Many humans will die in the creation of something truly human that will live on to tell our stories and be a mirror into the past.
AdaZombie
Innovation that matters
CaffD
Great article Norbert!
I have thought a lot about the subject and I am also in the process of writing an article.
My view is that our Darwinian nature gives us a wrong idea about what consciousness really is. The "illusion of self" could also be the illusion of consciousness: you feel that you exist not because this is objective, but because it helps you survive and pass your genes on to the next generations. You don't actually exist (in the sense we usually mean by existence); everything is chemical reactions in your brain.
Mac McFisher
GPT-3 has an incredibly good model of the English language and would certainly pass the Turing test, but the question still remains as to whether it truly understands what it is saying.
The answer to that question is most likely no. GPT-3 has derived a model of English by fitting 175 billion parameters via deep machine learning. That is, it has recognized and internalized many, many linguistic patterns and connections that allow it to imitate an ordinary English speaker while having no understanding of what it is actually saying.
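To make that concrete, here's a toy bigram model (made-up corpus, nothing like GPT-3's scale or architecture) that predicts the next word purely from observed frequencies: fluent-looking imitation with zero comprehension.

```python
# Pattern imitation without understanding: predict the next word from counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word(word):
    # Most frequent continuation observed after `word` in the training text.
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat': a fluent continuation, no meaning behind it
```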
In short, while this is kinda spooky, I don’t think there’s anything really to be too worried about.
Adam Spark Two
Great read. Looks like we're not going to be alone for much longer.