My article in BrandsIT dated July 4th, 2018.
All the existing definitions of cognitive computing share a few common features. Generally speaking, the term refers to a collection of technologies that grew largely out of research into the workings of the human brain. It describes a marriage of sorts between artificial intelligence and signal processing, both of which are key to the development of machine consciousness. These technologies embody advanced capabilities such as self-learning, machine reasoning that draws its own conclusions, natural language processing, speech production, human interaction and much more. All of these are aspects of collaboration between man and machine. Briefly put, cognitive computing refers to technology that mimics the way the human brain processes information and enhances human decision-making.
Cognitive computing emulates human thinking. It augments the devices that use it while empowering the users themselves. Cognitive machines can understand natural language and respond to information extracted from natural language interactions. They can also recognize objects, including human faces. Their sophistication is unmatched by any product ever made in the history of mankind.
In essence, cognitive computing is a set of features and properties that make machines ever more intelligent and, by the same token, more people-friendly. Cognitive computing can be viewed as a technological game changer and a new, subtle way to connect people and the machines they operate. While it is neither emotional nor spiritual, the connection is certainly more than a mere relationship between subject and object.
Owing to this quality, computer assistants such as Siri (from Apple) are bound to become gradually more human-like. The effort to develop such features will focus on the biggest challenge facing technology developers: making machines understand humans accurately, i.e. comprehend not only the questions people ask but also their underlying intentions and the meaningful hints coming from users dealing with given problems. In other words, machines should account for the conceptual and social context of human actions. An example? A simple question about the time of day put to a computer assistant may soon be met with a matter-of-fact response followed by a genuine suggestion: “It is 1:30pm. How about a break and a snack? What do you say, Norbert?”
I’d like to stop here for a moment and refer the reader to my previous machine learning article. In it, I said that machine learning enables computers to learn, and therefore to analyze data more effectively. Machine learning adds to a computer’s overall “experience”, which it accumulates by performing tasks. For instance, IBM’s Watson, the computer I have mentioned on numerous occasions, understands questions posed in natural language. To answer them, it searches through huge databases of various kinds, be they business, mathematical or medical. With every successive question (task), the computer hones its skills. The more data it absorbs and the more tasks it is given, the greater its analytical and cognitive abilities become.
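For readers who like to see this “more data, better skills” effect run, here is a minimal sketch in Python. It is my own illustration using scikit-learn, nothing to do with Watson’s actual machinery, and the dataset and model are arbitrary stand-ins: a simple classifier is trained on progressively larger slices of data, and its accuracy on unseen examples climbs as its “experience” grows.

```python
# A toy demonstration that a model's "experience" grows with the data it absorbs.
# Assumes Python with scikit-learn installed; dataset and model are illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # small handwritten-digit dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train on progressively larger slices and watch accuracy on unseen data climb.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])  # "experience" = n worked examples
    print(f"{n:5d} training examples -> test accuracy "
          f"{model.score(X_test, y_test):.2f}")
```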
Machine learning is already an impressive, albeit still rudimentary, machine skill with parallels to the human brain: it allows self-improvement of sorts based on experience. However, it is not until cognitive computing enters the picture that users can truly enjoy interactions with a technology that is practically intelligent. The machine not only provides access to structured information but also autonomously writes algorithms and suggests solutions to problems. A doctor, for instance, may expect IBM’s Watson not only to sift through billions of pieces of information (Big Data) and draw correct conclusions from them, but also to offer ideas for resolving the problem at hand.
Link to the full article (in Polish)
Related articles:
– Machine, when will you become closer to me?
– A machine will not hug you … but it may listen and offer advice
– Can machines tell right from wrong?
– What will a machine think when it looks us in the eye?
– The end of the world we know: welcome to the digital reality
– Work of the future – reinventing work
Guang Go Jin Huan
Silicon Valley is miles ahead in AI tech compared to Russia and China. Putin is making those statements to weaken the US position in the field (“look at America, they’re building dangerous stuff”).
Adam
Very interesting topic that needs attention and discussion. As you mentioned, the focus should be on the way humans and AI complement each other and work to mitigate the impending challenges that AI brings. Indeed, humanity needs to be involved at every step of AI invention in order to achieve this. 👍
tom lee
Very good read, Norbert
CaffD
One question, Norbert. Why do you think the universe is random? The number of factors explaining why life as we know it is possible on Earth, and why we can’t observe it in the known parts of the universe, is extreme.
John Accural
I have a real challenge with the label of “Artificial Intelligence”. Artificial intelligence is not truly intelligence until it becomes self-aware (sentient). Until that time comes, saying “AI” is, in my opinion, more of a marketing term than anything else. At this time, I prefer to define it as a set of technologies able to aggregate data and present it in a way that facilitates the process of decision making. To allow fully autonomous weapons to act on decisions made from weighted statistical data analysis and normalization is akin to playing a “smarter” version of Russian roulette. Even if we ever reach a level where machines are self-aware, they should always remain nothing more than tools aiding the decision making of humans – in this case, the people tasked with “pressing the button” to either launch or defer the launch of a weapon. Some of the greatest battles and resistance wars – from the Battle of Thermopylae to the partisan resistance in various countries occupied by the Nazis in WWII – had no “statistical” business of being fought, yet people fought them and defeated enemies with far superior resources. Sometimes the best decisions are made on “gut feel”, not computational analysis.
Guang Go Jin Huan
Here’s a little reality check for you…
https://en.wikipedia.org/wiki/ACM_International_Collegiate_Programming_Contest
12 wins for Russia… six consecutive wins from 2012 to 2017. The last time the US won was in 1997. Let that sink in. Silicon Valley is thriving because of global talent. The Russian government can easily recruit its top programmers and computer scientists by throwing money at them.
Tom Jonezz
AI has surged massively in recent (<5) years because computing power reached a threshold where machines are able to learn on their own, i.e. machine learning.
Machine learning essentially means that instead of a programmer telling the computer exactly what to do, we feed the computer a ton of data and let it figure things out on its own via trial and error (hence the increased computing power required). This is how YouTube is able to make accurate video suggestions, or how Facebook suggests ads specific to you. This is why all the tech companies want your data. What do you think you’re doing when filling out those “prove that you’re not a robot” questionnaires? That’s right: teaching a robot to recognize those images.
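To make that concrete, here is a toy sketch in Python (my own illustration with NumPy; not YouTube’s or Facebook’s actual system) of learning suggestions from viewing data instead of hand-coded rules: recommend whatever the most similar users have watched.

```python
# Toy collaborative filtering: suggestions emerge from data, not hand-written rules.
# Assumes Python with NumPy; the matrix stands in for real viewing logs.
import numpy as np

# Rows = users, columns = videos; 1 means the user watched the video.
watched = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
], dtype=float)

def recommend(user, k=2):
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(watched, axis=1) * np.linalg.norm(watched[user])
    sims = watched @ watched[user] / np.where(norms == 0, 1, norms)
    sims[user] = -1                           # don't compare a user to themselves
    neighbors = np.argsort(sims)[-k:]         # the k most similar users
    scores = watched[neighbors].sum(axis=0)   # what those users watched
    scores[watched[user] > 0] = 0             # skip already-watched videos
    return np.argsort(scores)[::-1]           # best suggestions first

print("Suggested videos for user 0:", recommend(0)[:2])
```

The more viewing data the matrix accumulates, the better the neighbors match, which is exactly why the data itself is the prize.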
This is where China’s advantage comes in. Huge population + lack of privacy laws = machine learning paradise. Many Chinese companies are already kicking Western companies’ asses when it comes to AI. Only the biggest privacy offenders like Google, Facebook, and Amazon are putting up a fight. You can even see that Apple is falling behind with Siri compared to its competitors.
Jacek Krasko
According to Gartner:
“China and the US will be neck-and-neck for dominance of the global market by 2025, with China accounting for 21% of global AI power, ahead of the US at 20%. However, the US wins in terms of AI revenue (22% vs 19%). The third largest market is predicted to be Japan with 7%.”
As regards the industrial AI market:
A new report from GSMA Intelligence forecasts that China is poised to lead the global industrial AI market and could account for as many as 4.1bn of the 13.8bn global connections estimated to exist by 2025.
Tom Jonezz
Likewise, it’s no surprise that armies around the world are eager to lead the way into the new frontier of transhumanism; generals and war leaders have always sought any means to give their army the upper hand over an opponent.
The US Defense Advanced Research Projects Agency (DARPA) has come right out and said that humans “[were] the weakest link in Defense systems.” Some examples of DARPA’s research into transhumanist technologies include allowing humans to convert plant matter to glucose, threat detection through optical implants, and even a way for humans to cling to the surface of a flat wall the way lizards do.
Norbert Biedrzycki
Actual artificial intelligence. People don’t really grasp how dangerous it is. Ever play a game against someone using an aimbot and notice how they aren’t 10, 50 or even 100% better than the best players in the lobby, but better by a factor of 10?
Guang Go Jin Huan
Not sure about this, but I got the impression that he, as a person with real power today, seizes the opportunity to shift the focus onto a highly speculative technology as the defining power of the future, to distract from his moves with real implications today. As in: “I’m just a humble human compared to this technology; it has far more influence than me. He who brings it to this (dystopian/distant) level rules, not me. I wouldn’t, so let’s open source it.”
I’m convinced that ML combined with big data has some insane transformational power, but for the people in power right now its consequences might bring more problems than opportunities. That’s why the doomsday scenario seems so tempting: it is hard to refute and, if it materializes at all, distant enough that it probably won’t have any consequences for them.
I would be careful with anyone in power right now forming strong opinions in favor of the doomsday scenario if they appear to profit from it (attention, distraction, funding, etc.).
Check Batin
It will be fascinating to see whether the scaling problems can be overcome – MIT described it as potentially an inverse Moore’s law.
Oscar2
Good!
The Midas touch of the digital age.
It is worth predicting an increase in demand by ………. for strong computing units. :-)