My article in BrandsIT dated July 4th, 2018.
All the existing definitions of cognitive computing share a few common features. Generally speaking, the term refers to a collection of technologies that result largely from studies on the functioning of the human brain. It describes a marriage of sorts of artificial intelligence and signal processing. Both are key to the development of machine consciousness. They embody advanced tools such as self-learning and reasoning by machines that draw their own conclusions, process natural language, produce speech, interact with humans and much more. All these are aspects of collaboration between man and machine. Briefly put, the term cognitive computing refers to a technology that mimics the way information is processed by the human brain and enhances human decision-making.
Cognitive computing emulates human thinking. It augments the devices that use it while empowering their users. Cognitive machines can actively understand natural language and respond to information extracted from conversational interactions. They can also recognize objects, including human faces. Their sophistication is unmatched by any product ever made in the history of mankind.
In essence, cognitive computing is a set of features and properties that make machines ever more intelligent and, by the same token, more people-friendly. Cognitive computing can be viewed as a technological game changer and a new, subtle way to connect people and the machines they operate. While it is neither emotional nor spiritual, the connection is certainly more than a mere relationship between subject and object.
Owing to this quality, computer assistants such as Apple’s Siri are bound to gradually become more human-like. The effort to develop such features will focus on the biggest challenge faced by computer technology developers: making machines understand humans accurately, i.e. comprehend not only the questions people ask but also the intentions behind them and the meaningful hints coming from users grappling with a given problem. In other words, machines should account for the conceptual and social context of human actions. An example? A simple question about the time of day put to a computer assistant may soon be met with a matter-of-fact response followed up by a genuine suggestion: “It is 1:30pm. How about a break and a snack? What do you say, Norbert?”
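To make the distinction concrete, here is a deliberately toy sketch (all function names are hypothetical, and real assistants use statistical models rather than keyword rules) of the difference between answering the literal question and layering a context-aware suggestion on top of it:

```python
# Toy illustration: a literal answer vs. a context-aware follow-up.
# Purely hypothetical rule-based code; real assistants use learned models.
import datetime

def literal_answer(question: str, now: datetime.time) -> str:
    """Answer only the literal question that was asked."""
    if "time" in question.lower():
        return f"It is {now.strftime('%I:%M%p').lstrip('0').lower()}."
    return "I don't understand."

def contextual_followup(now: datetime.time) -> str:
    """Add a suggestion based on context, not on the question itself."""
    # E.g. notice that the question was asked around lunchtime.
    if datetime.time(12, 0) <= now <= datetime.time(14, 0):
        return "How about a break and a snack?"
    return ""

now = datetime.time(13, 30)
reply = " ".join(filter(None, [literal_answer("What time is it?", now),
                               contextual_followup(now)]))
print(reply)  # -> It is 1:30pm. How about a break and a snack?
```

The point of the sketch is only that the follow-up draws on information (the time of day, the user’s likely state) that is nowhere in the question itself, which is exactly the contextual understanding the paragraph describes.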
I’d like to stop here for a moment and refer the reader to my previous machine learning article. In it, I said that machine learning enables computers to learn, and therefore analyze data more effectively. Machine learning adds to a computer’s overall “experience”, which it accumulates by performing tasks. For instance, IBM’s Watson, the computer I have mentioned on numerous occasions, understands natural language questions. To answer them, it searches through huge databases of various kinds, be they business, mathematical or medical. With every successive question (task), the computer hones its skills. The more data it absorbs and the more tasks it is given, the greater its analytical and cognitive abilities become.
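The idea that accumulated “experience” improves performance can be shown with a minimal sketch. This is not how Watson works; it is a stdlib-only toy (a nearest-neighbour classifier on synthetic data, with all names invented for illustration) showing accuracy rising as the training set grows:

```python
# Minimal sketch: a learner's accuracy tends to improve as it
# accumulates more training examples ("experience").
import random

def nearest_neighbor_predict(train, x):
    """Predict the label of x as the label of the closest training point."""
    closest = min(train, key=lambda pt: abs(pt[0] - x))
    return closest[1]

def accuracy(train, test):
    """Fraction of test points the learner classifies correctly."""
    correct = sum(1 for x, y in test if nearest_neighbor_predict(train, x) == y)
    return correct / len(test)

def sample(n):
    """Synthetic task: points below 0.5 are class 0, the rest class 1."""
    return [(x, int(x >= 0.5)) for x in (random.random() for _ in range(n))]

random.seed(42)
test_set = sample(200)
for n in (2, 20, 200):
    train_set = sample(n)
    print(f"{n:>3} training examples -> accuracy {accuracy(train_set, test_set):.2f}")
```

With only a couple of examples the learner often guesses; with hundreds, its mistakes shrink to the points right at the class boundary, which is the “more data, better analysis” effect described above.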
Machine learning is already a sophisticated skill, though still a basic one compared with the human brain. It allows self-improvement of sorts based on experience. However, it is not until cognitive computing enters the picture that users can truly enjoy interactions with a technology that is practically intelligent. The machine not only provides access to structured information but also autonomously writes algorithms and suggests solutions to problems. A doctor, for instance, may expect IBM’s Watson not only to sift through billions of pieces of information (Big Data) and use them to draw correct conclusions, but also to offer ideas for resolving the problem at hand.