Cognitive computing – a skill-set widely considered to be the most vital manifestation of artificial intelligence

My article on cognitive computing – a skill-set widely considered to be the most vital manifestation of artificial intelligence – was published in Data Driven Investor on 29 March 2020.

As its users, we have grown to take technology for granted. Hardly anything these days is as commonplace and unremarkable as a personal computer that crunches numbers and enables us to read files and access the Internet. Will computers ever amaze us again in any way? Some potential for amazement may lie in cognitive computing – a skill-set widely considered to be the most vital manifestation of artificial intelligence.

Back during my university days, and later at the outset of my professional career, I wrote software. I earned my first paycheck as a programmer. I often stayed up late and even pulled all-nighters correcting endless code errors. There were times when the code I wrote finally began to do just what I wanted it to, serving its intended purpose. In time, such moments became more and more frequent. I often wondered if programmers would ever be replaced. But how, and with what? The science fiction literature I was into abounded with stories about robots, artificial intelligence and self-learning technologies that overstepped their boundaries and began to act against their rules, procedures and algorithms. Such technologies managed to learn from their mistakes and accumulate experience. It was all science fiction then. A computer program that did anything other than the tasks assigned to it by its programmer? What a fantasy. But then I came across other concepts, such as self-learning machines and neural networks.

As it turns out, a computer program may amass experience and apply it to modify its behavior. In effect, machines learn from experience that is either gained directly by themselves or implanted into their memories. I have learned about algorithms that emulate the human brain and self-modify in search of optimal solutions to given problems. I have learned about cognitive computing, and it is my reflections on this topic that I would like to share in this article.

As it processes numbers, a computer watches my face 

All the existing definitions of cognitive computing share a few common features. Generally speaking, the term refers to a collection of technologies that result largely from studies of how the human brain functions. It describes a marriage of sorts between artificial intelligence and signal processing. Both are key to the development of machine consciousness. They embody advanced capabilities such as self-learning and reasoning by machines that draw their own conclusions, process natural language, produce speech, interact with humans and much more. All these are aspects of collaboration between man and machine. Briefly put, cognitive computing refers to a technology that mimics the way the human brain processes information and enhances human decision-making.

Cognitive computing. What can it be used for?

Cognitive computing emulates human thinking. It augments the devices that use it while empowering the users themselves. Cognitive machines can actively understand natural language and respond to information extracted from conversational interactions. They can also recognize objects, including human faces. Their sophistication is unmatched by any product ever made in the history of mankind.

Time for a snack, Norbert

In essence, cognitive computing is a set of features and properties that make machines ever more intelligent and, by the same token, more people-friendly. Cognitive computing can be viewed as a technological game changer and a new, subtle way to connect people and the machines they operate. While it is neither emotional nor spiritual, the connection is certainly more than a mere relationship between subject and object.

Owing to this quality, computer assistants such as Siri (from Apple) are bound to gradually become more human-like. The effort to develop such features will focus on the biggest challenge facing computer technology developers: making machines understand humans accurately, i.e. comprehend not only the questions people ask but also their underlying intentions and the meaningful hints coming from users grappling with a given problem. In other words, machines should account for the conceptual and social context of human actions. An example? A simple question about the time of day put to a computer assistant may soon be met with a matter-of-fact response followed by a genuine suggestion: “It is 1:30pm. How about a break and a snack? What do you say, Norbert?”
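As a toy illustration of that difference – answering the literal question versus reading the context around it – here is a minimal, hypothetical sketch in Python. The function names, the lunch-hour rule and the whole “assistant” are my own invention, not how Siri or any real assistant works:

```python
# Toy sketch of a "cognitive" reply: answer the literal question first,
# then add a suggestion drawn from context (time of day, the user's habits).
# Everything here is illustrative; it is not a real assistant API.
from datetime import datetime

def literal_answer(now: datetime) -> str:
    return now.strftime("It is %I:%M %p.")

def contextual_hint(now: datetime, user_name: str, usual_lunch_hour: int = 13) -> str:
    # A purely "computational" assistant stops at the literal answer;
    # a context-aware one notices it is around the user's usual lunch time.
    if abs(now.hour - usual_lunch_hour) <= 1:
        return f" How about a break and a snack? What do you say, {user_name}?"
    return ""

def reply(now: datetime, user_name: str) -> str:
    return literal_answer(now) + contextual_hint(now, user_name)

print(reply(datetime(2020, 3, 29, 13, 30), "Norbert"))
# -> It is 01:30 PM. How about a break and a snack? What do you say, Norbert?
```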

Dear machine – please advise me

I’d like to stop here for a moment and refer the reader to my previous machine learning article. In it, I said that machine learning enables computers to learn, and therefore analyze data more effectively. Machine learning adds to a computer’s overall “experience”, which it accumulates by performing tasks. For instance, IBM’s Watson, the computer I have mentioned on numerous occasions, understands natural language questions. To answer them, it searches through huge databases of various kinds, be it business, mathematical or medical. With every successive question (task), the computer hones its skills. The more data it absorbs and the more tasks it is given, the greater its analytical and cognitive abilities become.
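To make that idea a little more tangible, here is a minimal sketch of a model that “accumulates experience”: it is updated incrementally with each new batch of labelled questions and then routes new ones to the right knowledge domain. The data and domains are invented for illustration, and this is in no way Watson’s actual architecture:

```python
# Minimal sketch: a text classifier whose competence grows as it sees more
# labelled examples, updated incrementally with scikit-learn's partial_fit.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False)
model = MultinomialNB()
classes = ["medical", "business", "math"]

# Each batch represents another round of questions the system has handled;
# it learns from them without retraining from scratch.
batches = [
    [("What drug interacts with warfarin?", "medical"),
     ("Forecast next quarter's revenue", "business")],
    [("Solve this system of linear equations", "math"),
     ("What are the symptoms of type 2 diabetes?", "medical")],
]

for batch in batches:
    texts, labels = zip(*batch)
    model.partial_fit(vectorizer.transform(texts), labels, classes=classes)

# A new question is routed to the domain the model now knows best.
question = "What is the recommended dosage of ibuprofen?"
print(model.predict(vectorizer.transform([question])))
# With only four examples the guess is crude; with thousands it becomes informed.
```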

Machine learning is already a sophisticated, though still quite basic, machine skill with parallels to the human brain. It allows self-improvement of sorts based on experience. However, it is not until cognitive computing enters the picture that users can truly enjoy interacting with a technology that is practically intelligent. The machine not only provides access to structured information but also autonomously writes algorithms and suggests solutions to problems. A doctor, for instance, may expect IBM’s Watson not only to sift through billions of pieces of information (Big Data) and use them to draw correct conclusions, but also to offer ideas for resolving the problem at hand.

At this point, I would like to provide an example from daily experience. An onboard automobile navigation system relies on massive amounts of topographic data, which it analyzes to generate a map. The map is then displayed, complete with a route from the requested point A to point B, with proper account taken of the user’s travel preferences and prior route selections. This relies on machine learning. However, it is not until the onboard machine suggests a specific route that avoids heavy traffic while also factoring in our habits that it begins to approximate cognitive computing.
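A hedged sketch of that distinction, with made-up routes and weights: a classic navigator simply picks the shortest path, while the “cognitive” step re-scores the candidates using live traffic and preferences learned from the driver’s past choices. A real system would learn these weights from history rather than hard-code them:

```python
# Illustrative only: candidate routes with distance, current traffic delay,
# and how often the driver has chosen each kind of road in the past.
routes = [
    {"name": "highway",      "km": 42, "traffic_delay_min": 25, "past_pick_rate": 0.2},
    {"name": "ring road",    "km": 48, "traffic_delay_min": 5,  "past_pick_rate": 0.7},
    {"name": "city streets", "km": 39, "traffic_delay_min": 18, "past_pick_rate": 0.1},
]

def shortest(route):
    # Classic navigation: distance is all that matters.
    return route["km"]

def cognitive_score(route):
    # Distance plus live traffic, minus a bonus for roads the driver habitually picks.
    # The weight of 20 is invented; a real system would learn it from driving history.
    return route["km"] + route["traffic_delay_min"] - 20 * route["past_pick_rate"]

print(min(routes, key=shortest)["name"])          # -> city streets
print(min(routes, key=cognitive_score)["name"])   # -> ring road
```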

Number crunching is not everything 

All this is fine, but where did today’s engineers get the idea that computers should do more than crunch numbers at a rapid pace? Jeffrey Welser, the head of IBM’s Almaden Research Center, who has spent close to five decades developing artificial intelligence, offered this simple answer: “The human mind cannot crunch numbers very well, but it does other things well, like playing games, strategy, understanding riddles and natural language, and recognizing faces. So we looked at how we could get computers to do that”.

Efforts to use algorithms and self-learning to develop a machine that would help humans make decisions have produced a spectacular effect. In designing Watson, IBM significantly raised the bar for the world of technology.

How do we now apply it?

The study of the human brain, which has become a springboard for advancing information technology, will – without a doubt – have broader implications for our lives, affecting the realms of business, safety, security, marketing, science, medicine and industry. “Seeing” computers that understand natural language and recognize objects can help everyone, from regular school teachers to scientists searching for a cure for cancer. In the world of business, the technology should – in time – help use human resources more efficiently, find better ways to acquire new competencies and ultimately loosen the rigid corporate rules that result from adhering to traditional management models. In medicine, much has already been written about the hopes doctors pin on IBM’s Watson, an excellent analytical tool. In health care, Watson will go through a patient’s medical history in an instant, help diagnose health conditions and enable doctors to instantly access information that could not previously be retrieved within the required time frame. This may become a major breakthrough in diagnosing and treating diseases that cannot yet be cured.

Watson has attracted considerable interest from the oncology community, whose members have high hopes for the computer’s ability to rapidly search through giant cancer databases (which is crucial in cancer treatment) and provide important hints to doctors.

Combined with quantum computing, cognitive systems will become a robust tool for solving complex technological problems. Even today, marketing experts recognize the value of cognitive computing systems, which are playing an increasingly central role in automation, customer relationships and service personalization. Every area of human activity in which data processing, strategic planning and modeling matter will eventually benefit from these technological breakthroughs.

The third age of machines 

Some people go as far as to claim that cognitive computing will usher in the third age of IT. Early in the 20th century, computers were seen as mere counting machines. Starting in the 1950s, they began to rely on huge databases. In the 21st century, computers learned to see, hear and think. Since human thinking is a complex process whose results are often unpredictable, perhaps we could presume that a cognitive union of man and machine will soon lead to developments that are now difficult to foresee.

Machines of the future must change the way people acquire and broaden their knowledge if we are to achieve “cognitive” acceleration. However, regardless of what the future may bring, the present day, with its ever more efficient thinking computers, is becoming more and more exciting.

Related articles:

– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Hello. Are you still a human?

– Artificial intelligence is a new electricity

– How machines think

16 comments

  1. John Macolm

    We already have one. The information on it was released in the Snowden dump. At the time of the information release it was fed internet and phone meta data on 100 million Pakistanis and used to pick targets for drone execution in the Pakistan Afghan border area.
    No joke, it’s called skynet.
    Several years ago there was a big push in the media about how meta data wasn’t being used to track individuals. That was propaganda to try to hide the fact that they had the capability to do it years ago. Now they can likely do it to at least the entire US population if not the world.

    • Marc Stoltic

      AI is getting better and better over time. One day AI can be programmed to detect lies and corruption. When that happens everyone in power will be held to a new standard or “reprogrammed” themselves…

      • Tom Aray

        Good thing we have a reality tv star “business man” in charge with all those tremendous tweets, and absolutely no clue about how to even spell A.I.

      • Aaron Maklowsky

        It’s on the same potential lines as the next atomic weapon. A true AI can be devastating in ways humans can’t comprehend. Do some research. AIs being tested on basic optimization routines do some really freaky shit, boggling the researchers as to how they achieved their results. They don’t think like us.

  2. Mac McFisher

    Humans create AI and it soon becomes man’s best friend. Dogs won’t stand for this.

  3. Acula

    Despite progress in AI, it is still quite stupid. And we have a hard time making it more intelligent than, say, a worm. This is because expanding the resources available to AI (a trick that worked for regular computers) tends to make it more stupid rather than more intelligent: given such capacity, it tends to memorize instead of generalizing. Hence creating complicated AI systems capable of thought seems to be well ahead of us. The future is just an illusion in physics. It is just one possible way of ordering events, and not a particularly remarkable one – outside of the way our mind operates – in that we can remember the past but cannot remember the future.

    • Krzysztof X

      The massive measures introduced as a reaction to the “pandemic” can be described a little more broadly, I guess. If somebody called them “extraordinary measures of social surveillance and control”, he would not be completely wrong. To describe them in short: A) the introduction of applications for society-wide contact tracing

  4. And99rew

    “Machine learning” is the general term. It’d be like asking how long until electricity gets outsourced to other countries. Really comes down to how it’s applied, which is where we see the innovations. Some day machine learning will be like the internet all over again. Starts off small and then suddenly it’ll creep into everything.

    • Mac McFisher

      This is something I’ve thought about before. I work for a company where we have both millions of order documents and human verified database entries for those order documents, so it seems like I definitely have plenty of data to train with. I tried convincing people of the value of a ‘general order document importer’, but the old ways win out.

      • John Macolm

        We’re still decades from having the hardware to produce human-level AI based on Moore’s law. Computers are still actually pretty fucking dumb, they’re just good at doing the algorithms we discover really fast. At this point this anti-AI stuff is a lot like someone discovering the windmill and screaming about how it’s going to create so much flour it suffocates the world – it won’t happen, because you still have to feed it shit and even then there’s not enough base material.

        • Jang Huan Jones

          NLP is the danger!
          What did Russia just do last election? Bunch of spam accounts.
          They can have a whole country of spambots guiding political discourse online.
          I could be a spambot just trying to confuse you tho…

      • Marc Stoltic

        China would be a real investor. They have booming science and technology academies. When next-generation sequencing came out they basically purchased 11 or 12 of the most cutting-edge sequencers from Illumina Tec. and instituted a division for genomics. They are cracking the code by studying in Western countries and coming back to apply their skills. Unlike India, I would suggest, at this point.
        China already has top-notch mobile computing technology, at least mass-produced and cheaper than Snapdragons, but the point is China has more motivation, with the South China Sea, the Indian subcontinent, and interactions with Japan and America over the years as the major conflict episodes.
        They are already building roads and automated checkpoints in Tibet and nearby regions to ensure troops reach the borders.