Unfortunately, many business managers still think of AI as something they will never be able to understand.

I believe that knowing the fundamental principles that underlie the new technologies can boost managers’ confidence in them and help them make their companies more innovative.


My article in Data Driven Investor, published 14th of March 2019.

The two key drivers of major smart technologies today are machine learning and deep learning. 

Machine learning is commonly described as the foundation of predictive analytics. Positioned at the nexus of IT and statistical modeling, machine learning enables applications to analyze data without prior programming, foresee the outcomes of actions and, quite frequently, reach autonomous decisions. 

Deep learning is a branch of AI that employs neural networks, today mainly to perfect voice recognition and natural language processing. Simply put, deep learning is a way of automating predictive analysis.

Both machine learning and deep learning support self-learning machines in processing massive data sets. Their capabilities and applications vary widely. 

Machine learning algorithms allow computers to classify and utilize input data. However, machine learning is subject to one key restriction: the computer is limited to the data it is given. IBM’s Deep Blue, one of the most powerful computers of its day, would never have defeated chess champion Garry Kasparov in 1997 if people had not fed it the right data and pointed it towards the right sources of knowledge.

Deep learning takes the game to a whole new level by mimicking the workings of the human brain. A computer employs neural networks composed of sets of parallel processing units, or nodes, that access massive amounts of data. Every time the computer gains a new experience and acts on it, the links among its nodes, and the information streams they carry, reorganize themselves, just as a child’s brain might as he or she gains experience. Over time, this organization becomes more sophisticated (more grown up, if you like), allowing the machine to perform more efficiently.

This is an astonishing process.  
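To make the idea concrete, here is a minimal sketch of that process in Python, assuming only NumPy: a tiny two-layer network whose connection weights, the “links” described above, reorganize themselves with each pass over a toy dataset. The data, layer sizes and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experience": inputs and the answers we want the network to learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized "links" between two layers of simple processing units.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: signals flow through the links.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: each new "experience" nudges the links so the
    # network's organization improves, a little like practice.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

# After training, outputs should be close to [[0], [1], [1], [0]].
print(output.round(2))
```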

Why this and not that? 

A computer’s ability to resolve ever more complex problems based on prior experience and interactions with humans has brought about a breakthrough in the development of smart technologies. The question is: why have these new machine skills popularized some technologies but not others?

Machine learning and deep learning computers analyze data in a number of ways, formulating decision-making rules either autonomously or with human assistance. They analyze databases (an approach central to Big Data handling), classify objects (grouping images with common characteristics), detect errors (by comparing objects and states to assess the probability of distortions), offer recommendations (like what book or shirt you might like based on your purchasing history), and refine solutions (say, searching for the best route to grandma’s house).
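To illustrate the last of these functions, here is a minimal sketch of route optimization using Dijkstra’s shortest-path algorithm. The road network, place names and distances are invented for illustration.

```python
import heapq

# Toy road network: place -> {neighbor: distance in km}. Invented data.
roads = {
    "home":    {"bridge": 4, "market": 2},
    "market":  {"bridge": 1, "park": 7},
    "bridge":  {"park": 3, "grandma": 9},
    "park":    {"grandma": 2},
    "grandma": {},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: repeatedly settle the closest unvisited place."""
    queue = [(0, start, [start])]   # (distance so far, place, path taken)
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (dist + step, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_route(roads, "home", "grandma"))
# -> (8, ['home', 'market', 'bridge', 'park', 'grandma'])
```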

These processes have become essential for today’s consumers, and that drives business innovation toward a handful of nicely packaged solutions.

That’s why certain technologies have been optimized, and others not. 


Three trends driving AI in business 

Through the power of business, marketing and social decision-making, these IT concepts are transformed into innovations and products. If data-processing algorithms have predictive value, it stands to reason that they should drive e-commerce projects in which stores recommend products similar to the items customers have bought previously. And since an algorithm that can group data by common characteristics can tell whether a picture represents a dog or an airplane, social media sites that hold billions of user photographs see the potential to increase user engagement.
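As a heavily simplified sketch of that dog-versus-airplane grouping, suppose each picture has already been reduced to a couple of numeric features. Real systems learn thousands of features from raw pixels; the feature names and example values below are invented.

```python
import numpy as np

# Invented feature vectors: [roundness of shapes, amount of sky in background].
labeled = {
    "dog":      np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.1]]),
    "airplane": np.array([[0.2, 0.9], [0.3, 0.8], [0.1, 0.9]]),
}

# Summarize each class by the average of its examples (its centroid).
centroids = {label: feats.mean(axis=0) for label, feats in labeled.items()}

def classify(picture_features):
    """Assign the label whose centroid is nearest to this picture."""
    return min(centroids,
               key=lambda label: np.linalg.norm(centroids[label] - picture_features))

print(classify(np.array([0.85, 0.15])))  # -> dog
print(classify(np.array([0.15, 0.85])))  # -> airplane
```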

Today, there are three popular business drivers dependent on AI.

1. Error reduction

The reduction or elimination of anomalies is particularly important for the Internet of Things (IoT). I am referring here to the growing popularity of sensory devices that acquire data from the environment. The enormous amount of information collected on streets, in stores and even in apartments (in smart homes) can be used to our advantage. It’s nice to come home to a house that turns on the lights and the boiler when you approach. However, these uses are not without risks. For the average citizen, irregularities may be a nuisance (due to smart device failures) as well as a reason to feel vulnerable if the data your house collects on you finds its way into the hands of others.

Eliminating errors is even more critical in the automotive and aerospace industries. Sensors in aircraft engines collect real-time data streams in flight, and errors there can be fatal. Sensor systems also support autonomous vehicles, and correcting irregularities in the data collected by cameras and LiDAR sensors will have a tremendous impact on the future of these technologies. No one wants error-prone vehicles on the road.
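One common, simple approach to this kind of error detection is to flag readings that drift too far from recent behavior. Here is a minimal sketch of such a check; the sensor stream, window size and threshold are invented for illustration.

```python
import statistics

def find_anomalies(readings, window=10, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away from
    the mean of the preceding `window` readings (a simple z-score test)."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # avoid division by zero
        z = abs(readings[i] - mean) / stdev
        if z > threshold:
            anomalies.append((i, readings[i]))
    return anomalies

# Simulated temperature stream with one faulty spike at index 15.
stream = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 21.1, 21.0, 21.2,
          21.1, 20.9, 21.0, 21.2, 21.1, 35.0, 21.0, 21.1]
print(find_anomalies(stream))  # -> [(15, 35.0)]
```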

2. Customer satisfaction

Predictive analytics (recommendations) are among the most coveted functionalities in both e-commerce and retailing. Corporations such as Amazon and Netflix have turned predictive systems into a valuable asset and competitive advantage, and in so doing have created a new de facto standard of consumer expectations. 
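The core idea behind such recommendations can be sketched in a few lines: score products by how similar their buyer patterns are to something the customer already bought. The customers, products and purchase matrix below are invented, and production systems such as Amazon’s and Netflix’s are of course vastly more sophisticated.

```python
import numpy as np

# Rows = customers, columns = products. 1 means the customer bought it.
products = ["novel", "cookbook", "thriller", "atlas"]
purchases = np.array([
    [1, 0, 1, 0],   # customer A
    [1, 1, 1, 0],   # customer B
    [0, 1, 0, 1],   # customer C
])

def cosine(a, b):
    """Cosine similarity between two purchase-pattern vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(bought_index):
    """Rank other products by how similar their buyer patterns are."""
    target = purchases[:, bought_index]
    scores = {products[j]: cosine(target, purchases[:, j])
              for j in range(len(products)) if j != bought_index}
    return max(scores, key=scores.get)

# Customers who bought the novel also tended to buy the thriller.
print(recommend(products.index("novel")))  # -> thriller
```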

In the coming years, e-commerce will see the development of technologies centered around customer voice and facial recognition. This functionality will be used to train bots to serve as digital assistants capable of better communicating with people. Today’s still-imperfect bots will improve steadily until, in time, they can do the jobs of call center employees. The bots will recommend products that may interest us, book vacations, suggest loans with the best terms and arrange doctor’s appointments for us.

3. Efficiency gains 

The predictive abilities of algorithms can have a huge impact on efficiency, which is essential for every business. As we all know, efficiency gains are a priority for corporate boards in all industries. In the transport industry, for instance, managing directors are unlikely to ignore the potential of monitoring drivers and their vehicles in real time to cut costs dramatically. Devices that track driver reactions, engine performance, stopovers and road obstacles can suggest to drivers when it is best to accelerate, slow down, refuel and resume their trips. Not only will this generate substantial savings, it will also improve safety.
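A toy sketch of the kind of real-time suggestions such a system might produce follows; the telemetry fields and thresholds are invented for illustration.

```python
def suggest(telemetry):
    """Turn a vehicle telemetry snapshot into driver suggestions.
    Field names and thresholds are hypothetical, for illustration only."""
    tips = []
    if telemetry["fuel_level"] < 0.15:
        tips.append("Refuel at the next station.")
    if telemetry["hours_driven"] >= 4.5:
        tips.append("Take a rest stop.")
    if telemetry["speed_kmh"] > telemetry["limit_kmh"]:
        tips.append("Slow down to the posted limit.")
    if telemetry["engine_temp_c"] > 110:
        tips.append("Pull over: engine is overheating.")
    return tips or ["All systems nominal."]

print(suggest({"fuel_level": 0.10, "hours_driven": 5.0,
               "speed_kmh": 82, "limit_kmh": 90, "engine_temp_c": 95}))
# -> ['Refuel at the next station.', 'Take a rest stop.']
```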


Who appreciates the potential today?

In “Notes from the AI frontier: Applications and value of deep learning,” the McKinsey Global Institute looks at the industries most eager to deploy machine learning and deep learning and most likely to benefit from predictive capabilities, image recognition and error detection. Today, the list is topped by retailers, and considering the enormous amount of customer data retailers have collected, it’s no wonder. Information on age, education, employment and prior purchasing patterns allows companies to take the customer experience to a previously undreamed-of level. The linking of customer purchase histories with their social media activity has become a prime global e-commerce tool.

A number of factors have fostered the widespread adoption of AI in the tourism business. Travel agencies that can keep track of their customers’ travel histories and analyze the images they post online are better positioned to engage them, offer them new products and, ultimately, gain their loyalty, thereby improving the agencies’ bottom lines. And, in a virtuous circle, the enormous popularity of posting travel images on social media significantly increases the volume of valuable data available to the industry.

Every industry relies on its own mix of algorithmic functions. But it is only when combined with one another that recommendation systems, error detection, anomaly identification and optimized decision-making create a technological foundation for business success.

Fear of the black box 

According to the McKinsey Global Institute, of all the surveyed companies that claim to be aware of the potential of AI, only 20 percent are actually benefiting from it. A number of factors stand in the way. One of them is cognitive discomfort, which boils down to: “How did the machine arrive at its conclusion?”

Because deep learning systems organize themselves around the data they are fed, their thinking is opaque to humans. This creates problems that go way beyond our human and sometimes irrational fear of the “other.” As a consequence of not knowing how the machine thinks, regulatory and certification systems in many industries have grown extremely complex, making it practically impossible to embrace innovative AI solutions either quickly or confidently. The health care, aerospace and banking sectors are unlikely to abandon their highly formalized control procedures and certification systems overnight to let their machines act autonomously.

Security constitutes another limitation. It is not easy to persuade the general public that facial recognition technologies will not harm people or compromise their privacy. Indeed, data processing as a whole is under careful scrutiny from both governments and non-governmental organizations. AI, which needs data like we need air, may suffocate because of our fears. Of course, money also plays a role in limiting investment in AI technologies. Any company’s decision to invest in costly IT systems, set up business units to handle new data, and employ experts who know how to interpret that data and make it accessible must be rational and make financial sense.

Let the benefits meet the needs 

In my view, the key challenge faced by modern management teams is to recognize the potential the new technology holds for their individual businesses and to tailor their investments to their real needs. Expertise in AI by itself is not sufficient. Tech start-ups and more mature tech companies alike must know their clients’ industries before presenting offers. For the time being, I believe the extent to which AI transforms markets will remain modest. Given the challenges that lie before us, that is not such a bad thing. On the contrary, it means the market continues to be open to new opportunities.

Link to the article

.    .   .

Works cited

The Guardian, Luke Harding and Leonard Barden, From the archive, 12 May 1997: Deep Blue win a giant step for computerkind, link, 2011.

MIT, Dimitrios G. Myridakis, Anomaly detection in IoT devices via monitoring of supply current, link, 2018.

Ars Technica, Timothy B. Lee, Why experts believe cheaper, better lidar is right around the corner, link, 2018.

McGill University, Montreal, Frank P. Ferrie, Ruisheng Wang, Jeff Bach and Jane Macfarlane, A New Upsampling Method for Mobile LiDAR Data, link, 2018.

HuffPost, Marquis Cabrera, NCCD, Netflix, and Amazon all use predictive analytics—what is different about child welfare is the consequences of mistakes, link, 2018. 

The Netflix Tech Blog / Medium, Chaitanya Ekanadham, Using Machine Learning to Improve Streaming Quality at Netflix, link, 2018.

McKinsey Global Institute, Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra, Notes from the AI frontier: Applications and value of deep learning, link, 2018.

.    .   .

Related articles

– Artificial intelligence is a new electricity

– Machine, when will you become closer to me?

– Will a basic income guarantee be necessary when machines take our jobs? 

– Can machines tell right from wrong?

– Medicine of the future – computerized health enhancement

– Machine Learning. Computers coming of age

– The brain – the device that becomes obsolete


9 comments

  1. John Accural

    I don’t mind that though. I briefly worked with a PhD candidate in the field of sensorimotor learning (a subfield of neuroscience/kinesiology) and he taught me so much about the science of how we learn. I directly applied that to my research in autonomous vehicles (reinforcement learning, deep Q-learning). You can draw inspiration and knowledge from so many other fields. That’s the primary reason I did a double undergrad in Physics & Computer Science.
    As a side note, yeah, it’s quite hard to pick up quantum mechanics with no background in physics (and especially the math skills). It took 3 years of background knowledge to even take an introductory course. What sucks is it’s not a topic you can really “explain”, because it only makes sense through the math. But I am cheering you on! I wish you well in your studies.

  2. Adam Spikey

    Netflix:

    From 8 billion USD to 185 billion USD in market cap in just 8 years is indeed a phenomenal success.

    Their content seems to be largely addictive. Many users resort to binge watching.

    They are creating content in many different languages as well.

    Most importantly, they are not hesitating to raise prices to extract the highest value from the market.

    At this moment, everything seems to be going great for them.

    • Machines don’t require frequent breaks and refreshment like human beings do. They can be programmed to work long hours and can perform the job continuously without getting bored, distracted or even tired. With machines, we can also expect the same kind of results irrespective of timing, season and so on, which we can’t expect from human beings.

  3. johnbuzz3

    There was a time when financial transactions needed a long time to go through. People had to physically go to banks and get their money transferred, which usually took quite some time even with digital transferring through banks. Now, with the development of Fintech applications and software, transferring money can happen in seconds, at any time of the day.

    • John Accural

      I’m hoping we’ll see a shifting focus less towards accuracy and beating the previous SotA deep learning architecture and more towards other issues like explainability and increasing robustness of existing algorithms against adversarial attacks, as well as more optimization for TPUs and GPUs and better integration of AI tech into the SDLC to make developing, testing, and deploying models easier.