My article in BrandsIT dated June 10th, 2018.
We are naturally delighted to see babies smile when they recognize our faces as we lean over them. We are amazed to see little ones utter their first words as they repeat after us. Then comes the time when a child begins to make up stories about the fairy-tale characters we have read about to it. Finally, it learns to count, draw, write and correct its first school paper. And then one day we hear it express opinions about the world. Opinions that are often so extraordinary they leave us in shock. At this moment, we are witnessing a human being becoming intellectually independent.
It is not only people that learn from experience by acquiring knowledge from the outside world. We are now witnessing a time when sophisticated information technologies are also emerging from infancy. One development associated with Artificial Intelligence that captures the imagination of the entire world is machine learning. The name itself hints at a field of fully automated processes that rely on intelligent data processing and smart decision-making. AI is where the most ambitious R&D work is conducted in today’s world.
Machine learning is a field positioned on the borderline between mathematics, statistics and programming, i.e. information technology. Its goal is to create complex algorithms capable of reaching optimal decisions and, even more importantly, of continuous self-improvement. The algorithms that underpin machine learning are specific and highly sophisticated. By and large, they rely on a dynamic model that processes inputs (data) to make specific decisions. What is significant is that such algorithms have the ability to “self-learn” as they actively process the datasets they are fed. However, the entire mechanism has one serious limitation: as the computer executes its tasks, it draws on the experience of a “supervisor”. This means that a human – a programmer, operator or teacher – critically influences the way information is processed. His or her job is to support the machine by entering data batches, manually checking the conditions that result from analyses and removing system blockages. The computer’s self-sufficiency therefore continues to be limited, as it depends on an expert.

The general consensus is that the first people to witness machine learning were the IBM experts who developed an algorithm that learned to improve its own game of checkers. A landmark along the path came with the development of the Dendral IT system at Stanford University in 1965, which automated chemical analyses. It is now recognized that this research led to the first compounds discovered by a computer.
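The supervised loop described above can be sketched in a few lines of Python. The example below is purely illustrative (a single perceptron learning the logical AND function, nothing like the IBM or Dendral systems themselves): a human plays the “supervisor” by supplying labelled examples, and the algorithm corrects its own weights each time it makes a mistake. The learning rate and epoch count are arbitrary choices for this toy case.

```python
# Illustrative sketch of supervised machine learning: a human supplies
# labelled examples; the algorithm adjusts itself from its mistakes.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a single perceptron from labelled data."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred                      # learn from the mistake
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Human-labelled dataset: the logical AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

The key point is the human's role: the labels `y` are the supervisor's contribution, and without them the loop has nothing to correct itself against.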
Some of the latest research seeks to eliminate, or at least strongly reduce, the teaching role played by humans, so that algorithms learn independently.
Time for unlimited self-sufficiency
One of the most intensely explored areas in machine learning today is deep learning, viewed as a subcategory of the broader field. Extensive mathematical structures that support multi-strand processing – artificial neural networks – are capable of making decisions, correcting them by learning from their mistakes and, based on prescribed models, selecting from the available sets the data that most accurately addresses a given question or problem. In other words, they can learn independently. Deep learning powers, among other things, voice recognition, natural language processing, machine translation and image recognition. All these functions are particularly interesting to corporations such as Facebook and Google.
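To make the idea of a network “learning from its mistakes” concrete, here is a purely illustrative toy in plain Python (not any corporation's actual system): a network with one hidden layer learns the XOR function, which no single perceptron can represent, by backpropagating its output error into every weight. The layer width, learning rate and epoch count are arbitrary choices for this sketch.

```python
import math
import random

# Illustrative sketch of the deep-learning principle: a tiny neural
# network corrects its weights from its own errors (gradient descent
# with backpropagation). Real systems stack many such layers.

random.seed(42)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]                       # XOR: not linearly separable

H = 4                                   # hidden-layer width
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
         for j in range(H)]
    o = sigmoid(sum(w * hj for w, hj in zip(w2, h)) + b2)
    return h, o

lr = 0.5
for _ in range(10000):
    for x, target in zip(X, y):
        h, o = forward(x)
        # push the output error back into every weight of both layers
        d_o = (o - target) * o * (1 - o)
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

print([round(forward(x)[1]) for x in X])   # the network's learned answers
```

Unlike the perceptron, no human tells the hidden layer what to represent; the error signal alone shapes it, which is the self-correcting behaviour the paragraph above describes.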