Two narratives have shaped the perception of AI in recent years. One was wonder at the new opportunities. The other voiced anxieties about being disempowered by machines and predicted a host of adverse impacts, such as mass layoffs. By now, those emotions have cooled, replaced by calmer analysis. We can see more clearly both the strengths of machine algorithms and their limitations. Let us take a closer look at the latter.
Some of the predictions made about AI have never come true; others have turned out to be far less exciting than originally conceived. We are now aware of the many misconceptions we held years ago. As it turns out, we are not being driven around by autonomous vehicles, medicine still distrusts AI, the Internet of Things has not transformed life in big cities, and blockchain has not revolutionized all transactions. And yet enormous benefits have come from machine learning and deep learning, with their ability to process large data sets and detect patterns in an instant. Businesses have been transformed in ways made possible by algorithms and neural networks, which allow companies worldwide to flourish, strengthen their finances and management, significantly improve process efficiency and boost their bottom lines.
Slowly but surely, AI is looking less like a dark metaphysical force. And although we can still recall Elon Musk’s 2018 claim that AI would one day confront us with a huge existential crisis, with every day that passes we find that it can also be boringly inept and ineffective. Its shortcomings and unforeseen weaknesses may seriously slow its further expansion. Below, I discuss the basic factors that may impede AI development.
Weakness 1. Processors failing to keep up
According to the OpenAI research lab, algorithm-based technologies will develop more slowly because computer processors, whose power doubles roughly every two years, won’t be fast enough. To maintain the current pace of AI rollouts, computers would have to double their power every three to four months. This is one of the key reasons why a slowdown is coming. A number of other hardware challenges are more mundane. The Internet of Things, a technology made up of digital devices connected into a global network, is expected to handle massive data streams. The rule is that the more data neural networks take in, the better they perform, allowing digital devices to acquire a wisdom of sorts that will drive practical improvements. These include coffee makers that respond automatically on sensing you have entered the kitchen, traffic lights that anticipate traffic jams and react to weather data, and more. In theory, all this sounds very promising. However, IoT experts admit that many projects fail because of poor sensors that prevent devices from synchronizing and cause their calibration to go haywire. Perhaps the answer to these woes lies in quantum computing, but that technology remains in its infancy. It appears we’ll just have to wait and see.
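To put the mismatch between the two doubling rates in concrete terms, here is a rough back-of-the-envelope sketch in Python. It assumes hardware performance doubling every 24 months (the two-year figure cited above) and training compute demand doubling every 3.4 months, the number reported in OpenAI’s “AI and Compute” analysis listed in the works cited; both figures are approximations, not precise forecasts.

```python
# Back-of-the-envelope sketch, assuming the figures cited in the text:
# hardware performance doubling every ~24 months versus AI training
# compute demand doubling every ~3.4 months (OpenAI, "AI and Compute").

HARDWARE_DOUBLING_MONTHS = 24.0   # assumed hardware doubling period
AI_DEMAND_DOUBLING_MONTHS = 3.4   # assumed demand doubling period

def growth_factor(months: float, doubling_period: float) -> float:
    """How many times larger a quantity becomes after `months`."""
    return 2.0 ** (months / doubling_period)

for years in (1, 2, 5):
    months = 12 * years
    hw = growth_factor(months, HARDWARE_DOUBLING_MONTHS)
    demand = growth_factor(months, AI_DEMAND_DOUBLING_MONTHS)
    print(f"After {years} year(s): hardware x{hw:,.1f}, "
          f"demand x{demand:,.0f}, gap x{demand / hw:,.0f}")

# Under these assumptions, demand outpaces hardware by roughly a factor
# of 8 after one year and more than 30,000 after five years.
```

Even if the exact periods shift, the shape of the result is the same: an exponential gap that raw processor improvements cannot close on their own.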
Weakness 2. Runaway budgets
Jerome Pesenti, who was previously in charge of AI development at IBM and who for two years now has been Facebook’s head of AI, agrees that hardware problems may inhibit further AI growth. He also points to another obstacle to AI expansion: cost. Speaking to Wired magazine, he said: “If you look at top experiments, each year the cost is going up tenfold. Right now, an experiment might be in seven figures, but it’s not going to go to nine or ten figures, it’s not possible, nobody can afford that.” His prediction is well illustrated by one example among many: training the famous natural language model GPT-3, released by OpenAI this year, cost over $4.5 million. And that covers only the basic training (such as loading the vocabulary), excluding the numerous adjustments and complex product improvements that follow.
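A small sketch makes Pesenti’s arithmetic tangible. The $1 million starting point and the five-year horizon are assumptions for illustration only; the tenfold annual growth is the rate he describes in the Wired interview.

```python
# Illustrative sketch of Pesenti's tenfold-per-year cost trend, assuming
# a hypothetical $1M (seven-figure) flagship experiment as the baseline.

cost = 1_000_000          # assumed starting cost of a top experiment, in USD
GROWTH_PER_YEAR = 10      # "each year the cost is going up tenfold"

for year in range(5):
    print(f"Year {year}: ~${cost:,} ({len(str(cost))} figures)")
    cost *= GROWTH_PER_YEAR

# Within three years a seven-figure budget becomes a ten-figure one,
# the level Pesenti argues nobody can afford.
```

The point is not the precise dollar amounts but the compounding: at a tenfold annual rate, “unaffordable” arrives in a handful of years.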
Weakness 3. AI is not an easy partner
In recent years, “artificial intelligence” has been treated as the magic phrase that opens the proverbial sesame. Companies and investment funds have raised cash from the market for apps that, albeit fashionable, have been unable to generate profit. For many companies that chose to base their sales, marketing and logistics on algorithms, the lack of return on investment proved discouraging. Today, their managements can say with much more confidence: AI can improve some processes, but it generates volumes of data that the average data manager finds inscrutable, leaving them frustrated. In short, salespeople have no idea why they should close transaction x, as suggested by the algorithms. The result? An International Data Corporation survey of global companies using AI found that only 25 percent of them chose to adopt comprehensive, company-wide solutions. The majority of respondents admitted their projects were riddled with errors, while a quarter reported failure rates of up to 50 percent in AI deployments. This shows that AI is not easily scalable and can be incomprehensible and frustrating to work with.
Weakness 4. AI commits grave errors
In 2019, Google Health proudly announced in Nature that its breast cancer diagnosis software outperformed humans. The paper provoked a barrage of critical comments. In MIT Technology Review, Benjamin Haibe-Kains called the Google Health report more an advertisement for a cool technology than a legitimate scientific study. This was not the first time the scientific community had responded skeptically to a claimed research breakthrough. There is still the oft-cited case of IBM, which boasted that, once fed enough medical literature, its pride and joy, the Watson computer, would become the best doctor on earth. For the time being, though, the company has had to answer for Dr. Watson’s numerous mistakes, such as recommending, for a profusely hemorrhaging patient, a drug that would significantly increase bleeding. Watson’s attempts to unravel the mystery of COVID-19 proved a spectacular failure. DeepMind’s use of its AlphaFold system to predict and publish coronavirus-related structures has likewise produced no satisfactory conclusions. It turned out that the data streams generated during the pandemic were difficult to interpret. Although the neural networks used for machine learning can effectively recognize patterns, as evidenced by advances in facial recognition, this ability does not carry over to the kind of heterogeneous data we are dealing with in the case of a virus.
Weakness 5. Algorithms are inflexible
Algorithms excel at solving specific problems and performing specific tasks, such as those encountered in a game of chess. Faced with new situations and new information, however, they tend to struggle. Some call this “non-recurring engineering.” The narrow specialization of algorithms and their functional rigidity are plain to see in robotics, among other fields. While the media heavily publicize the gymnastic feats of robotic dogs, for example, few examples are ever given of machines that can smoothly transition from one activity to another. A more complex manifestation of this rigidity is the trouble voice assistants have processing natural language. An assistant can converse fluently as long as the statements and questions put to it remain consistently precise. Once a speaker steps outside the predefined conceptual and situational context, the bot’s answers take a turn toward the absurd. Many experts believe AI will never be able to understand social, conceptual and situational contexts. Machines will always be creatures of their training and, despite a certain unpredictability (the black box problem), they will never transcend their specific, limited skill set. Does this mean that AI will never pass the famous Turing test?
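To make this brittleness concrete, here is a minimal toy sketch in Python with NumPy. The polynomial model and synthetic data are illustrative stand-ins of my own choosing, not any system mentioned in the article: a model that looks excellent inside the narrow slice of data it was fitted on falls apart as soon as inputs drift just outside that slice.

```python
# Toy illustration of narrow specialization: a model fitted on one
# narrow data regime degrades badly just outside it. Entirely synthetic.
import numpy as np

rng = np.random.default_rng(0)

# "Training distribution": inputs between 0 and 1, roughly sinusoidal target.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 200)

# Fit a high-degree polynomial: our stand-in for a narrowly specialized model.
coeffs = np.polyfit(x_train, y_train, deg=9)

def rmse(x: np.ndarray, y: np.ndarray) -> float:
    """Root-mean-square error of the fitted polynomial on (x, y)."""
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

# In-distribution test: same range as training.
x_in = rng.uniform(0.0, 1.0, 200)
print("error inside the training range:", rmse(x_in, np.sin(2 * np.pi * x_in)))

# Slightly shifted situation: inputs from 1.0 to 1.5, same underlying rule.
x_out = rng.uniform(1.0, 1.5, 200)
print("error just outside that range:  ", rmse(x_out, np.sin(2 * np.pi * x_out)))
# The second error is typically far larger: the model has memorized its
# narrow context rather than the general rule behind the data.
```

Real deep learning systems are vastly more capable than a curve fit, but the failure mode the sketch shows, confident performance inside the training context and nonsense outside it, is the same one described above for robots and voice assistants.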
What next?
AI development follows the usual cycle seen in innovative technology. The cycle starts with a series of events and deployments that spark interest and generate hype around a given technology. The louder the noise, the more likely it is that expectations have grown too high and that fantastic visions of groundbreaking advances have whetted the appetites of market players, the media and consumers. Unfortunately, this stage is often followed by disenchantment and a sense that the (alleged) promises have not been fulfilled. But there is also room for a happy ending through cool reflection that highlights the actual benefits. In the case of AI, I think we are entering a stage that I would hate to call disappointment, because that wouldn’t be fair. The term “critical reflection” would be much more fitting.
Becoming aware of these basic shortcomings of artificial intelligence can be refreshing. We deserve a little optimism: we won’t have to implant chips in our brains to keep up with machines. For the time being, it is the machines that struggle to keep up with humans. And there is one other point worth remembering: all real progress on our planet is made possible by people who are motivated to solve the problems around them and make the world a better place. Machine learning will never experience a personal motivation to do anything, good or bad. In my opinion, this limitation alone will allow people to sleep better at night.
. . .
Works cited:
Dario Amodei, Danny Hernandez, Girish Sastry, Jack Clark, Greg Brockman, Ilya Sutskever, “AI and Compute: AlexNet to AlphaGo Zero, a 300,000x Increase in Compute,” OpenAI, 2020, Link.
Will Knight, “Facebook’s Head of AI Says the Field Will Soon ‘Hit the Wall’,” Wired, 2020, Link.
Ritu Jyoti, “IDC Survey Finds Artificial Intelligence to Be a Priority for Organizations But Few Have Implemented an Enterprise-Wide Strategy,” BusinessWire, 2021, Link.
. . .
Related articles:
– Algorithms born of our prejudices
– Will algorithms commit war crimes?
TomK
great stories 🙂
Guang Go Jin Huan
What a well researched and balanced article!
I am a statistician and understand the topic maybe a bit more deeply, so I could smoothly follow your argumentation. The examples of known problems you cited are important, like the COVID-19 virus in medical diagnostics. I think as long as everyone involved is aware of the impediments and deficiencies of these algorithms and models, we should be fine. The algo maybe did not catch the recent virus because it was such a rare event in the data, yet it still might outperform most GPs by giving more accurate diagnoses.
Let us take the unfortunate cases of autonomous driving, which are horrible of course, yet in 2-3 years we will potentially have a situation where AVs produce much less severe accidents and hopefully fewer deaths.
The development cycle of innovative tech is undeniable for AI. But it is also good that some form of awakening happens; it will bring expectations down to more realistic levels, and people will fear it less because they have seen it in many aspects of life and know its drawbacks.
Piotr91AA
What is even scarier is that computers don’t need the physical transference of language, which is pretty slow. They can probably have this conversation in .1 seconds, decide humans are the problem in .2 seconds, and enact a protocol to kill all humans in .5 seconds.
Pico Pico
Maybe a dumb question, but do these AIs have memories?? Like when they’re not being used they’re literally just sleeping? Have we created sentience? I mean, I know computers have memories, but I guess I mean… do they really feel, or just know to talk about feeling? I have no idea and my mind is blown.
Andrzej44
It’s interesting how, with the Blade Runner test, you automatically fill the lack of explanation with a theory. I always figured it was that the replicants had functional memories but not complete lifetimes of memories and no true experience with emotions, leading the stupid ones to be reactive and childlike and the smart ones to be almost psychopathically cruel. All of the questions we see are emotional in nature. The innovation with Rachel was that they gave her a lifetime of memories not specifically related to a job, which also falsified a past as a human. The replicants have memories but know what they are and know their lifespan.
Karel Doomm2
That was legit freaky, but it’s true: we continue to evolve our technology, and to them they’re immortal. Many humans will die in the creation of something that is truly human, that will live on to tell our stories and be a mirror into the past.
Check Batin
Well it’s kind of a 50/50 on the end of humanity or the cure for cancer and all of life’s questions answered. Is the risk worth the reward?
Zeta Tajemnica
Nice read Norbert