The world has come to a point where life without the pervasive presence of technological gadgets is very hard to imagine. And yet this technology-filled world is not immune to crises. Which of them scares us the most?
The non-technological world is virtually disappearing – it is becoming unknown and inaccessible. Any information one cannot look up on a web-enabled smartphone is considered either non-existent or of no consequence to the world. Or at least to mine.
Are there limits to this technological universe, which we’ve grown to take for granted? Are there serious warnings looming on the horizon that people are choosing to ignore? Here are a few threats affecting nearly the entire world of technology. The risks they pose affect corporations and individuals alike.
The dream of 40 technology hackers
The most obvious as well as trivial problem is cybersecurity. The very technology that has given us the amazing freedom to communicate and has greatly enhanced our cognitive abilities is actually very fragile and vulnerable to mass attacks. While the pre-technological-age world could be brought down by natural disasters, wars and famine, ours can be annihilated by a mere 40 people. That is the number of names on the FBI’s list of the world’s most wanted hackers who were active last year.
Every year, software updates, legions of IT staff tasked with detecting system vulnerabilities, security systems, and breach prevention cost enterprises a fortune. Last year, the average cost of a data breach incurred by companies around the world was US$ 3.8 million. In 2019, the cybersecurity budget of the United States alone will reach the US$ 25 billion mark. Multiply that by a few dozen to arrive at the mind-boggling amount the world is spending to protect its fragile technology. Yet as cybersecurity spending grows, one of the most effective hacking methods – phishing – is becoming cheaper and easier to use. Regrettably, the immediate future offers little to dispel our fears. As the internet becomes more ubiquitous, the world is turning into an ever more complex patchwork of technological devices that lure cybercriminals. One of the biggest challenges faced by today’s technology industry is how to sustain growth and then survive the growth it has generated.
The dream of a popular rebellion
The above challenge is tied indirectly to yet another, which is how to protect personal data from unauthorised use and breaches. Today’s personal technology industry is associated closely with global social networks, which are no longer mere communications platforms. You can now buy and sell things on Facebook and the same may soon be possible on Instagram. According to many observers, the Cambridge Analytica scandal has slowed down Facebook’s march to acquire new users. Unless the tech industry invents a safe way to handle personal data, businesses may be brought to the edge of a cliff. If the average user says “enough is enough”, many global projects relying on the community mechanism may collapse.
The technology dream of the black box
I have repeatedly written about the benefits of artificial intelligence and debunked the doom-and-gloom scenarios prophesied for that field. Many of these myths are based on entirely irrational fears. I am nevertheless not one to quickly dismiss the tech industry’s misgivings. Having left specialised laboratories, machine learning algorithms are migrating to our companies, banks, cars and online stores. Top programmers make no secret of the fact that our world is increasingly unpredictable and even unexplainable. Algorithms create their own rules and languages, leaving even experts scratching their heads to explain the effects of their actions. That AI is a black box – a device whose inner workings escape comprehension – is no longer a mere myth. It is a fact of life. How does one keep it from becoming a Pandora’s box? Artificial intelligence has the potential to improve the world, enhance people’s cognitive abilities, and perhaps transport the human mind and consciousness into a whole new dimension. But won’t it become harder to understand and control as it goes along?
The dream of empty desks
As noted repeatedly in my blog, the widespread, highly publicised fear that robots will take over people’s jobs is greatly exaggerated. Such scenarios are a sign of myopic vision and a failure to understand that new technologies, artificial intelligence included, will create many new jobs by generating demand for skilled labor to handle robot training, bots, and the like. However, the industry is facing a completely different problem, and this one is very real. Its challenge is to keep up with the rapidly changing environment and the demands of the modern workplace. Will companies be able to recruit skilled professionals capable of meeting the challenges of the digital world quickly enough? A shortage of talented personnel to address information processing and data security challenges may stand in the way of even the most visionary corporate projects. A shortage of experts trained in detecting new threats may seriously compromise global cybersecurity. In addition, there is a growing problem with satisfying the needs of today’s consumers. According to Adobe, one in four major high-tech companies (those with at least 500 employees) claims to be unable to meet the growing expectations of its customers. A growing number of companies in the retail and internet service sectors face such problems. All in all, technology has created an enormous demand for new skills and professionals. This demand is, in fact, likely to continue rising.
The above are merely four dreams. There must be many more playing out in the heads of the leaders of the world’s top tech companies. Technology has always been known to breed anxiety; it has done so since the first cars hit the streets and the first planes took to the skies. I think that having to deal with such fears is a natural price of progress. And that is the one constant we can certainly count on in our world.
. . .
Works cited:
FBI, “Cyber’s Most Wanted”, link, 2018.
Reuters, Technology News, “Cost of data breaches increasing to average of $3.8 million, study says”, link, 2018.
US Government, White House, “Cybersecurity funding”, link, 2018.
Cisco, Cisco Umbrella, “Easy, Cheap, and Costly: Ransomware is Growing Exponentially”, link, 2018.
McKinsey Global Institute, James Manyika, Michael Chui, Mehdi Miremadi, Jacques Bughin, Katy George, Paul Willmott and Martin Dewhurst, “Harnessing automation for a future that works”, link, 2017.
Adobe, Prateek Vatash, “2018 Digital Trends”, link, 2018.
. . .
Related articles:
– Machine, when will you learn to make love to me?
– Seven tips on how to use Artificial Intelligence to take your company to another level
– Will blockchain transform the stock market?
– Technologies that will change 2019
– Blockchain has earned admiration. Now it’s time for trust
JackC
For example, if an AI is using my values to help me make good decisions, it might be taking actions I think will be in its own best interest, when it could just as well be avoiding my decisions if I had just let it do its thing by itself.
My aim is to allow AI to participate in and feel part of the collective decision-making process, but not to encourage any one particular act or decision. This is a very powerful effect.
Krzysztof X
Re the black box dilemma – I think it all comes down to a commonly observed scenario nowadays: the simpler models, aimed at predicting an outcome from a thoughtfully selected set of variables, are not as effective as the complex models that work on massive amounts of data. The latter are extremely hard to comprehend, but that’s the price we pay for the accuracy.
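To make that trade-off concrete, here is a toy sketch in Python (entirely made up – the data, the models and the numbers are illustrative, not from any study cited above). A one-variable linear fit is easy to read off, while a high-degree polynomial fits better but its coefficients explain nothing:

```python
# Toy illustration of interpretability vs accuracy (all data invented).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=200)   # the true relationship is nonlinear

# Simple model: one thoughtfully chosen variable, two readable parameters.
w, b = np.polyfit(x, y, 1)
simple_err = np.mean((w * x + b - y) ** 2)

# Complex model: ten parameters that fit better but explain nothing by eye.
coeffs = np.polyfit(x, y, 9)
complex_err = np.mean((np.polyval(coeffs, x) - y) ** 2)

print(f"simple model:  error {simple_err:.3f}, params w={w:.2f}, b={b:.2f}")
print(f"complex model: error {complex_err:.3f}, {len(coeffs)} opaque coefficients")
```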
Adam Spark Two
Well, again, we already can simulate some sensory data; that’s what’s being done in prosthetics right now.
And you copy it from a brain. First you map the structure of the brain, how the neurons are connected, what type they are, etc. You put that into a mathematical model that can be run on a computer. Then you “program” it by synching the states of those simulated neurons with the electrical activity of the actual brain that you copied the structure of.
I don’t mean to make any of this sound trivial, but we either have or are very close to having the tools necessary to do it.
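Purely as an illustration, here is a toy sketch in Python of those three steps – the connectivity matrix, the neuron model and the “recorded” activity below are all invented for the example; real whole-brain emulation would be nothing like this simple:

```python
# Toy brain-copy sketch (every detail here is made up for illustration).
import numpy as np

rng = np.random.default_rng(0)
n = 100  # number of simulated neurons

# Step 1: "map the structure" - a sparse random matrix stands in for the
# mapped wiring between neurons.
weights = rng.normal(0, 0.1, (n, n)) * (rng.random((n, n)) < 0.05)

# Step 2: a mathematical model of a neuron - here a simple leaky rate unit.
def step(state, weights, leak=0.9):
    return np.tanh(leak * state + weights @ state)

# Step 3: "program" it by synching the simulated states with the electrical
# activity recorded from the actual brain (faked here with random values).
recorded_activity = rng.normal(0, 1, n)
state = recorded_activity.copy()   # initialise the copy from the recording

for t in range(50):                # then let the copy run on its own
    state = step(state, weights)

print("simulated activity of first 5 neurons:", np.round(state[:5], 3))
```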
Jack666
The problem is that the functionality of this chip as implied by Apple makes no sense. Pushing samples through an already-built neural network is quite efficient. You don’t really need special chips for that – the AX GPUs are definitely more than capable of handling what is typically less complex than decoding a 4K video stream.
On the other hand, training neural nets is where you really see benefits from the use of matrix primitives. Apple implies that’s what the chip is for, but again – that’s something that is done offline (e.g., it doesn’t need to update your face model in real time), so the AX chips are more than capable of doing that. If that’s even done for FaceID at all – I’m pretty skeptical, because it would be a huge waste of power to constantly update a face mesh model like that, unless it is doing it at night or something, in which case it would make more sense to do it in the cloud.
In reality, the so-called Neural Processor is likely being used for the one thing the AX chip would struggle to do in real time due to the architecture – real time, high-resolution depth mapping. Which I agree is a great use of a matrix primitive DSP chip, but it feels wrong to call it a “neural processor” when it is likely just a fancy image processor.
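For what it’s worth, here is a minimal sketch (my own toy sizes, nothing to do with Apple’s actual chips) of why inference is cheap: pushing a sample through an already-trained network boils down to a couple of matrix multiplies plus cheap elementwise operations – exactly the workload a GPU already handles well:

```python
# Inference on a tiny two-layer network (weights and sizes invented).
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were trained offline and shipped with the app.
W1, b1 = rng.normal(size=(256, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)), np.zeros(10)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # one matrix multiply + ReLU
    return h @ W2 + b2                 # one more matrix multiply

sample = rng.normal(size=256)
print(forward(sample))  # no training, no gradients - just matmuls
```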
John Accural
It means that a better approximation of how a natural human neural network learns is “you think some rewards are better than others”, instead of “every reward feels average, as calculated from all the rewards you can think of”.
This means the neural networks we create when we play electricity god will get even smarter as we discover better ways to apply this new-found knowledge to our creations.
AKieszko
Actually, the fact that it doesn’t work perfectly may give people cover for atrocities. Especially if you can claim it was a software malfunction – and no, you can’t inspect the code yourself, because secrecy. This is a pretty good, if limited, lecture on one of the ethical aspects of AI in warfare.
Adam Spikey
In 20 years we will look at today’s algorithms the way we look at a Ford Model T in an automotive museum.
Allow me to use a precise definition:
Singularity – a hypothetical point in the future development of civilization at which technical progress becomes so rapid that all human predictions become obsolete. The main event leading to this would be the creation of artificial intelligence intellectually superior to people.
The singularity is already knocking on our door; the question is who dares to turn the key in the lock.
Mac McFisher
Yes, you are correct in pointing out that many classical algorithms have been “ported” over to QC. I perhaps was not as clear as I could have been. The theory of QC exists and is fairly robust; however, our implementation of it, and the feasibility thereof, is more what I meant.
That’s not to say I think it can’t/won’t work – in fact I really hope it does. But it’s all very theoretical for the moment. Then again, who knows – Einstein was able to theorize the effects of general relativity almost a century before we were able to test them. If the math is robust and based on accurate physics, it’s a lot more than a simple guess in the dark (referring to QC).
Tom299
Good read Norbert
AndrewJo
The algorithm was able to predict ethnicity and age well, but also, surprisingly, NOSE SHAPE. There are of course other variables at play that affect our voice, but this one mainly focused on generating frontal images (thus nose shape was what they picked up on).
Perhaps we should be asking: how has your voice changed after mewing?
JohnE3
Great read Norbert
NorTom2
We still don’t know when we will have good Natural Language Understanding.
This is very important and useful for many people and many different sectors.
Peter71
Creating artificial intelligence is perhaps the biggest event for mankind. If it is used and developed constructively, artificial intelligence can help eradicate poverty and hunger from the human race.
The argument over whether we will ever achieve that supreme level of AI is ongoing. The creators and proponents of artificial intelligence insist that machine intelligence is beneficial and has been created to help the human race.
And99rew
Computers are very good at making artificial voices. I don’t know why you’d think otherwise, unless you think that anything less than 100% perfect “sucks”. Mechanical voice simulation, by contrast, is pretty clumsy. The original one is the Voder from 1939 – an impressive feat, but little better than early digital speech synthesis.
Tesla29
Right. Realistically speaking, “our robot friends” will definitely be efficient enough to replace all of our human friends. That flesh-and-blood thing may just become a thing of the past.
No fighting will be needed, as in a Terminator scenario. AI systems are patient. Just wait 15, 20 or 30 generations for humans to unlearn everything – communicating, writing and reading, growing their own food, etc. – letting people become fully dependent, and then pull the plug on this life support.
TomHarber
Good read
Mac McFisher
Yes. That is something that saddens me, especially because I’m from a classical deep learning background, and trying to compensate for my lack of quantum background is very hard (sometimes impossible).
Perhaps my vision of the possibilities of QC is a bit biased too. I have some friends who followed that route, and I was lucky to do my master’s at a uni with a fairly strong interest in QC, so we had several guest lecturers (like the one I mentioned) – and we all know how enthusiastically we talk in academia about our own field!
TommyG
First off I would like to say fantastic blog! I had a quick question that I’d like to ask if you do not mind. I was curious to find out how you center yourself and clear your mind before writing. I have had a hard time getting my thoughts out there. I truly do enjoy writing, however it just seems like the first 10 to 15 minutes are generally wasted simply trying to figure out how to begin. Any recommendations or hints? Thanks!
John Accural
The gist of the article is that, thanks to some research with A.I., it seems likely that the brain learns by considering all the experienced results of a particular action at the same time, weighing the probability of each outcome before acting. This is different from the widely held theory of learning, which holds that all past outcomes are averaged into one value before taking an action.
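A toy sketch of the difference (my own example, not from the article): an action that usually pays nothing but occasionally pays off big looks no better than a mediocre one if you keep only a running average, while a distribution over outcomes preserves the distinction:

```python
# Average-value learning vs keeping a distribution (toy numbers).
import random
from collections import Counter

random.seed(0)

def sample_reward():
    # An action that usually pays nothing but occasionally pays off big.
    return 10.0 if random.random() < 0.1 else 0.0

# 1) Classical view: collapse all experience into one running average.
avg, alpha = 0.0, 0.1
for _ in range(1000):
    avg += alpha * (sample_reward() - avg)

# 2) Distributional view: remember how often each outcome occurred.
counts = Counter(sample_reward() for _ in range(1000))
total = sum(counts.values())
dist = {r: n / total for r, n in sorted(counts.items())}

print(f"single value estimate: {avg:.2f}")   # ~1.0 - 'feels average'
print(f"outcome distribution:  {dist}")      # ~{0.0: 0.9, 10.0: 0.1}
# The distribution keeps 'probably nothing, but maybe really good',
# which the single average throws away.
```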
TonyHor
Hi Norbert. Great stuff. Love your articles 🙂
Mac McFisher
Yes, I agree with you when we think about using the same techniques we have for training (i.e. backpropagation, stochastic gradient descent), which require 1) a good dataset (size depending on the architecture) and 2) a good loss function – otherwise it is useless.
But I think with the recent achievements in quantum machine learning (I saw a paper somewhere that tried to use Grover’s algorithm as a surrogate) it could be a route worth exploring to mitigate the un-trainability of such models, as well as possibly requiring less data.
But of course, this is simply an optimization perspective. From a more Deep Learning perspective, having a better understanding of the brain will, without a doubt, bring more tools to the field (convolutional neural networks are a very good example of how understanding the brain better helps).
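Just to make requirements 1) and 2) concrete, here is a minimal classical training loop (a made-up toy problem, not from any paper): without a dataset and a differentiable loss, there is nothing for gradient descent to work on:

```python
# Minimal gradient-descent loop: a dataset, a loss, and parameter updates.
import numpy as np

rng = np.random.default_rng(0)

# 1) a good dataset: noisy samples of the line y = 3x + 1
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + 0.05 * rng.normal(size=100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    # 2) a good loss function: mean squared error
    loss = np.mean((pred - y) ** 2)
    # gradients of the loss w.r.t. the parameters (backprop done by hand)
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")  # ~3.00, ~1.00
```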
Dzikus99
Machines don’t require frequent breaks and refreshments like human beings do. They can be programmed to work long hours and can perform a job continuously without getting bored, distracted or even tired. Using machines, we can also expect the same kind of results regardless of timing, season and so on – something we can’t expect from human beings.
CaffD
Fears that AI might jeopardize jobs for human workers, be misused by malevolent actors, elude accountability or inadvertently disseminate bias and thereby undermine fairness have been at the forefront of the recent scientific literature and media coverage.
Oscar2
There would have to be massive advances in A.I. for that to happen. More important than the vocal cavity in determining what a singer sounds like is the brain. Take Frank Sinatra, for example. Imagine directly replacing a Katy Perry vocal with the tone of Frank Sinatra. It would sound exactly like Katy Perry with a deeper voice, because it would still have all of Katy’s mannerisms. It’s the brain that decides to hold the letter ‘n’ for a bit longer, or chooses to say “Aaa” instead of “I”, or does a little yodel at various points.
John Macolm
Good post Norbert
PiotrPawlow
Norbert, it’s not about tech only. It’s way more about human capital and preserving our planet.
Andrzej
Great read Norbert. Why only 4?
John Macolm
Not everyone can live with having no humans involved in the decisions drawn up and concluded by AI. The approach of skeptics resembles that of drivers who swear they will never get into an autonomous vehicle or allow IT code to make decisions concerning road safety.
Peter71
The potential of artificial intelligence to inadvertently cause destruction and damage cannot be ignored. What will help us control it better is research and in-depth study of artificial intelligence. Research alone can control the potentially harmful consequences of AI and help us enjoy the fruit of this innovation.
AI will not only change the way we think and live our lives but will also explore new horizons, be it space or the ocean. Humans are getting continually better at defining their desires and quickly transforming them into reality. Things will happen so fast that we will not notice the minor changes, and we will easily adapt to the changes they bring.
AKieszko
The thing is, it’s also really good at what it does – as in, people don’t know they are following a bot or have interacted with one. It’s also stuff like drones tracking a target, sometimes for weeks or months at a time, using image recognition or gait tracking. So many things will involve AI. At each point there will be points of failure and unexpected emergent behavior. Long story short, partly thanks to AI, we are all in a sort of weird shadow war.
Mac McFisher
Yes and no. It is a different paradigm, but that doesn’t mean we cannot use it to train a Deep model and afterwards use a classical one to do the inference at the user level. It all depends on the algorithms we are able to create. If you do a quick tour of arxiv, you will see a lot of classical ML algorithms already translated into the quantum paradigm [1]. This guy came to my uni once to give a seminar and I was mindblown by the advances we are already making in Deep quantum.
Of course, it is not the solution to all the problems. As you mentioned, a better understanding of the brain is important (the most important, IMO – I just got carried away with the promises of quantum :v) if we want to try to mimic it and also have architectures that have a meaning in their construction.
John Accural
The functions of the brain that store feelings of whether something seems promising don’t just tell you whether something has a certain chance of paying off; they can include “probably not going to do anything, but maybe really good”, “either really good or really bad”, “mostly sort of middling” and so on. Imagine that ‘right’ on this graph means good.