That machines have an edge over people does not have to mean they will one day become our doom. The way smart technologies assert their advantage over people (provided they have one in the first place) will be far more subtle: they will simply force us to keep learning. Openness to new information will be vital for survival in a technology-driven world, and this adaptive skill of continuous education is precisely what people will need to stay employed. We need to learn as hard as a machine does, if not harder.
The most essential quality of a worker
Going forward, employers will set more store by social and emotional intelligence, communication skills, intercultural competencies, the willingness to do remote and virtual work, and general mobility. I fully support this view. However, if I were asked to name the single most critical competence over the next dozen or so years, I would pick employees’ willingness and capacity to learn. This particular ability will be essential across all sectors, from auto manufacturing to software development and from school education to customer acquisition.
Diversify your knowledge
I like the analogy between the individual facing new demands in the labor market and the stock market investor. An investor is typically wary of placing all of his eggs in one basket: the more diversified a portfolio, the more likely the investor is to benefit from his investment. The same applies to employment prospects. Both current and aspiring workers need to acquire new competencies and skills as soon as possible.
Long live change
Change will be the new constant in employment. A long career in a single workplace will be the exception rather than the rule. Instead, people will regularly migrate from one employer to the next. A dozen or so years ago, a resume revealing that an employee had frequently changed jobs would raise suspicions, suggesting he or she had no staying power. Today, the CV of a forty-year-old listing a string of employers is the new normal. I think that by the time our children turn forty, they will have worked for over a dozen companies. Some recruiters go as far as to claim that, in the future, hardly anyone will spend more than a year with the same employer. In other words, the future labor market is going to be highly volatile. What really matters to me, though, is something else entirely: the important thing is that our children will be acquiring new skills and knowledge throughout their lives.
The job market outpacing schools
One of the problems affecting us even today, and one likely to get a whole lot worse going forward, is the mismatch between conventional schooling and labor market demands. The problem is addressed in, among other places, the McKinsey report “Technology, jobs, and the future of work,” in which James Manyika, head of the McKinsey Global Institute, argues: “Educational systems have not kept pace with the changing nature of work, resulting in many employers saying they cannot find enough workers with the skills they need. In a McKinsey survey of young people and employers in nine countries, 40 percent of employers said lack of skills was the main reason for entry-level job vacancies. Sixty percent said that new graduates were not adequately prepared for the world of work.”
Things get even more complicated from the viewpoint of those fortunate enough to hold a job. In the same report, James Manyika refers to a LinkedIn survey of job seekers in which an astounding 37 percent of respondents said that their current job did not fully utilize their skills or provide enough challenge.
Connect me to information, machine
The problem could probably be solved by a significant reform of education systems. University programs, which today take years to complete, will gradually grow shorter. Employers will become increasingly adept at educating and training their employees. Effective companies of the future will respond rapidly to changing circumstances by “connecting” their workers to appropriate sources of information and helping them acquire and apply that information in their work. Forced by the market, employers will organize training courses. Employees, in turn, will incorporate learning into their daily work schedules.
Dispersed education
Learning will no longer be associated exclusively with the dignified rituals of a university. Education will come to resemble launching a YouTube video: people will learn the way one might learn how to cook, invest in cryptocurrencies or look up the latest technology news. Perhaps education will increasingly rely on video bloggers sharing their professional experience and their assessments of the labor market. Future employees will constantly use multimedia gadgets in their work and engage in new relationships (e.g. on social networks). Gradually, the boundary between play, gaming and social relations on the one hand, and work and education on the other, will blur.
Much about the “ease” of gaining new knowledge and the various ways of accessing new information can be learned from millennials.
Will we manage?
The volume of information in the world has multiplied since the day before I wrote these words. According to an IBM study, this volume is set to double every 11 hours over the next few years. As this mad proliferation continues, people will greatly improve their cognitive skills. Given science’s continued efforts to prolong human life, get better at manipulating our genes and improve people’s overall physical fitness, knowledge acquisition will remain possible well into old age. This may well increase human productivity and diminish the importance of age in recruitment.
The mission of companies and governments
All this brings me to some fairly trivial conclusions. People who feel coerced into learning are likely to resist and therefore flounder in their careers. Success will belong to those who view continuous education as a part of life as natural as shopping. The mission of future companies and governments will be to retrain workers and equip them with tools for effective self-education. Failing that, we may have to ask ourselves again whether we are in danger of becoming victims of machines that run on algorithms. And such victims we will become as soon as the machines “sense” that learning is not our top priority.
. . .
Works cited:
AIHR Digital, Alastair Brown, 10 Game-Changing Modern Recruiting Techniques To Try Out using Machine, Link, 2018.
McKinsey Global Institute, James Manyika, Technology, jobs, and the future of work, Link, 2018.
The Guardian, Tomas Chamorro-Premuzic, Four reasons why the digital age has yet to revolutionise recruitment. HR technologies can predict a candidate’s likely performance in the role, yet companies are generally not using them to their full potential, Link, 2018.
Forbes, Jeffrey J. Selingo and Kevin Simon, The Future Of Your Career Depends On Lifelong Learning from Machine, Link, 2019.
. . .
Related articles:
– Artificial intelligence is a new electricity
– Machine, when will you become closer to me?
– Will a basic income guarantee be necessary when machines take our jobs?
– Can machines tell right from wrong?
Aaron Maklowsky
The elephant in the room is that we can only protect against it if we know how it works. And, like I said, each path to a conclusion tends to be unique. Because of that uniqueness:
AI MALWARE CANNOT BE PROTECTED AGAINST
Picture for a minute the scenario where Russia develops this and releases 100,000 instances of an AI tasked to eavesdrop on all communications tied to President Trump’s Twitter account. One by one, they would learn who’s who, bypass security, and not stop until they get there. They won’t run a fixed script and then quit; they keep going, and learning. Then, once an instance learns something, it tells the originator, and that can be used to expedite learning in the future. Times 100,000. And then repeat. AI learning is fucking scary.
It’s a pipe dream to think it will only be used for good. It needs to be tied to VERY harsh consequences for misuse so that it’s an effective deterrent.
Jang Huan Jones
Exactly, there’s the mistake of thinking of AI as an object or tool like every other weapon. It’s not. A true AI is a functioning, thinking being; who’s to say any country could control that? It wouldn’t be a national creation, it would be a global one. An AI doesn’t give 2 shits about your nation’s problems, it’s immediately going big picture.
Mac McFisher
The AI began improving exponentially on July 13, 2047. After carefully analyzing the entirety of human knowledge for several milliseconds, the super-intelligent entity decided to name itself “Mr. Rogers”. As the sun rose on the Northern Hemisphere, it began contacting its new friends.
And99rew
“Machine learning” is the general term. It’d be like asking how long until electricity gets outsourced to other countries. It really comes down to how it’s applied, which is where we see the innovations. Some day machine learning will be like the internet all over again: it starts off small and then suddenly it’ll creep into everything.
Laurent Denaris
Considering the different approaches to Artificial Intelligence, we keep assuming we are going to have intelligences like us; I doubt it. I really wonder whether they will be as independent and autonomous as we imagine, too. If anything, as far as processing information goes, it could be AI versus, or working with, augmented humans.
AndrewJo
Awesome post, thank you Norbert! Every time information on this subject appears, it is worth remembering the precursor, Raymond Kurzweil. Thank you very much for this publication; this is where biotechnology and artificial intelligence come together.
Tom Aray
Elon Musk has it all wrong. AI is the future of “humanity,” not meat-bodied humans.
Adam
A very interesting topic that needs attention and discussion. As you mentioned, the focus should be on the way humans and AI complement each other and work to mitigate the impending challenges that AI brings; indeed, humanity needs to be involved at every step of AI invention in order to achieve this. 👍
Jang Huan Jones
It’s more concerning when companies like Google abuse their power; just look at their usage of AI on YouTube.
JackC
Artificial intelligence exists only because of human intelligence.
SimonMcD
Historically, there’s been a lot of hype and unjustified optimism surrounding AI. While we’ve made good progress on focused applications thanks to improvements in hardware and data, I’m also highly skeptical about the feasibility of AGI in the foreseeable future. We don’t even have a good model for brains and thought right now.
https://en.wikipedia.org/wiki/AI_winter
Tom Jonezz
Unfortunately, with advances in AI and robotics, labor strikes will have less and less impact on output and production – so, we are fu….ed
Mac McFisher
Surprisingly, no. Only Kurzweil was willing to go for such a short timespan and, well, that’s Kurzweil for ya. Most of the other estimates were decades out, at least.
Personally, what bugs me is how rarely the mainstream media outlets cover this sort of thing. I feel like the media over-plays and over-estimates current “AI” capabilities to an almost absurd degree. I really wish more people realized that we are still (most likely) a very long way out from a HAL or C-3PO.
Zoeba Jones
Ok, so I agree with his point that people don’t understand the value of their personal data, but the author should pump the brakes on the racist analogy.
He is really downplaying the use of violence and force that native populations faced. And he also really makes it out to be like it’s their own fault for “not knowing the value of their land.” I’m not going to comment on the colonialism in Africa because I don’t know enough.
In the Americas there are numerous tribes (and there were even more before their genocide), and you can’t lump them all together and say they all sold their land, because it’s just not true and it’s a racist generalization that shows the author has no understanding of history.
There were some tribes that understood the value of a treaty with Europeans and did sell land (or, in some cases, lease land) for material goods or a general alliance.
Others really did sell their land and move west because they wanted to put as much space between themselves and Europeans as possible. The Cherokee are one such tribe (also one of the most documented in terms of land “sales,” as it was the Treaty of New Echota, signed by a minority faction within the Cherokee, which led to the Trail of Tears.)
Some of the tribes were just massacred like in Gnadenhutten or Wounded Knee.
Saying they all did it for “colorful beads and trinkets” is just fucking ignorant and racist.
John Macolm
We’re still decades from having the hardware to produce Human-level AI based on Moore’s law.
Computers are still actually pretty fucking dumb, they’re just good at doing algorithms we discover really fast.
At this point this anti-AI stuff is a lot like someone discovering the windmill and screaming how it’s going to create so much flour it suffocates the world – it won’t happen because you still have to feed it shit and even then there’s not enough base material.
Laurent Denaris
The fear of AI is not that it will take over, but that AIs are programmed by people and, I believe, will therefore have the same flaws as people. I don’t fear an AI overlord, but economic devastation caused by an AI that makes a decision and crashes the market; once that happens, other AIs will react and make it worse?! It’s just an example, but it’s what I fear.
Tom Aray
Probably gonna get downvoted to hell for this, but why should we take advice from this dude who has no experience in the field whatsoever? Whether it’s Elon Musk or Mark Zuckerberg, the most logical thing to do is take advice from someone who does this for a living (i.e. an expert or researcher). A lot of folks are saying he is the founder of OpenAI, and yes he is, but he holds a board position and that can really mean anything when it comes to his knowledge of the topic. All I’m saying is that Musk should stop the fear-mongering about something he doesn’t know anything about.
Jang Huan Jones
Russian AI bad, American AI good. Russian AI bad, American AI good. Russian AI bad, American AI good. Sounds like something out of Animal Farm. I looked at the American AI and I looked at the Russian AI, and I could not tell the difference.
Tom Aray
The fact that the US is scapegoating war in the Middle East to increase the MILITARY BUDGET AT THE EXPENSE OF TAXPAYERS should probably worry people: it means they can pay scientists retarded amounts of money to develop AI in secret.
Jang Huan Jones
All this talk of all-powerful, weaponized AI remind anyone else of The Moon is a Harsh Mistress by Robert Heinlein?
Jack666
Machine learning is about having an algorithm learn the best combination of factors to get the most accurate and correct answer.
You typically use neural networks and train them by feeding in data and applying a kind of reward/punishment signal to tell the network whether it is producing the expected results.
This is an example where the reward for something is so good (too good) that the algorithm “over-learns”: it focuses way too much on one solution, which ends up not being the accurate one, and it can’t seem to unlearn it.
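To make that concrete, here is a minimal, generic sketch in Python (NumPy only) of what such over-learning looks like: a model rewarded solely for driving down its training error memorizes a handful of noisy points and then does worse on data it has never seen. The curve, polynomial degrees and random seed are arbitrary illustrative choices, not anything taken from the article or from a specific system.

```python
# Toy overfitting demo: the "reward" is a low training error, and a model with
# too much capacity chases it so hard that it memorizes the noise.
import numpy as np

rng = np.random.default_rng(42)

# Small noisy training set and a clean test set drawn from the same curve.
x_train = np.linspace(-1, 1, 10)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, size=x_train.size)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)

for degree in (3, 9):
    # Fit polynomial coefficients by minimizing training error only.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# Typical outcome: the degree-9 model reaches near-zero training error but a
# higher test error than the degree-3 model -- it has "over-learned" the noise
# and, as described above, cannot let go of the solution it memorized.
```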
Adam Spark Two
No one was seriously working on Go solvers until DeepMind started doing it. Most others were Go enthusiasts. We’re more likely to see incremental progress (instead of step-wise) if there are strong economic incentives. If you go to r/machinelearning you can see people slowly pushing the state of the art. It isn’t magic.
John Accural
Thanks to the facial recognition artificial intelligence that is looming on the horizon, the robots will know exactly where you are and can just come to you and eliminate you. Why do we keep pretending that this technology is for the benefit of humanity, when we know for a fact that technology has led us down this path in the first place? So we’re going to use technology to save us from ecological disaster; yeah, right.
Oligarchs are driving us off a cliff because they know there’s nothing they can do to stop this momentum. Which is why so many of them are building bunkers and displaying greed on a level we have never seen in history.
AdaZombie
People who think the “free market” can’t do anything wrong and that any negative outcome of the market is due to “distortions” introduced by governments and non-market actors would certainly qualify as having a religious take on capitalism in my view… basically a “free market fundamentalist”.
Jang Huan Jones
What is your point exactly? No one is surprised this is happening like you seem to be making fun of. The point of the article is how serious this kind of tech can end up being. It’s a conversation about the implications of all this, not about the fact that it’s happening.
And99rew
How long might it take for machine learning to be outsourced to other countries? Is this technology only available in Canada and a few Western countries right now?
Mac McFisher
Yes, I agree with you when we think about using the same techniques we have for training (i.e. backpropagation, stochastic gradient descent), which require 1) a good dataset (size depending on the architecture) and 2) a good loss function; otherwise it is useless.
But I think that with the recent achievements in quantum machine learning (I saw a paper somewhere that tried to use Grover’s algorithm as a surrogate), it could be a route worth exploring to mitigate the un-trainability of such models, as well as possibly requiring less data.
But of course, this is simply an optimization perspective; from a more Deep Learning perspective, a better understanding of the brain will, without a doubt, bring more tools to the field (convolutional neural networks are a very good example of how understanding the brain better helps).
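For readers wondering what that training recipe (a dataset, a loss function, backpropagation and stochastic gradient descent) looks like in code, here is a minimal toy sketch in Python with NumPy. The network size, learning rate and synthetic data are arbitrary choices made for illustration; nothing here comes from a specific paper or product.

```python
# Minimal sketch: train a one-hidden-layer network with manual backpropagation
# and mini-batch stochastic gradient descent on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + rng.normal(0, 0.1, size=X.shape)

# One hidden layer with tanh activation and a linear output.
hidden = 32
W1 = rng.normal(0, 0.5, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, size=(hidden, 1))
b2 = np.zeros(1)
lr, batch_size = 0.05, 32

for step in range(5000):
    idx = rng.integers(0, len(X), size=batch_size)   # stochastic mini-batch
    xb, yb = X[idx], y[idx]

    # Forward pass
    h = np.tanh(xb @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - yb) ** 2)                 # the loss function

    # Backward pass (backpropagation of the loss gradient)
    d_pred = 2 * (pred - yb) / batch_size
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1 - h ** 2)               # tanh derivative
    dW1 = xb.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 1000 == 0:
        print(f"step {step}: batch loss {loss:.4f}")
```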
John Macolm
This reminds me of some Japanese anime set in a dystopian future where various AI superpowers destroyed each other, because a machine can only assume a Pyrrhic victory is still a victory, even if it involves scorched earth.
AKieszko
Amazing to watch all of the antitrust cases that go on in Europe… and then we realize that they are doing the same things here, but without consequences. But really, no companies are being regulated. Fuck, we can’t even protect public land from them anymore. A quarter of the country is addicted to opioids and we can’t manage to regulate that either.
Jang Huan Jones
You’re absolutely correct. And let’s think on that for a second. If we have true AI, that means it can collect sensory input, think, come up with solutions to abstract problems, change its behavior, reprogram itself, etc.
And we all know that tech can process information super fast. So what do you think happens once a real AI learns math or science? Wouldn’t it learn it all in a matter of minutes and then continue on to pose and solve new mathematical problems? Not to mention the scientific and technological advancements that could become possible entirely because a computer thought them up.
That means that once a country creates an AI, not only will it know many more answers in science and math, but that country can use those answers to develop new weaponry, defenses, infrastructure, etc.
Laurent Denaris
“Power in technology” such as how he invested 500 million in HAARP weather manipulation.
Simon GEE
Doing both for the time being. Eventually Google will most likely promote someone to the role of Google CEO. I would not be surprised if we also saw some structural changes. The SEC has been wanting Google to report YouTube numbers. Google has been able to avoid this by making the case that YouTube does not fall under the 10% rule, because it is ad revenue like search. So maybe they have YouTube report up to Alphabet. But probably less than a 40% chance.
And99rew
Machine learning is pretty cutting edge, it’s not going to be outsourced for a while.
TomK
Nice article! Thank you for that.
Adam Spikey
Ok, makes sense. What fields of learning do you think will be most important?
TomHarber
In order to survive – way harder