Are we ready for military AI?

Will Artificial Intelligence weapons soon be able to launch military operations independently of human input?

My article in Data Driven Investor, published on 21 February 2019.

Today, algorithms may come in charming shapes, such as Sophia, a robot with a lovely attitude and an enlightened philosophy.

Others, like Atlas, are being built to look like RoboCop brutes that can run, jump, and, maybe, shoot. Why not?

Regardless of how we, civilians, feel about it, Artificial Intelligence (AI) has entered the armaments industry. The world is testing electronic command and training systems, object recognition techniques, and drone management algorithms that provide the military with millions of photographs and other valuable data. Already, the decision to use an offensive weapon is frequently made by a machine, with humans left to decide only whether or not to pull the trigger. In the case of defensive weapons, machines often make autonomous decisions (to use the defensive systems) without any human involvement at all.

Which is scary. Will AI weapons soon be able to launch military operations independently of human input?

It’s clear that the military is developing smart technologies. After all, we owe many of the innovations we know from civilian applications to military R&D, including the internet itself (which began as ARPANET), email, and autonomous vehicles – all developed by the U.S. Defense Advanced Research Projects Agency (DARPA). But today, modern weaponry relying on machine and deep learning can achieve a worrisome autonomy, although the military officially claims that no contemporary armaments are fully autonomous. It does admit, however, that a growing proportion of arsenals meet the technological criteria for becoming fully autonomous. In other words, it’s not a question of whether weapons will be able to act without human supervision, but when, and whether we will allow them to choose targets and carry out attacks on their own.

There is another consideration worth noting here. While systems are still designed to leave the final decision to human beings, the reaction time required once the weapon has analyzed the data and chosen a target is frequently so short that it precludes reflection. With half a second to decide whether or not to pull the trigger, it is difficult to describe the human decision in those situations as fully autonomous.

Thinking weapons around the world 

Human Rights Watch, which has called for a ban on “killer robots,” has estimated that there are at least 380 types of military equipment employing sophisticated smart technology in operation in China, Russia, France, Israel, the UK, and the United States. Much publicity has recently focused on Hanwha, part of one of the largest weapons-manufacturing groups in South Korea. The Korea Times, calling it the “third revolution in the battleground after gunpowder and nuclear weapons,” has reported that together with the Korea Advanced Institute of Science and Technology (KAIST), the company is developing missiles that can control their speed and altitude and change course without direct human intervention. In another example, SGR-A1 sentry guns placed along the demilitarized zone between South and North Korea reportedly are capable of operating autonomously (although their programmers say they cannot fire without human authorization).

The Korean company Dodaam Systems makes autonomous robots capable of detecting targets many kilometers away. The UK, meanwhile, has been intensively testing the unmanned Taranis drone, set to reach its full capacity in 2030 and replace human-operated aircraft. Last year, the Russian government’s Tass news agency reported that Russian combat aircraft will soon be fitted with autonomous missiles capable of analyzing a situation and making independent decisions regarding altitude, velocity, and flight direction. And China, which aspires to become a leader in the AI field, is working hard to develop drones (especially those operating in so-called swarms) capable of carrying autonomous missiles that detect targets independently of humans.

A new bullet

Since 2016, the U.S. Department of Defense has been building an artificial intelligence development center. According to the program’s leaders, progress in the field will change the way wars are fought. Although former U.S. Deputy Secretary of Defense Robert O. Work has claimed that the military will not hand power over to machines, if other militaries do, the United States may be forced to consider it. For now, the department has made a broad, multi-billion-dollar AI development program core to its strategy, testing state-of-the-art remotely controlled equipment such as the Extreme Accuracy Tasked Ordnance (EXACTO), a .50 caliber bullet that can acquire targets and change its path “to compensate for any factors that may drive it off course.”

According to experts, unmanned aircraft will replace piloted aircraft within a matter of years. These drones can be refueled in flight, carry out missions against anti-aircraft forces, engage in reconnaissance missions, and attack ground targets. Going pilotless will reduce costs considerably as pilot safety systems in a modern fighter aircraft may add up to as much as 25% of the whole combat platform. 

Small, but deadly

Work is currently under way to tap into the potential of so-called insect robots, a specific form of nanobot that, according to the American physicist Louis Del Monte, author of the book Nanoweapons: A Growing Threat to Humanity, may become weapons of mass destruction. Del Monte argues that insect-like nanobots can be programmed to insert toxins into people and to poison water-supply systems. DARPA’s Fast Lightweight Autonomy program involves the development of house-fly-sized drones ideal for spying, equipped with “advanced autonomy algorithms.” France, the Netherlands, and Israel reportedly are also working on intelligence-gathering insect drones.

The limits of NGO monitoring and what needs to happen now

Politicians, experts, and the IT industry as a whole are realizing that the autonomous weapons problem is quite real. According to Mary Wareham of Human Rights Watch, the United States should “commit to negotiate a legally binding ban treaty [to]… draw the boundaries of future autonomy in weapon systems.” Meanwhile, the UK-based NGO Article 36 has devoted a lot of attention to autonomous ordnance, arguing that political control over such weapons should be regulated and based on a publicly accessible and transparent protocol. Both organizations have been putting considerable effort into developing clear definitions of autonomous weapons. The signatories of international petitions continue to try to reach politicians and present their points of view at international conferences. One of the most recent initiatives is this year’s pledge coordinated by the Boston-based Future of Life Institute, in which 160 companies from the AI industry in 36 countries, along with 2,400 individuals, declared that autonomous weapons pose “a clear threat to every country in the world” and committed to refrain from contributing to their development. The document was signed by, among others, Demis Hassabis, Stuart Russell, Yoshua Bengio, Anca Dragan, Toby Walsh, and Tesla and SpaceX founder Elon Musk.

However, until an open international conflict reveals what technologies are actually in use, keeping track of the weapons that are being developed, researched, and deployed is next to impossible.

Another obstacle faced in developing clear binding standards and producing useful findings is the nature of algorithms. We think of weapons as material objects (whose use may or may not be banned), but it is much harder to make laws to cope with the development of the code behind software, algorithms, neural networks and AI. 

Another problem is that of accountability. As is the case with self-driving vehicles, who should be held accountable should tragedy strike? The IT person who writes the code to allow devices to make independent choices? The neural network trainer? The vehicle manufacturer?

Military professionals who lobby for the most advanced autonomous projects argue that instead of imposing bans, one should encourage innovations that will reduce the number of civilian casualties. The point, they claim, is not to give algorithms the potential to destroy the enemy and civilian populations. Rather, the prime objective is to use these technologies to better assess battlefield situations, find tactical advantages, and reduce overall casualties (including civilian ones). In other words, it is about improved and more efficient data processing.

However, the algorithms unleashed on tomorrow’s battlefields may cause tragedies of unprecedented proportions. Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, warns that autonomous weapons will “follow any orders however evil” and “industrialize war.”

Artificial intelligence has the potential to help a great many people. Regrettably, it also has the potential to do great harm. Politicians and generals need to collect enough information to understand all the consequences of the spread of autonomous ordnance.

.    .    .

Works cited

YouTube, BrainBar, My Greatest Weakness is Curiosity: Sophia the Robot at Brain Bar, link, 2018.

YouTube, Boston Dynamics, Getting some air, Atlas?, link, 2018.

The Guardian, Ben Tarnoff, Weaponised AI is coming. Are algorithmic forever wars our future?, link, 2018. 

Brookings, Michael E. O’Hanlon, Forecasting change in military technology, 2020-2040, link, 2018.

Russell Christian/Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, link, 2018. 

The Korea Times, Jun Ji-hye, Hanwha, KAIST to develop AI weapons, link, 2018.

BAE Systems, Taranis, link, 2018.

DARPA, Faster, Lighter, Smarter: DARPA Gives Small Autonomous Systems a Tech Boost, link, 2018.

The Verge, Matt Stroud, The Pentagon is getting serious about AI weapons, link, 2018. 

The Guardian, Mattha Busby, Killer robots: pressure builds for ban as governments meet, link, 2018. 

.    .    .

Related articles

– Artificial intelligence is a new electricity

– Robots awaiting judges

– Only God can count that fast – the world of quantum computing

– Machine Learning. Computers coming of age

33 comments

  1. Jang Huan Jones

    Why does this sub pander to Elon Musk’s technology anxieties? Artificial Intelligence (AI) isn’t some kind of self-aware menace that’s going to steal your job or launch all the bombs. It also isn’t something that we’re just going to “stumble onto”; it will take many years of directed basic research to even begin to develop what Elon Musk is envisioning. Finally, we would have to give such a technology oversight and management of critical systems before the nightmare scenarios that keep Elon Musk up at night could be realized.

  2. John Accural

    One of those “dog robots” from Boston Dynamics with a gun or three mounted on it will be extremely difficult to deal with, especially if their owners have them set to kill-on-sight – which they will in this sort of scenario, because the side that doesn’t will be at a massive disadvantage.
    So yeah. These killer robots are basically our worst nightmare apart from nuclear weapons. Countries like Russia, China, and the US waging wars with these things could leave massive numbers of dead with no way for normal living soldiers to effectively fight back.

    • Guang Go Jin Huan

      Emotions are judgments about the relevance of the current situation to a person’s goals. For example, if someone gives you $1 million then you will probably be happy because the money can help you to satisfy your goals of surviving, having fun, and looking after your family. Robots are already capable of doing at least a version of appraisal, for example when a driverless car calculates the best way of getting from its current location to where it is supposed to be. If emotions were just appraisals, then robot emotions would be just around the corner.

  3. Piotr91AA

    Human Rights Watch called for a ban on “killer robots” after estimating that there are at least 380 types of military equipment that employ sophisticated “smart” (AI) technology in operation in China, Russia, France, Israel, the UK, and the United States.

  4. Peter71

    One concern is that AI programs may be programmed to be biased against certain groups, such as women and minorities, because most of the developers are wealthy Caucasian men. Recent research shows that support for artificial intelligence is higher among men than women.
    Algorithms have a host of applications in today’s legal system already, assisting officials ranging from judges to parole officers and public defenders in gauging the predicted likelihood of recidivism of defendants. An AI-based criminal offender profiling application assigns an exceptionally elevated risk of recidivism to black defendants while, conversely, ascribing low-risk estimates to white defendants significantly more often than statistically expected.

  5. AI systems are capable enough to reduce human effort in numerous areas. To conduct different operations, many companies in the industry are using artificial intelligence to create machines that perform various activities regularly. Artificial intelligence applications help get the work done faster and with more accurate results.

  6. Oscar P

    Say we have an insurance claims processing algorithm which decides if a claim should be approved or not. The AI may be looking at income, gender, demographics, age, location, credit history, claims history, and more to determine whether a claim is likely to be valid. If a claim is approved, we can take that transaction and change one variable, asking the model what if it was a female instead of a male. If the claim is then denied, then you know the model is biased.
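
    For illustration, here is a minimal sketch of that single-variable (counterfactual) check in Python, assuming a hypothetical claims model with a predict() method and made-up field names rather than any real library:

    # Counterfactual bias check: flip one protected attribute and see whether
    # the model's decision changes. The model and field names are hypothetical.

    def counterfactual_gender_check(model, claim: dict) -> bool:
        """Return True if changing only the gender field flips the decision."""
        original = model.predict(claim)

        flipped = dict(claim)  # copy the claim, then change a single variable
        flipped["gender"] = "female" if claim["gender"] == "male" else "male"
        counterfactual = model.predict(flipped)

        return original != counterfactual


    class ApprovalModel:
        """Stand-in model: approves claims below an income-scaled threshold."""
        def predict(self, claim: dict) -> str:
            return "approved" if claim["amount"] < 0.1 * claim["income"] else "denied"


    if __name__ == "__main__":
        claim = {"gender": "male", "income": 80_000, "amount": 5_000}
        # Prints False, because this stand-in model never looks at gender;
        # a True result on a real model would be evidence of bias.
        print(counterfactual_gender_check(ApprovalModel(), claim))

    On a real system you would run this over many approved claims and look at the rate of flipped decisions, not at a single example.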

    • Tesla29

      Realistically speaking, “our robot friends” will definitely be efficient enough to replace all of our human friends. That flesh-and-blood thing may just become a thing of the past.
      No fighting will be needed like in a Terminator scenario. AI systems are patient. They can just wait 15-20 or 30 generations for humans to unlearn everything, including communicating, writing and reading, growing their own food, etc. – letting the people become fully dependent and then pulling the plug on this life support.

      Looks logical to me. Why would intelligent, independent systems need humans?

      • John Accural

        Thanks to facial recognition artificial intelligence that is looming on the horizon, the robots will know exactly where you are and can just come to you and eliminate you. Why do we keep pretending that this technology is for the benefit of humanity, when we know for a fact that technology has led us down this path in the first place? So we’re going to use technology to save us from ecological disaster; yeah, right.
        Oligarchs are driving us off a cliff because they know there’s nothing they can do to stop this momentum. Which is why so many of them are building bunkers and have reached a level of greed we have never seen in history.

        • Jang Huan Jones

          Even if it’s completely untrue, it gives a taste of what the future may hold. Having said that, the issue of an AI-driven war between superpowers and the issue of AI being used against a certain political ideology are orders of magnitude apart. One is an existential threat to the entirety of humanity and the other is not.

  7. Karel Doomm2

    The quantum internet in the long term is a fantastic development that reshapes security and encryption. But during early adoption, it may create some imbalance as systems transition from legacy models to QC.

    Your thoughts on mitigating this, please.

    • Jack666

      Right, we already knew we were on the right track, but it feels good to see it mentioned first in almost all these kinds of charts!

    • Zoeba Jones

      Yes, robots are unfeeling machines that make decisions based on numbers and somehow the one that picks the targets is not responsible anymore. It’s blaming the bullet and not the one who pulls the trigger.

    • Oscar P

      In the case of a large initial data set, the most frequent errors are involuntary: Either the work was not done well, those who created the data sets had biases themselves, or bias emerged in a completely involuntary and unconscious fashion, because of variables and hidden correlations. Data bias is, by far, the biggest problem AI developers face, and it’s at the heart of any number of recent AI debacles, including one company’s botched facial recognition system, which was mysteriously bad at identifying women of color.

      • AndrewJo

        I respect your writings and viewpoint. However, a tremendous amount of blame and accountability rests with the technology firms. Of course the military wants new technology. The cut-throat gains made by big firms did not take public privacy into account, nor did their headlong innovation consider the evolution of humanity. You might have Tim drinking his Mountain Dew in Mountain View, CA, thinking about how he can work on the new brain interface. The public has been studied by one of the biggest search engines. They built the algorithms around human behavior. No governance equals the current state.

    • Tom Jonezz

      The algorithm may not see bias the way people see bias. The underlying dataset is inherently flawed: all people develop at slightly different rates, and their experiences, even those of equal social force or value, leave varying impressions that may cause differing effects. Any algorithmic mechanism applied to such data will observe and assign a simple value to each circumstance, without the understanding to know which of those formative processes have or have not left their mark.

    • John Accural

      Can a robot truly make a good choice about taking someone’s life? Not without a huge amount of information being input. Can a robot handle that? Absolutely. Can a robot handle that autonomously? Without any human or computer interaction? Merely on firsthand experience and whatever intel they were given? Can a robot, completely on its own, decide the fate of a human being based on these things and not a simple hitlist?

      • Jang Huan Jones

        Anyone else worry that he says things like this publicly on an international platform to push his largest competitors (or enemies depending on who you ask) to try as hard as they can to create AIs themselves, first? AIs that we’ve been warned might be our undoing? Why fight your opponent if they’re already building the means to their own destruction? Especially when fueling the fire is free.

      • Guang Go Jin Huan

        If robots ever get good at language and form complex relationships with other robots and humans, then they might have emotions influenced by culture.

  8. PiotrPawlow

    Experts are warning of the threat posed by China’s use of artificial intelligence to develop a surveillance state, and say the risk of such authoritarian behaviour spreading to other parts of the world is increasing.

    “If I were asked which was the bigger threat from China to the West, is it Huawei or is it their research on artificial intelligence, I would say it’s their research on artificial intelligence,” Professor Austin said.
    China has been investing heavily in AI – generally referred to as the development of computer systems that can perform tasks normally requiring human intelligence – in recent years, and research by its institutions in the field has surged.

  9. Zidan78

    I would like to add a correction. AI has been a thing for a while, but around 2010 an innovation called deep learning had a huge breakthrough. It is based on neural networks and the ability to recognise patterns and “meaning” by itself, without programmers giving it a goal. The innovation is a way of thinking that branched off into a few algorithms that accelerated the process. So yes, big data helps, but deep learning is the game changer.

  10. Tom Jonezz

    Is the use of intelligent machines a threat to society, an opportunity or just technology? What is AI and how will it develop in the future?

  11. John Accural

    I have a real challenge with the label of “Artificial Intelligence”. Artificial Intelligence is not one until it becomes self-aware (sentient). Until that time comes, saying “AI” is in my opinion more of a marketing term than anything else. At this time, I prefer to define it as a set of technologies which is able to aggregate data and present it in a way that facilitates the process of decision making. To allow fully autonomous weapons to take action on decisions made from weighted statistical data analysis and normalization is akin to playing a “smarter” version of Russian roulette. Even if we ever reach a level where machines are self-aware, they should always be nothing more than tools aiding the decision-making process of humans – people in this case who are tasked with “pressing the button” to either launch or defer the launch of a weapon. Some of the greatest battles and resistance wars – from the Battle of Thermopylae to the partisan resistance in various countries occupied by the Nazis in WWII – had no “statistical” business of being fought, yet people fought them and defeated enemies with far superior resources. Sometimes the best decisions are made based on “gut feel”, not computational analysis.

    • AndrewJo

      Central to all these applications is the ability of a quantum internet to transmit quantum bits (qubits) that are fundamentally different than classical bits. Whereas classical bits can take only two values, 0 or 1, qubits can be in a superposition of being 0 and 1 at the same time. Moreover, qubits can be entangled with each other, leading to correlations over large distances that are much stronger than is possible with classical information. Qubits also cannot be copied, and any attempt to do so can be detected.
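
      For what it’s worth, the standard way to write that down in Dirac notation (nothing here is specific to any particular quantum-internet design) is

      \[ \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle, \qquad \alpha,\beta \in \mathbb{C}, \quad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1, \]

      while an entangled pair shared between two distant nodes, the basic resource a quantum internet would distribute, looks like \( \lvert\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}(\lvert 00\rangle + \lvert 11\rangle) \); measuring either qubit fixes the other’s outcome, which is where the stronger-than-classical correlations come from.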

      • Jang Huan Jones

        I find this pretty funny; it seems like a lot of people don’t notice the fact that there’s a double agenda and bias here. When Elon Musk says to worry about Russian artificial intelligence and technology, what he is really saying is: pay me lots of money to make technology.

    • SimonMcD

      When you speak to philosophers, they act as if these systems will have moral agency. At some level a toaster is autonomous. You can task it to toast your bread and walk away. It doesn’t keep asking you, ‘Should I stop? Should I stop?’ That’s the kind of autonomy we’re talking about.

    • Oscar P

      A model must often be kept up-to-date with new trends, but it also needs to be analyzed for bias creeping in, in unexpected ways. For instance, there could be subtle biases that may come out in feedback loops as the model retrains itself over multiple iterations.

  12. Zoeba Jones

    What an exciting time to be alive! Hope we can take good advantage of technology.