Will algorithms commit war crimes?


Today, algorithms may come in charming shapes, such as Sophia, a robot with a lovely attitude and an enlightened philosophy.

Others, like Atlas, are being built to look like RoboCop-style brutes that can run, jump, and, maybe, shoot. Why not?

Regardless of how we civilians feel about it, Artificial Intelligence (AI) has entered the armaments industry. The world is testing electronic command and training systems, object recognition techniques, and drone management algorithms that provide the military with millions of photographs and other valuable data. Already, the decision to use an offensive weapon is frequently made by a machine, with humans left to decide only whether or not to pull the trigger. In the case of defensive weapons, machines often make the decision to fire autonomously, without any human involvement at all.

Which is scary. Will AI weapons soon be able to launch military operations independently of human input?

It’s clear that the military is developing smart technologies. After all, we owe many of the innovations we know from civilian applications to military R&D, including the internet itself (which began as Arpanet), email, and autonomous vehicles – all rooted in work by the U.S. Defense Advanced Research Projects Agency (DARPA). But today, modern weaponry relying on machine learning and deep learning can achieve a worrisome degree of autonomy, even though the military officially claims that no contemporary armaments are fully autonomous. It does admit, however, that a growing proportion of arsenals meet the technological criteria for becoming fully autonomous. In other words, it is not a question of whether weapons will be able to act without human supervision, but of when – and of whether we allow them to choose targets and carry out attacks on their own.

There is another consideration worth noting here. While systems are still designed to leave the final decision to human beings, the reaction time required once the weapon has analyzed the data and chosen a target is frequently so short that it precludes reflection. With half a second to decide whether or not to pull the trigger, it is difficult to speak of the humans in those situations as acting autonomously themselves.
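
To make that timing problem concrete, here is a minimal, purely illustrative sketch of such a decision gate. Every name, threshold, and timing value below is an assumption invented for this example rather than taken from any real weapon system; the point is simply that when the veto window is shorter than human reaction time, “human control” exists mostly on paper.

```python
# Illustrative only: a toy human-in-the-loop decision gate.
# All names, thresholds, and timings are invented for this sketch.

HUMAN_REACTION_TIME_S = 0.7   # assumed median human choice reaction time
VETO_WINDOW_S = 0.5           # the half-second window discussed above

def engagement_gate(confidence: float, veto_window_s: float) -> str:
    """The machine selects a target; a human may veto within a fixed window."""
    if confidence < 0.9:
        return "no engagement recommended"     # machine filters weak matches
    if veto_window_s < HUMAN_REACTION_TIME_S:
        # The window closes before a human can plausibly react,
        # so the 'human decision' is nominal rather than meaningful.
        return "engaged: veto window elapsed before a human could intervene"
    return "holding for human decision"

print(engagement_gate(confidence=0.95, veto_window_s=VETO_WINDOW_S))
```

Widening veto_window_s past the assumed reaction time flips the outcome, which is the whole problem in miniature.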


Thinking weapons around the world 

Human Rights Watch, which has called for a ban on “killer robots,” estimates that at least 380 types of military equipment employing sophisticated smart technology are operating in China, Russia, France, Israel, the UK, and the United States. Much publicity has recently focused on Hanwha, one of South Korea’s largest weapons manufacturers. The Korea Times – calling such weapons the “third revolution in the battleground after gunpowder and nuclear weapons” – has reported that, together with the Korea Advanced Institute of Science and Technology (KAIST), Hanwha is developing missiles that can control their speed and altitude and change course without direct human intervention. In another example, the SGR-A1 sentry guns placed along the demilitarized zone between South and North Korea are reportedly capable of operating autonomously (although their programmers say they cannot fire without human authorization).

The Korean company Dodaam Systems makes autonomous robots capable of detecting targets many kilometers away. The UK, meanwhile, has been intensively testing the unmanned Taranis drone, set to reach its full capacity in 2030 and replace human-operated aircraft. Last year, the Russian government’s Tass news agency reported that Russian combat aircraft will soon be fitted with autonomous missiles capable of analyzing a situation and making independent decisions regarding altitude, velocity, and flight direction. And China, which aspires to become a leader in the AI field, is working hard on drones – especially ones operating in so-called swarms, as sketched below – capable of carrying autonomous missiles that detect targets independently of humans.
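
The “swarm” behavior mentioned above is usually illustrated with Craig Reynolds’ classic boids rules – cohesion, separation, and alignment – in which coordinated group movement emerges from purely local steering. The sketch below is that textbook heuristic and nothing more; it is not based on any military system, and all parameters are arbitrary.

```python
import random

# Classic boids-style flocking: each agent steers by three local rules.
# Purely illustrative; parameters are arbitrary, not from any real system.
class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(agents, cohesion=0.01, separation=0.1, alignment=0.05):
    for a in agents:
        others = [b for b in agents if b is not a]
        cx = sum(b.x for b in others) / len(others)   # flock centroid
        cy = sum(b.y for b in others) / len(others)
        a.vx += (cx - a.x) * cohesion                 # steer toward the group
        a.vy += (cy - a.y) * cohesion
        for b in others:                              # avoid crowding neighbors
            if abs(b.x - a.x) + abs(b.y - a.y) < 5:
                a.vx -= (b.x - a.x) * separation
                a.vy -= (b.y - a.y) * separation
        avx = sum(b.vx for b in others) / len(others) # match neighbors' heading
        avy = sum(b.vy for b in others) / len(others)
        a.vx += (avx - a.vx) * alignment
        a.vy += (avy - a.vy) * alignment
    for a in agents:                                  # integrate positions
        a.x += a.vx
        a.y += a.vy

swarm = [Agent() for _ in range(20)]
for _ in range(100):
    step(swarm)
```

The point of the illustration is that there is no central controller: each agent reacts only to its neighbors, which is what makes swarms both scalable and hard to defend against.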

A new bullet

Since 2016, the U.S. Department of Defense has been building an artificial intelligence development center. According to the program’s leaders, progress in the field will change the way wars are fought. Former U.S. Deputy Secretary of Defense Robert O. Work has claimed that the military will not hand power over to machines, but he has also acknowledged that if other militaries do, the United States may be forced to consider it. For now, the department has established a broad, multi-billion-dollar AI development program as a core part of its strategy, testing state-of-the-art remotely controlled equipment such as the Extreme Accuracy Tasked Ordnance (EXACTO), a .50 caliber bullet that can acquire targets and change path “to compensate for any factors that may drive it off course.”
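
DARPA has not published how EXACTO’s guidance actually works, so the snippet below shows only the generic, textbook idea behind any in-flight course correction: a proportional feedback loop that repeatedly steers the current heading toward the bearing of the target. Everything here – the function, the gain, the geometry – is an invented illustration, not the real system.

```python
import math

def correct_course(heading, pos, target, gain=0.3):
    """One guidance cycle of a toy proportional controller (illustrative only):
    steer the heading a fraction of the way toward the bearing to the target."""
    bearing = math.atan2(target[1] - pos[1], target[0] - pos[0])
    # Wrap the angular error into (-pi, pi] so corrections take the short way round.
    error = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return heading + gain * error

# Example: over successive cycles the heading converges on the true bearing,
# no matter what initially knocked it off course.
heading, pos, target = 0.0, (0.0, 0.0), (100.0, 30.0)
for _ in range(10):
    heading = correct_course(heading, pos, target)
print(round(heading, 3), "rad; true bearing:", round(math.atan2(30.0, 100.0), 3))
```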

According to experts, unmanned aircraft will replace piloted aircraft within a matter of years. Such drones can be refueled in flight, carry out missions against anti-aircraft defenses, fly reconnaissance, and attack ground targets. Going pilotless will also reduce costs considerably, as the pilot safety systems in a modern fighter may account for as much as 25% of the cost of the whole combat platform.


Small, but deadly

Work is currently under way to tap the potential of so-called insect robots, a specific form of nanobot that, according to the American physicist Louis Del Monte, author of the book Nanoweapons: A Growing Threat to Humanity, may become a weapon of mass destruction. Del Monte argues that insect-like nanobots can be programmed to insert toxins into people and to poison water-supply systems. DARPA’s Fast Lightweight Autonomy program involves the development of housefly-sized drones ideal for spying, equipped with “advanced autonomy algorithms.” France, the Netherlands, and Israel are reportedly also working on intelligence-gathering insect drones.

The limits of NGO monitoring and what needs to happen now

Politicians, experts, and the IT industry as a whole are realizing that the autonomous weapons problem is quite real. According to Mary Wareham of Human Rights Watch, the United States should “commit to negotiate a legally binding ban treaty [to]… draw the boundaries of future autonomy in weapon systems.” Meanwhile, the UK-based NGO Article 36 has devoted a lot of attention to autonomous ordnance, arguing that political control over such weapons should be regulated and based on a publicly accessible and transparent protocol. Both organizations have been putting a lot of effort into developing clear definitions of autonomous weapons. The signatories of international petitions continue to reach out to politicians and present their case at international conferences. One of the most recent initiatives is this year’s pledge organized by the Boston-based Future of Life Institute, in which 160 AI companies from 36 countries, along with 2,400 individuals, declared that autonomous weapons “pose a clear threat to every country in the world” and pledged to refrain from contributing to their development. The document was signed by, among others, Demis Hassabis, Stuart Russell, Yoshua Bengio, Anca Dragan, Toby Walsh, and the founder of Tesla and SpaceX, Elon Musk.

However, until an open international conflict reveals which technologies are actually in use, keeping track of the weapons being developed, researched, and deployed is next to impossible.

Another obstacle to developing clear, binding standards is the nature of algorithms themselves. We think of weapons as material objects (whose use may or may not be banned), but it is much harder to write laws that cope with the development of the code behind software, algorithms, neural networks, and AI.

Whom to blame? Algorithms?

Another problem is accountability. As with self-driving vehicles, who should be held accountable should tragedy strike? The developer who wrote the code that lets devices make independent choices? The person who trained the neural network? The manufacturer of the vehicle?


Military professionals who lobby for the most advanced autonomous projects argue that instead of imposing bans, we should encourage innovations that reduce the number of civilian casualties. The point, they claim, is not to give algorithms the power to destroy the enemy and civilian populations; rather, the prime objective is to use these technologies to better assess battlefield situations, find tactical advantages, and reduce overall casualties, civilian ones included. In other words, it is about improved, more efficient data processing.

However, the algorithms unleashed on tomorrow’s battlefields may cause tragedies of unprecedented proportions. Toby Walsh, a professor of AI at the University of New South Wales in Australia, warns that autonomous weapons will “follow any orders however evil” and “industrialize war.”

Artificial intelligence has the potential to help a great many people. Regrettably, it also has the potential to do great harm. Politicians and generals need to collect enough information to understand all the consequences of the spread of autonomous ordnance.

.    .    .

Works cited

YouTube, BrainBar, My Greatest Weakness is Curiosity: Sophia the Robot at Brain Bar, link, 2018.

YouTube, Boston Dynamics, Getting some air, Atlas?, link, 2018.

The Guardian, Ben Tarnoff, Weaponised AI is coming. Are algorithmic forever wars our future?, link, 2018. 

Brookings, Michael E. O’Hanlon, Forecasting change in military technology, 2020-2040, link, 2018. 

Russell Christian/Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, link, 2018. 

The Korea Times, Jun Ji-hye, Hanwha, KAIST to develop AI weapons, link, 2018.

BAE Systems, Taranis, link, 2018.

DARPA, Faster, Lighter, Smarter: DARPA Gives Small Autonomous Systems a Tech Boost, Researchers demo latest quadcopter software to navigate simulated urban environments, performing real-world tasks without human assistance, link, 2018. 

The Verge, Matt Stroud, The Pentagon is getting serious about AI weapons, link, 2018. 

The Guardian, Mattha Busby, Killer robots: pressure builds for ban as governments meet, link, 2018. 

.    .    .



17 comments

  1. AndrewJo

    There’s no way to stop the development just because it may benefit the military; and I believe setting a ‘standard’ is naive thinking – will North Korea or Iran submit to the same standard? Won’t the US and China try to gain an advantage over one another within these ‘standards’?
    As with the nuclear powers, industrial military power will be about a balance between the most powerful nations, and whether the most deadly weapons are deployed relies on whether that balance is kept.

  2. PiotrPawlow

    Governance should be involved in technology. Just because someone can build it doesn’t mean we should follow. Pirates trying to capitalize on gains through technology should be questioned and not be allowed to set the stage for developing technology for humanity. https://www.youtube.com/watch?v=o8_imzEdS84

  3. Check Batin

    I’m watching the “Chernobyl” show on HBO. What strikes me about it is how similar it is to a Deep Learning project in a large company. No one really knows what is going on, management passes up only the shiny numbers, while the technicians clumsily manipulate hyperparameters in such a way as to cause explosions, power failures, budget overruns, colossal wasting of time, and near-radioactive reputational damage to everyone involved. Deep Learning is a brute force search for correlations, most of which are spurious and useless. It assumes a differentiable error surface that most data, being discrete, doesn’t have. It is data-hungry. It usually only finds a local minimum. It takes forever to train, and even longer to grid-search the optimal hyperparameters. Ten years after its rise in popularity (and 50 years after the invention of the first neural networks), researchers have found no extremely reliable countermeasures to overfitting. Whoever said the Cold War is over? The USSR is alive and well in TensorFlow!

  4. Tom Jonezz

    Unfortunately, I doubt that it matters what any of us want. The day is coming, and fast, when the degree of computer automation in what we might call the ground traffic control system will rival or exceed the degree of automation in the air traffic control system. Some will say that the day is coming much too fast and in too many spheres. Computers are already in almost complete control of the stock market. They’re gradually taking over medical diagnosis. Some even want to turn sentencing decisions over to them. Perhaps things are getting out of control.

  5. Oscar P

    No matter how well you construct an algorithm, or how impervious you build the tool that wields your altogether flawless algorithm, even with purposefully double-blind mechanisms to offer unfettered constraint where applicable – even then, there are always a number of separate variables required to compute anything. So even if, to take the example in your article, the social wisdom or intellectual value of the algorithm were 100,000,000,000 (100 billion) times the value of anything it judged or digested, that would not guarantee a better, more balanced outcome.

  6. John Accural

    Can a robot truly make a good choice about taking someone’s life? Not without a huge amount of information being input. Can a robot handle that? Absolutely. Can a robot handle that autonomously? Without any human or computer interaction? Merely on firsthand experience and whatever intel they were given? Can a robot, completely on its own, decide the fate of a human being based on these things and not a simple hitlist?

    • Tom Jonezz

      Sometimes an AI is tasked with a very subjective question, like finding the ‘best’ job applicant. Bias is almost unavoidable because the machine has to make some kind of judgment. Someone is always going to look at that judgment as bias.

  7. Zoeba Jones

    Yes, robots are unfeeling machines that make decisions based on numbers and somehow the one that picks the targets is not responsible anymore. It’s blaming the bullet and not the one who pulls the trigger.

  8. SimonMcD

    When you speak to philosophers, they act as if these systems will have moral agency. At some level a toaster is autonomous. You can task it to toast your bread and walk away. It doesn’t keep asking you, ‘Should I stop? Should I stop?’ That’s the kind of autonomy we’re talking about.

  9. TomCat

    That DARPA funding could theoretically seed the rescue-robot industry, or it could kickstart the killer robot one. For Gubrud and others, it’s all happening much too fast: the technology for killer robots, he warns, could outrun our ability to understand and agree on how best to use it. Are we going to have robot soldiers running around in future wars, or not? Are we going to have a robot arms race which isn’t just going to be these humanoids, but robotic missiles and drones fighting each other and robotic submarines hunting other submarines?