Tuesday, April 9, 2024

Regarding war machines without humans

In the last couple of days, PD has posted links to information about automated war machines and the military use of artificial intelligence to select targets for destruction. Here is some of what those links lead to.

This 2021 WaPo article (not behind a paywall) has a great 4:22 video about US thinking behind developing machines that can seek out human or non-human targets and destroy them. The thinking of US experts appears to be to develop these machines as fast as possible and make them as deadly as possible. The reasoning behind this course of action is that (1) other countries will build and use killer machines regardless of what the US does, and (2) a ban or control on killer machines by international treaty would be very hard to monitor and enforce, so don't bother trying.

The WaPo article comments:
Picture a desert battlefield, scarred by years of warfare. A retreating army scrambles to escape as its enemy advances. Dozens of small drones, indistinguishable from the quadcopters used by hobbyists and filmmakers, come buzzing down from the sky, using cameras to scan the terrain and onboard computers to decide on their own what looks like a target. Suddenly they begin dive-bombing trucks and individual soldiers, exploding on contact.

[This is] a real scene that played out last spring as soldiers loyal to the Libyan strongman Khalifa Hifter retreated from the Turkish-backed forces of the United Nations-recognized Libyan government. According to a U.N. group of weapons and legal experts appointed to document the conflict, drones that can operate without human control “hunted down” Hifter’s soldiers as they fled.

Long the stuff of science fiction, autonomous weapons systems, known as “killer robots,” are poised to become a reality, thanks to the rapid development of artificial intelligence.

In response, international organizations have been intensifying calls for limits or even outright bans on their use. The U.N. General Assembly in November adopted the first-ever resolution on these weapons systems, which can select and attack targets without human intervention.

What exactly are killer robots? To what extent are they a reality?

Killer robots, or autonomous weapons systems to use the more technical term, are systems that choose a target and fire on it based on sensor inputs rather than human inputs. They have been under development for a while but are rapidly becoming a reality. We are increasingly concerned about them because weapons systems with significant autonomy over the use of force are already being used on the battlefield.

What are the ethical concerns posed by killer robots?

The ethical concerns are very serious. Delegating life-and-death decisions to machines crosses a red line for many people. It would dehumanize violence and boil down humans to numerical values.
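
To make the distinction concrete, here is a minimal Python sketch of the difference between a weapon that fires on sensor inputs alone and one that keeps a human in the loop. Everything here is hypothetical and purely illustrative; none of the names describe any real system.

# A minimal, purely illustrative sketch -- all names are hypothetical and
# describe no real system. The point is the structural difference the quote
# draws: an autonomous weapon puts no human decision between sensing and
# firing, while a conventional remote weapon does.

def classify(frame):
    """Stand-in for an onboard model scoring what 'looks like a target'."""
    return frame.get("target_score", 0.0)

def engage(frame):
    """Stand-in for the use of force."""
    print(f"ENGAGE {frame['id']}")

def human_approves(frame):
    """A human operator must confirm each engagement."""
    return input(f"Engage {frame['id']}? (y/n) ").strip().lower() == "y"

def autonomous_loop(frames, threshold=0.9):
    # Lethal autonomous weapon: sensor input alone triggers force.
    for frame in frames:
        if classify(frame) >= threshold:
            engage(frame)

def supervised_loop(frames, threshold=0.9):
    # Human-in-the-loop weapon: the same sensing, but a person decides.
    for frame in frames:
        if classify(frame) >= threshold and human_approves(frame):
            engage(frame)

autonomous_loop([{"id": "truck-1", "target_score": 0.95}])

The whole debate quoted above is, in effect, over which of those two loops militaries will be allowed to field.
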
A July 2023 article published by MIT News describes efforts to "democratize" access to machine learning by vastly reducing the time it takes to set up and operate AI software aimed at specific problems:
“It would take many weeks of effort to figure out the appropriate model for our dataset, and this is a really prohibitive step for a lot of folks that want to use machine learning for biology,” says Jacqueline Valeri, a fifth-year PhD student of biological engineering in Collins’s lab who is first co-author of the paper.

BioAutoMATED is an automated machine-learning system that can select and build an appropriate model for a given dataset and even take care of the laborious task of data preprocessing, whittling down a months-long process to just a few hours. Automated machine-learning (AutoML) systems are still in a relatively nascent stage of development, with current usage primarily focused on image and text recognition, but largely unused in subfields of biology, points out first co-author and Jameel Clinic postdoc Luis Soenksen PhD '20.  
This work was supported, in part, by a Defense Threat Reduction Agency grant, the Defense Advanced Research Projects Agency SD2 program, ....

The open question here is whether this kind of automation can be applied to AI mated with killer war machines. In this case AI was applied to biology, not warfare. But either way, it is obvious that the US and other militaries are willing to spend vast amounts of money on automating war and human slaughter. That is going to happen whether the dangers are carefully considered or not.
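
For what it is worth, the core AutoML pattern is domain-agnostic, which is why the question is worth asking. Below is a minimal sketch of automated model selection in Python, using scikit-learn and synthetic data. This is only the general idea; BioAutoMATED itself automates far more, including data preprocessing and architecture search, and none of this is its actual code.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a user's dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# The candidate pool. A real AutoML system searches a far larger space,
# including preprocessing steps and neural network architectures.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

# Score each candidate by cross-validation and keep the best one -- the
# step that would otherwise take "many weeks of effort" by hand.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Selected model: {best} (cv accuracy {scores[best]:.3f})")

Swap in a different dataset and candidate pool and the same loop applies, which is exactly what makes the technique portable across domains.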

Wikipedia on the failure to regulate killer machines (lethal autonomous weapons systems, or LAWS):
The Campaign to Stop Killer Robots is a coalition of non-governmental organizations who seek to pre-emptively ban lethal autonomous weapons.

First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to issue policy to outlaw the development of lethal autonomous weapons systems, also known as LAWS. Several countries including Israel, Russia, South Korea, the United States, and the United Kingdom oppose the call for a preemptive ban, and believe that existing international humanitarian law is sufficient regulation for this area.

Some photos of existing LAWS that operate on land or in the air:

[Photos omitted; one caption: US Army training with a LAWS]

A long, detailed 2017 article published by the US Army Press considers the moral implications of LAWS:
Pros and Cons of Autonomous Weapons Systems

Arguments in Support of Autonomous Weapons Systems

Support for autonomous weapons systems falls into two general categories. Some members of the defense community advocate autonomous weapons because of military advantages. Other supporters emphasize moral justifications for using them.

Military advantages. Those who call for further development and deployment of autonomous weapons systems generally point to several military advantages. First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. ....

Arguments Opposed to Autonomous Weapons Systems

While some support autonomous weapons systems with moral arguments, others base their opposition on moral grounds. Still others assert that moral arguments against autonomous weapons systems are misguided.

Opposition on moral grounds. In July 2015, an open letter calling for a ban on autonomous weapons was released at an international joint conference on artificial intelligence. The letter warns, “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

We note in passing that it is often unclear whether a weapon is offensive or defensive. Thus, many assume that an effective missile defense shield is strictly defensive, but it can be extremely destabilizing if it allows one nation to launch a nuclear strike against another without fear of retaliation.
It seems that the US military has done a lot of thinking about automated warfare. However, the US government and the public seem to have limited understanding of it, and limited influence over it. The process of automating war is well underway in the military. Arguably, the federal government has settled into its normal mode of inaction because it is busy with whatever else it is doing. And what is the government doing? Apparently, mostly continuing to blindly fund the US military, including the development of automated war machines, while blithering and wasting time as far as I can tell.
