AI weapons

KAIST (Korea Advanced Institute of Science and Technology) announced in February that it would cooperate with the arms company Hanwha Systems to “develop artificial intelligence (AI) technologies to be applied to military weapons” and “search for and eliminate targets without human control” (Vincent, 2018). Hanwha Systems is a subsidiary of the Hanwha chaebol, which is on a United Nations blacklist for producing cluster bombs, weapons widely regarded as exceptionally cruel.

The announcement led to a boycott by more than 50 leading AI researchers, including Geoffrey Hinton, Yoshua Bengio, and Jürgen Schmidhuber, who put pressure on KAIST by declaring that they would suspend all cooperation with the university until it stopped researching autonomous weapons. The president of KAIST then committed that the research center would not develop weapons operating without human control.

The difference between intelligent weapons and autonomous weapons is that the former remain under human control, even though intelligent algorithms such as automatic aiming and positioning are integrated into them, while the latter make decisions entirely on their own. One reason for banning autonomous weapons is to prevent the technology from falling into the hands of terrorists, just as with nuclear weapons, since both can cause mass destruction. It is entirely understandable that no weapon should ever be used against innocent civilians. But would it be acceptable to research and develop such weapons if terrorism did not exist? That is not so clear.

Intelligent weapons and military robotics are nothing new: robots that transport cargo, mechanical limbs that augment soldiers, drones, and so on. They are created to enhance combat capability and, more importantly, to reduce casualties. On the one hand, we apply new technologies to weapons to strengthen strike capabilities. On the other hand, we impose constraints in the name of humanity, such as the bans on chemical, biological, and nuclear weapons that inflict cruel, mass harm on humans. So it seems to me that autonomous and non-autonomous weapons are not fundamentally different. Looked at optimistically, maybe one day there will be no people on the battlefield at all, and war will become a fight between the robots of opposing sides, like a video game turned real.

The video below is about autonomous mini drones. Its showcase of these weapons being turned on innocent civilians is terrifying and brutal.

Reference

Vincent, J. (2018, April 4). Leading AI researchers threaten Korean university with boycott over its work on ‘killer robots’. The Verge. https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha


Comments (1)

  • To be frank, I personally don’t like the idea of artificial intelligence very much, in the sense that it tries to “mimic” human behaviors. On the other hand, I dislike the idea of weapons, which implies violent solutions to conflicts. The combination of the two is awfully irritating: “a machine that imitates humans’ way of violently solving conflicts.”
