Editorial Feature

Could Autonomous Defense Systems Ensure Soldier Safety?

Military chiefs and diplomats in some nations have recently been forcefully opposed to moves to ban autonomous weapons systems. Yet while autonomous defense systems could make soldiers safer, strong opposition to AI-driven weapons remains.

Image Credit: sommthink/Shutterstock.com

New British Army Procurement

Military leaders already possess automatic strike capabilities: weapons that, once launched, can identify their own targets within a defined area. AI is being introduced to widen the area over which weapons can make autonomous decisions and to keep them “intelligent” for a longer period of time.
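
In conceptual terms, this is an “autonomy envelope”: a gate on the weapon’s decision loop that permits autonomous action only inside a pre-authorized area and time window, with AI enlarging that envelope. The Python sketch below is a toy illustration of the gating idea only; the zone, window, and function names are hypothetical assumptions and do not describe any real system.

    from dataclasses import dataclass

    # Toy model of an "autonomy envelope": the system may act autonomously
    # only on detections inside a pre-authorized box and time window.
    # Every value and name here is a hypothetical assumption; this
    # illustrates the concept, not any real system's logic.

    @dataclass
    class Detection:
        x: float   # position on an arbitrary local grid (km)
        y: float
        t: float   # seconds since launch

    AUTHORIZED_AREA = (0.0, 0.0, 5.0, 5.0)   # assumed x_min, y_min, x_max, y_max
    AUTHORIZED_WINDOW = 120.0                # assumed seconds of autonomous operation

    def decide(d: Detection) -> str:
        x_min, y_min, x_max, y_max = AUTHORIZED_AREA
        inside = x_min <= d.x <= x_max and y_min <= d.y <= y_max
        in_time = d.t <= AUTHORIZED_WINDOW
        # Outside the envelope, the system defers to a human operator.
        return "act autonomously" if inside and in_time else "defer to human operator"

    print(decide(Detection(x=2.0, y=3.0, t=60.0)))   # inside the envelope
    print(decide(Detection(x=8.0, y=3.0, t=60.0)))   # outside the authorized area

In these terms, adding AI means growing the authorized area and window while the decision logic inside the gate becomes more complex and harder to scrutinize.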

As AI technologies progress rapidly, their military applications are also multiplying and growing more complex. UK military leaders recently announced a new acquisition process that will see the British Army investing heavily in robotics, AI, and hybrid-power technology from 2021.

The Role of AI in the Military

Proponents argue that autonomous defense systems reduce the need to put human soldiers in harm’s way and improve the efficiency of military operations.

At UN meetings convened to discuss the role of AI in the military, the vast majority of participating states have expressed a strong desire to either ban outright or at least strictly regulate the development and deployment of autonomous weapons. This position has the support of UN Secretary-General António Guterres, who has called autonomous weapons “morally repugnant.”

In 2015, over 1,000 high-profile experts in artificial intelligence signed an open letter to world leaders calling for a ban on autonomous weapons. The signatories warned of the potential for a military artificial intelligence arms race, which would drain resources away from people at a time when the global economy is under threat.

Rapid advancement of military AI – as we saw with military hardware like rockets during the space race – could have serious unintended consequences for the rest of the world. The risks of uncontrolled weapons systems are enough for signatories of the open letter to support a total ban on AI weapons development.

Signatories included Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis, Professor Stephen Hawking, Elon Musk, and over 1,000 AI and robotics researchers.

Objections to Autonomous Weapons

The Dutch peace organization PAX has published a summary of the problems with autonomous weapons, written by military technology researchers Merel Ekelhof and Miriam Struyk.

Chief among these is the ethical objection: delegating life-and-death decisions to machines is widely considered morally unacceptable.

Autonomous weapons also lower the threshold for going to war. If military action poses less risk to soldiers’ lives, politicians are freer to start wars and engage in military operations. At the same time, the public becomes further removed from the acts of war, making democratic control more difficult.

The mechanical nature of artificial intelligence makes it impossible to apply the rule of distinction, a key principle of the international law of armed conflict. Artificial intelligence cannot reliably distinguish between combatants and civilians, making indiscriminate killing likely.
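
A back-of-the-envelope calculation shows why this matters at scale. The figures below are illustrative assumptions, not measurements of any real system: even a classifier that is correct 99% of the time will, in an area where civilians greatly outnumber combatants, produce “combatant” flags that are wrong about half the time.

    # Toy base-rate calculation: how trustworthy are "combatant" flags from
    # a hypothetical 99%-accurate classifier? All numbers are illustrative
    # assumptions, not data from any real system.
    accuracy = 0.99       # assumed sensitivity and specificity
    civilians = 9_900     # assumed civilians in the area
    combatants = 100      # assumed combatants in the area

    true_flags = combatants * accuracy         # combatants correctly flagged: 99
    false_flags = civilians * (1 - accuracy)   # civilians wrongly flagged: 99

    precision = true_flags / (true_flags + false_flags)
    print(f"civilians wrongly flagged: {false_flags:.0f}")          # 99
    print(f"share of flags that are combatants: {precision:.0%}")   # 50%

Under these assumptions, one in every two flags points at a civilian; this base-rate effect is the statistical core of the distinction objection.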

Another rule of the international law of warfare that machine reasoning cannot apply is proportionality. Autonomous weapons systems are unable to weigh military gain against human suffering, because such judgments are too complex and context-dependent.

The chain of responsibility also becomes complicated when militaries use autonomous weapons systems. Lethal force could be applied in an accountability vacuum in which no person can be held to account for violations of international law.

The report further argues that autonomous weapons cannot be developed or deployed in a way that allows for real democratic oversight or meaningful accountability. The public, and even oversight bodies within governments and militaries, may be unable to raise ethical and moral concerns.

Finally, autonomous weapons systems – unlike nuclear weapons and satellite-based weapons systems – are relatively cheap and easy to copy. Developing this technology carries the risk of wide proliferation among states and non-state actors such as terrorists and organized criminals.

References and Further Reading

Judson, J. (2021). UK’s future force to lean heavily into robotics, AI and hybrid power. Defense News. [Online] Available at: https://www.defensenews.com/digital-show-dailies/dsei/2021/09/16/uks-future-force-to-lean-heavily-into-robotics-ai-and-hybrid-power/

Ekelhof, M. and Struyk, M. (2014). Deadly Decisions: 8 objections to killer robots. PAX. [Online] Available at: https://paxvoorvrede.nl/media/download/deadlydecisionsweb.pdf

Gayle, D. (2019). UK, US and Russia among those opposing killer robot ban. The Guardian. [Online] Available at: https://www.theguardian.com/science/2019/mar/29/uk-us-russia-opposing-killer-robot-ban-un-ai

Gibbs, S. (2015). Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons. The Guardian. [Online] Available at: https://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons

Written by

Ben Pilkington

Ben Pilkington is a freelance writer who is interested in society and technology. He enjoys learning how the latest scientific developments can affect us and imagining what will be possible in the future. Since completing graduate studies at Oxford University in 2016, Ben has reported on developments in computer software, the UK technology industry, digital rights and privacy, industrial automation, IoT, AI, additive manufacturing, sustainability, and clean technology.
