Can Autonomous Weapons Be Compatible With International Humanitarian Law?

by
Ariel Conn
from IEEE Spectrum

This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic and invites you to answer these questions.

The real world is anything but binary. It is fuzzy and indistinct, with lots of options and potential outcomes, full of complexity and nuance. Our societies create laws and cultural norms to provide and maintain some semblance of order, but such structures are often open to interpretation, and they shift and evolve over time.

This fuzziness can be challenging for any autonomous system navigating the uncertainty of a human world, such as Alexa reacting to the wrong conversations, or self-driving cars being stymied by white trucks and orange traffic cones. But not having clarity on "right or wrong" is especially problematic when considering autonomous weapons systems (AWS).

International Humanitarian Law (IHL) is the body of law governing international military conflicts, and it provides rules about how weapons should be used. The fundamentals of IHL were developed before the widespread use of personal computers, satellites, the Internet, and social media, and before private data became a commodity that could be accessed remotely and often without a person's knowledge or consent. Many groups are concerned that the existing laws don't cover the myriad issues that recent and emerging technologies have created, and the International Committee of the Red Cross, the watchdog of IHL, has recommended new, legally binding rules to cover AWS.

Ethical principles have been developed to help address gaps between established laws and changing cultural norms and technologies, but such principles also tend to be vague and difficult to translate into legal code. For example, even if everyone agrees on an ethical principle like minimizing bias in an autonomous system, how would that be programmed? Who would determine whether an algorithmic bias has been "sufficiently minimized" for the system to be deployed?
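To make the difficulty concrete, here is a minimal sketch, in Python, of what such a "bias check" might look like in practice. The fairness metric (demographic parity difference), the threshold value, and all of the data below are assumptions invented for illustration; no law or standard cited here specifies them, which is precisely the gap the principle leaves open.

```python
import numpy as np

def demographic_parity_gap(predictions, group_labels):
    """Absolute difference in positive-prediction rates between two groups.

    predictions  -- array of 0/1 model outputs
    group_labels -- array of 0/1 flags marking group membership
    """
    rate_group_0 = predictions[group_labels == 0].mean()
    rate_group_1 = predictions[group_labels == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Hypothetical model outputs and group metadata, invented for illustration.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, groups)

# Who picks this number, and why 0.05 rather than 0.01 or 0.10?
# The metric and the threshold are policy judgments expressed as code.
ACCEPTABLE_GAP = 0.05
print(f"parity gap = {gap:.2f}, within threshold = {gap <= ACCEPTABLE_GAP}")
```

Even this trivial check forces choices, such as which fairness metric, which groups, and which threshold, that are legal and ethical judgments rather than purely engineering ones.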

All countries involved in the AWS debate at the United Nations have stated that AWS must follow international law. However, they don't agree on what these laws and ethics mean in practice, and there's additional disagreement over whether some AWS capabilities must be preemptively banned in order to ensure that IHL is honored.

IHL, Emerging Technology, and AWS

Much of the disagreement at the United Nations stems from the uncertainty surrounding the technology and how the technology will evolve in the future. Though existing weapons systems have some autonomous capabilities, and though there have been reports of AWS being used in Libya and questions about AWS being used in Ukraine, the extent to which AI and autonomy will change warfare remains unknown. Even when IHL mandates already exist, it's unclear whether AWS will be able to follow them: For example, can a machine be trained to reliably recognize when a combatant is injured or surrendering? Is it possible for a machine to learn the difference between a civilian and a combatant dressed as a civilian?

Cyberthreats pose new risks to national security, and the ability of companies and governments to collect personal data is already a controversial legal and ethical issue. These risks are only exacerbated when such technologies are paired with AWS, which could be biased, hacked, trained on bad data, or otherwise compromised as a result of weak regulations surrounding emerging technologies.

Moreover, for AI systems to work, they typically need to be trained on huge data sets. But military conflict and battlefields can be chaotic and unpredictable, and large, reliable data sets may not exist. AWS may also be subject to greater adversarial manipulation, which, essentially, involves tricking the system into misinterpreting its situation, sometimes with something as simple as placing a sticker on or near an object. Is it possible for AWS algorithms to receive sufficient training and supervision to ensure they won't violate international laws, and who makes that decision?
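As a rough illustration of how fragile learned classifiers can be, here is a minimal sketch assuming a toy linear classifier with made-up weights: a small, bounded perturbation, the digital analogue of a well-placed sticker, is enough to flip its decision.

```python
import numpy as np

# Toy linear "object recognizer": score = w . x; positive score => object recognized.
# The weights and the input are invented for illustration; real perception models
# are vastly more complex, but the underlying fragility is similar.
w = np.array([0.9, -0.4, 0.3, 0.6])
x = np.array([0.2, 0.1, 0.2, 0.1])   # a benign input the model classifies correctly

def classify(features):
    return "recognized" if np.dot(w, features) > 0 else "not recognized"

print("original input: ", classify(x))

# FGSM-style perturbation: nudge every feature slightly in the direction that
# most reduces the score. Epsilon bounds the change, akin to a sticker-sized edit.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)

print("perturbed input:", classify(x_adv))
print("largest per-feature change:", np.max(np.abs(x_adv - x)))
```

Defenses such as adversarial training exist, but certifying that a fielded system resists this kind of manipulation under battlefield conditions remains an open research problem.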

AWS are complex, with various people and organizations involved at different stages of development, and there may be little or no communication between the designers and the users of the systems. Additionally, the algorithms and AI software used in AWS may not have originally been intended for military use, or they may have been intended for the military but not for weapons specifically. To ensure the safety and reliability of AWS, new standards for testing, evaluation, verification, and validation are needed. And if an autonomous weapons system acts inappropriately or unexpectedly and causes unintended harm, will it be clear who is at fault?

Nonmilitary Use of AWS

While certain international laws cover human rights issues during a war, separate laws cover human rights issues in all other circumstances. Simply prohibiting a weapons system from being used during wartime does not guarantee that the system can't be used outside of military combat. For example, tear gas has been classified as a chemical weapon and banned in warfare since 1925, but it remains legal for law enforcement to use for riot control.

If new international laws are developed to regulate the wartime use of AI and autonomy in weapons systems, human rights violations committed outside the scope of a military action could, and likely would, still occur. Such violations could include actions by private security companies, police, border control agencies, and nonstate armed groups.

Ultimately, to ensure that laws, policy, and ethics are well adapted to the new technologies of AWS, and that AWS are designed to better abide by international laws and norms, policymakers need a stronger understanding of the technical capabilities and limitations of the weapons, and of how the weapons might be used.

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020 to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled "Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems." Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We've put together a series of questions in the Challenges document that we hope you'll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.
