Pentagon adopts new ethical principles for using AI in war
- The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems
- The new US principles are meant to guide both combat and non-combat applications
The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.
The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.
They also say decisions made by automated systems should be “traceable” and “governable,” meaning “there has to be a way to disengage or deactivate” them if they demonstrate unintended behaviour, said Air Force Lieutenant General Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center.
An existing 2012 military directive requires humans to be in control of automated weapons but does not address broader uses of AI. The new US principles are meant to guide both combat and non-combat applications, from intelligence-gathering and surveillance operations to predicting maintenance problems in planes or ships.