It used to be called the warrior code: a set of boundaries on what soldiers may and may not do in times of war. Then came the Geneva Conventions, the international treaties negotiated after the second world war that still govern how combatants treat prisoners of war, the wounded and civilians in a war zone.
Those rules could be under threat from robot weapons, which will be able to decide whether or not to kill without any input from humans.
War has long been an industrial process, and it is becoming increasingly mechanised. There is growing concern about whether the autonomous robotic weapons slowly taking over from foot soldiers are capable of making appropriate decisions in a war zone. Where once there were ethics, responsibility and chivalry - at least in a basic sense - are we happy, in the age of autonomous weaponry, to settle for calculation, accuracy and efficiency?
The spectre of man-free warfare isn't new. Unmanned aircraft, or drones, are already used for surveillance over Pakistan, Yemen and Afghanistan.
A report published late last year has prompted human rights groups to campaign to forestall the arrival of fully autonomous robots.
Losing Humanity is a joint effort by Harvard Law School's International Human Rights Clinic and the charity Human Rights Watch, which is chiefly concerned about the protection of civilians during times of war. Although the report concludes that governments should ban autonomous weapons completely, it also makes an appeal to "roboticists" to establish a professional code of conduct "to ensure that legal and ethical concerns about their use in armed conflict are adequately considered".
There's a good reason for this appeal. "The armed robot will select its target and choose when to fire," writes Steve Goose, director of the arms division at Human Rights Watch, in response to the report, calling it "a frighteningly dangerous path to follow in terms of the need to protect civilians during armed conflict".
His concern is that these "killer robots" would be unable either to distinguish between soldiers and civilians on a battlefield or to weigh the military advantage of an attack against the potential harm to civilians. "Giving machines the power to decide who lives and dies on the battlefield would take technology too far," says Goose. "They would lack the ability to relate to humans and to apply human judgment."
If humanitarian law is beyond machines, using robots creates a problem of accountability: who's breaking the law, the robots or their makers?
As forthright as the language may be, completely unmanned weapons - machines that can operate without any human supervision - don't really exist yet. But they are coming.
One weapon under scrutiny is Samsung's new SGR-1, a sentry robot that can lock on to and track a target while its surveillance module continues to scan for others. It has no blind spots, claims Samsung, and is equipped with an "optional weapons station" for a "lethal and non-lethal" weapons system.
For now, the actual firing is done remotely by a human. More common are drones, otherwise known as unmanned combat air vehicles (UCAVs), such as the latest Northrop Grumman X-47B, which, for the first time anywhere, will be launched from a US Navy aircraft carrier later this year.
"It seems to me that the Geneva Conventions have to be updated to take account of a new era of robotic weapons. Such elements have to be agreed upon by representatives of all nations," says Antonio Espingardeiro, a member of the Institute of Electrical and Electronics Engineers and a robotics inventor, "but the real question here is not about robots; it's about ourselves and why our nature continues to drive us into war."