
Review | When AI goes wrong, who’s to blame, Singapore law professor asks; do we legally treat algorithms and machines as we once did mercenaries and miscreant animals?
- Simon Chesterman, a law professor in Singapore, asks some sobering questions about legal responsibility for the decisions of AI machines and algorithms
- Like mercenary troops, algorithms that decide on your guilt or innocence, or right to entitlements, lack moral intuition, he notes. So are we still in control?
We, the Robots? Regulating Artificial Intelligence and the Limits of the Law by Simon Chesterman, pub. Cambridge University Press
Chesterman, dean and professor of law at the National University of Singapore, brings a sober but readable approach to a subject otherwise much given to speculation and fearmongering. He enlivens his work with stories from the real world: accidents involving self-driving cars; stock market collapses caused by automated trading; biases in the opaque proprietary software used to assess the likelihood an individual will default on a loan or repeat a criminal offence.

Existing laws and regulations, whose design was predicated on the direct involvement of humans, are already struggling to cope with problems arising merely from the speed of transactions made possible by ever-faster processors and computer-to-computer communications. Examples include the “flash crash” of 2010, in which US stock markets took a tumble driven by overenthusiastic algorithmic trading systems doing thousands of deals with each other in a matter of seconds.
When an autonomous vehicle hits a pedestrian, who is to be held responsible? Is it the human supervisor who should have overridden the system in time? The system’s designer? The vehicle’s owner?
At what point does it become possible to view the system itself as some sort of responsible legal entity? For now, an AI’s “punishment” simply takes the form of treating the error as data from which improvements are made.
Even here, historical debate may provide some solutions, as questions of control and responsibility have been asked for centuries concerning equally autonomous mercenaries, such as those hired to fight in African civil wars or the Swiss regiment that has protected successive popes since the early 1500s.

As Chesterman points out, reliance on mercenaries came to be seen as “not only inefficient but suspect: a country whose men did not fight for it lacked patriots; those individuals who fought for reasons other than love of country lacked morals.”
What love of country do fighting machines have? What moral intuition can be found in systems now beginning to make decisions on who should receive government benefits? What legal fairness is there in algorithms whose secretive nature means their reasoning cannot be analysed and challenged?
“Some functions,” Chesterman suggests, “are ‘inherently governmental’ and cannot be conferred to contractors, machines, or anyone else.”
In discussing attempts to hold AI to account, he again shows that not everything is new about the legal problems that face us, drawing parallels with medieval trials of animals that had caused harm to human beings. He discusses whether existing laws might be adapted to hold programmers or remote operators responsible for negative outcomes from the use of AI, but wonders whether such systems will eventually become so autonomous that responsibility must be reconsidered. Perhaps AI will eventually have to be involved in regulating AI.
He concludes by looking to the future, and particularly the application of AI to legal matters, though AI judges are already being tested in mainland China. What chance of the sort of transparency required for fairness there?

“The emergence of fast, autonomous, and opaque AI systems forces us to question the assumption of our own centrality,” says Chesterman. He concludes that it is not yet time for us to relinquish our overall control. But that such a serious commentator even entertains the question may alarm some readers more than any hysteria about a potential AI takeover.
