Alexander Tang

Legal Tales | Why AI cannot – and should not – replace human judges

The limitations of large language models like ChatGPT go well beyond hallucinations, and the courtroom is no place for probabilistic guesswork

Hong Kong’s judges attend the ceremonial opening of the legal year in January. Photo: Sam Tsang
Every few months, another headline announces that artificial intelligence (AI) is poised to “disrupt” the legal profession. Lawyers, we are told, will soon be replaced by algorithms.

Judges, apparently, are next, with some recent research suggesting that machines are more “accurate” in following established legal principles than human judges.

While disruption will surely come, the suggestion that total replacement is imminent misconstrues both what AI systems do and what our legal system and courts exist to achieve.


The starting point is this: insofar as we humans who designed AI can understand them, the large language models behind tools like ChatGPT are, at heart, pattern-completion machines. They ingest vast quantities of internet text and learn to predict, statistically, what word comes next.
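The pattern-completion idea can be sketched with a toy bigram model. This is purely illustrative (real large language models use neural networks over subword tokens, not word-frequency tables), but it captures the essential point: the system emits whatever continuation is statistically most likely, with no understanding of what the words mean.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then "complete" a prompt by repeatedly
# picking the statistically most likely next word.
corpus = (
    "the court ruled for the plaintiff . "
    "the court ruled for the defendant . "
    "the court ruled for the plaintiff ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, steps=4):
    """Greedily append the most frequent next word at each step."""
    out = [word]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # → "the court ruled for the"
```

Note what the model "decides": because "plaintiff" appears more often than "defendant" in its training data, a likelihood-maximising completion will favour the plaintiff every time, regardless of the facts of any actual case.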

Much has already been said about “hallucination”: the tendency of these models to fabricate cases and conjure things out of thin air – a product of the predictive, pattern-completing nature of AI reasoning.


That problem is by now well documented and need not be rehashed at length here. It suffices to say that the issue remains unsolved. But the deeper difficulties are structural, and these deserve closer attention.
