Surgical robotics is here to stay. But when things go wrong, who’s to blame?
- Darren Mann says as the use of automation and AI in surgery becomes more daring and more ubiquitous, safeguards must keep pace with the speed of new discoveries, and responsibilities for safety must be fairly shared between doctor and engineer
Time was when the requisite qualities for a surgeon were distinctly biological – “eye of a hawk, heart of a lion and hands of a lady”. But in modern-day hospitals, machines are quietly supplanting handicraft in the operating theatre.
These “master-slave” devices use tele-manipulation: articulating arms are operated by a surgeon sitting at a console with a video monitor and joystick/pedal controls. Worldwide, the commercial space is monopolised by the first FDA-approved system – aptly named da Vinci, after the original Renaissance man. The technique is essentially that of minimal-access (“laparoscopic”) surgery, and robotic-assisted surgical devices have been applied across specialities.
Last year, some 6,000 surgical robots worldwide performed a million operations, in a market worth US$6 billion; a basic system costs US$2 million.
Robotic surgeries are 6-28 per cent more expensive than the laparoscopic equivalent (averaging US$2,200 more), but proof of advantage over standard surgical techniques has been elusive: complication and success rates are comparable.
Surgical robotics is poised to go “generic” as original patents expire and new developers and applications enter the field. Next-generation devices will feature enhanced voice control, “fly-by-wire” algorithms to counter perceived faulty operator intent and eventually AI-driven autonomy and self-learning. “Medusa” snake-head flexible endoscopic systems promise scarless surgery through natural orifices. Star Trek’s Dr “Bones” McCoy’s refrain “I’m a doctor, Jim, not an engineer” never rang truer.
With sufficient data speeds, operating consoles can be far removed from the mechanical system. The first telepresence surgery “Operation Lindbergh” was performed in 2001 – a transatlantic procedure by surgeons in New York on a patient in France; and 5G remote surgery was tested on an animal in China just this year.
Looking beyond civilian health care, the Pentagon’s Defense Advanced Research Projects Agency is developing a battlefield surgery robotic “trauma pod”, and the US space agency Nasa is already testing telesurgery in its underwater Extreme Environment Mission Operations (Neemo) programme.
How is safety monitored? The US Food and Drug Administration’s surgical devices division tracks adverse events through medical device reports submitted to its Manufacturer and User Facility Device Experience (Maude) database. A review in 2016 documented over 10,000 robotic procedure mishaps, with system failure occurring in 5 per cent of surgeries.
Typical events include system errors requiring rebooting, abnormal robotic arm movements, loss of visuals and dislodged, burnt or broken pieces of equipment. Injury (and even death) has been recorded in around 1 per 1,000 operations, although apportioning fault between operator and system can sometimes be difficult.
Automation and artificial intelligence have made aviation safer, and are set to do the same for road travel (although, ironically, Uber’s driverless car recently had its first lethal accident despite a safety driver being on board) and medicine. Nevertheless, the surgical domain is arguably more complex than transport systems, and new hazards will emerge as humans progressively cede authority in the shared space of homo-robotic co-working.
The psychology of risk homeostasis tends to keep mishap rates constant, because as an activity is made apparently safer, human behaviour becomes adaptively more hazard-tolerant. And the “automation paradox” means that human error is not eliminated, but rather new opportunities for fallibility are created, as skills fade and situational awareness is lost in deference to computer-generated action.
When surgery is performed by autonomous unthinking machines (albeit with the fig leaf of human supervision), then where does liability fall? In aviation, fault-attribution is overwhelmingly on pilot error, a finding rarely challenged probably because they are always the first at the scene of an accident. But using the operator as a convenient sponge for mishaps creates what cultural anthropologist Madeleine Elish calls the “human crumple zone”. In a future AI-dominated surgery, where human control progressively diminishes, it is iniquitous to seek to confine fault to the doctor and so shield the wider system – responsibility should more properly also be distributed among designers, engineers, manufacturers, owners and government health departments.
The future of surgery is dependent on a balance between zeal for innovation and the safeguards of evidence-based medicine. Some of us recall a bygone era of eminence-based medicine. That’s the surgical spirit: sometimes wrong, never in doubt. Although currently on probation, the emerging field of surgical robotics deserves the benefit of the doubt – and a new social compact.
Darren Mann is a consultant and military surgeon with over 30 years’ experience, based in Hong Kong