One more reason to worry about artificial intelligence - how it could fuel the rise of fascism
Amid perennial worries about robots taking away jobs or malfunctioning and running amok, researchers have come up with a new concern - that in the wrong hands, artificial intelligence (AI) could power the return of fascism.
The prospect was raised at the SXSW Interactive conference in Texas this week.
First, at the SXSW Spotlight on AI event, there was a demonstration that showed how far AI has to go, and how hard it is to program sentience.
Osaka University roboticist Hiroshi Ishiguro brought along two robots.
They were supposed to enter into a conversation, but minutes of dead air passed while a dozen men with laptops frantically worked to get the system back online.
Finally, a rudimentary conversation started about sushi, and whether it was better than ramen.
The voices were, well, robotic. The pacing was too slow for a toddler. Not exactly the stuff of nightmares.
But in her talk “Dark Days: AI and the Rise of Fascism,” Microsoft Research scholar Kate Crawford gave the audience something worth fearing as she laid out how data has historically been misused by the powerful. Her examples ranged from the 19th-century practice of phrenology, which tried to link head measurements to intelligence, to Nazi Germany’s use of Hollerith tabulating machines to isolate its Jewish citizens.
“AI has the power to check as well as help along [authoritarian] regimes,” she said, adding there already is evidence “that the tricks of fascist regimes of the past are getting a re-run.”
Some of the present transgressions actually come from the private sector, she said.
Crawford mentioned Uber’s development of Greyball, the ghost app that allowed the ride-hailing company to thwart municipal officials in cities trying to crack down on unauthorised expansion. Uber has agreed to stop using it to block regulators, while defending its purpose in weeding out fraudulent customers.
She also cited Palantir Technologies, the data mining start-up. According to a report in The Verge, which cited public records, Palantir has helped customs officials with ways to track and assess immigrants. Palantir is backed by Peter Thiel, the tech billionaire who is now a key adviser to President Trump. CEO Alex Karp told Forbes in January that the company would decline requests by the administration to create a Muslim Registry.
Crawford allowed that AI can deliver genuine data-crunching breakthroughs in health and science. But she is worried about the human biases built into such machines.
“Always be suspicious if you hear that some machine is ‘free from bias’ if it was trained through human-generated data,” she said. “Because as our biases show, machines could create a terrifying system in the hands of an autocrat.”
Crawford cited an experiment by Chinese researchers that fed facial characteristics of criminals into a computer, which then was able to scan a random photo and “tell with 90 per cent accuracy” whether that person was likely to become a criminal.
“We’re at a volatile moment, one where data could be attached to unaccountability,” she said.
Crawford urged attendees not to despair, but rather to stay vigilant about how AI data is used in the coming years. To that end, she and a few colleagues have launched AI Now, an ACLU-backed initiative that encourages researchers to join together to monitor AI’s social impact.
“What these dark days do is challenge us to be prepared,” she said. “Done right, AI can be used to keep power structures in check. But in the wrong hands …”