As AI takes us back to a future of mystery, don’t blame ‘the machines’ for stock market volatility
Niall Ferguson says that as machines become too sophisticated for human minds to fully comprehend, we could be returning to an age of mystification in which algorithms are the enigmatic forces we blame for unusual developments
Are we living through the remystification of the world? Much that goes on is baffling these days. Financial market movements seem increasingly mysterious. Why, after close to a decade of sustained recovery from the nadir of early 2009, did global stock markets sell off so sharply this month?
We who claim expertise in these matters can tell stories about what just happened, but the feeling persists that we haven’t a clue. Twelve weeks ago I warned that “financial red lights” were flashing again. Was I prescient or just lucky? I argued that, as central banks raised interest rates and wound down quantitative easing, there was bound to be downward pressure on stock markets and that, for demographic and other reasons, the end of the prolonged bond bull market was nigh.
Yet the market gyrations have elicited more exotic explanations. The villains of this stock market correction include an exchange-traded note, XIV, which enabled investors to bet on continuing low volatility, and large quantitative hedge funds that employ "risk parity" and "trend following" strategies. For people at dinner parties who would rather not explain these complicated instruments, it was all the fault of "the machines" or "the algorithms".
Nobody doubts that computers play a far larger role in financial markets today than ever before. It seems reasonable to assume that automated transactions by index tracking funds, not to mention high-frequency trading by quant funds, tend to amplify market movements. Yet there is no need to invoke these novelties to explain the return of normal financial volatility. There is a superstitious quality to the phrase: “It was the machines.”
For most of human history, superstition was the dominant mode of explanation. If the crops failed, it was the wrath of the gods. If a child died, it was the work of evil spirits. Sociologist Max Weber argued that modernity was about the advance of rationalism and the retreat of mystery – the “disenchantment (Entzauberung) of the world”. People entered an “iron cage” of rationality and bureaucracy. I have always thought “demystification” a more precise translation. This process may be reversible.
"The machines" are getting smarter every day. Machine learning is already superior to human learning in numerous domains. The best human players of chess and Go no longer stand a chance against the computers of DeepMind, the company Google acquired in 2014. The physicist Albert-László Barabási told me that computers at his laboratory at Northeastern University already do a better job of assessing the performance of football players than human experts.
People tend to think of AI in terms of science fiction such as 2001: A Space Odyssey, the 1968 Stanley Kubrick film, in which the computer HAL 9000 attempts to kill the entire crew of a spaceship. But perhaps the right way to think of AI is historical – as a phenomenon that may return humanity to the old world of mystery and magic. As machine learning steadily replaces human judgment, we are as baffled as our premodern forefathers were. Many of us stand in the same relation to financial “flash crashes” as medieval peasants did to flash floods. As former Google chairman Eric Schmidt explained, even the best software engineers no longer fully understand how their own algorithms work.
Firms such as Nvidia programme cars to teach themselves how to drive. This “deep learning” goes further than our paltry human minds can fathom. How exactly is Deep Patient, a system developed at Mount Sinai Hospital in New York, able to predict which patients may succumb to schizophrenia? We don’t really know, and Deep Patient isn’t designed to explain its reasoning.
AI is no longer about getting computers to think like humans, only faster. It is about getting computers to think like a species that has evolved brains much bigger than ours. How will we cope with this remystification of the world? Shall we begin to worship the machines? Or shall we just lapse into fatalism? I would like to believe that the sum of human happiness will be increased by deep learning. But I fear that the sum of human understanding may end up being reduced.
Consider a political example. Many British people today wonder why Brexit is going wrong. Growing numbers of people want to rerun the referendum. The government is sleepwalking towards an agreement in which nothing changes except that the UK loses all voting rights in Brussels. All this was more or less predictable two years ago. Yet The Daily Telegraph and Daily Mail have an alternative explanation: a “secret plot to sabotage Brexit” by the dastardly cosmopolitan financier George Soros.
This kind of explanation also has a history, and not an edifying one. If the remystification of the world means a revival of thinly veiled anti-Semitism as well as magical thinking, then I’m staying put in Weber’s iron cage.
Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford