Worried AI will replace your job? Here’s an explainer to prepare for that day
Oxford-Yale survey of 352 machine learning experts predicted a 50 per cent chance AI will outperform humans in all tasks in just 45 years
Machines are expected to surpass humans in language translation by 2024, in writing high-school essays by 2026, in driving a truck by 2027, and in working retail service jobs by 2031, according to research by Oxford University’s Future of Humanity Institute and Yale University. By 2049, machines will be able to write a bestseller, and by 2053 they will be working as surgeons. Artificial intelligence will be at the centre of these applications.
The Oxford-Yale AI impact research, based on a survey last year of 352 machine learning experts, estimated a 50 per cent chance that AI will outperform humans in all tasks in just 45 years, and could take over every job in the next century. For most researchers, it is a matter of “when” not “if”.
Worried about your job being replaced by a machine? Here’s a primer to help prepare for that day.
1. What is artificial intelligence?
Artificial intelligence is the science of simulating intelligent behaviour in computers, enabling them to exhibit human-like traits such as knowledge, reasoning, common sense, learning and decision-making, according to a definition in a recent Goldman Sachs report on AI.
2. What is machine learning?
Machine learning is a branch of AI in which computers learn from data without being explicitly programmed. Common tasks include classification, clustering and prediction, such as identifying fraud and spam, grouping text, voice or images, and predicting the likelihood of customer behaviour.
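The classification task mentioned above can be sketched in a few lines of Python. This is a deliberately minimal 1-nearest-neighbour classifier on made-up spam data; the features and labels are hypothetical, chosen only to show how a model "learns" from examples rather than from hand-written rules.

```python
# Minimal 1-nearest-neighbour classifier: predict a label for a new
# point by finding the closest labelled example (hypothetical data).
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(examples, point):
    # examples: list of (features, label) pairs the model "learns" from
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Labelled training data: (message length, number of links) -> label
training = [((120, 0), "ham"), ((30, 5), "spam"),
            ((200, 1), "ham"), ((25, 8), "spam")]

print(predict(training, (28, 6)))  # a short, link-heavy message -> "spam"
```

No rule about links or message length is written anywhere in the code; the prediction comes entirely from the labelled examples, which is the essence of learning from data.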
3. What is supervised learning versus unsupervised learning?
In supervised learning, the system is given examples with correct answers so it can learn to predict the output. Real-world applications include spam detection. In unsupervised learning, the system is given unlabelled examples instead of correct answers, and has to discover patterns on its own, such as grouping customers who share certain characteristics.
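The customer-grouping example above can be illustrated with a minimal unsupervised sketch: a k-means-style clustering of one-dimensional spending figures (the numbers are invented for illustration). Note that the code is never told which customers are "low" or "high" spenders; the two groups emerge from the data alone.

```python
# Minimal k-means-style clustering (k=2, one dimension): groups
# unlabelled customer spend figures without any "correct answers".
def kmeans_1d(values, iterations=10):
    c1, c2 = min(values), max(values)  # initial centroids
    for _ in range(iterations):
        # assign each value to its nearer centroid
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        # move each centroid to the mean of its group
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

spend = [12, 15, 14, 240, 255, 13, 260]  # hypothetical monthly spend
low, high = kmeans_1d(spend)
print(low)   # [12, 13, 14, 15]
print(high)  # [240, 255, 260]
```

Contrast this with the supervised spam example: there, every training message came with a label; here, the algorithm receives only raw numbers and discovers the structure itself.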
4. Is deep learning the same as machine learning?
Deep learning is a type of machine learning involving “deep layers” of large neural networks, which loosely resemble the structure of a human brain with neuron-like connected nodes. With each layer handling a different aspect of a problem, the hierarchy allows the system to solve more complex problems.
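The layered structure described above can be sketched as a forward pass through a tiny two-layer network. The weights here are made up, and real deep networks have many more layers and nodes; the point is only that each layer transforms the previous layer's outputs, and stacking such layers is what makes the network "deep".

```python
import math

def sigmoid(x):
    # squashes any number into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    # one output node per weight row: weighted sum, then activation
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weights]

hidden_weights = [[0.5, -0.2], [0.3, 0.8]]  # 2 inputs -> 2 hidden nodes
output_weights = [[1.0, -1.0]]              # 2 hidden nodes -> 1 output

hidden = layer([1.0, 0.0], hidden_weights)  # first "deep layer"
output = layer(hidden, output_weights)      # second layer, built on the first
print(output)  # a single value between 0 and 1
```

Training such a network means adjusting the weight numbers until the output matches labelled examples, which is why deep learning is still a form of the machine learning described earlier.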
5. What is the difference between narrow AI and general AI?
What has been achieved so far is considered narrow AI. Most current AI applications are only good at performing single tasks, such as playing Go or navigating driving routes, whereas general AI is expected to fully replicate human intelligence in independent reasoning and decision making.
6. What are some common real-world AI applications?
Many applications have spun out of the technology, which has been dubbed a driver of the fourth industrial revolution. Speech recognition powers smart home speakers, while text analysis offers automatic recommendations based on purchase history. AI has also opened the way to more demanding applications such as autonomous driving and cloud-based diagnostics.
7. Should we be afraid of AI?
Although science fiction movies often depict super intelligent robots capable of destroying the world, most experts say the introduction of human-like robots is still years, if not decades, away.
“People talk about the danger of AI [and] if it is going to harm humans. I think that argument is really overhyped,” said Li Deng, chief AI officer of hedge fund giant Citadel. “It overestimates the technology in terms of the speed of advancement.”
8. What are current limitations of AI?
A McKinsey report issued in January noted that there were still five limitations that needed to be addressed before AI technology could be adopted by business on a large scale.
● Data labelling: as most current AI models are trained through supervised learning, humans still need to label and categorise the underlying data, which can be a sizeable and error-prone task.
● Obtaining massive training data sets: as these can be difficult to obtain or create, new technologies are required to reduce the number of examples needed for model training.
● Explainability gap: larger and more complex models make it hard to explain in human terms why decisions are reached, although nascent approaches such as local-interpretable-model-agnostic explanations (LIME) are looking to provide more refined interpretations.
● Generalisability of learning: whereas humans can apply what they have learnt to understand the unknown, AI models have difficulty carrying experiences from one set of circumstances to another. This issue has spurred the development of transfer learning, which reuses a trained model for related tasks.
● Bias in data and algorithms: AI has already sparked ethical debates, such as over discrimination inherited from human-generated data or distorted financial models, yet many biases still go unrecognised and unaddressed.