
How AI can stop credit card fraud – as it happens

Mohan Jayaraman says despite fears that the rise of artificial intelligence poses a risk to humanity, a quiet revolution is under way as it plays a role in the speedy and accurate detection of dubious credit card activity

Artificial intelligence has featured in some alarming headlines this year, with Tesla chief Elon Musk warning that this field could pose an “existential risk for human civilisation”.
But a more discreet AI revolution is under way in preventing card fraud using machine-learning algorithms. It offers a case study in the rewards and limits of this cutting-edge technology, one that may prove instructive for other industries.

AI can parse vast troves of data and detect even the most subtle aberrant patterns in transactions and other user behaviour, giving us the ability to make sense of the growing cloud of “data exhaust” generated by consumers.
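To make that concrete, here is a minimal sketch in Python of the kind of anomaly detection described above, using scikit-learn's IsolationForest on made-up transaction features. The feature names, figures and model choice are illustrative assumptions, not a description of any vendor's actual system.

# A minimal sketch of anomaly detection over transaction features.
# All features and values below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per transaction: amount, hour of day,
# and distance from the cardholder's home (km).
normal = np.column_stack([
    rng.normal(60, 20, 5000),   # typical purchase amounts
    rng.normal(14, 4, 5000),    # mostly daytime activity
    rng.normal(5, 3, 5000),     # close to home
])
suspicious = np.array([[2500.0, 3.0, 900.0]])  # large, 3am, far away

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags the transaction as anomalous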

Card fraud is a worthy test of its potential: 42 per cent of respondents polled across 10 Asia-Pacific countries reported being affected, according to Experian's new Fraud Management Insights 2017 research report.

But thanks to machine-learning algorithms, banks and credit card companies can now spot dubious activity almost instantly, even mid-transaction. This can be achieved with a marked reduction in false positives, resulting in a better consumer experience.
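As an illustration of mid-transaction scoring and the false-positive trade-off, the sketch below trains a simple classifier on synthetic history and picks a decision threshold aimed at high precision, so that most declined transactions really are fraudulent. The data, features and precision target are assumptions made only for this example.

# A minimal sketch of in-flight scoring with a threshold tuned to keep
# false positives low. Synthetic data; real systems use far richer signals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(1)

# Synthetic history: two features (amount z-score, distance from home), ~1% fraud.
X = rng.normal(size=(20000, 2))
y = (rng.random(20000) < 0.01).astype(int)
X[y == 1] += 3.0  # fraudulent transactions look different on average

clf = LogisticRegression().fit(X, y)

# Choose the lowest threshold that still yields ~90% precision, so that
# most transactions we hold for review really are fraudulent.
precision, recall, thresholds = precision_recall_curve(y, clf.predict_proba(X)[:, 1])
threshold = thresholds[np.argmax(precision[:-1] >= 0.9)]

def score_in_flight(transaction):
    """Return True to hold the transaction for review, False to approve."""
    p_fraud = clf.predict_proba(transaction.reshape(1, -1))[0, 1]
    return p_fraud >= threshold

print(score_in_flight(np.array([4.0, 4.5])))   # likely flagged
print(score_in_flight(np.array([0.1, -0.2])))  # likely approved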

For consumers, businesses and regulators, the application of AI here is typically a net positive, meaning lower losses from fraud, improved service and sustained confidence.

Yet fraudsters are locked in an arms race with those who would catch them, and new detection techniques must constantly be invented merely to keep up.

A core challenge for deploying data analytics is obtaining accurate data to minimise false positives, and doing it fast enough so as not to inconvenience customers. Then there are the privacy and security concerns around “big data” itself.

But machine learning per se does not necessarily imply invasiveness. Fraud-detection software can be purpose-built to use only specific data sets, and there should be no privacy issues if there are agreed-upon standards for safeguarding data aggregation, explicit user consent and a clear regulatory framework.
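One way to read "purpose-built to use only specific data sets" is an explicit allow-list of transaction fields plus pseudonymisation of identifiers before anything reaches the model. The sketch below is a hypothetical illustration: the field names are invented, and a production system would use keyed tokenisation rather than the bare hash shown here for brevity.

# A minimal sketch of data minimisation for a fraud model.
import hashlib

ALLOWED_FIELDS = {"amount", "merchant_category", "hour", "country"}

def minimise(raw_transaction: dict) -> dict:
    """Keep only agreed-upon fields and pseudonymise the card number."""
    features = {k: v for k, v in raw_transaction.items() if k in ALLOWED_FIELDS}
    # A one-way hash lets the model link a card's history without storing
    # the card number itself (real systems use salted or keyed tokenisation).
    features["card_token"] = hashlib.sha256(
        raw_transaction["card_number"].encode()
    ).hexdigest()[:16]
    return features

raw = {
    "card_number": "4111111111111111",
    "cardholder_name": "A. Customer",   # dropped: not on the allow-list
    "amount": 129.95,
    "merchant_category": "electronics",
    "hour": 23,
    "country": "SG",
}
print(minimise(raw))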

Artificial intelligence has been used for face recognition for mobile payments, and is now being employed to make rapid calculations to detect credit card fraud. Photo: Simon Song

But developers of algorithms do need to beware of more complex regulatory issues that may yet emerge. For instance, machine learning produces “black box” algorithms – complex models whose inner workings are at least partially opaque.

Legal issues might crop up if the reasons for suspecting fraud cannot be fully explained, so reporting systems that explain how transgressions are detected should accompany the technology.
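For models that remain interpretable, such a reporting system can be as simple as listing each feature's contribution to the fraud score. The sketch below does this for a linear model; the features, data and output format are illustrative assumptions rather than how any particular bank reports its decisions.

# A minimal sketch of an explanation report for a flagged transaction,
# using per-feature contributions from a linear fraud model.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount_zscore", "km_from_home", "night_time"]

rng = np.random.default_rng(2)
X = rng.normal(size=(10000, 3))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=10000) > 3.5).astype(int)
clf = LogisticRegression().fit(X, y)

def explain(transaction):
    """Return per-feature contributions to the fraud score (log-odds)."""
    contributions = clf.coef_[0] * transaction
    report = sorted(zip(feature_names, contributions),
                    key=lambda kv: -abs(kv[1]))
    return [f"{name}: {value:+.2f}" for name, value in report]

flagged = np.array([2.8, 1.9, 0.4])
print(explain(flagged))  # reasons ordered by how strongly they raised the score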

Solutions such as these take longer to build, requiring more thoughtful design, system-wide intent and a greater degree of human intervention. In this regard, people are not yet replaceable.

Mohan Jayaraman is regional managing director of decision analytics & business information at Experian Asia Pacific

This article appeared in the South China Morning Post print edition as: How AI can stop card fraud