How AI can stop credit card fraud – as it happens
Mohan Jayaraman says despite fears that the rise of artificial intelligence poses a risk to humanity, a quiet revolution is under way as it plays a role in the speedy and accurate detection of dubious credit card activity
But a more discreet AI revolution is under way in preventing card fraud using machine-learning algorithms. Here, we examine the rewards and limits of this cutting-edge technology, which may prove instructive for other industries.
AI can parse vast troves of data and detect even the most subtle aberrant patterns in transactions and other user behaviour, giving us the ability to make sense of the growing cloud of “data exhaust” generated by consumers.
But thanks to machine-learning algorithms, banks and credit card companies can now spot dubious activity almost instantly, even mid-transaction. This can be achieved with a marked reduction in false positives, resulting in a better consumer experience.
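To make the idea concrete, here is a minimal sketch of the kind of pattern check such systems build on: a transaction is compared, mid-payment, against a customer's recent spending history, and flagged if it deviates sharply from the norm. The function name, threshold and sample figures are illustrative assumptions; production systems combine many such signals in far more sophisticated models.

```python
from statistics import mean, stdev

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from a
    customer's recent spending pattern (a simple z-score test).
    Illustrative only - real systems use many features, not just amount."""
    if len(history) < 2:
        return False  # too little history to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# A customer who usually spends HK$50-120 per purchase:
history = [52, 75, 110, 64, 89, 98, 47, 120, 70, 83]
print(is_suspicious(history, 95))    # a typical purchase -> False
print(is_suspicious(history, 9500))  # a sudden large charge -> True
```

Because the test runs on data the bank already holds, it can complete in microseconds, which is what makes mid-transaction intervention possible.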
For consumers, businesses and regulators, the application of AI here is typically a net positive: lower losses from fraud, improved service and sustained confidence.
Yet fraudsters are locked in an arms race with those who would catch them: new detection techniques must be invented constantly merely to keep up.
A core challenge for deploying data analytics is obtaining accurate data to minimise false positives, and doing it fast enough so as not to inconvenience customers. Then there are the privacy and security concerns around “big data” itself.
But machine learning per se does not necessarily imply invasiveness. Fraud-detection software can be purpose-built to use only specific data sets, and there should be no privacy issues if there are agreed-upon standards for safeguarding data aggregation, explicit user consent and a clear regulatory framework.
But developers of algorithms do need to beware of more complex regulatory issues that may yet emerge. For instance, machine learning produces “black box” algorithms – complex models whose inner workings are at least partially opaque.
Legal issues might crop up if the reasons for suspecting fraud cannot be fully explained, so reporting systems that explain how transgressions are detected should accompany the technology.
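One way to pair an opaque model with an explainable reporting layer is to run interpretable rules alongside it, so that every flag carries a human-readable reason. The sketch below is a hypothetical illustration of that design; the field names, thresholds and rules are assumptions, not any bank's actual criteria.

```python
def score_transaction(txn):
    """Hypothetical rule layer accompanying a black-box model:
    each flag comes with a plain-language reason, so a suspected-fraud
    decision can be explained to customers and regulators."""
    reasons = []
    if txn["amount"] > 10 * txn["avg_amount"]:
        reasons.append("amount is more than 10x the customer's average")
    if txn["country"] != txn["home_country"]:
        reasons.append("transaction originated outside the home country")
    if txn["merchant_category"] in {"gambling", "crypto_exchange"}:
        reasons.append("high-risk merchant category")
    # Require two independent signals before flagging, to limit false positives.
    return {"suspicious": len(reasons) >= 2, "reasons": reasons}

txn = {"amount": 12000, "avg_amount": 800, "country": "RU",
       "home_country": "HK", "merchant_category": "electronics"}
result = score_transaction(txn)
print(result["suspicious"])  # True - and the reasons list says why
```

The trade-off is real: interpretable rules rarely match the raw accuracy of opaque models, which is why many deployments use both, with the rules supplying the audit trail.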
Solutions such as these take longer to build, requiring more thoughtful design, system-wide intent and a greater degree of human intervention. In this regard, people are not yet replaceable.
Mohan Jayaraman is regional managing director of decision analytics & business information at Experian Asia Pacific