Remember, services like Facebook are free because you’re the product being sold
Stuart Hargreaves says that the use of giant internet platforms like Facebook leaves individual users’ data vulnerable, and patchy legislation does not help
You may have first read about the Cambridge Analytica story on Facebook itself. A friend may have posted it. A news provider you follow may have shared it using their Facebook plug-in. It may have appeared in your news feed because your co-worker’s cousin posted it and then your co-worker “liked” it. It could have come to your attention in any number of ways.
If you “liked” it or commented on it, the news would spread in the same way to your friends – and that interaction would be logged. The kinds of news you like, share or comment on constitute one small data point among thousands that Facebook holds about you. It gathers data not only from the way you interact with Facebook directly, but also by tracking you across the web through “persistent cookies”. It can use these data points to accurately predict your hobbies, age, gender, religion, sexual orientation, marital status, ethnicity, education, approximate income and political views.
Facebook has 2.2 billion monthly active users. Let that sink in – close to 30 per cent of the entire planet logs in to Facebook at least once a month, each contributing to this massive accumulation of data. This database of highly detailed profiles has made Facebook one of the most valuable companies on the planet, and its founder Mark Zuckerberg one of the wealthiest people ever.
But profiles can be used for much more than predicting that one user may be usefully targeted with an ad for designer shoes, while another should be shown an ad for baby formula. They can also be used to predict the kinds of political messaging that may sway their views, and how likely they are to be a useful cog that will re-share a piece of political propaganda (or in the current parlance, “fake news”). And that brings us back to Cambridge Analytica.
Cambridge Analytica hired an academic to create one of the ubiquitous “free personality quizzes” that pop up on Facebook from time to time. Anyone who took that quiz willingly gave the app developer not only a remarkable insight into their own psychological state, but also access to all of their own information on Facebook and to their list of friends, who in turn had a significant amount of their personal data exposed. All told, the app managed to gather the information of tens of millions of people, even though only 250,000 took the actual quiz. Under Facebook’s terms of service at the time, this secondary data could only be used for “improving the user experience” and could not be resold.
Cambridge Analytica, however, took all that data, ran it through its own “secret sauce” algorithm and allegedly sold the results to a number of political groups seeking to target voters with highly personalised messaging. Millions of users unwittingly had their internet habits harvested and processed to feed them a particular political viewpoint, individually packaged in a manner to which they might be susceptible. Cambridge Analytica’s clients included the campaigns of Donald Trump and Ted Cruz in the United States, and the Brexit campaign in the UK.
Though Facebook is right that Cambridge Analytica breached its terms of service, that cannot – and must not – be the end of the discussion. It is often stated on the internet that if the service you are using is free, then you are the product. Given that Facebook has created a giant surveillance machine in which 2.2 billion of us happily participate, the consequences of this are profound.
Though Facebook has tightened up its terms of service to close the specific loophole abused by Cambridge Analytica, Facebook (and the internet in general) is still awash with “free” quizzes, games and apps, all of which harvest user data as their business model. When Facebook tries to wash its hands of responsibility for how that data is used once it is taken off its platform, abuse is nearly inevitable.
The potential for abuse is particularly acute in those jurisdictions that (like the United States) lack a comprehensive data protection law, and instead primarily rely on users to read pages and pages of agreements to determine what data that flashlight app will gather from their phone, or to whom the maker of that personality quiz will pass along their answers. The situation in Hong Kong is somewhat better, thanks to the Personal Data (Privacy) Ordinance, which seeks to regulate the collection and use of personal data. But the ordinance is in many ways a law for another era. Amended only once since 1995, it is not well suited to the significant changes wrought by internet giants whose business model is entirely dependent on the collection and processing of our personal information.
Other jurisdictions, in particular the European Union with its General Data Protection Regulation, are dramatically revamping their data protection laws to deal with these changes. The new rules, which come into effect this year, require that an individual explicitly consent to any new use of their data and to its transfer to a third party, and impose significant penalties on organisations that breach those requirements. The Hong Kong ordinance offers no such protection, and the time is ripe for the government to consider reform.
That said, no legal reform may be able to prevent all possible abuse of such a massive platform. Users themselves will have to weigh the benefits they receive from using free services like Facebook against the knowledge that they are contributing to an ever-more detailed profile of themselves, to be used one day for purposes they cannot foresee. We must all keep in mind that, ultimately, we are the product being sold. Barring a radical change in Facebook’s business model, perhaps more and more people will begin to follow the advice to #deletefacebook.
Stuart Hargreaves is a professor in the Faculty of Law at the Chinese University of Hong Kong