Making It Personal
The Stanford Responsible Digital Leadership Project helps industry foster an ethical approach to data privacy and artificial intelligence. Søren Jørgensen, the Project Director, explains how to strike a balance between privacy and data use.

[Sponsored Article]
Technology and data have transformed many lives, but their misuse can harm personal privacy and safety. For this reason, Søren Jørgensen, a research fellow at Stanford University in the US, joined Dr Elise St John of the Digital Transformation Hub at California Polytechnic State University and Radhika Shah, a Silicon Valley angel and impact investor who is also a fellow at Stanford University, to launch the Responsible Digital Leadership Project at the end of 2019.
The project aims to provide guidelines for the responsible use of new technologies such as Artificial Intelligence (AI) and data, and to offer a learning platform where global businesses can exchange ideas and establish better practices. Working with a group of banks and insurance companies, the project hopes to find ways for companies to implement ethical principles and guidelines when using data. It also considers data ethics in the context of responsible digital leadership, particularly its impact on society and human rights, and assesses how the use of data aligns with the United Nations Sustainable Development Goals (SDGs).
“Globally, it is a critical time, as we are starting to realize the challenges that the use of technology brings,” says Jørgensen. “We realize that responsible digital leadership is about culture and learning, and about promoting a mindset of responsible behavior. We must develop the ability to face these challenges when they come up. We need guidelines, but guidelines alone won’t do that; it’s a result of how we behave.”
Jørgensen says the Responsible Digital Leadership Project, which could be classified as the largest global project of its kind, is the culmination of ideas from more than 70 PhD, MBA, and master’s-level students at some of the world’s top universities, including HKUST.
This global effort “will define the risks of technology use, with a mission to future-proof the financial sector against getting the ethics of data and AI wrong.” Jørgensen says the project has so far uncovered around 60 concrete dilemmas and challenges in the ethical use of data and technology.