Letters | As ChatGPT makes waves, Hong Kong must develop AI regulations informed by ethics
- Readers discuss the need for AI policy infrastructure that stresses ethics compliance, redundant technology still employed in the city, and the continued use of bamboo scaffolding

Yet one area that is under-represented in the regulatory discussion of AI is how users should be regulated. In the case of ChatGPT, the language model draws on a vast body of information that users around the world simultaneously contribute to and distribute. Any large-scale misuse of the model for purposes that do not align with society’s values would create challenges for the industry.
Particularly worthy of policy attention is collaboration between AI companies and academic partners, which can serve as an important means of achieving the ethical use of AI. More specifically, technology companies could learn how to operationalise the principles of AI ethics into standards, implementation guidelines and algorithms. Product developers would learn to identify ethical pitfalls and train their systems using ethics-embedded source code, algorithms and data sets. Users and consumers, in turn, would benefit from ethics-informed AI systems that embrace values such as autonomy, privacy, non-discrimination and sustainable development.