
Letters | As ChatGPT makes waves, Hong Kong must develop AI regulations informed by ethics

  • Readers discuss the need for AI policy infrastructure that stresses ethics compliance, redundant technology still employed in the city, and the continued use of bamboo scaffolding

ChatGPT, a chatbot that uses AI to generate text from any given prompt, has garnered attention for its human-like responses. Photo: Shutterstock
Feel strongly about these letters, or any other aspects of the news? Share your views by emailing us your Letter to the Editor at letters@scmp.com or filling in this Google form. Submissions should not exceed 400 words, and must include your full name and address, plus a phone number for verification.
Artificial intelligence is a crucial enabler of digital transformation. ChatGPT, a large language model that can perform human-like writing tasks, has showcased how powerful a tool AI can be. Yet concerns have been raised about the ethical use of AI, and OpenAI, the company that created ChatGPT, recently called for proper regulation of AI tools.
As part of the Greater Bay Area, Hong Kong is well positioned to develop ethical guidelines for AI applications. The city’s dynamic tech sector, financial markets and regulatory regime will support its commitment to becoming a regional hub for emerging technologies. In August 2021, the Office of the Privacy Commissioner for Personal Data issued policy guidance on the ethical use of AI, focusing on data and privacy protection as well as standard setting for the ethical development of AI.

Yet an area that is under-represented in the regulatory discussion of AI is how users should be regulated. In the case of ChatGPT, the language model is built on a vast collection of information, which users around the world simultaneously contribute to and distribute. Any large-scale misuse of the model for purposes that do not align with society’s values would create challenges for the industry.

As the Hong Kong government has a blueprint for building a world-class smart city by 2030, it is important that local talent develop ethical awareness and critical thinking regarding the processes and uses of technology. It is time Hong Kong’s policymakers took a closer look at how the use of large-scale AI models could be governed.

What deserves particular policy attention is collaboration between AI companies and academic partners, which can serve as an important tool for achieving the ethical use of AI. More specifically, technology companies could learn how to operationalise the principles of AI ethics into standards, implementation guidelines and algorithms. Product developers would learn to identify ethical pitfalls and train their systems using ethics-embedded source code, algorithms and data sets. User-consumers would benefit from ethics-informed AI systems that embrace values such as autonomy, privacy, non-discrimination and sustainable development.
