Hong Kong issues first AI data protection guidelines, promises more compliance checks
- The city’s privacy watchdog issued a framework for using personal data with generative AI and recommended firms set up a governance committee

Companies using generative AI solutions should take a range of measures to protect personal data, including conducting risk assessments, deciding on the appropriate level of human oversight, and minimising the amount of personal data collected to train their models, according to a framework published by the Office of the Privacy Commissioner for Personal Data (PCPD) on Tuesday.
They should also set up an internal AI governance committee, led by a C-level executive, that reports directly to the board, according to the document.
The framework is Hong Kong's most comprehensive set of AI-related regulatory guidelines to date; the city currently has no laws or regulations specifically governing the technology, which has seen rapid adoption since OpenAI launched ChatGPT in late 2022.

However, it does not impose mandatory requirements, noted Amita Haylock, a partner at the law firm Mayer Brown in Hong Kong.
“Given the government’s strong support for innovation and technology, I believe it is likely to continue with an incremental approach to AI use and development by introducing voluntary guidelines and subject matter-specific measures – for example, in data privacy and intellectual property – rather than enact sweeping laws and regulations in this regard,” she said.