TikTok’s Chinese sibling Douyin asks creators on the platform to label content generated by AI as Beijing moves to regulate ChatGPT-like tools

  • ByteDance-owned Douyin said creators on the platform must be held responsible for the consequences of posting content made via generative AI
  • The short video app’s rules are based on China’s new regulation, the Administrative Provisions on Deep Synthesis for Internet Information Service
ByteDance-owned short video app Douyin, the Chinese version of TikTok, has rolled out a new set of rules that require all creators on the platform to label content that was generated by artificial intelligence (AI), as Beijing initiates steps to regulate ChatGPT-like tools.
By clearly marking which content is AI-generated, Douyin creators will “help other users differentiate between what’s virtual and what’s real”, according to the app’s new platform rules published on Tuesday. The rules state that creators must be held responsible for the consequences of posting content made via generative AI.

In line with that move, Douyin has released a technical standard for creators to label such content.

Douyin said its latest rules are based on China’s new regulation, the Administrative Provisions on Deep Synthesis for Internet Information Service, which came into effect on January 10.
The regulation imposes obligations on the providers and users of so-called deep synthesis – an AI-based technology that applies deep learning, machine learning and other algorithmic processing to mixed data sets to produce synthetic content such as deepfakes – according to a February blog post by international law firm Allen & Overy.

“The highly realistic outputs, ease of operation and low cost create potential safety and security risks [from adopting] deep synthesis technology, as it can be used by criminals to produce, copy and disseminate illegal or false information or assume other people’s identities to commit fraud,” the blog post said.

It said the regulation covers technologies that generate or edit text content, video and audio, as well as those applied for virtual scene generation and 3D reconstruction.

Douyin’s latest initiative underscores the heightened awareness in China’s internet industry of addressing the risks that deepfakes pose.

While AI-generated digital avatars – also known as virtual humans – are allowed on Douyin, these must be registered with the platform and users are required to verify their real names.

Douyin on Tuesday said users who employ generative AI to create content that infringes on other people’s portrait rights or copyright, or that contains falsified information, will be “severely penalised”.

The Chinese short video app’s new rules come weeks after internet regulator the Cyberspace Administration of China (CAC) unveiled a set of draft rules targeting generative AI services such as ChatGPT, the popular chatbot launched last year by Microsoft Corp-backed start-up OpenAI.

The CAC last month proposed that companies providing generative AI services in China must take measures to prevent discriminatory content, false information, and content that harms personal privacy or intellectual property.

Operators of generative AI services should also ensure that their products uphold Chinese socialist values, and do not generate content that suggests regime subversion, violence or pornography, or disrupts economic or social order, the CAC said.

All generative AI products must pass a security assessment by the CAC before being made available to the public, as required by a 2018 regulation covering online information services that have the ability to influence public opinion, the internet regulator said. It is soliciting feedback on the proposed rules until May 10.

The CAC’s draft measures highlight the issues related to generative AI “that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency”, Yan Luo and Xuezi Dan, lawyers at Covington & Burling, wrote in an analysis piece published last month on DigiChina, a collaborative project at the Cyber Policy Centre of Stanford Law School in the US that seeks to understand China’s tech policy developments.

Regulatory requirements for companies, including a security assessment by the CAC and the filtering of inappropriate content created via generative AI, could “raise practical challenges for providers that wish to offer their generative AI services in China”, the lawyers wrote.
