
China tech firms need to raise their game on security concerns posed by ChatGPT-like services, experts say

  • Experts say instances of hackers tapping ChatGPT-like services to write malicious code or spam emails are increasing
  • ChatGPT-like services can help hackers because of the volume of content they can generate and the more persuasive, human-like text they produce

China tech firms need to step up their security focus on ChatGPT-like services. Photo: Reuters
Ben Jiang in Beijing

As global tech firms scramble to offer rival products to ChatGPT, the much-talked-about chatbot launched by San Francisco-based tech start-up OpenAI, Chinese artificial intelligence (AI) and security experts are warning that the unchecked growth of such services raises cybersecurity concerns.


Hackers and online scam groups have been using ChatGPT, which is capable of giving humanlike responses to complex questions, to write malicious code that could be used in spam and phishing emails, according to a representative at Beijing-based online security firm Huorong.

“We’ve noticed instances of ChatGPT being used to generate malicious code,” said the person. “It is lowering the barriers for [launching] online attacks.”

The Huorong representative added that while ChatGPT has made it easier for online attacks to be launched, it does not necessarily increase the efficacy of such attacks.

“[ChatGPT] is able to quote open-source malicious backdoor or trojan horse codes that are already available online, but it will not be able to elevate the function of the codes [to make them more effective],” said the person.


Still, having another tool that can assist and potentially popularise internet scams does not bode well for Chinese online users, who are already at risk from a variety of online frauds, from privacy leaks to malicious adware.
