Advertisement
Advertisement
Artificial intelligence
Get more with myNEWS
A personalised news feed of stories that matter to you
Learn more
AI-generated images seen on a computer. Photo: AFP

China’s Ant, Baidu, Tencent collaborate with US firms OpenAI, Nvidia on publishing first global generative AI standards

  • The GenAI standard was written by researchers from Nvidia and others, while the LLM guideline was penned by Ant employees
  • As GenAI develops rapidly, tech companies have called for efforts to keep the technology safe for individuals and businesses
China’s Ant Group, Baidu and Tencent Holdings have joined forces with leading global tech companies including OpenAI, Microsoft and Nvidia to publish two international standards on generative artificial intelligence (GenAI) and large language models (LLMs).

The companies on Tuesday released the “Generative AI Application Security Testing and Validation Standard” and the “Large Language Model Security Testing Method” during a side event at the United Nations Science and Technology Conference in Geneva, Switzerland, according to a statement from the event organiser, the non-profit World Digital Technology Academy (WDTA).

They are the first global standards specifically covering GenAI and LLM, technologies behind increasingly popular AI services such as OpenAI’s ChatGPT and Microsoft’s Copilot, which is supported by OpenAI technology. Chinese search engine operator Baidu has also rolled out its own AI chatbot, Ernie Bot, while Tencent and Ant have launched their respective LLMs.

Ant is an affiliate of Alibaba Group Holding, owner of the South China Morning Post.

Ant Group is among a list of Big Tech companies involved in composing the world’s first international standards specifically governing generative AI. Photo: Reuters

The new GenAI standard was written by researchers from Nvidia, Facebook owner Meta Platforms and others, and reviewed by companies including Amazon.com, Google, Microsoft, Ant, Baidu and Tencent.

It provides a framework for testing and validating the security of GenAI applications, according to a copy of the document published on the WDTA website.

The LLM guideline, penned by 17 Ant employees and reviewed by Nvidia, Microsoft, Meta and others, outlines a diverse range of attack methodologies to test an LLM’s resistance to hacks, according to the official copy

The WDTA, established last April under the UN framework, aims to “expedite the establishment of norms and standards in the digital domain”.

As GenAI develops rapidly and becomes increasingly used by businesses and individual users, tech companies have called for efforts to keep the technology safe. OpenAI chief executive Sam Altman, upon resuming his role in November after a brief ousting, said “investing in full-stack safety efforts” would be one of the company’s priorities.

In July, China became the first country to regulate GenAI and related services with the issuing of rules stipulating that service providers should uphold “core socialist values”, among other requirements. Since then, Beijing has approved several tech companies, including Ant, Baidu and Tencent, to open their LLMs for commercial use.

International standards and regulations on AI have existed before GenAI became popular.

In 2021, Unesco, the UN’s heritage body, introduced a “Recommendation on the Ethics of AI”, which has been adopted by 193 member states.

Between 2022 and 2023, the International Organisation for Standardisation, a Geneva-based non-governmental group that composes standards covering a wide range of areas from workplace safety to IT security, published AI-related guidelines on system management, risk management and systems using machine learning.

Post