ChatGPT maker’s mission to prevent AI from ‘going rogue’ will eat up a fifth of OpenAI’s compute power

  • ‘The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction,’ an OpenAI co-founder wrote in a blog post
  • The start-up, backed by Microsoft, is dedicating 20 per cent of its compute power to solving the ‘alignment problem’ of keeping AI’s goals beneficial to humans

A keyboard reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken February 8, 2023. Photo: Reuters
Reuters
ChatGPT’s creator OpenAI plans to invest significant resources and create a new research team that will seek to ensure its artificial intelligence remains safe for humans – eventually using AI to supervise itself, it said on Wednesday.

“The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University on June 5, 2023. Photo: AFP

Superintelligent AI – systems more intelligent than humans – could arrive this decade, the blog post’s authors predicted. Humans will need techniques better than those currently available to control such systems, hence the need for breakthroughs in so-called “alignment research”, which focuses on ensuring AI remains beneficial to humans, according to the authors.


OpenAI, backed by Microsoft, is dedicating 20 per cent of the compute power it has secured over the next four years to solving this problem, they wrote. In addition, the company is forming a new team that will organise around this effort, called the Superalignment team.

The team’s goal is to create a “human-level” AI alignment researcher, then scale it up with vast amounts of compute power. OpenAI says that means it will train AI systems using human feedback, train AI systems to assist human evaluation, and finally train AI systems to actually do alignment research.


AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
