OpenAI, the start-up behind the popular ChatGPT artificial intelligence chatbot, said on Thursday it will award 10 equal grants from a fund of US$1 million for experiments in democratic processes to determine how AI software should be governed to address bias and other factors.

The US$100,000 grants will go to recipients who present compelling frameworks for answering questions such as whether AI ought to criticise public figures and what it should consider the “median individual” in the world, according to a blog post announcing the fund.

Critics say AI systems like ChatGPT have inherent bias because of the inputs used to shape their views. Users have found examples of racist or sexist output from AI software, and concerns are growing that AI working alongside search engines such as Alphabet Inc’s Google and Microsoft Corp’s Bing may produce incorrect information in a convincing fashion.

OpenAI, backed by US$10 billion from Microsoft, has been leading the call for regulation of AI. Yet it recently threatened to pull out of the European Union over proposed rules. “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back,” OpenAI chief executive Sam Altman told Reuters. “They are still talking about it.”

The start-up’s grants would not fund much AI research: salaries for AI engineers and others in the red-hot sector easily top US$100,000 and can exceed US$300,000.

AI systems “should benefit all of humanity and be shaped to be as inclusive as possible”, OpenAI said in the blog post. “We are launching this grant programme to take a first step in this direction.” The San Francisco start-up said the results of the funding could shape its own views on AI governance, though it said no recommendations would be “binding”.
Altman has been a leading figure calling for regulation of AI, even as he rolls out new updates to ChatGPT and the image generator DALL-E. This month he appeared before a US Senate subcommittee, saying “if this technology goes wrong, it can go quite wrong”.

Microsoft, too, has recently endorsed comprehensive regulation of AI even as it has vowed to build the technology into its products, racing with OpenAI, Google and start-ups to offer AI to consumers and businesses.

Nearly every sector has an interest in AI’s potential to improve efficiency and cut labour costs, along with concerns that AI could spread misinformation or factual inaccuracies, which industry insiders call “hallucinations”. AI is already behind several widely believed spoofs: one recent phoney viral image of an explosion near the Pentagon briefly affected the stock market. Despite calls for greater regulation, Congress has failed to pass new legislation to meaningfully curtail Big Tech.