
Tech giants don’t want us to know how their AI systems work or why they spread racism and untruths. An open-access alternative holds out promise
- AI algorithms are capable of having conversations, creating readable text, and predicting your writing, but all have flaws
- AI creators are notoriously secretive, but a new, open-access, multi-language model from a coalition of researchers is promising transparency
The tech industry’s latest artificial intelligence systems can be pretty convincing, but they’re not so good – and sometimes dangerously bad – at handling seemingly straightforward tasks.
Take, for instance, GPT-3, a Microsoft-controlled system that can generate paragraphs of human-like text based on what it’s learned from a vast database of digital books and online writings.
It’s considered one of the most advanced of a new generation of AI algorithms that can converse, generate readable text on demand and even produce novel images and video.
Among other things, GPT-3 can write up most any text you ask for – a cover letter for a zookeeping job, say, or a Shakespearean-style sonnet set on Mars.
But when Gary Smith, a professor at Pomona College in California, asked it a simple but nonsensical question about walking upstairs, GPT-3 muffed it.
“Yes, it is safe to walk upstairs on your hands if you wash them first,” the AI replied.

These powerful and power-chugging AI systems, technically known as “large language models” because they’ve been trained on a huge body of text and other media, are already getting baked into customer service chatbots, Google searches and “auto-complete” email features that finish your sentences for you.
But most of the tech companies that built them have been secretive about their inner workings, making it hard for outsiders to understand the flaws that can make them a source of misinformation, racism and other harms.
“They’re very good at writing text with the proficiency of human beings,” says Teven Le Scao, a research engineer at the AI start-up Hugging Face. “Something they’re not very good at is being factual.
“It looks very coherent. It’s almost true. But it’s often wrong.”
That’s one reason a coalition of AI researchers co-led by Le Scao – with help from the French government – recently launched a new large language model that’s supposed to serve as an antidote to closed systems such as GPT-3.
The group is called BigScience and its model is BLOOM, short for the BigScience Large Open-science Open-access Multilingual Language Model. Its main breakthrough is that it works across 46 languages, including Arabic, Spanish and French – unlike most systems that are focused on English or Chinese.

“We’ve seen announcement after announcement after announcement of people doing this kind of work, but with very little transparency, very little ability for people to really look under the hood and peek into how these models work,” says Joelle Pineau, managing director of Meta AI.
Competitive pressure to build the most eloquent or informative system – and profit from its applications – is one of the reasons that most tech companies keep a tight lid on them and don’t collaborate on community norms, says Percy Liang, an associate computer science professor at Stanford University in California who directs its Center for Research on Foundation Models.
“For some companies this is their secret sauce,” Liang says. But they are often also worried that losing control could lead to irresponsible uses.
While most companies have set their own internal AI safeguards, Liang says what’s needed are broader community standards to guide research and decisions such as when to release a new model into the wild.

It doesn’t help that these models require so much computing power that only giant corporations and governments can afford them. BigScience, for instance, was able to train its models because it was offered access to France’s powerful Jean Zay supercomputer near Paris.
The trend towards ever bigger, ever smarter AI language models that could be “pre-trained” on a wide body of writings took a big leap in 2018 when Google introduced a system known as BERT that uses a so-called “transformer” technique that compares words across a sentence to predict meaning and context.
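In broad strokes, that “fill in the blank from context” idea can be sketched in a few lines of code using Hugging Face’s openly available transformers library. The checkpoint name and example sentence below are illustrative assumptions, not details taken from the systems described in this article:

```python
# Minimal sketch: a BERT-style model guesses a hidden word by weighing
# every other word in the sentence (the "transformer" attention idea).
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, publicly released BERT checkpoint (chosen here purely for illustration).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model uses the surrounding context to rank likely words for [MASK].
for guess in fill_mask("Researchers trained the language model on a huge [MASK] of text."):
    print(f"{guess['token_str']:>12}  score={guess['score']:.3f}")
```

Run as written, the script downloads the model once and prints its top guesses for the hidden word, each with a confidence score – a small-scale version of the context prediction that underpins the larger systems discussed here.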
But what really impressed the AI world was GPT-3, released by San Francisco-based start-up OpenAI in 2020 and soon after exclusively licensed by Microsoft.
OpenAI has broadly described its training sources in a research paper, and has also publicly reported its efforts to grapple with potential abuses of the technology. But BigScience co-leader Thomas Wolf says it doesn’t provide details about how it filters that data, or give access to the processed version to outside researchers.
“So we can’t actually examine the data that went into the GPT-3 training,” says Wolf, who is also chief science officer at Hugging Face.
“The core of this recent wave of AI tech is much more in the data set than the models. The most important ingredient is data and OpenAI is very, very secretive about the data they use.”
