Tech giants don’t want us to know how their AI systems work or why they spread racism and untruths. An open-access alternative holds out promise
- AI algorithms can hold conversations, generate readable text and predict what you'll write next, but all have flaws
- AI creators are notoriously secretive, but a new, open-access, multi-language model from a coalition of researchers is promising transparency

The tech industry’s latest artificial intelligence systems can be pretty convincing in conversation, but they’re not so good – and sometimes dangerously bad – at handling seemingly straightforward tasks.
Take, for instance, GPT-3, a system built by the research lab OpenAI and exclusively licensed to Microsoft, which can generate paragraphs of human-like text based on what it has learned from a vast database of digital books and online writings.
It’s considered one of the most advanced of a new generation of AI algorithms that can converse, generate readable text on demand and even produce novel images and video.
Among other things, GPT-3 can write almost any text you ask for – a cover letter for a zookeeping job, say, or a Shakespearean-style sonnet set on Mars.
But when Gary Smith, a professor at Pomona College in California, asked it a simple but nonsensical question about walking upstairs, GPT-3 muffed it.
“Yes, it is safe to walk upstairs on your hands if you wash them first,” the AI replied.