The best AI chatbots seem like real humans, and that’s scaring people
- OpenAI’s new ChatGPT chatbot fascinated Twitter users with its human-like ability to answer questions, explain scientific concepts and write scenes for plays
- But this level of sophistication is worrying some observers, who say these technologies could be used to spread false information or create credible scams

California start-up OpenAI has released a chatbot capable of answering a variety of questions, but its impressive performance has reopened the debate on the risks linked to artificial intelligence (AI) technologies.
Conversations with ChatGPT, posted on Twitter by fascinated users, depict a seemingly omniscient machine, capable of explaining scientific concepts, writing scenes for plays and university dissertations, and even producing functional lines of computer code.
“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” says Claude de Loupy, head of Syllabs, a French company that specialises in automatic text generation.
“When you start asking very specific questions, ChatGPT’s response can be off the mark,” de Loupy adds, but its overall performance remains “really impressive”, with a “high linguistic level”.
OpenAI was co-founded in 2015 in San Francisco by billionaire tech mogul Elon Musk, who left the business in 2018. The company received US$1 billion from Microsoft in 2019.