What are the threats and opportunities from ChatGPT? Scientists weigh in
- The chatbot has increased the risk of academic fraud and plagiarism, but could also help break down language barriers
- If you ask the bot directly, it will give you a list of ways that it can help scientists – although it will also warn about accuracy

The technology can translate and summarise texts as well as answer questions, and its ability to produce convincing prose has heightened concerns about academic fraud.
To test how believable AI-generated text is to professionals, a team of scientists in the United States asked their peers to distinguish research paper abstracts written by the chatbot from those written by humans.
The blinded reviewers misidentified 32 per cent of the generated abstracts as real and 14 per cent of the original abstracts as written by the chatbot, according to the study posted in late December on the bioRxiv preprint server ahead of peer review.
“Reviewers indicated that it was surprisingly difficult to differentiate between the two, but that the generated abstracts were vaguer and had a formulaic feel to the writing,” the team from Northwestern University and the University of Chicago said.
Lead author Catherine Gao, of Northwestern University, said she was concerned that AI writing software could undermine the credibility of the scientific community.
