- Since OpenAI launched the artificial intelligence chatbot last November, it has gained more than 100 million users, but some are worried about its impact on education
- Meanwhile, the University of Hong Kong and Baptist University have banned students from using ChatGPT in their assignments due to concerns over plagiarism
Hot Topics takes an issue being discussed in the news and allows you to analyse different viewpoints on the subject. Our questions encourage you to examine the topic in-depth. Scroll to the bottom of the page for sample answers.
Context: ChatGPT sparks AI chatbot race, but some raise concerns about plagiarism
OpenAI’s chatbot uses artificial intelligence to create humanlike responses to users’ prompts
Critics and educators are wary of the chatbot’s power as it can pass medical and legal exams in seconds
Launched last November, ChatGPT (Chat Generative Pre-trained Transformer) has sparked an AI chatbot craze. It has the power to transform how humans work and learn.
ChatGPT was created by OpenAI, a start-up based in San Francisco. The chatbot uses artificial intelligence to create content that responds to users’ prompts. People can type their questions into a text box and engage in conversations with the bot. It can even produce images and videos. Its responses are based on a database of digital books, online writings and other media.
Currently, ChatGPT’s basic services are free. The tool gained 1 million users in the first five days after its launch. As of last month, it had 100 million users.
But the chatbot has its limits. According to OpenAI, ChatGPT sometimes writes “plausible-sounding but incorrect or nonsensical answers”.
Still, the chatbot has shocked users with its ability to write in a humanlike manner. People have used the bot to write emails, poems, code and even books.
ChatGPT has caused a stir in education. Teachers are concerned about students using the bot to do their work.
Law professors at the University of Minnesota used the chatbot to generate answers to exams in four courses last year. Its responses earned low but passing grades. Other exams the chatbot has passed include the US medical licensing exam and one for a graduate-level course at the Wharton School of the University of Pennsylvania.
In January, Microsoft confirmed a US$10 billion (HK$78 billion) investment into OpenAI. Search engine giant Google, which has a 92.5 per cent share of the global search market, was reportedly worried about ChatGPT’s release. This is because of the bot’s potential to threaten Google’s dominance and income.
Earlier this month, Google launched an AI chatbot called Bard to compete with Microsoft’s BingGPT, which uses techniques from ChatGPT.
Reuters and staff writers
List THREE of ChatGPT’s abilities.
Explain why people are concerned about ChatGPT’s impact on education.
What is the user trying to do with ChatGPT in the photo? Do you think the chatbot can give a proper response to that question? Explain using information from Context.
What are the other technology products mentioned above? What do you think this photo is meant to symbolise?
News: University of Hong Kong temporarily bans students from using ChatGPT, other AI-based tools for coursework
Suspected violations will be treated as plagiarism
University plans to launch broad-based campus debate on the implications of AI-based tools for teaching and learning at the institution
The University of Hong Kong (HKU) has temporarily banned students from using ChatGPT or any other artificial intelligence-based tool for coursework, assessments or class, with any suspected violations to be treated as plagiarism.
Earlier this month, the tertiary education institution became the first in the city to prohibit the use of AI-based tools on campus.
Professor Ian Holliday, HKU’s vice-president for teaching and learning, issued a campuswide email on Friday announcing the decision. Students seeking exemptions would need to obtain written permission from course instructors, he said.
With ChatGPT sweeping the internet and shaking up the education sector, the university planned to launch a campus debate on the implications of AI-based tools for teaching and learning, he explained. “It will take a while for us to settle on a long-term policy,” he said.
“We therefore need to adopt a short-term policy. This is it: as an interim measure, we prohibit the use of ChatGPT or any other AI-based tool for all classroom, coursework and assessment tasks at HKU.”
Holliday warned that suspected violations of the policy would be treated as plagiarism cases, saying teachers could use various methods to verify if students had used any AI-based tools to complete their work.
“Teachers who suspect ChatGPT or another AI-based tool has been used may call a student in to discuss their work, set a supplementary oral examination, require a supplementary in-hall examination or adopt other measures,” he said.
Last Wednesday, Baptist University sent a letter warning students that they would commit plagiarism if they took words or ideas from other sources, including ChatGPT and other AI technologies, and presented them as their own.
“If students rely heavily on AI-based tools to do coursework, they can’t develop their own logical reasoning, critical thinking and language skills,” Holliday said. “The AI can be a double-edged sword.”
Why is HKU only temporarily banning the use of AI-based tools?
With reference to News and Context, why might ChatGPT be considered “a double-edged sword” by educators? Explain ONE argument for and ONE against the use of the AI tool in the classroom.
Issue: Educators’ concerns over academic fraud and unreliable AI detection tool
Blind reviews misidentified 32 per cent of ChatGPT’s research paper abstracts as being real
But some point to the chatbot’s potential to decrease language barriers and improve learning processes
The AI chatbot ChatGPT is causing concern among academics and educators, but some say it could also help reduce language barriers and improve learning processes.
To test how believable the AI-generated texts are to professionals, a team of scientists in the United States asked their peers to distinguish research paper abstracts written by the AI from those written by humans.
The blind reviews misidentified 32 per cent of generated abstracts as being real and 14 per cent of original abstracts as being written by the chatbot, according to the study published in late December on the bioRxiv website ahead of peer review.
Lead author Catherine Gao, from Northwestern University, said she was concerned that AI writing software could hurt the credibility of the scientific community.
“Organisations such as paper mills [fake research paper factories] could use this technology more easily to fabricate scientific writing and results,” she explained. “If a scientist were to base their work on data that is false, or a clinician to base their care on studies that were fake, that could be very dangerous.”
But using the AI service, which can translate and write in languages including English, Chinese, Spanish and French, could help scientists publish in a second language.
“[The tools] can be used to help scientists with the burden of writing and help improve equity, particularly for scientists who may have language barriers to disseminating their work,” Gao said.
Wharton professor Christian Terwiesch typed questions from his class’s final exam into ChatGPT. He wanted to see if the tool could pass an MBA-level course, and it did. But he noted that some answers contained simple mistakes. He said this showed that humans were still far from using the tool to replace trained professionals.
But Terwiesch encourages fellow educators to consider “opportunities where we can think about improving our learning process” using AI tools in the classroom.
OpenAI, the creator of ChatGPT, released a free detection tool on February 1 to help educators and others distinguish if a text was written by a human or a machine. But the tech firm said that the tool was “very unreliable”, especially on short texts below 1,000 characters.
To what extent do you agree that ChatGPT is dangerous? Explain using THREE examples from Context, News and Issue.
Use information from Context, News and Issue to craft a suggestion for HKU’s long-term policy on AI-based tools. Justify your answer.
artificial intelligence: a field of computer science and engineering that works to create machines that can think and act like humans. It is used in many different industries, from healthcare to manufacturing.
chatbot: a software application used to conduct an online chat conversation with human users
ChatGPT: an AI program trained on a large data set and human-defined guidelines and principles to satisfy a human user’s inquiries. It can converse, generate text on demand and even produce images and videos.
MBA: acronym for Master of Business Administration. It is a postgraduate degree that provides training for business or investment management.
OpenAI: US research laboratory founded in 2015 with the goal to ensure “artificial general intelligence benefits all of humanity”
OpenAI’s detection tool: also known as a classifier. It can help distinguish if a text was written by a human or an AI program. OpenAI noted some limitations. For example, AI-generated text could be edited to evade detection by this tool, and the program would have trouble with text that is not in English.
List THREE of ChatGPT’s abilities. ChatGPT can engage in conversations, generate text such as essays and poems and produce images and videos based on a database of digital books, online writings and other media. (accept other reasonable answers)
Explain why people are concerned about ChatGPT’s impact on education. People are concerned because students could use the tool to write essays and complete exams, so schools must be cautious about academic dishonesty and plagiarism.
What is the user trying to do with ChatGPT in the photo? Do you think the chatbot can give a proper response to that question? Explain using information from Context. The user is asking the AI program to explain quantum computing in simple terms, which is something the generative AI tool can deliver. However, the tool is not infallible and may present incorrect information.
What are the other technology products mentioned above? What do you think this photo is meant to symbolise? Bard is Google’s AI chatbot, and Microsoft also has BingGPT, which uses technology from ChatGPT. The photo symbolises the current AI chatbot race sparked by ChatGPT. Since Microsoft has already invested in OpenAI, the company behind ChatGPT, Google has had to launch its own AI chatbot to keep up. Although Google has a strong hold on the search engine market, ChatGPT is a threat as it offers a much more powerful search tool for users.
Why is HKU only temporarily banning the use of AI-based tools? The university plans to launch a broad-based campus debate on the implications of AI-based tools for teaching and learning at the institution, so it will take a while for it to settle on a long-term policy.
With reference to News and Context, why might ChatGPT be considered “a double-edged sword” by educators? Explain ONE argument for and ONE against the use of the AI tool in the classroom. For: The technology will transform many sectors, so students should learn how to use it to keep up with the changing world. Against: If students rely heavily on the AI tool, they may fail to develop their own critical thinking and language skills. (accept other reasonable answers)
To what extent do you agree that ChatGPT is dangerous? Explain using THREE examples from Context, News and Issue. I agree that ChatGPT is dangerous because it could be used to fabricate scientific writing. If people do not read the details of a research paper carefully, they may mistake fake results for real facts. Doctors might also give patients incorrect and harmful care based on fabricated information. The tool can also hinder students’ development of foundational skills in critical thinking, logical reasoning and language. (accept other reasonable answers)
Use information from Context, News and Issue to craft a suggestion for HKU’s long-term policy on AI-based tools. Justify your answer. HKU should allow professors to judge whether AI-based tools will be useful in the classroom. This is because some classes require students to generate their own writing and code to build a foundation of knowledge. However, higher-level courses should make use of the AI-based tools to push students to create even more complicated projects. This will be an important skill as they move into sectors that may use AI to improve efficiency. (accept other reasonable answers)