Opinion
Yanto Chandra

How academia can embrace ChatGPT and reignite a love for learning

  • Resistance is reminiscent of the 1988 US maths teachers’ protest against the calculator
  • But if teachers can reassess their roles and how they teach and grade work, ChatGPT and AI in general can help shift education from rote learning towards discovery in learning
The rise of ChatGPT has prompted debate in academic circles as to whether artificial intelligence should be banned for students (and professors). Some universities have proudly banned ChatGPT while others are harnessing its benefits. One major concern is the potential for abuse, such as turning in AI-generated essays or programming code as one’s own work.

The fear is that using AI will reduce the quality of learning, that shortcuts to quick wins will make students lazy, and that university administrators will be blamed for failing to suppress academic misconduct. There are concerns, too, about AI cannibalising the work of academics.

But one argument is missing here. Academic cheating is a multibillion-dollar industry. Students have confessed to using education platforms, which offer help with homework and textbook answers, to cheat. There are also countless ghostwriting, contract cheating and other services available.

Academia is all too optimistic about anti-plagiarism software and academic integrity policies catching all the bad apples. Banning AI is hardly going to help when there are thousands of ways to cheat.

And it seems ChatGPT is likely to only get a student a C grade anyway, according to informal experiments by American university professors in putting the bot through tests. The tentative conclusion is that ChatGPT is not a great cheating tool because it requires some skill to get it to produce targeted, high-quality results, and even so, the output can include errors.

The resistance towards ChatGPT in academia is reminiscent of maths teachers’ protest against the use of calculators in Washington in April 1988. But calculators did not remove the need to learn algebra and calculus.

Nor did Microsoft Excel remove the need to learn about matrices. Instead, these tools automated low-level, repetitive tasks, allowing us to get into higher-level, more complex thinking, and to be more creative.

With calculators, maths teachers have taught algebra differently. For example, instead of teaching rigid steps in how percentages and fractions are calculated, students can be asked to explore alternative ways to arrive at an answer, using calculators. The focus shifts away from rote memorisation towards discovery in learning. As a teaching tool, AI including ChatGPT can perhaps be seen in a similar light.

ChatGPT and AI in general require academia to change its perspective in three areas: how educators view their role in learning, how they design pedagogy, and how they set assessments.

First, academia can lead in providing the wisdom and awareness with which to judge and interact with output from AI. Knowledge co-production with AI can become what academia focuses on. A crucial first step is to devise a responsible AI use policy which can include having students acknowledge AI use in their assessed work.

Second, curriculum design has to change with the advent of AI. In any given topic, AI can create plenty of examples in seconds – from coding a simple website in HTML to drafting a CEO’s speech in a crisis. Students can critically evaluate these examples as part of learning.

AI can help learning by correcting grammar, suggesting appropriate word choices (“do I say drink or take the medicine?”), analogies and metaphors, or even the words that are more culturally relevant in one country than another.


AI can also quickly offer different versions of the same thing, which can help in learning how to, say, write for different target audiences, and to write with greater clarity. Asking AI questions can also help in developing debate and argument, especially when the AI is asked to refute one’s ideas, which learners can follow up on with rebuttals.

AI can also help in role playing. For example, learners can ask the AI to write a fundraising statement for a social enterprise, first as Mother Teresa, then as Albert Einstein, to try and better understand the historical figures. To weed out plagiarism, we can learn to spot the structure and style typically churned out by an AI.

Third, AI will require a change in academic assessments. We can sit learners down for sessions with customised AI chatbots, to build repositories of knowledge to complement lectures, readings, case studies and tutorials. After a session, the chatbots can recommend personalised learning instructions to plug the gaps detected.

Visual-recognition AI can also help to grade maths equations, sketches of an idea or a prototype. It can compare and contrast work from different student cohorts to assess their strengths and limitations, and offer support. Different pedagogical techniques can be assessed by AI in terms of efficacy and speed.

If AI is an elephant, then academics are the six blind men, each touching a different part of the massive entity. As AI penetrates email systems (e.g., in sentence completion or correction) and software (e.g., Microsoft Office apps, and the integration of AI into the Bing, Google and Baidu search engines), it will become harder to evade.
AI will become another calculator or Google in our lives. ChatGPT it, Bing it, Ernie it, Midjourney it – these phrases will enter our daily lexicon. If surgeons can use AI to detect cancer with greater precision and speed while maintaining the final call, there seems to be little risk for academia to embrace AI as a new avenue for knowledge discovery, creation and dissemination. The age of human-machine symbiosis is arriving.

Dr Yanto Chandra is an associate professor at City University of Hong Kong. The views expressed in the article are his own
