How academia can embrace ChatGPT and reignite a love for learning
- Resistance is reminiscent of the 1988 US maths teachers’ protest against the calculator
- But if teachers can reassess their roles and how they teach and grade work, ChatGPT and AI in general can help shift education from rote learning towards discovery in learning
The fear is that using AI will reduce the quality of learning: short cuts to quick wins would make students lazy, and university administrators would be blamed for failing to suppress academic misconduct. There are concerns, too, about AI cannibalising the work of academics.
Academia is all too optimistic about anti-plagiarism software and academic integrity policies catching all the bad apples. Banning AI is hardly going to help when there are thousands of ways to cheat.
The resistance towards ChatGPT in academia is reminiscent of maths teachers’ protest against the use of calculators in Washington in April 1988. But calculators did not remove the need to learn algebra and calculus.
Nor did Microsoft Excel remove the need to learn about matrices. Instead, these tools automated low-level, repetitive tasks, allowing us to get into higher-level, more complex thinking, and to be more creative.
With calculators, maths teachers have taught algebra differently. For example, instead of teaching rigid steps for calculating percentages and fractions, teachers can ask students to use calculators to explore alternative ways of arriving at an answer. The focus shifts away from rote memorisation towards discovery in learning. As a teaching tool, AI, including ChatGPT, can perhaps be seen in a similar light.
First, academia can lead in providing the wisdom and awareness with which to judge and interact with output from AI. Knowledge co-production with AI can become what academia focuses on. A crucial first step is to devise a responsible AI use policy which can include having students acknowledge AI use in their assessed work.
Second, curriculum design has to change with the advent of AI. In any given topic, AI can create plenty of examples in seconds – from coding a simple website in HTML to drafting a CEO’s speech in a crisis. Students can critically evaluate these examples as part of learning.
AI can help learning by correcting grammar, suggesting the appropriate word (“do I drink or take the medicine?”), offering an apt analogy or metaphor, or identifying the words more culturally relevant in one country than another.
AI can also quickly offer different versions of the same thing, which can help in learning how to, say, write for different target audiences, and to write with greater clarity. Asking AI questions can also help in developing debate and argument, especially when the AI is asked to refute one’s ideas, which learners can follow up on with rebuttals.
AI can also help in role playing. For example, learners can ask the AI to write a fundraising statement for a social enterprise, first as Mother Teresa, then as Albert Einstein, to try to better understand the historical figures. To weed out plagiarism, we can learn to spot the structure and style typically churned out by an AI.
Third, AI will require a change in academic assessments. We can sit learners down for sessions with customised AI chatbots, to build repositories of knowledge to complement lectures, readings, case studies and tutorials. After a session, the chatbots can recommend personalised learning instructions to plug the gaps detected.
Visual-recognition AI can also help to grade maths equations, sketches of an idea or a prototype. It can compare and contrast work from different student cohorts to assess their strengths and limitations, and offer support. Different pedagogical techniques can be assessed by AI in terms of efficacy and speed.
Dr Yanto Chandra is an associate professor at City University of Hong Kong. The views expressed in the article are his own