The public release of OpenAI’s chatbot, which can create human-like written responses to simple user prompts, in November 2022 shifted how people think about artificial intelligence. Photo: TNS
Opinion
Kai-Lung Hui and Jiali Zhou

Should we let AI take over mundane work? Not so fast

  • It may seem that outsourcing tasks to generative AI like ChatGPT could increase productivity
  • However, it might actually limit the supplementary learning that occurs when people do certain tasks, so AI must be deployed with care
Generative artificial intelligence, especially OpenAI’s ChatGPT, has opened up new possibilities for creating intelligent content. In fact, AI-generated content is so good that it reads almost as if it were written by a human being.

For example, a study by researchers from US universities published in May found that when answering random patient questions from a social media platform, ChatGPT outperformed physicians in both the quality of information and empathy. And OpenAI has said GPT-4, the state-of-the-art model underlying ChatGPT Plus, can score in the top 10 per cent in the SAT reading exam for college admissions, the GRE verbal test for graduate school admissions and the uniform bar exam for lawyers.

This tool is expected to be deployed in companies and other organisations to enhance worker productivity. Schools and teachers are also keen to embrace it. Many universities now allow students to use generative AI in course work and assignments. Some local institutions offer free access to ChatGPT, encouraging students to use it in school.
However, generative AI has its limitations. Because its content is produced by a complex large language model rather than developed logically from verified facts, AI sometimes “hallucinates”, that is, generates incorrect or misleading results, owing to insufficient training data, bias or flawed assumptions. In June, two lawyers in the US were sanctioned for using ChatGPT to produce a legal brief with fictitious case citations. They claimed they were not aware ChatGPT could make up cases.

This case raises an important question: how does generative AI affect human capability when people no longer need to develop logical arguments and verify facts when creating content?

Using generative AI to produce work is akin to outsourcing it to other people. Thus, we can infer its impact on human capability from what we know about outsourcing.

We conducted research on how contributions to Python, the well-known programming language, changed after the introduction of a crowdsourcing program called the Internet Bug Bounty (IBB), which rewards people for finding security vulnerabilities in open source software but excludes from its rewards the official maintainers, who report and fix bugs and make enhancements.

The IBB program should have made the Python maintainers’ job easier, giving them more time to perform other tasks and hence increasing their productivity in those tasks.

Interestingly, we found the opposite. The official Python maintainers not only discovered fewer bugs after the IBB program, they also made fewer enhancements to Python. This means their overall productivity fell after bug reporting was outsourced.

We looked at many reasons for the drop in productivity, including a loss of motivation from being excluded from the IBB rewards. We concluded that it was mainly due to a lack of inter-task learning. The drop in productivity was bigger among Python maintainers who had reported more bugs before the IBB program than among those who had reported fewer. The drop was also bigger when the bug reporting and enhancement tasks were more closely related.

We surveyed a number of Python maintainers, most of whom said that finding vulnerabilities helped them learn and gain ideas for how to enhance the program. This implies that when performing a task, we not only contribute to the outcome, we learn at the same time.

The Internet Bug Bounty program, which rewards people for identifying security vulnerabilities in open source software, should have helped increase the productivity of those whose job it is to find and fix bugs, but that has not necessarily happened. Photo: Shutterstock

This inter-task learning side-effect is illuminating. It suggests that learning and other capabilities may be sacrificed when we enjoy the power and convenience of generative AI. The short-term gain in productivity could be offset by the long-term loss in capability. Worse, this capability may be key to our contribution to future tasks, including, possibly, intelligent knowledge work.

So how should generative AI use be managed in an organisation? First, distinguish between objectives – are we seeking work output or learning and training? In most places, work output may be the priority, so it would be reasonable to actively deploy generative AI tools. Even so, we must be aware this may slow employees’ learning of how to perform tasks.

Second, classify tasks by how interrelated they are. Generative AI tools may be better deployed for tasks that have little relation to other tasks, such as pure automation jobs. We might wish to discourage AI’s use for deeply intertwined tasks.

Third, more training should be provided once generative AI is deployed. People are intrinsically lazy. If we let them use generative AI without supplementing it with learning opportunities, their ability may deteriorate over time.

At school, the priority should be learning rather than work output. We must seek to prevent the unrestrained use of generative AI in homework or assignments. If students don’t do homework themselves but outsource it entirely to AI, they may soon lose the ability to reason, innovate and produce more advanced knowledge.

We must revisit the common question, “Why waste time doing mundane work when AI can do it for us?” This is a dangerous sentiment because we do work not only to produce an outcome, but to learn as well. Such learning builds the foundation for future knowledge production.

We should not get carried away by generative AI, thinking that only work outcomes matter. The learning itself is equally important, if not more so.

Kai-Lung Hui is Elman Family Professor of Business and senior associate dean at Hong Kong University of Science and Technology Business School

Jiali Zhou is an assistant professor at American University in Washington DC

The views expressed here are the authors’ own
