Illustration: Henry Wong

Will ChatGPT replace me? Hong Kong reporter gets the shivers as he learns what the AI tool can do

  • SCMP journalist Oscar Liu spends weeks working on a two-part report, ChatGPT churns out articles in under a minute
  • Chatbot-generated reports found to have fictional newsmakers, unverifiable quotes, questionable claims

I have to admit that several times over the past few weeks, sometimes in a bout of sheer terror, I wondered whether I might one day lose my job as a journalist in Hong Kong to an artificial intelligence-driven chatbot.

I was tracking the arrival of ChatGPT in the city, and the impact it has already made on students, teachers, lawyers and so many people in other professions.

The Microsoft-backed AI tool took the internet by storm after its launch last November. Garnering 100 million users worldwide in its first two months, it became the fastest-growing consumer application in history.

Many people I spoke to gushed that it could do everything: write essays and marketing pitches, solve mathematical problems, figure out computer code and even produce a thesis for a postgraduate student. All in a matter of seconds.

But could ChatGPT write the stories I was working on?

Hong Kong users say ChatGPT is so fast, so good, but what about accuracy, ethics?

My assignment started after my friend told me that a young trainee lawyer had begun using the AI tool to draft legal documents and correspondence.

I then had to do a lot of legwork, tracking down and interviewing people of different backgrounds to ask what they thought of ChatGPT, as well as its pros and cons.

I spoke to that trainee lawyer, as well as students, teachers, experts in media ethics, law and linguistics, representatives of the technology, publishing, banking, insurance and catering sectors, as well as two lawmakers.

That meant making phone calls, writing emails and meeting some interviewees face-to-face.

I reached out to the Innovation, Technology and Industry Bureau and representatives of Microsoft, and attended a conference hosted by Baptist University featuring AI experts, some of whom I also spoke to.

I kept being told about the “transformative potential” of an AI tool like ChatGPT and its power to disrupt the way humans do things, and that it would keep getting better and “more humanlike” with each updated iteration, while competitors might emerge that proved smarter, faster, better.

Everyone added the caveat, however, that ChatGPT was only as good as the sources it trawled to spew out responses to questions at lightning speed. It relied mainly on data available online up to 2021, along with material from some books and other sources.

The nagging question for me as I plodded on was: could ChatGPT write my story? What if it did so better than me? Was my job on the line?

My editor had told me how this assignment would end: I would write my stories, then ask ChatGPT to do the same, and decide which of us did it better.

I wrote a two-parter, one on ChatGPT’s impact on the education scene, and the other about what people in other sectors said about its potential and their concerns.

Who needs a teacher? I’ve got ChatGPT! Hong Kong educationists worry about future

Then I challenged the chatbot. I requested two 1,500-word essays that were to include comments from academics, a lawmaker and a government statement.

I am not going to lie, I was shocked when it tossed up the first story in 36 seconds flat and part two arrived eight seconds later.

At first glance, both looked good enough to submit to my editor. The reports were well structured and told from start to finish. The grammar looked perfect.

Panic set in. What if my boss thought they were better written than my own work?

I pushed those thoughts aside and looked more closely at the chatbot’s stories.

And there it was. One report quoted a Professor Lee Wai-hang, a linguistics expert at the University of Hong Kong (HKU), saying: “ChatGPT is a powerful tool that can help students learn faster and more effectively.”

But on checking, I found there was no such professor at HKU’s department of linguistics. That quote? Nowhere to be found on Google.

The other story quoted “Anna Wong, a member of the Legislative Council of Hong Kong”. Hello, there is no such lawmaker in the city.

That was when I began questioning the origins of various bits of information presented so authoritatively in both stories. It was impossible to tell where much of it came from, or how to even begin verifying and attributing before publication.

If I submitted those reports, would I have to go through them line by line to check? Or would the subeditor who picked them up after me do so?

I suddenly began feeling a little more secure about being a human reporter. Never mind that after 15 years, I am still honing my skills to do this job better.

Father of China’s Great Firewall raises concerns about ChatGPT-like services

ChatGPT cannot go to lunch and sniff out a great story idea from a contact.

It has no clue who to call, or how to gather and sift through all the conversations and exchanges that make up a reporter’s notebook to identify angles worth exploring and the best individuals to quote, then produce a well-told, accurate report that will stand up to scrutiny.

For sure, the technological advancement ChatGPT represents is hugely impressive and exciting. I am sure there are ways we could harness it.

But for now, it is simply not human enough and I bet my job is safe.

My editor? She is not saying. She wants to keep me on my toes.

Postscript: When I completed this article, I learned about Google’s new AI chatbot, Bard, that will directly compete with ChatGPT. Great, another assignment coming my way. Bring it on.
