The Language Proficiency Assessment for Teachers (English) is now entering its 13th year. But, despite a revision in 2007, little has changed.
The exams were set up in answer to an outcry from the business and education communities over the fast-falling standards of English in this world city. They carry the weighty mission of raising those standards by tightening entry into the teaching profession.
Designed by specialists, the tests are not subject to scrutiny for their effectiveness by the usual government watchdogs. But, given the importance of the assessment's mission, an external evaluation is overdue.
The assessment was Hong Kong's chance to show the world how to design a purpose-specific language test. Instead, we have adopted methods of assessment that defy proven practice.
The test consists of five papers: reading, listening, speaking, writing, and classroom language assessment (for serving teachers). The results show a disturbing contrast. Year after year, the pass rates for writing languish at between 36 and 46 per cent, and at around 50 per cent for the speaking test. But in classroom language, the success rates are an astounding 95-98 per cent.
This doesn't compute. How can we have a failure rate of 50 per cent in speaking and a near-perfect pass rate in classroom language, unless serving teachers are much better English speakers than non-teachers? The suppressed truth lies elsewhere. For one thing, the Education Bureau conducts the classroom language assessment, whereas the other papers are entrusted to the Hong Kong Examinations and Assessment Authority.
Teachers to be tested are given several weeks' notice before an examiner descends on their classrooms. Predictably, what the examiner sees is not a live, spontaneous performance, but a rehearsed one. For a true evaluation, examiners should drop in unannounced. Better still, this charade should be scrapped altogether.
The writing and speaking papers are at the centre of a simmering controversy. They are supposed to test the candidates' communicative abilities. But there are serious doubts about their fairness, especially the writing paper.
Other international language tests, including the International English Language Testing System, calculate the marks for such papers by aggregate. The Hong Kong authorities instead adopted the "passing the scales" method: each of these papers may have five or six components, and failing any one component means failing the paper as a whole.
Many native speakers have stumbled over the treacherous "explaining the errors" section. So have numerous passionate would-be teachers who have returned from an overseas education.
To raise our standards of education, attracting committed teachers is far more cost-effective than splashing out millions of dollars on sponsoring 100 top students to study at elite international universities. As it is, the system may not be letting in the wrong people, but it is clearly screening out the right people.
Studies have also shown that candidates assessed under the "scale method" underperform those assessed under the "aggregate method" by more than 20 per cent.
Writing is Hong Kong teachers' Achilles' heel, which is understandable: their professional duties rarely require them to write. Hong Kong students are equally deficient.
Instead of encouraging the acquisition of alphabetical literacy, writing techniques or figures of speech, the exam designers chose to emphasise the use of precise linguistic terminology. Why are we stuck with memorising esoteric terminology that has little place beyond the classroom?
Since all teacher-training tertiary institutions in Hong Kong require their education degree students to take the language proficiency exams, you can be sure their professors will see to it that this terminology is learned well.
Public exams are a great tool for driving relevant learning. Why not focus on functional, instead of formal, learning?
Thirteen years after the birth of the test, our English still sucks. The exam designers are all competent professionals. But even they must admit that there is always room for improvement.
Without continuous review and revision, the assessment risks becoming just another addition to the battery of tests that Hong Kong people suffer through without any lingering positive impact. While the exams have many good features, their designers must not lose sight of their central purpose: to raise our English standards by encouraging teachers to read more and write better.
If and when that happens, only then can we tell the world that Hong Kong does it better.
Philip Yeung is co-founder of the Hong Kong Society for the Promotion of English and a former speechwriter to the president of HKUST. Philipkcyeung2@yahoo.com