March organisers show little interest in academic teams' reliable estimates
In response to Albert Cheng King-hon's anti-scientific screed attacking the academics who estimated the number of people who took part in the July 1 march ('Leung and his allies should realise the true extent of Hongkongers' anger', July 6), I wish to set the record straight as a survey methodologist who did not organise a count this year.
There are three key questions to be asked of an estimate.
First, are you measuring the right thing: estimating the people who passed a single point, or everyone who joined and left the march at any point? Counting at multiple points alone is not enough to estimate the total; you also need to ask people when they joined the march. A telephone survey is not reliable, as people tend to overstate their participation; it is much better to ask during the march itself.
Second, are the results consistent if repeated? One person cannot count the full flow, as it moves too fast, so the counting must be divided up; but then double counting is likely, because people do not walk in easily separated lines. A video recording can be slowed down until one person can count the flow across the whole road, and it also allows the results of multiple counters to be compared to check reliability.
Third, can you show that your results are accurate? That is impossible without recorded images, which let you double-check by comparing results from independent counters and recount if the answers are inconsistent.
The combination of counts using slowed-down video recordings (as shown by my team) and an on-the-ground survey of a joining point (as developed by Professor Paul Yip Siu-fai's team) provides reliable, valid and verifiable estimates.
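To illustrate how the two methods combine, here is a minimal sketch. The figures and the simple scaling formula are my illustrative assumptions, not the academic teams' actual data or model: a fixed-point count from slowed-down video is scaled up by the surveyed fraction of marchers who said they joined upstream of that point.

```python
# Hypothetical sketch: combining a fixed-point video count with a
# joining-point survey. All numbers and the adjustment formula are
# illustrative assumptions, not the teams' published methodology.

def estimate_total(point_count, surveyed, passed_point):
    """Scale a count taken at one point up to a whole-march estimate.

    point_count  -- people counted crossing the point (slowed video)
    surveyed     -- number of marchers interviewed during the march
    passed_point -- of those, how many joined before the counting point
    """
    if passed_point == 0:
        raise ValueError("no surveyed marcher passed the counting point")
    fraction_passing = passed_point / surveyed
    return point_count / fraction_passing

# Illustrative figures: 98,000 counted on video at one point; 400
# marchers surveyed, of whom 280 joined upstream of that point.
total = estimate_total(98_000, surveyed=400, passed_point=280)
print(round(total))  # 140000
```

The point of the adjustment is the letter's first question: a single-point count misses everyone who joined downstream or left upstream, and only an on-the-ground survey can estimate that missing fraction.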
Overhead pictures of sufficiently high resolution work well only for static crowds. A march needs multiple pictures, because it starts before everyone has arrived and not everyone joins at the start; and banners block the view, making it hard to count all the individuals from overhead.
Why do we still have this debate? Because the organisers insist on using unreliable methods that seriously overestimate the number of people.
The organisers have no idea how many people really take part, and they show little interest in working with any of the academic teams to provide reliable, valid and verifiable estimates. The academic teams have no agenda other than ensuring good estimates that enable accurate reporting.
John Bacon-Shone, director, Social Sciences Research Centre, University of Hong Kong