Special clothing offer for ratings emperors
Moody's Investors Service does not like taking blame for missing the Asian financial crisis.
In a 45-page defence of its record, which it has just released, it argues that it did foresee much of the trouble in Asia and alerted investors to it, but, having already been criticised for excessive caution, it could not be expected to lead others in downgrading by too wide a margin.
It also argues that some events could not be foreseen.
The argument has some merit, but I would come at it from a different angle. Mine takes a route through the crystal ball.
The difficulty essentially comes down to this: investment decisions are still more art than science, but the more they can be made to appear scientific, the more readily people will accept them.
Ratings agencies have made them appear more scientific.
They have established a framework of precise grades that are applied to all the companies they cover across the globe, and they announce changes in their ratings in ways that suggest their analytical models did the work and human judgment played little part in the equation.
They will admit, when asked, that it is not quite so simple.
But they know that the moment they tell the public someone made a judgment on the basis of experience, the public will have in mind a picture of someone licking his thumb and sticking it up in the air.
Never mind that no computer program ever written is in the same league of sophistication as the human brain; the computer is trusted more than the brain.
So the ratings agencies may know that getting their calls consistently right requires putting their analysts together in a room regularly for a discussion that includes such comments as 'I don't quite believe what we're being told here' or 'the figures look a little funny to me'.
But these are not comments cited when the reasons for a rating change are published.
The market wants science, and it is the scientific portrayal that the agencies lean towards when presenting their views to the market.
Then one day, as happens every now and then, there is a collapse in the market, and investors who have suffered begin looking for culprits.
They do not have a big case to make against investment gurus who admit to having done little more than put fingers in the air and call it experience and judgment. Judgment can be faulty. Everyone knows this. You take known risks when you invest on the basis of a wet finger or a gut feeling.
But science: that's a different matter. Analytical investment models should be proven before they are presented to the public as a basis for investment decisions.
It would be irresponsible to tout them as science otherwise.
Yet if they are proven, how could they have gone wrong? Did someone botch the job of proving them? Did someone lie about whether they were proven? How could this happen? Science in investment cuts two ways. It is a great sales tool when selling your services to the public, but you can be badly gored on it when things go wrong.
Let's just be plain about it and say that the plus points of the big ratings agencies are that they can provide independent advice and that they have established a framework for relative judgments of many different companies across many different markets.
But when it comes to placing a value on the financial health of these companies, they face the same difficulties that all other investment analysts face, and they can put no more science into it.
The same insufficient, untimely and inaccurate data plague them all. The same corporate financial officers do their best to lead them all into painting a rosy picture.
So if I am to pin special blame on the ratings agencies, it is only for making themselves appear more scientific than they can be. It is not their plea, but I think it is a better one.