Wednesday, May 19th, 2010
yardsticks for pundits

Of late, I have seen several blog postings (for instance, ones by Josh Brown of The Reformed Broker, who provides market perspective in an irreverent style, and Barry Ritholtz of The Big Picture, one of the most popular sites in the “econoblogosphere”) that argued for greater scrutiny of those in the business (and/or habit) of making investment predictions.  Be it on CNBC or other networks, in blogs or articles in the mainstream media, or within publications from investment firms, new prognostications come fast and furious.

Report cards of previous predictions are harder to find, at least ones that are consistent and fair in their grading.  You can be sure that there will be self-reporting of calls that look great, without an accompanying discussion of luck versus skill.  The clunkers will never be mentioned again.

Consequently, there is a great need for unbiased reviews of these forecasts, and I share the sentiment of the bloggers mentioned above.  But I fear that much of the rhetoric leaves the wrong impression: it is not easy at all to tell who is “good” and who is “bad.”
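To see why, consider a minimal sketch in Python (the numbers are made up, and no real pundits or data are assumed): hand a crowd of forecasters nothing but coin flips, and some of them will still compile records that look skilled.

    # A purely illustrative simulation: with enough pundits making
    # coin-flip calls, some will post impressive hit rates by chance
    # alone.  All parameters here are hypothetical.
    import random

    random.seed(42)  # any seed; it just makes the run repeatable

    NUM_PUNDITS = 200   # hypothetical population of forecasters
    NUM_CALLS = 20      # predictions per forecaster

    lucky = 0
    for _ in range(NUM_PUNDITS):
        # Each call is a pure 50/50 guess -- zero skill by construction.
        hits = sum(random.random() < 0.5 for _ in range(NUM_CALLS))
        if hits / NUM_CALLS >= 0.70:  # a "great" record: 14 of 20 right
            lucky += 1

    print(f"{lucky} of {NUM_PUNDITS} zero-skill pundits look skilled by luck")

With these assumed parameters, the binomial math says roughly a dozen of the two hundred zero-skill forecasters should clear the 70 percent bar, which is the luck-versus-skill problem in miniature.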

I come to my opinions on this through much study of, consulting on, and writing about research performance analysis, the closest thing there is to a yardstick for pundits.  If we could magically get all the predictions in the investment world into a giant database, we would likely use analytics similar to those that measure the performance of recommendations made by research analysts.  Long-time readers know that I’m not a fan of those applications as commonly designed.  To rehash (and to point you to other postings with more detail should you have a morbid interest in such things):

In “the research performance derby” (the research puzzle, July 2008, early in my blogging “career”), I reviewed an article in Bloomberg that ranked research analysts, but I could have written essentially the same piece about listings that appear in a number of other publications.  If we had that imagined pundit database, we’d get oodles of those articles purporting to identify the best and the brightest, but very little exposition of the shortcomings and traps inherent in such rankings.

I highlighted one key problem in a subsequent piece called “to the precipice” (the research puzzle; the illustration and description in that one might be worth a look).  As I wrote:

There is a conceit in the business that the best among us can pick the highs and the lows, and we evaluate much of our work on that basis.  The real truth is that good investors see the shifting odds of risks and returns and are willing to back away from a position as it becomes more likely to one day disappoint, even if the day of reckoning may not be arriving tomorrow.  Such an approach at least acknowledges that among the imponderables in the great list of unknowns is exactly when things will change.

The measuring stick causes nearsightedness.  Everything is always reduced to an assessment about whether something went up or down, often during a very short time period, and whether you were on the “right” side of the move when it occurred, completely without regard to the wisdom of your actions or whether you will be vindicated later on.  Especially at key turning points, the assessment tool coaxes you into playing a game of greater-fool tag rather than helping you make good decisions.
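Rendered as code, with hypothetical prices and an assumed 30-day grading window, the measuring stick looks something like this:

    # A sketch of the measuring stick as commonly built: a call is graded
    # solely on whether the price moved in the predicted direction over a
    # short window.  The data and the window are made up for illustration;
    # note that risk, time horizon, and reasoning never enter the score.

    def grade_call(direction: str, price_at_call: float, price_later: float) -> bool:
        """Return True if the short-window move matched the call's direction."""
        moved_up = price_later > price_at_call
        return moved_up if direction == "up" else not moved_up

    # Hypothetical calls: (direction, price when made, price 30 days later).
    calls = [
        ("up",   100.0, 104.0),   # scored "right," however reckless the thesis
        ("down", 100.0, 102.0),   # scored "wrong," even if vindicated later
    ]

    hit_rate = sum(grade_call(*c) for c in calls) / len(calls)
    print(f"hit rate: {hit_rate:.0%}")  # the single number a ranking would use

Everything that made the call wise or foolish has disappeared by the time the hit rate is computed.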

There are other issues too, as I elaborated upon in “the performance parade” (the research puzzle), which was part of a series of postings on the Global Research Analyst Settlement; the GRAS series ran to nine postings, and a condensed version was published in CFA Magazine.  This paragraph can serve as your CliffsNotes (although the supporting arguments are worth your time):

So, a research analyst (or computer in the case of a quant firm) boils down all sorts of valuable information into one variable, which is adjusted further in the mapping process, and evaluated without regard to time horizon or risk.  That's what we use to determine who is best?  If so, we get what we deserve.

We would get what we deserve, too, from attempts to extend the analysis more broadly into other types of predictions.  I’ve been doing this long enough to know that people want easy answers, even if they are misleading, which is a good description of what we’d get from rankings spit out of our hoped-for database.
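For illustration only, here is the kind of one-variable reduction that the quoted paragraph describes, with hypothetical field names and an assumed rating scale:

    # A sketch of the information loss: a rich analyst view is mapped to a
    # single variable (a 1-5 score), and everything downstream sees only
    # that variable.  Field names and the scale are invented for this example.

    analyst_view = {
        "rating": "overweight",        # the house scale
        "time_horizon_months": 18,     # dropped by the mapping
        "downside_risk": "high",       # dropped by the mapping
        "thesis": "margin recovery",   # dropped by the mapping
    }

    # The "mapping process": every house scale gets forced onto one axis.
    RATING_MAP = {"buy": 1, "overweight": 2, "hold": 3, "underweight": 4, "sell": 5}

    score = RATING_MAP[analyst_view["rating"]]
    print(f"everything above becomes: {score}")  # horizon, risk, thesis -- gone

Evaluate that single number over an arbitrary window and you have the ranking articles in a nutshell.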

That said, transparency is important, and there should be regular reporting about what people have said in the past and what has transpired since.  More of that, and maybe the prediction game would feature less inane bloviating about low-probability outcomes and more discussion of how uncertainty is being priced today.

During our formative years, some of us may have felt the forceful impact of a real yardstick applied in ways intended to make us think twice about future transgressions.  In dealing with reckless and unashamed forecasters, we should wield a figurative one in the hope that better behavior will ensue.  However, we must remember that it is a blunt instrument that should be used sparingly and on the obvious miscreants first.  Stopping to look too closely at the numbers on the stick will only confuse us.