Tuesday, July 22nd, 2008
the research performance derby

On a periodic basis, investment publications and websites feature stories that attempt to name the “best” research analysts and firms.  A recent piece in Bloomberg Markets demonstrates some of the pitfalls of such efforts.  (The story is available online, and the sidebar there includes further links to some but not all of the illustrations in the article, which appears in print in the August issue of Bloomberg Markets.)

For starters, it’s always important to understand the methodology of the ranking system in question.  As you would expect, Bloomberg uses the data and functions from its system to calculate performance.  A thorough review of the rankings should start with vetting that foundation — analyzing the data collection and error-detection policies, and understanding and evaluating the math behind the performance calculations.  (Regarding Bloomberg, I have done neither.)  After that, the key question is how the “best” performance is defined.

In this case, roughly stated, it is a measure of which analysts made appropriate calls on large-cap, widely covered stocks that were more volatile than most (some five hundred stocks in all).  The process for implementing such a screen is problematic.  (There appears to be no online link to the description of how it was done, which appears in print as “How We Crunched the Numbers.”  I would say they crunched them and crunched them, using a multi-step process for identifying stocks to be included, evaluating the analysts on those stocks, and then generating rankings for the firms for which the analysts work.  In my opinion, there were questionable choices at each step.  I mean that literally: what I most would like to do is pose questions about the decisions that were made, so that I can judge whether my doubts about what seems to be a convoluted and incomplete analytical structure are warranted.)  But, as importantly, is that really the definition that we want to use?  The online headline for the piece at the time of this writing is “Why Paul Miller in Virginia is Wall Street’s Best Stock Picker.”  Nothing against Mr. Miller, but surely one call on one stock (he was bearish on Countrywide during the first part of the stock’s monumental decline) should not confer such a title.

Which brings us, of course, to the main problem with the reporting of the performance of research analysts and firms.  The reader wants “best” and “worst,” but research performance is multi-dimensional and famously difficult to analyze and categorize.  All the nuance is lost in the typical ranking system, and the range of research information that an analyst or a firm conveys is reduced to a single rating (which often is then mapped from its native terminology, which might convey more information, to a simple “buy,” “sell,” or “hold” for apparent comparability).  Furthermore, any ranking is subject to the standard caveats of other investment performance evaluations — woe to the reader who relies on the information to make value judgments without understanding how that “performance” came to be or what processes went into creating it.

No blog post — at least not any of the length I plan to write — can tackle all of the issues of research performance, although I’ll attack it from many angles in posts to come.  (Registered users and clients of tjb research will see a more detailed treatment of this topic that will be published later in the year.)  It is a complicated process that is a mixture of objective and subjective analysis and investigation.  Don’t expect a magazine article to give you insight into whether your research, or that of the firms you rely on for ideas, is any good.

As with many things, the answers are more elusive than that.