One aspect of financial information flows these days is that ideas take hold in a hurry and get bandied about and interpreted by a wide range of characters. A concept can morph, be augmented, get stripped down, and be applied in a variety of ways that may or may not make sense, and it can all happen very quickly before it dies out as a point of interest to the wired world. Sometimes, though, there’s a second or third life for the concept, when a posting that refers to it again leads to a new round of tweets and retweets and comments here and there.
That has been the case with a chart published on the McKinsey Quarterly site earlier this year, in an article titled “Equity analysts: Still too bullish.” If you are a regular reader of the econoblogosphere, you can see it in your mind’s eye: a series of squiggly lines that purport to show that estimates from sell-side analysts tend to be too high (“excessively optimistic,” according to the article in which the chart appeared).
For a while, I tried to keep track of all of the conclusions drawn from that one chart. My attention to the piece (and the subsequent waves of commentary) was heightened by ongoing discussion with Jeff Miller, who writes the blog A Dash of Insight. As we talked and emailed about it over a few months, we realized that our review had led us to so many different topics that we could reel off a whole series of possible blog postings. Miller took the first whack at it all, in a posting called “Profiting from Forward Earnings Estimates,” and has promised more; this is my initial commentary on the McKinsey piece and the reaction to it. (We haven’t seen each other’s work before publication.)
I am fascinated by the way the idea rippled through finance-related sites and how it was used in many “analysts are clueless” dispatches, accompanied by a wide variety of statements about what that means for the investment process, all without the benefit of any scrutiny of the McKinsey piece. Now, McKinsey is a reputable source, but its analysis cried out for vetting and clarification. We’ll see whether I agree with all of Miller’s ultimate conclusions (it was clear from our interactions that we approach things differently at times), but at least he has been digging and questioning. It seems that there’s comparatively little of that going on these days; everything is taken at face value and thrown out into the electronic ether.
It bears noting in that regard that the McKinsey authors’ target audience was corporate executives who deal with analysts; the article was aimed at helping CEOs avoid the trap of being pushed by analysts into making forecasts that are too optimistic. Almost all the play of the chart, of course, was on the investment side. Not that the two are necessarily incompatible, but the image was immediately co-opted into investment conclusions with nary a mention of its original purpose.
Taking the chart at face value, the main message to me was an old favorite: If you are using analyst estimates that reach too far into the future as part of your stock selection process, you’re making a mistake. (A classic example of that is the misuse of the PEG ratio; my earlier piece on it, called “unpegged,” explains a series of problems with that measure. See Exhibit 2 in the McKinsey piece for its graph of forecasted versus realized growth rates.) It seems obvious to say so, but earnings are harder to estimate the further out you go, so errors will be larger at longer horizons. And it’s not surprising that the misses tend to be on the high side overall.
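For readers who like to see the arithmetic, here is a minimal sketch of the PEG problem, using invented numbers and the common convention of P/E divided by expected annual earnings growth in percent. Nothing here comes from the McKinsey data; the point is simply that the ratio is hostage to a long-range growth forecast, and swapping the optimistic forecast for the growth that actually materializes flips the verdict on the same stock.

```python
# A minimal sketch of PEG fragility, with hypothetical numbers.
# PEG leans on a long-horizon growth forecast, which is exactly the
# input that tends to come in too high.

def peg_ratio(price: float, eps: float, growth_pct: float) -> float:
    """PEG = (P/E) divided by expected annual earnings growth, in percent."""
    return (price / eps) / growth_pct

price, eps = 50.0, 2.50        # implies a P/E of 20
forecast_growth = 20.0         # hypothetical five-year growth forecast, in percent
realized_growth = 12.0         # hypothetical growth that actually materializes

print(peg_ratio(price, eps, forecast_growth))  # 1.0: screens as "fairly valued"
print(peg_ratio(price, eps, realized_growth))  # ~1.67: same stock, now "expensive"
```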
As I talked to Miller over time, I liked the fact that he compared McKinsey’s conclusions to the prevailing belief that, on balance, firms beat estimates when reporting earnings, leading him to postulate that there must be a “crossover” point after which aggregate analyst estimates become quite reliable. Miller’s initial analysis has focused on using estimates that are one year forward. Whether that turns out to be the “best” application of his idea or not, it at least tries to break the analysts-are-bad-because-of-this-chart meme into a look at what analysts tend to do well and what they tend to do poorly.
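To show what that idea might look like in practice, here is a rough sketch, again with invented records rather than real estimate data, of how one could measure the average estimate error at each forecast horizon and look for the crossover: big optimistic misses far out, fading into the familiar small “beat” close to the report date.

```python
# A rough sketch of the "crossover" idea, using invented records.
# Each tuple is (months before the earnings report, estimate, actual).
import statistics

observations = [
    (24, 3.00, 2.40), (24, 1.80, 1.45),
    (12, 2.60, 2.40), (12, 1.55, 1.45),
    (3,  2.38, 2.40), (3,  1.43, 1.45),
]

for horizon in sorted({h for h, _, _ in observations}, reverse=True):
    errors = [(est - act) / act for h, est, act in observations if h == horizon]
    print(f"{horizon:>2} months out: mean error {statistics.mean(errors):+.1%}")

# Output pattern: roughly +25% two years out, +8% one year out, and about
# -1% just before the report, i.e., estimates cross from too optimistic to
# slightly conservative as the report date approaches.
```

Where the crossover actually sits, and whether one year forward is the right horizon, is exactly the empirical question Miller has started to work on.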
Some of the other considerations worthy of note (that I may visit in the future):
What kinds of conclusions can be drawn about changes in forecasting behavior in the wake of the last decade’s changes in the regulation and compensation of analysts?
Is the key variable in analyzing the accuracy of estimates the economic environment in which they are made? It looks as if that’s critically important to the nature of forecasting errors (as would be expected); maybe attempts at characterizing the results across a full cycle actually confuse the question rather than clarify it.
Can conclusions on behavior regarding aggregate estimates for the S&P 500 be applied to sectors or industries or individual stocks?
What does the mysterious y-axis scale on the McKinsey chart mean? (Miller and I did not get a response from McKinsey on that, although it provided additional information on some other questions.)
Does “the crowd” really do in-depth work on ideas that are published online, or does it just parrot them or adapt them in ways that support already-prevailing views? Where is its wisdom?
I thank Jeff Miller for engaging me in this analysis and debate, and look forward to seeing where it goes from here.