Tuesday, June 28th, 2011
double counting

Each investment strategy must be judged on its own merits.  What is the investor trying to do?  Does it make sense given what we know about how markets work?  What are the strong points and weak points of the approach — in concept and in application?  In what environments will it likely “work” and when will it falter?

Under the microscope today is detailed fundamental analysis.  You don’t have to go far to find those who scoff at this decidedly old-school way to assess value in the market.  I saw a comment the other day that professional analysts don’t know anything more than the typical day trader, which is laughable, but the real question is whether that knowledge is being effectively applied to make money.

It’s not hard to cite examples of successful investors who are experts at poring over the numbers and seeing the picture differently than others, and we could talk about many aspects of their process, including risk control, time horizon, and the like.  I’ve written before about the use and misuse of complex models like those featuring the discounted cash flow (DCF) approach (in “of theory and practice,” part of the “letters to a young analyst” series); the key for using DCF or other detailed evaluations is to take advantage of their complexity without falling victim to their pitfalls.
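For concreteness, here is a minimal sketch of the DCF mechanics being discussed; the cash flows, rates, and growth figure are purely hypothetical and are not from any actual model:

```python
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of forecast cash flows plus a Gordon-growth terminal value."""
    n = len(cash_flows)
    # Discount each explicit-period cash flow back to today
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Terminal value: capitalize the year after the forecast horizon,
    # then discount it back along with the rest
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** n

# Hypothetical forecast: cash flows growing from 100 to 120 over five years
flows = [100, 105, 110, 115, 120]
value = dcf_value(flows, discount_rate=0.10, terminal_growth=0.03)  # roughly 1,510
```

The complexity the post warns about lives in the inputs: every number above is a judgment call, and each one is a place where an impression of the firm can quietly enter the model.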

One way that the reliance on such models can go awry is by “double counting” without realizing it.  Let’s say, for example, you are doing a homemade model with a value twist.  You might start with the notion of competitive advantage, now typically envisioned via the image of a medieval “moat,” which protects a business castle from attacks by competitive hordes.  An assessment of the moat might work its way into a model in a mechanistic way, but it can also act more subtly on other model-building decisions by the analyst.

For example, a certain category of moat might lead to a specific discount rate being applied to cash flows, or to adjusting in some way the rate that would otherwise have been used.  That’s a reasonable approach, but what if (after the messy business of modeling the financial statements of the firm) you apply some kind of margin of safety to the intrinsic value you derive, and the amount of that margin is based essentially upon the same factors as your discounting exercise?  Isn’t that double counting?
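A toy illustration of the trap, with every number invented for the purpose: the same moat concern first bumps the discount rate, then sets the margin of safety, and the two haircuts stack:

```python
def intrinsic_value(cash_flows, discount_rate, terminal_growth=0.03):
    """Toy DCF: discounted forecast cash flows plus a Gordon-growth terminal value."""
    n = len(cash_flows)
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** n

flows = [100] * 5

# Adjustment 1: concern about the moat bumps the discount rate from 8% to 10%
base = intrinsic_value(flows, 0.08)
risk_adjusted = intrinsic_value(flows, 0.10)

# Adjustment 2: a 25% margin of safety, justified by the *same* moat concern
buy_price = risk_adjusted * 0.75

# Each adjustment alone is a haircut of roughly 28% and 25%;
# stacked, they take about 46% off the base valuation
total_haircut = 1 - buy_price / base
```

Neither adjustment is unreasonable on its own; the problem is that one worry has been priced in twice.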

Harder to ferret out are the ways in which that “messy business” in the middle is distorted by your overall impressions of the quality of a firm.  Start out with the mindset that you’re evaluating one with higher risks, and the revenue line items you use might be less aggressive as a precaution, with expenses pegged a bit higher.  So, the concern that you try to value in one place could show up again and again, leading it to be discounted over and over.  (Of course, this could occur with a positive impression of a company as well as a negative one.)

You’d be surprised how often traps like these present themselves.  I once stumbled into a situation like this when doing due diligence on a well-known research firm.  The layering of expectations is usually hidden unless you have the opportunity to interview an analyst and dig into the model.

Similar phenomena can occur with other strategies.  Do multiple chart views add value for the technical analyst or are they an exercise in confirmation bias?  Do trading algorithms conflate inputs to amplify factor exposures in unexpected ways?  Do “weight of the evidence” market exposure indicators end up with a thumb on the scale by virtue of how they are constructed?
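On the algorithm question, a toy sketch of how two seemingly distinct inputs can conflate into an amplified factor exposure; the signals and their construction are entirely hypothetical:

```python
# Hypothetical factor returns and two "independent" trading signals
momentum = [0.5, -0.2, 0.3, -0.4, 0.1, 0.6, -0.3, 0.2]

signal_a = [1.0 * m for m in momentum]        # pure momentum in disguise
signal_b = [0.8 * m + 0.1 for m in momentum]  # mostly momentum, relabeled

combined = [a + b for a, b in zip(signal_a, signal_b)]

def beta(xs, ys):
    """Slope of ys regressed on xs: the exposure of a signal to the factor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Each input looks like a separate idea, but the combination carries
# 1.8x the factor exposure of momentum alone
exposure = beta(momentum, combined)
```

Same pattern as the analyst's model: inputs that feel distinct but load on the same underlying driver end up counting it more than once.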

The goal in every case is to have an objective application of the philosophy, without distortions or double counting.  In practice, it’s one of the most difficult things to do.