tjbresearch.com
Tuesday, February 10th, 2015
too beautiful

A recent Washington Post article [Washington Post | It was titled “The new scientific revolution: Reproducibility at last.”] on academic research opens with the story of Diederik Stapel, a professor of social psychology.  Despite his stardom, “there was often something odd about Stapel’s research.”

In fact:

“When students asked to see the data behind his work, he couldn't produce it readily.  And colleagues would sometimes look at his data and think:  It's beautiful.  Too beautiful.  Most scientists have messy data, contradictory data, incomplete data, ambiguous data.  This data was too good to be true.”

There’s been a groundswell of late for a new approach to research, one focused on addressing a key weak link — the inability to reproduce results.  Brian Nosek of the Center for Open Science summed up the goal:  “Show me the data, show me the process, show me the method, and then if I want to, I can reproduce it.”

The desire for a new level of transparency is rooted in the growing realization that much ballyhooed research isn’t as good as everyone assumes it to be.  Sometimes there is outright fraud perpetrated by a researcher, but often there is merely incompetence.  Frequently, the storytelling of the researcher goes beyond the body of evidence at hand. [the research puzzle | As in the investment world, “cracking the narrative” is essential.]

Further insight into the issues can be found in an October 2013 issue of The Economist.  It features an editorial [Economist | “How science goes wrong.”] and an article [Economist | “Trouble at the lab.”] on the topic.  Not only does “careerism [encourage] exaggeration and the cherry-picking of results,” but studies are often poorly designed, “statistical mistakes are widespread,” and there is much evidence that the vaunted peer review process is flawed, even at prestigious journals.  The whole system ensures that “replication is hard and thankless.”  Consequently, the results aren’t properly vetted and we end up believing things that we shouldn’t.

The entire debate about the quality of research matters not just to ivory-tower denizens, but to the commercial and governmental entities that rely on the research.  (Think pharmaceuticals, for example.)  And it should matter to investment decision makers as well.

The work of Reinhart and Rogoff energized the discussion about growth and austerity among macro thinkers, but it took a graduate student to find that their analysis contained a spreadsheet error. [Bloomberg | Here’s a short summary of the events.]  How could such an important topic, which captivated the smartest minds around, have avoided the kind of scrutiny that would reveal the mistake?
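To make the mechanism concrete, here is a stylized sketch of how a spreadsheet-range slip can quietly change a headline number; the figures are invented and the calculation is deliberately trivial, not a reconstruction of the actual Reinhart-Rogoff data.

```python
# Hypothetical illustration of a truncated-range error: the figures are
# invented and do NOT come from the Reinhart-Rogoff dataset.
growth_rates = [3.1, 2.4, -0.3, 1.8, 2.9, 0.7, 2.2]   # made-up growth rates by country

full_average = sum(growth_rates) / len(growth_rates)
truncated = growth_rates[:5]                            # last rows silently left out of the range
truncated_average = sum(truncated) / len(truncated)

print(f"full sample:     {full_average:.2f}%")          # ~1.83%
print(f"truncated range: {truncated_average:.2f}%")     # ~1.98%, a different answer from the same data
```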

A close-to-home example for investment professionals is that of F-Squared.  On the basis of its historical performance data, the firm tapped into the explosive demand for “ETF managed portfolios” and built assets under management into the tens of billions of dollars.  There was just one problem — the performance was made up.  The firm got a slap on the wrist (“Raise $28B on imaginary track record, pay $35M fine.  Sounds like a good deal,” tweeted Meb Faber [Twitter | Faber’s handle is @mebfaber.]), but did the buyers learn anything?  Or will any of the rest of us?

If anything, the markets are more messy, contradictory, incomplete, and ambiguous (to use the adjectives that appeared above) than other fields of study.  Yet we are awash in backtested results and accept them in a way that doesn’t even allow us to see made-up numbers readily, to say nothing of the run-of-the-mill leaps of logic that underlie much of the “science” of investment management these days. [the research puzzle | For starters, the focus on the magical output from “the risk machine.”]

Despite the sage advice to “never delegate understanding,” [the research puzzle | The title of this piece came from Charles Eames.] asset owners and their gatekeepers often buy a good story on the basis of some underlying calculations without sufficient scrutiny — and perpetuate inherent errors by adopting the story as their own.

Which is not to say that quantitative analysis isn’t an important part of the toolkit, just that if it becomes the toolkit without proper perspective regarding its weaknesses, you are asking for trouble.

Every day, it seems, there are new indexes and new vehicles based upon patterns found deep in the pile of market data (which gets exponentially deeper each year).  Those patterns promise something, but often when they go “live,” they don’t deliver. [AllAboutAlpha.com | As nicely summarized here by Andrew Beer.]  The resulting “backtest overfitting” [SSRN | This research was mentioned in a comment on the Beer article on AllAboutAlpha.com.] can lead to persuasive, but wrong, conclusions.
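As a minimal sketch of why overfit backtests persuade, consider the simulation below: hundreds of candidate rules are generated as pure noise, and the one with the best in-sample Sharpe ratio is “discovered.”  The strategy count, sample lengths, and volatility are arbitrary assumptions chosen only for illustration.

```python
# A minimal illustration of backtest overfitting: generate many random
# "strategies" on the same data, keep the one with the best in-sample
# Sharpe ratio, then watch it fade out of sample.  All numbers are
# simulated noise; nothing here reflects any real strategy or dataset.
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_in, n_out = 500, 1250, 250      # 500 candidate rules, ~5 years in-sample, ~1 year out
in_sample = rng.normal(0, 0.01, (n_strategies, n_in))    # pure-noise daily returns
out_sample = rng.normal(0, 0.01, (n_strategies, n_out))

def sharpe(returns):
    """Annualized Sharpe ratio of a daily return series."""
    return returns.mean() / returns.std() * np.sqrt(252)

is_sharpes = np.array([sharpe(r) for r in in_sample])
best = is_sharpes.argmax()                      # "discover" the best backtest

print(f"best in-sample Sharpe:   {is_sharpes[best]:.2f}")        # looks like skill
print(f"same rule out of sample: {sharpe(out_sample[best]):.2f}")  # roughly zero on average
```

Averaged over many seeds, the chosen rule’s out-of-sample Sharpe is roughly zero; the impressive in-sample figure is manufactured entirely by selection.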

As is the case in other fields, we need to see a conclusion for what it truly is, and what it really means, not what we want it to be to justify our ideas or sell our products.  That means changing how we approach the whole process, within our organizations and at the journals that publish academic research on investments.

Recently, Cliff Asness wrote a piece [AQR | This site is full of great quantitative research articles.] that highlighted a notable fact:  the famous Fama and French HML factor, which captures the return spread between cheap stocks and expensive ones, uses dated rather than contemporaneous information in its calculations.  I was astounded.
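A rough, hypothetical sketch of why the timing choice can matter (this is not Asness’s methodology or the official factor construction, just invented numbers): compare a value ratio built from a dated price with one built from the current price.

```python
# Hypothetical numbers only -- a sketch of dated vs. contemporaneous inputs,
# not the actual HML construction.
book_value_per_share = 40.0    # book value from the last fiscal year-end
price_at_formation = 50.0      # price as of the same dated point in time
price_today = 80.0             # price now, after a large run-up

dated_ratio = book_value_per_share / price_at_formation    # 0.80, still screens as "cheap"
current_ratio = book_value_per_share / price_today         # 0.50, noticeably less cheap

print(f"dated value ratio:     {dated_ratio:.2f}")
print(f"contemporaneous ratio: {current_ratio:.2f}")
```

The same stock can land in a very different bucket depending on which price enters the ratio, which is exactly the kind of nuance that is easy to adopt without examining.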

Not being a “quant” myself, I may have missed other references to this disconnect over time, but so have legions of practitioners who have adopted the conclusions of the standard HML research without understanding its nuances.  While the entirety of the Asness paper was interesting, I came away mostly wondering about critical assumptions elsewhere that I (and others) regularly overlook. [the research puzzle | As I wrote a while ago, there is no one more valuable than an “assumption hunter.”]

For a variety of reasons, much of the research that we see — in journals, presentations, and reports — is likely a lot less beautiful than it appears.  We need to shine a bright light upon it.