tjbresearch.com
Monday, January 27th, 2014
testing ourselves

I recently created an online investment “challenge.”  (CredSpark, the site on which the challenge is found, is in beta, so any feedback would be appreciated.)  The questions cover quite a range of topics, some of which can be answered simply on the basis of investment knowledge and some of which require awareness of specific market and industry events of 2013.

It’s hard.  As of this writing, the average number of correct answers (out of 24) is 15, three fewer than the “passing” bar that I set.  No one has gotten all of the answers right.  Furthermore, of a handful of seasoned investment professionals who voluntarily reported their scores to me, some barely passed and some failed.

That’s not particularly surprising, in that most people in the industry are specialists, and it is a broad-based test.  In addition, some questions were hard for reasons that had nothing to do with investment expertise.

Creating the challenge and evaluating the results have gotten me thinking about how we assess readiness levels for those charged with making investment decisions.  For example, let’s consider investment committees, which often include members with a broad range of backgrounds and experiences.  (Financial Planning:  This is on my mind because of a speech I gave about investment committees last week.)  Among them, who knows what?  It’s an important question that often doesn’t get answered.  Should we test the members to find out?

That could be embarrassing, of course.  But other situations could go beyond embarrassing.  Lately, I have asked a number of people a question about the promotion of “alternative” products by financial advisors at RIAs and broker-dealers.  (research puzzle pieces:  Here’s a posting about the environment right now in the business and at those firms regarding alternatives.)  The question:  “How many advisors do you think could pass a test about the details of the strategies in the products that they are selling/using?”  The answer is almost always, “Not very many.”  (We are likely to see the results of that disconnect in arbitration cases down the road, but the gap could be assessed through testing today.  Disclosure:  I am affiliated with Accelerant, which handles such cases.)

You can imagine how the results of knowledge tests could aid in designing training programs, refining compliance systems, and spotting the weak areas in a firm’s ability to deliver on its investment beliefs.  But those types of tests aren’t used very extensively; for one thing, no one is eager to have their shortcomings revealed.

Most of the “testing” that I have conducted over time has been in regard to investment expectations.  While the groups involved typically haven’t been large enough for the results to be statistically significant, the process almost always turns up surprises about a group’s expectations, including the differences within the group that lurk unnoticed.

There are other kinds of evaluations that can be conducted, including assessing decision makers on a number of different characteristics (personality, risk tolerance, the ability to work with others, and on and on).  However, investment organizations are generally reluctant to do anything with even a hint of heavy-handedness, so relatively benign activities that would help leaders develop a better organization are resisted.

Therefore, holes in the knowledge base can go unrecognized and linger longer than would be expected — and other hurdles to organizational success won’t get dismantled because they aren’t examined.

It seems obvious that organizations should be built on a culture of understanding rather than one of avoidance.  But what should we do to test ourselves and each other, to get at that understanding of the knowledge, skills, tendencies, and expectations that we have (or that we lack)?

That is a very important — and unappreciated — question for leaders, and it is not one that’s going away any time soon.