New Standards for Financial Reporting

During my professional career, I was never allowed to get away without doing the following. Why do we allow "financial experts" to get away with anything less?

1) Always report what an investigator should have been able to see. It is never sufficient to report that an investigator failed to see something.

2) Always include the appropriate timeframe.

3) Always include error estimates (e.g., confidence limits), no matter how rudimentary.

4) Use actual results as the standard, not a mathematical model.

5) Mix numbers carefully. Be sure to match precisions. Be sure to identify the precision of each number.

Finally, and most important:

6) Never reject an analysis without letting us know the RIGHT answer. If that is not possible, at least tell us what we need to do to determine the right answer AND give us relevant information to help us understand the issue.

Remarks

1) There is an amazing amount of financial literature devoted to what investigators did not see. It is deceptive. It is highly misleading. Would readers have accepted the notion that there is no such thing as investor skill if they had been told the truth: that to be credited with skill, a mutual fund manager would have had to outperform the market by more than 5% (annualized) for more than a decade?
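
Here is a rough sketch of why the bar is that high. The 10% annual tracking error is my own illustrative assumption, not a figure from the literature; the point is only how slowly random noise averages out.

    import math

    # Rough sketch: how large must a manager's annualized excess return be
    # before it is statistically distinguishable from luck?
    # The 10% annual tracking error is an illustrative assumption.
    tracking_error = 0.10   # standard deviation of annual excess return
    years = 10              # length of the track record
    confidence_z = 1.645    # one-sided 95% confidence

    # Standard error of the average annual excess return over the period.
    std_error = tracking_error / math.sqrt(years)

    # Minimum annualized outperformance needed to clear the hurdle.
    required_excess = confidence_z * std_error
    print(f"Required annualized excess return: {required_excess:.1%}")
    # With these assumptions, the hurdle is about 5.2% per year over a decade.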

2) My recent investigation into Time and the Gordon Model shed more light on timeframes. The mathematics that support the equation apply only over the very long term. The success of the Gordon Model relates only to a shorter term of 5 to 15 years. Yet I had not seen this mismatch mentioned by others. John Bogle has come closest. He introduces the Speculative Return. He applies the formula over a single decade.
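
For readers who have not seen it, here is a minimal sketch of Bogle's decomposition applied over a decade. The input numbers are placeholders of my own choosing, not Bogle's.

    # Minimal sketch of Bogle's decomposition over a single decade.
    # Investment return = initial dividend yield + earnings growth.
    # Speculative return = annualized change in the P/E multiple.
    # The inputs below are placeholders for illustration.

    def annualized_pe_change(pe_start, pe_end, years=10):
        """Annualized return contribution from a change in the P/E multiple."""
        return (pe_end / pe_start) ** (1.0 / years) - 1.0

    dividend_yield = 0.03    # initial dividend yield (assumed)
    earnings_growth = 0.05   # annual earnings growth (assumed)
    speculative = annualized_pe_change(pe_start=20.0, pe_end=15.0)

    expected_return = dividend_yield + earnings_growth + speculative
    print(f"Investment return:  {dividend_yield + earnings_growth:.1%}")
    print(f"Speculative return: {speculative:.1%}")
    print(f"Expected return:    {expected_return:.1%}")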

3) I was flabbergasted by the reaction to confidence limits on the Stock-Return Predictor. Ever since Sir Isaac Newton, error analysis has been a fundamental part of science and engineering.

In financial reporting, I have noticed the use of statistics to establish whether a factor causes an effect, but not to quantify how big the effect is.

Perhaps the environment is a factor. In a friendly environment, it is very easy to come up with a coarse error analysis, especially when there is an automatic allowance for being blindsided 10% of the time. Do financial experts really think that their year 10 stock market predictions are routinely off by more than plus or minus 3% and usually no better than plus or minus 6%? If so, why do they argue about tenths of a percent (0.1%)?

Of course, our financial experts can tell us about errors, at least in a rudimentary sense. If treated decently, they are likely to give us a lot of useful information. They can tell us how things can go wrong and by how much.
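
As an example of what I mean by a rudimentary error estimate, here is a coarse sketch: fit a simple regression and report the scatter of the residuals as rough confidence limits. The data points are synthetic placeholders; the method, not the numbers, is the point.

    import numpy as np

    # Coarse sketch: attach rudimentary confidence limits to a prediction
    # by reporting the scatter of the regression residuals.
    # The data points below are synthetic placeholders.
    earnings_yield = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])  # 100 / P/E10
    returns_10yr = np.array([2.0, 3.5, 4.5, 6.5, 7.0, 9.0, 10.5])    # annualized %, real

    # Ordinary least squares: predicted return = a + b * earnings yield.
    b, a = np.polyfit(earnings_yield, returns_10yr, 1)
    predicted = a + b * earnings_yield
    residual_sd = np.std(returns_10yr - predicted, ddof=2)

    # A rough 90% band: plus or minus 1.645 residual standard deviations.
    new_yield = 6.5
    point = a + b * new_yield
    low, high = point - 1.645 * residual_sd, point + 1.645 * residual_sd
    print(f"Prediction: {point:.1f}% per year, roughly {low:.1f}% to {high:.1f}%")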

4) I have asked skilled engineers what they would conclude if I were to toss a coin twenty times and it came up heads every time. What would the odds be that the next coin toss would come up heads?

Most have answered that the odds would be 50%-50%, the standard answer.

Then I made it clear that I had not assumed the use of a fair coin. A fair coin is a mathematical fiction. It has 50%-50% odds, exactly. It has no memory whatsoever. I pointed out that I might be using a two-headed coin.

A strange thing happened. About half of the engineers argued with me. The odds were 50%-50% because the coin had no memory, they insisted. All tosses were independent.

They were so conditioned to assuming a fair coin that they did not even entertain the possibility of anything else.

I have noticed such behavior concerning the stock market. It is a serious error.
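
A small sketch of the point, with a prior probability chosen purely for illustration: even a tiny suspicion that the coin might be two-headed overwhelms the fair-coin assumption after twenty straight heads.

    # Sketch of the coin-toss point. The 1-in-1000 prior that the coin is
    # two-headed is my own illustrative choice.
    prior_two_headed = 0.001
    prior_fair = 1.0 - prior_two_headed

    # Probability of twenty heads in a row under each hypothesis.
    likelihood_fair = 0.5 ** 20   # about one in a million
    likelihood_two_headed = 1.0

    # Bayes' rule: posterior probability that the coin is two-headed.
    evidence = prior_fair * likelihood_fair + prior_two_headed * likelihood_two_headed
    posterior_two_headed = prior_two_headed * likelihood_two_headed / evidence

    # Probability that the next toss comes up heads.
    p_next_heads = posterior_two_headed + (1.0 - posterior_two_headed) * 0.5
    print(f"Posterior probability of a two-headed coin: {posterior_two_headed:.3f}")
    print(f"Probability the next toss is heads:         {p_next_heads:.4f}")
    # With these assumptions, the answer is nearly certain heads, not 50%-50%.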

5) Some numbers are rough approximations. Others are exceedingly refined and very precise. For example, I identify the (maximum) confidence level associated with my valuation-based (P/E10) predictions as 90% (two-sided; 95% one-sided). It could be a little less, perhaps 85%, but not very much less.

In contrast, we see exotic calculations relating to extremes such as six-sigma errors. That kind of calculation requires the best of Benoit Mandelbrot and Nassim Taleb, as well as other researchers of their caliber.

To interpret my numbers, simply acknowledge that I allow for a 10% chance of being blindsided, good or bad. You may encounter the run-up of a bubble, a good outcome provided that you leave in time, or you may encounter a crash, a horrible outcome. Base most of your planning on the most likely 90% of all outcomes. But prepare for contingencies, the remaining 10%.

6) This brings us back to my most important standard:

Never reject an analysis without letting us know the RIGHT answer. If that is not possible, at least tell us what we need to do to determine the right answer AND give us relevant information to help us understand the issue.

Have fun.

John Walter Russell
July 30, 2006