by Matthew Leitch, 17 May 2002

In view of the many problems with judgements it is often necessary, for important decisions, to use calculation. This is a great step forward from unsupported judgement but, of course, introduces a further range of errors.

**Faulty maths:** Maths is one of those school subjects that a few people like and succeed at but which most people hate and struggle with. Within a few weeks of stopping the study of maths it is very difficult to remember, especially if it never made perfect sense to you in the first place. Even the bright financiers who build huge computer models to support multi-billion pound lending decisions make mistakes - lots of them.

**Easy formulae with a bad fit to reality:** The mathematical models that mathematicians have chosen to develop and explore down the centuries have been influenced by a natural desire to keep things simple, short, and easy to calculate. This was especially true before computers came along, but is still a factor now. More importantly, most of the most famous models have pre-computer origins and trade realism for ease of working in a way that is no longer necessary.

**Failure to consider sensitivity:** Sometimes a calculation is made from a model and the result depends a great deal on one or a few of the factors put into the model, but this sensitivity is not noticed. It may be that these factors are actually difficult to estimate accurately. In other situations, some factors that are hard to estimate are not a problem because the overall result is not sensitive to them, and even a large estimation error would make little difference to the overall conclusion. These things have to be searched for. The standard method is "sensitivity analysis", which considers each element of the model in isolation to see what difference a small change would make. This is better than nothing, but it is blind to sensitivity to combinations of parameters that move together because they are not independent.

**The flaw of averages:** The name of this error was coined by Professor Sam L. Savage, whose website is easy and fun as well as very useful.
It refers to the common misconception that business projections based on average assumptions are, on average, correct. For example, suppose you work in an organisation for scientists that puts on conferences. Each conference attracts an audience, but you don't know until quite near the date of the conference how big the audience will be. Long before that time you have to book a venue and put down a large non-refundable deposit. Occasionally, conferences are called off because of lack of interest, but you don't get your deposit back. The flaw of averages would be the assumption that you can forecast the financial value of a conference from just the "average" or expected audience - reasoning that this might be wrong sometimes, but on average it will be right and lead to correct decisions about whether to try to put on a conference or not. As Professor Savage explains, this is only right in those rare cases where the business model is "linear". In this instance the model is not linear: if the conference is profitable at the expected audience, a forecast based on that average never registers the deposits lost on the occasions when the conference is called off.
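The conference example can be sketched in a few lines of code. The numbers below (deposit, ticket price, break-even audience, audience range) are made up purely for illustration, but the shape of the result does not depend on them:

```python
import random

random.seed(0)

# Hypothetical numbers, chosen only to illustrate the flaw of averages.
DEPOSIT = 10_000     # non-refundable venue deposit
TICKET_PRICE = 150   # revenue per attendee
BREAK_EVEN = 100     # conference is called off below this many bookings

def conference_profit(audience):
    """Profit of one conference; the deposit is lost if it is called off."""
    if audience < BREAK_EVEN:
        return -DEPOSIT                        # cancelled: deposit forfeited
    return audience * TICKET_PRICE - DEPOSIT   # goes ahead

# Suppose the audience is uncertain: uniformly 50..250, so the average is 150.
average_audience = 150
profit_at_average = conference_profit(average_audience)   # 12,500 - looks healthy

# The true average profit, estimated by simulation, is noticeably lower,
# because the model is not linear: cancellations drag the average down.
trials = 100_000
average_profit = sum(
    conference_profit(random.randint(50, 250)) for _ in range(trials)
) / trials

print(profit_at_average, round(average_profit))
```

The profit calculated at the average audience and the average of the profits over the actual range of audiences disagree, which is exactly the nonlinearity Professor Savage describes.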

As a result, the best-known, most often used formulae are sometimes not a good fit to reality.

The usual measure of spread of a distribution (e.g. the height of children in a school) is its variance. This is found by taking the difference between each data point (e.g. the height of a child in the school) and the average of all the data points, squaring each difference, and averaging the squared differences. The advantage of *squaring* the differences is that all the resulting numbers are positive. If you just used the difference between each data point and the average, some differences would be negative and would cancel the positive ones - in fact the plain differences always average to exactly zero. This could be overcome by just taking the absolute value of the differences (i.e. ignoring the minus sign), but absolute values are hard to do algebra with, so squaring won.
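The two candidate measures of spread are easy to compare directly. A minimal sketch, using a small made-up set of heights:

```python
heights = [120, 125, 130, 135, 160]  # hypothetical heights in cm

mean = sum(heights) / len(heights)   # 134.0

# Variance: the average of the squared differences from the mean.
variance = sum((h - mean) ** 2 for h in heights) / len(heights)

# The rejected alternative: the mean absolute deviation.
mad = sum(abs(h - mean) for h in heights) / len(heights)

# Plain (unsquared, unsigned) differences are no use as a measure of
# spread: they always average to exactly zero.
plain = sum(h - mean for h in heights) / len(heights)

print(variance, mad, plain)
```

Both `variance` and `mad` capture spread; the plain differences cancel to zero, which is the problem squaring was chosen to solve.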

[That's an over-simplification: squaring was initially adopted because, when making estimates of a physical value from unreliable measurements, and assuming the measurement errors are normally distributed, minimising the sum of the squared differences gives the best possible estimate. However, in the 20th century it was realised that if the distribution is not quite normal the advantage of squaring quickly disappears. By that time, however, squaring was almost universal and had been applied in other situations as well. I think the elegance of the algebra was a major factor in this.]

One effect of squaring is to make data points a long way from the average more important to the overall measure of spread than data points closer to the average. There's no reason to think they are more important and some statisticians have argued that they should be *less* important because data points a long way from the average are more likely to be erroneous measurements.

(Using the standard deviation, which is the square root of the variance, makes no difference to this. The relative importance of individual data points to the measure of spread is the same as for variance.)
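The extra weight that squaring gives to distant data points can be made concrete. Continuing with the same made-up heights, where 160 cm is the point farthest from the average, we can ask what share of each measure of spread that single point contributes:

```python
data = [120, 125, 130, 135, 160]  # hypothetical heights; 160 is the outlier
mean = sum(data) / len(data)      # 134.0

squared = [(x - mean) ** 2 for x in data]
absolute = [abs(x - mean) for x in data]

# The outlier's share of the total under each measure of spread.
share_of_variance = squared[-1] / sum(squared)
share_of_mad = absolute[-1] / sum(absolute)

print(f"{share_of_variance:.0%} of the variance, "
      f"{share_of_mad:.0%} of the mean absolute deviation")
```

With these numbers the one distant point supplies roughly 70% of the variance but only about 48% of the mean absolute deviation - squaring has made it dominate the measure.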

In financial risk modelling it is standard practice to define risk as the variance of returns. This means it is subtly distorted by the effect described above.

However, there is an even more damaging problem: defining risk as variance almost always amounts to assuming that the distribution of returns is symmetrical about the average, not skewed.

Almost all the equations in finance today incorporate this assumption. But consider two skewed distributions of money returns, one of which is simply the other spun about the average. They have the same average and variance, but which would you prefer to invest in? One is like buying a lottery ticket, with a thrilling upside; the other is something like dangerous sport played for money - a modest payoff normally, with a slight chance of being killed.
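The lottery-ticket and dangerous-sport distributions can be constructed explicitly. A minimal sketch with made-up payoffs: the "sport" returns are just the "lottery" returns spun about the average, and a variance-based measure of risk cannot tell them apart, though the third central moment (a standard measure of skewness) can:

```python
# Lottery-like returns: usually lose a little, occasionally win big.
lottery = [-1.0] * 99 + [99.0]

# The same distribution spun about its average of zero: usually win a
# little, occasionally lose heavily - the dangerous-sport payoff.
sport = [-r for r in lottery]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def third_moment(xs):
    """Third central moment: positive for a long upside tail, negative
    for a long downside tail, zero for a symmetrical distribution."""
    m = mean(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs)

print(mean(lottery), mean(sport))          # identical averages
print(variance(lottery), variance(sport))  # identical variances
print(third_moment(lottery), third_moment(sport))  # opposite signs
```

Any risk measure built on variance alone assigns these two investments exactly the same risk, yet few investors would be indifferent between them.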