Playing Golf in the Dark

By Steve Morlidge, Business Forecasting thought leader, author of "Future Ready: How to Master Business Forecasting" and "The Little Book of Beyond Budgeting"

About the only thing that everyone seemed to agree on in my old company was that forecasting was really important and that our forecasts were poor. But when I asked people how they knew they were no good I got a variety of answers, rather like the one a judge once gave when asked to define pornography: ‘I know it when I see it’.

As it is unlikely that you will succeed if you don’t know what success looks like, I looked in the corporate controller’s database for a definition of what constitutes a ‘good forecast’. But I got zero hits.

Nothing.

I must have given talks on forecasting a hundred or so times and whenever I ask the question ‘how does your business define a good forecast?’ I am usually met by a sea of blank faces. On the few occasions when I do get a response the answer I am given is invariably wrong.

Imagine learning to play golf and spending all your time perfecting your swing - worrying about the angle of your shoulders, when you break your wrists on the backswing and the rotation of your hips - but paying no attention at all to where the ball goes. Well, that is what most businesses do with their forecast processes.

We worry about whether our processes are ‘best practice’ and fret about which budgeting and forecasting software tool to use but are oblivious to whether they deliver the outcomes we need. We are just playing golf in the dark.

This, I believe, is the single biggest problem with forecasting in businesses. There is no feedback loop. And as any engineer will tell you, processes without feedback are out of control - by definition.

So, what does a good forecast look like? We need to start with the purpose of forecasting.

We need forecasts to make decisions. To determine whether our current plans and the resources devoted to them are likely to take us to our destination. Do we need perfect precision to make these decisions? I would say not. We just need to be sure that we are moving in the right direction and that we are not heading towards the rocks.

Now let’s translate this common-sense definition into something more precise that we can use to measure and, where necessary, correct forecast performance.

We know that our forecasts will never be perfect predictions. The only thing I know for sure about forecasts is that if they are 100% accurate somebody has been manipulating the actual data to come back to the forecast. So, we are always going to get error, but there is more than one type of error.

First, there is variation. Variation is unsystematic error, which means that sometimes you might over forecast and sometimes under forecast.

Provided the average level of variation will not cause you to change your decision, some level of variation is acceptable. To refer back to the sailing analogy I have used in previous blogs, a good forecast will tell you when you are heading towards the rocks – you don’t need to know which particular rock you are going to hit.
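
As a rough sketch of what that check might look like (the 5% tolerance, the numbers and the use of average absolute percentage error are my own illustrative assumptions, not prescriptions from this article):

```python
# Illustrative sketch: measure variation as the average size of the
# period-by-period errors and compare it with the tolerance your
# decisions can live with. All numbers here are made up.

def percentage_errors(forecasts, actuals):
    """Signed percentage error per period: positive means over-forecast."""
    return [(f - a) / a for f, a in zip(forecasts, actuals)]

def average_variation(forecasts, actuals):
    """Average size of the errors, ignoring their direction."""
    errors = percentage_errors(forecasts, actuals)
    return sum(abs(e) for e in errors) / len(errors)

DECISION_TOLERANCE = 0.05  # hypothetical: decisions are robust to errors of +/-5%

forecasts = [100, 110, 95, 120, 105, 98]
actuals = [104, 107, 99, 115, 108, 95]

variation = average_variation(forecasts, actuals)
print(f"Average variation: {variation:.1%}")
print("Acceptable" if variation <= DECISION_TOLERANCE else "Too much noise for the decision")
```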

The second kind of error is bias. Bias is systematic error, which means that you consistently over or under forecast.

Bias is always bad because it is a sure sign of a bad forecasting process. More to the point, you are likely to make bad decisions if you receive forecasts that are persistently too optimistic or too pessimistic. As a rule of thumb, if you have 4 or more consecutive forecasts that are either too high or too low, then you have a biased forecast.
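
The rule of thumb translates directly into a simple check: find the longest run of consecutive over- or under-forecasts and flag four or more. A minimal sketch (the threshold of four comes from the text above; the data and function names are my own):

```python
def longest_signed_run(forecasts, actuals):
    """Length of the longest run of consecutive over- or under-forecasts."""
    longest = current = 0
    previous_sign = 0
    for f, a in zip(forecasts, actuals):
        sign = (f > a) - (f < a)   # +1 over-forecast, -1 under-forecast, 0 exact
        if sign != 0 and sign == previous_sign:
            current += 1           # run continues in the same direction
        elif sign != 0:
            current = 1            # run restarts in a new direction
        else:
            current = 0            # an exact hit breaks the run
        longest = max(longest, current)
        previous_sign = sign
    return longest

def is_biased(forecasts, actuals, run_threshold=4):
    """Rule of thumb: 4+ consecutive errors in the same direction => bias."""
    return longest_signed_run(forecasts, actuals) >= run_threshold

forecasts = [120, 118, 125, 122, 119]
actuals = [110, 112, 115, 118, 114]   # forecast is persistently too high
print(is_biased(forecasts, actuals))  # True
```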

So, a good forecast has no bias and acceptable levels of variation.

I mentioned that where people do have a definition of ‘good’ it is always wrong. The most common mistake is to adopt an arbitrary limit of, say, 10%. This is a poor definition because it doesn’t distinguish between bias and variation and takes no account of the level of accuracy needed for decision-making purposes.

It also fails to take account of forecastability. An error of plus or minus 10% might be an impossible target for a volatile business but almost impossible to miss for a stable one.
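
The article does not prescribe how to measure forecastability; one common way to gauge it (an assumption on my part, not the author's stated method) is to look at the error of a naive forecast that simply repeats the last actual, so the benchmark adapts to how volatile the series is:

```python
# Illustrative sketch: the same +/-10% error limit means very different
# things for a stable and a volatile series. The naive "same as last
# period" benchmark is an assumed yardstick, not taken from the article.

def naive_mape(actuals):
    """Mean absolute percentage error of a forecast that repeats the last actual."""
    errors = [abs(curr - prev) / curr for prev, curr in zip(actuals, actuals[1:])]
    return sum(errors) / len(errors)

stable_actuals   = [100, 101, 99, 100, 102, 100]   # quiet, predictable demand
volatile_actuals = [100, 140, 80, 130, 70, 120]    # lumpy, hard-to-predict demand

print(f"Stable series, naive error:   {naive_mape(stable_actuals):.0%}")   # ~2%: a 10% limit is trivial
print(f"Volatile series, naive error: {naive_mape(volatile_actuals):.0%}") # ~54%: a 10% limit is unrealistic
```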

There are two other common mistakes I come across. The first is attempting to compare errors made over different lags. It is always easier to forecast outcomes for next month than for the next quarter, so you shouldn’t attempt to compare the two.
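
One way to avoid that trap is simply to keep errors from different lags in separate buckets and summarize each bucket on its own, rather than pooling them into a single number. A small sketch (the records and field layout are made up for illustration):

```python
# Illustrative sketch: score each forecast lag separately instead of
# pooling one-month-ahead and three-month-ahead errors together.
from collections import defaultdict

# (lag in months, forecast, actual) for the same set of target periods
records = [
    (1, 102, 100), (1, 98, 101), (1, 105, 103),
    (3, 112, 100), (3, 90, 101), (3, 115, 103),
]

errors_by_lag = defaultdict(list)
for lag, forecast, actual in records:
    errors_by_lag[lag].append(abs(forecast - actual) / actual)

for lag, errors in sorted(errors_by_lag.items()):
    print(f"Lag {lag} month(s): average error {sum(errors) / len(errors):.1%}")
```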

The second is measuring forecast performance beyond decision-making lead times. If you forecast that you are going to hit some rocks but you end up avoiding them, would you treat this as evidence of poor forecasting?

No!

The reason you missed the rocks is that you changed course in response to the forecast, and in doing so you invalidated its predictions. You can only measure the quality of your forecasts in the short term.

It is, therefore, a fact that you can never know for sure whether forecasts beyond the very short term are any good.

Does this make all long term forecasting pointless?

 

This article was originally published on the prevero Blog.