
Forecasting Quality

Tackling Forecast Bias: Signals and Noise

By Steve Morlidge, Business Forecasting thought leader, author of "Future Ready: How to Master Business Forecasting" and "The Little Book of Beyond Budgeting"

The average level of MAPE for your forecast is 25%. So what? Is it good or bad? Difficult to say.

If it is bad, what should you do? Improve…obviously. But how?

The problem with simple measures of forecast accuracy is that it is often difficult to work out what they mean, and even trickier to work out what you need to do about them.

Bias, on the other hand, is a much easier thing to grasp.

Systematic under- or over-forecasting is straightforward to measure – it is simply the average of the errors, including the sign – and it is clearly a ‘bad thing’. Whether you are using your forecasts to place orders with suppliers or to steer a business, everyone understands that a biased forecast is bad news. It is also relatively easy to fix: find out what is causing you to consistently over- or under-estimate, and stop doing it!
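The contrast between bias and MAPE can be sketched in a few lines of Python. This is an illustration, not code from the article; the function names and data are my own, and the sign convention (forecast minus actual, so positive means over-forecasting) is one common choice.

```python
def bias(actuals, forecasts):
    """Mean signed error: positive means systematic over-forecasting."""
    errors = [f - a for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

def mape(actuals, forecasts):
    """Mean absolute percentage error: magnitude only, no direction."""
    return sum(abs(f - a) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

actuals   = [100, 100, 100, 100]
forecasts = [110, 105, 112, 108]   # consistently too high

print(bias(actuals, forecasts))    # 8.75 -> clear over-forecasting signal
print(mape(actuals, forecasts))    # 0.0875 -> says nothing about direction
```

Because the signed errors do not cancel here, the bias figure immediately tells you what to do (forecast lower); the MAPE figure alone would not.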

Steve Morlidge is an accountant by background and has 25 years of practical experience in senior operational roles at Unilever, designing and running performance management systems. He also spent three years leading a global change project at Unilever.

He is a former Chairman of the European Beyond Budgeting Round Table and now works as an independent consultant for a range of major companies, specialising in helping them break out of traditional, top-down ‘command and control’ management practice.

He recently published ‘Future Ready: How to Master Business Forecasting’ (John Wiley, 2010) and has a PhD in Organisational Cybernetics from Hull Business School. He is also a co-founder of Catchbull, a supplier of forecasting performance management software.

Why Bother with Business Forecasting? From Error and ‘Accuracy’ to Adding Value

By Steve Morlidge, Business Forecasting thought leader, author of "Future Ready: How to Master Business Forecasting" and "The Little Book of Beyond Budgeting"

As far as I know we are not legally required to forecast.

So why do we do it?

My sense is that forecasting practitioners rarely stop to ask themselves this question. This might be because they are so focussed on techniques and processes. Unfortunately, in practice forecasting is often such a heavily politicised process, with blame for ‘failure’ being liberally spread around, that forecasters become defensive and focus on avoiding ‘being wrong’ rather than thinking about how they can maximise their contribution to the business.

This is a pity, because asking the fundamental question – ‘how does what I do add value to the business?’ – could help forecasters escape the confines of the geek ghetto and the dynamics of the blame game, and reposition the profession as an important business partner.

So why do we forecast? Let’s answer this question by considering the alternative.


Quality of Business Forecasting: How to Find the Needle in the Haystack 

By Steve Morlidge, Business Forecasting thought leader, author of "Future Ready: How to Master Business Forecasting" and "The Little Book of Beyond Budgeting"

As we know, the seemingly simple matter of spotting bias – systematic under- or over-forecasting – can get surprisingly tricky in practice if our actions are to be guided by scientific standards of evidence, which they need to be if we are actually going to improve matters.

Reliably identifying systematic forecast error requires that we take account of both the pattern and magnitude of bias using approaches that explicitly take account of probabilities. 

How to find the needle in the haystack 

Let’s assume that you have a method for reliably detecting bias in a single forecast. How can this be deployed at scale in a large company where forecasts are mass-produced? In these types of businesses, a single demand manager will typically be responsible for upwards of a thousand forecasts, every one of which might be reforecast on a weekly basis, and any one of which might unexpectedly fail at any time if the pattern of demand suddenly changes.
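One common way to monitor bias across a large portfolio – a sketch only, not necessarily the method the author has in mind – is a tracking signal: the cumulative signed error divided by the mean absolute error, with items flagged for review when the signal breaches a threshold. The item names, data, and threshold below are purely illustrative.

```python
def tracking_signal(actuals, forecasts):
    """Cumulative signed error over mean absolute deviation of errors."""
    errors = [f - a for a, f in zip(actuals, forecasts)]
    mad = sum(abs(e) for e in errors) / len(errors)
    if mad == 0:
        return 0.0  # perfect forecasts: no bias by definition
    return sum(errors) / mad

def flag_biased(portfolio, threshold=4.0):
    """Scan a dict of item -> (actuals, forecasts) and flag likely bias."""
    return [item for item, (a, f) in portfolio.items()
            if abs(tracking_signal(a, f)) > threshold]

portfolio = {
    "SKU-001": ([100, 95, 102, 98, 101], [100, 96, 101, 99, 100]),   # healthy
    "SKU-002": ([100, 100, 100, 100, 100], [115, 110, 112, 118, 114]),  # biased high
}
print(flag_biased(portfolio))  # ['SKU-002']
```

The point of an exception-based screen like this is that the demand manager only ever looks at the handful of series whose errors are improbably one-sided, rather than inspecting a thousand forecasts every week.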

