Quality of Business Forecasting: How to Find the Needle in the Haystack
March 1, 2016

By Steve Morlidge, Business Forecasting thought leader, author of "Future Ready: How to Master Business Forecasting" and "The Little Book of Beyond Budgeting"

FP&A Tags
Modelling and Forecasting
Forecasting Quality

As we know, the seemingly simple matter of spotting bias (systematic under- or over-forecasting) can get surprisingly tricky in practice if our actions are to be guided by scientific standards of evidence, which they need to be if we are actually going to improve matters.

Reliably identifying systematic forecast error requires that we consider both the pattern and the magnitude of bias, using approaches that explicitly take account of probabilities.

How to find the needle in the haystack 

Let’s assume that you have a method for reliably detecting bias in a single forecast. How can this be deployed at scale in a large company where forecasts are mass-produced? In these businesses, a single demand manager will typically be responsible for upwards of a thousand forecasts, each of which might be reforecast on a weekly basis and any one of which might unexpectedly fail at any time if the pattern of demand suddenly changes.

This kind of forecaster is not a master craftsman carefully selecting the right forecasting method and polishing the result until it is ‘perfect’. Instead, they are the manager of a forecast factory churning out thousands of ‘items’ at a fast rate, none of which will be as perfect as those produced by a master craftsman, but all of which need to be fit for purpose: ‘good enough’.

Clearly it is important that the demand manager continuously reviews the performance of every forecast every period so that defective ‘products’ do not enter the supply chain. But with such a large portfolio, is that realistic?

Probably not. 

The ‘obvious’ solution to this complexity, and the one most companies adopt, is to calculate bias at a high level in the hierarchy and investigate further only when there is evidence of a problem.

The flaw in this approach is that it is extremely unlikely that every forecast in a portfolio or category is biased in the same way. And when they are not, the errors for items that are over-forecast will offset the under-forecast errors to a greater or lesser degree, with the result that chronic bias at the low level is hidden. And it is the bias at this low level that matters, because the replenishment process is driven by these granular forecasts, not by high-level aggregates.

The bottom line is that even if high-level bias measures are calculated in a statistically intelligent way (as described in part 1 of this series), they are a completely unreliable guide to bias at the level where it counts: the lowest level.

And the degree of the problem can be considerable; in practice, it is very common to find average errors calculated at a high level understating low-level bias by an order of magnitude or more. For example, a product category with an average bias of, say, 2%, which most people would consider acceptable, may be the result of some SKUs being over-forecast by an average of 20% and the rest being under-forecast by 18%.
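
To make this concrete, here is a minimal sketch (in Python, with hypothetical numbers that roughly mirror the example above) of how offsetting SKU-level errors net out to a small, and misleading, category-level bias figure:

```python
# Hypothetical SKU-level forecasts and actuals (illustrative numbers only).
forecasts = {"SKU_A": 1000, "SKU_B": 1000, "SKU_C": 1000, "SKU_D": 1000}
actuals   = {"SKU_A":  830, "SKU_B":  840, "SKU_C": 1180, "SKU_D": 1190}

def bias_pct(forecast, actual):
    """Signed bias as a percentage of actual demand (positive = over-forecast)."""
    return 100 * (forecast - actual) / actual

# SKU-level bias: two items over-forecast by roughly 20%, two under-forecast by 15-16%.
for sku in forecasts:
    print(sku, round(bias_pct(forecasts[sku], actuals[sku]), 1), "%")

# Category-level bias: the offsetting errors net out to about -1%, which looks
# acceptable even though every individual forecast is badly biased.
print("Category", round(bias_pct(sum(forecasts.values()), sum(actuals.values())), 1), "%")
```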

This is one important reason why companies may experience customer service failure despite having high total inventory levels and apparently good forecast performance metrics.  

The solution 

So how do we reconcile the need to track forecast performance at a very frequent and highly granular level with the apparent impossibility of doing so?

The answer is to measure low-level under- and over-forecasting separately and to test these measures for evidence of statistically significant bias, in the manner described in the last post. Then use these alerts, along with measures of the scale of the problem, to direct the attention of forecasters to the relatively small number of failing forecasts that matter.
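
As a rough illustration of this idea (an assumed approach, not the specific test described in the last post), the sketch below flags SKUs whose recent errors show statistically significant one-sided bias using a simple sign test and ranks them by the scale of the problem. The 13-week window and 5% significance level are illustrative assumptions.

```python
from math import comb

def sign_test_p(n_over: int, n: int) -> float:
    """Two-sided binomial sign-test p-value for n_over over-forecasts out of n periods."""
    k = max(n_over, n - n_over)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def flag_biased_skus(errors_by_sku: dict, alpha: float = 0.05) -> list:
    """Return (sku, direction, p-value, mean error) for SKUs showing significant bias."""
    flagged = []
    for sku, errors in errors_by_sku.items():
        nonzero = [e for e in errors if e != 0]
        if not nonzero:
            continue
        n_over = sum(1 for e in nonzero if e > 0)   # error = forecast - actual
        p = sign_test_p(n_over, len(nonzero))
        if p < alpha:
            direction = "over" if n_over > len(nonzero) / 2 else "under"
            mean_err = sum(errors) / len(errors)    # the scale of the problem
            flagged.append((sku, direction, p, mean_err))
    # Largest problems first, so attention goes to the failing forecasts that matter.
    return sorted(flagged, key=lambda r: abs(r[3]), reverse=True)

# Example: 13 weekly errors per SKU; SKU_X is persistently over-forecast, SKU_Y is not.
history = {"SKU_X": [40, 55, 30, 60, 45, 50, 35, 42, 58, 47, 52, 38, 44],
           "SKU_Y": [12, -15, 8, -9, 14, -11, 6, -7, 10, -13, 5, -4, 9]}
print(flag_biased_skus(history))
```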

Bias is the most treatable symptom of a failing forecast process. Even if we cannot track the subtle changes in the pattern of the demand signal, it should be possible to estimate its level reasonably easily. If we do start consistently under- or over-forecasting, it should be straightforward to detect, and correction usually requires no more than a simple recalibration of our models or our judgement.
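
For illustration only (this is an assumed correction, not a method prescribed in the article), such a recalibration can be as simple as scaling future forecasts by the recent ratio of actual to forecast demand:

```python
def recalibrate(next_forecast, recent_actuals, recent_forecasts):
    """Apply a multiplicative bias correction based on recent history."""
    correction = sum(recent_actuals) / sum(recent_forecasts)
    return next_forecast * correction

# If we have consistently over-forecast by roughly 20%, the correction factor (~0.83)
# pulls the next forecast back into line: prints 830.0
print(recalibrate(1000, recent_actuals=[830, 840, 820], recent_forecasts=[1000, 1000, 1000]))
```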

But, like many things that are simple in theory, dealing with bias can become a much more intractable problem given the scale and pace at which forecasting is conducted in practice.

So when it comes to driving out bias, size matters.

Steve Morlidge is an accountant by background and has 25 years of practical experience in senior operational roles in Unilever, designing and running performance management systems. He also spent three years leading a global change project at Unilever.

He is a former Chairman of the European Beyond Budgeting Round Table and now works as an independent consultant for a range of major companies, specialising in helping them break out of traditional, top-down ‘command and control’ management practice.

He has recently published ‘Future Ready: How to Master Business Forecasting’ (John Wiley, 2010) and has a PhD in Organisational Cybernetics from Hull Business School. He is also a co-founder of Catchbull, a supplier of forecasting performance management software.
